Search Results

Search found 13693 results on 548 pages for 'python metaprogramming'.


  • Python: Copying files with special characters in path

    - by erikderwikinger
    Is there any way in Python 2.5 to copy files that have special characters (Japanese characters, Cyrillic letters) in their path? shutil.copy cannot handle this. Here is some example code (note: unicodedata added to the imports, which the original snippet used without importing):

        import os, shutil, sys, unicodedata

        fname = os.getenv("USERPROFILE") + "\\Desktop\\testfile.txt"
        print fname
        print "type of fname: " + str(type(fname))

        fname0 = unicode(fname, 'mbcs')
        print fname0
        print "type of fname0: " + str(type(fname0))

        fname1 = unicodedata.normalize('NFKD', fname0).encode('cp1251', 'replace')
        print fname1
        print "type of fname1: " + str(type(fname1))

        fname2 = unicode(fname, 'mbcs').encode(sys.stdout.encoding)
        print fname2
        print "type of fname2: " + str(type(fname2))

        shutil.copy(fname2, 'C:\\')

    The output on a Russian Windows XP:

        C:\Documents and Settings\+????????????\Desktop\testfile.txt
        type of fname: <type 'str'>
        C:\Documents and Settings\?????????????\Desktop\testfile.txt
        type of fname0: <type 'unicode'>
        C:\Documents and Settings\+????????????\Desktop\testfile.txt
        type of fname1: <type 'str'>
        C:\Documents and Settings\?????????????\Desktop\testfile.txt
        type of fname2: <type 'str'>
        Traceback (most recent call last):
          File "C:\Test\getuserdir.py", line 23, in <module>
            shutil.copy(fname2, 'C:\\')
          File "C:\Python25\lib\shutil.py", line 80, in copy
            copyfile(src, dst)
          File "C:\Python25\lib\shutil.py", line 46, in copyfile
            fsrc = open(src, 'rb')
        IOError: [Errno 2] No such file or directory: 'C:\\Documents and Settings\\\x80\xa4\xac\xa8\xad\xa8\xe1\xe2\xe0\xa0\xe2\xae\xe0\\Desktop\\testfile.txt'
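
    One fix, as a minimal sketch assuming the same desktop path as above: keep the path as a unicode object instead of encoding it back to bytes. On Windows, Python 2 hands unicode paths to the wide-character file API, so the Cyrillic characters survive intact:

        import os, shutil

        fname = unicode(os.getenv("USERPROFILE"), 'mbcs') + u'\\Desktop\\testfile.txt'
        shutil.copy(fname, u'C:\\')   # unicode in, no lossy re-encoding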

    Read the article

  • Python: two loops at once

    - by Stephan Meijer
    I've got a problem: I am new to Python and I want to run two loops at once. I want to run a WebSocket client (Autobahn) and, at the same time, a loop that reports the files being edited in a specific folder (pyinotify, or alternatively Watchdog). Both run forever, which is fine on its own. Is there a way to run them together, and to send a message over the WebSocket connection while the file-system watcher is running, whether with callbacks, multithreading, multiprocessing, or just separate files?

        factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
        factory.protocol = self.webSocket
        connectWS(factory)
        reactor.run()

    If we run this, it works. But if we run this:

        factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
        factory.protocol = self.webSocket
        connectWS(factory)
        reactor.run()

        # WebSocket client running now; start the file watcher
        wm = pyinotify.WatchManager()
        mask = pyinotify.IN_DELETE | pyinotify.IN_CREATE  # watched events

        class EventHandler(pyinotify.ProcessEvent):
            def process_IN_CREATE(self, event):
                print "Creating:", event.pathname
            def process_IN_DELETE(self, event):
                print "Removing:", event.pathname

        handler = EventHandler()
        notifier = pyinotify.Notifier(wm, handler)
        wdd = wm.add_watch('/tmp', mask, rec=True)
        notifier.loop()

    This defines two loops, but since reactor.run() already blocks, the code after it never runs at all. For your information: this project is going to be a sync client. Thanks a lot!
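
    One way to get both loops running, as a sketch reusing EventHandler and the factory setup from the question: pyinotify ships a ThreadedNotifier that runs its loop in a background thread, leaving the main thread free for the Twisted reactor:

        wm = pyinotify.WatchManager()
        mask = pyinotify.IN_DELETE | pyinotify.IN_CREATE
        notifier = pyinotify.ThreadedNotifier(wm, EventHandler())
        notifier.start()                      # watcher runs in its own thread
        wm.add_watch('/tmp', mask, rec=True)

        reactor.run()                         # WebSocket loop keeps the main thread
        notifier.stop()                       # clean up once the reactor exits

    From the watcher thread, hand outgoing messages to Twisted with reactor.callFromThread(...), which is the thread-safe way to poke the reactor.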

    Read the article

  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: This is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code. The entire task is to import the contents of a CSV file, create a decision tree from those contents (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference for it to be capable of dealing with different CSV files (I asked if we were allowed to hard-code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format: the header row is marked with a #, then the column names are displayed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical, so I was thinking of doing something along these lines:

        Read in each line, character by character
            If the character is not a comma or a space
                Append character to temporary string
            If the character is a comma
                Append the temporary string to a list
                Empty string
        Once a line has been read
            Create a dictionary using the header row as the key (somehow!)
            Append that dictionary to a list

    However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" I assume that there is some mechanism, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
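
    For reference, the key-to-value mapping being described is exactly what the stdlib's csv module plus dict(zip(...)) gives; a minimal sketch of the idea only (the filename and the hand-stripping of the leading # are assumptions to match the format above):

        import csv

        with open('train.csv', 'rb') as f:
            reader = csv.reader(f, skipinitialspace=True)
            header = [name.lstrip('# ') for name in next(reader)]
            rows = [dict(zip(header, row)) for row in reader]

        # "perform an action on every dictionary in a list":
        pairs = [(r['Column1'], r['Column4']) for r in rows]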

    Read the article

  • Python: Serial Transmission

    - by Silent Elektron
    I have an image stack of 500 JPEG images of 640x480. I intend to make a list of 500 pixels (the first pixel of every image) and then send that via COM1 to an FPGA, where I do my further processing. I have a couple of questions here: how do I import all 500 images at once into Python, and how do I store them? And how do I send the 500-pixel list via COM1 to the FPGA? I tried the following: I converted each JPEG image to intensity values (each pixel denoted by a number between 0 and 255) in MATLAB, saved the intensity values in a text file, and read that file using readlines(). But it became too cumbersome to make the intensity-value files for all 500 images! I then used NumPy to put the read files in a matrix and pick the first pixel of all images. But when I send it, it comes out like: [56, 61, 78, ... ,71, 91]. Is there a way to eliminate the [ ] and , while sending the data serially? Thanks in advance! :)
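
    A sketch of both steps, assuming PIL (for JPEG decoding) and pyserial are acceptable; the file pattern and port settings are placeholders:

        import glob
        import serial
        from PIL import Image

        # 1. Load every image and grab its first pixel as an 8-bit intensity
        pixels = []
        for path in sorted(glob.glob('stack/*.jpg')):
            img = Image.open(path).convert('L')   # 'L' = grayscale, 0..255
            pixels.append(img.getpixel((0, 0)))

        # 2. Send raw bytes, not the list's printed form (no '[', ']' or ',')
        port = serial.Serial('COM1', baudrate=115200)
        port.write(bytearray(pixels))             # one byte per pixel
        port.close()

    The brackets and commas in the question come from printing str(list); writing a bytearray sends only the 500 raw byte values.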

    Read the article

  • Calculating a total cost in Python

    - by Sérgio Lourenço
    I'm trying to create a trip planner in Python, but after defining all the functions I'm not able to call them and total them up in the last function, tripCost(). In tripCost, I want to enter the days and the travel destination (city), and have the program run the three previously defined functions and give me the exact combined result. Code:

        def hotelCost():
            days = raw_input("How many nights will you stay at the hotel?")
            total = 140 * int(days)
            print "The total cost is", total, "dollars"

        def planeRideCost():
            city = raw_input("Wich city will you travel to\n")
            if city == 'Charlotte':
                return "The cost is 183$"
            elif city == 'Tampa':
                return "The cost is 220$"
            elif city == 'Pittsburgh':
                return "The cost is 222$"
            elif city == 'Los Angeles':
                return "The cost is 475$"
            else:
                return "That's not a valid destination"

        def rentalCarCost():
            rental_days = raw_input("How many days will you rent the car\n")
            discount_3 = 40 * int(rental_days) * 0.2
            discount_7 = 40 * int(rental_days) * 0.5
            total_rent3 = 40 * int(rental_days) - discount_3
            total_rent7 = 40 * int(rental_days) - discount_7
            cost_day = 40 * int(rental_days)
            if int(rental_days) >= 3:
                print "The total cost is", total_rent3, "dollars"
            elif int(rental_days) >= 7:
                print "The total cost is", total_rent7, "dollars"
            else:
                print "The total cost is", cost_day, "dollars"

        def tripCost():
            travel_city = raw_input("What's our destination\n")
            days_travel = raw_input("\nHow many days will you stay\n")
            total_trip_cost = hotelCost(int(day_travel)) + planeRideCost(str(travel_city)) + rentalCost(int(days_travel))
            return "The total cost with the trip is", total_trip_cost

        tripCost()
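
    A sketch of the usual restructuring: make each helper take its inputs as parameters and return a number, and do all the prompting once in tripCost (prices and names kept from the question):

        def hotelCost(nights):
            return 140 * nights

        def planeRideCost(city):
            prices = {'Charlotte': 183, 'Tampa': 220, 'Pittsburgh': 222, 'Los Angeles': 475}
            return prices[city]

        def rentalCarCost(days):
            cost = 40 * days
            if days >= 7:
                return cost - cost * 0.5
            elif days >= 3:
                return cost - cost * 0.2
            return cost

        def tripCost():
            city = raw_input("What's our destination\n")
            days = int(raw_input("How many days will you stay\n"))
            return hotelCost(days) + planeRideCost(city) + rentalCarCost(days)

        print "The total cost with the trip is", tripCost(), "dollars"

    Note the >= 7 test has to come before >= 3; in the original ordering a 10-day rental matches the first branch and never receives the larger discount.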

    Read the article

  • Python Multiprocessing Question

    - by Avan
    My program has two parts, a core and a downloader. The core handles all the app logic while the downloader just downloads URLs. Right now I am trying to use the Python multiprocessing module to run the core as one process and the downloader as another. The first problem I noticed is that if I spawn the downloader process from the core process, so that the downloader is the child and the core is the parent, the core (parent) process is blocked until the child is finished. I do not want this behavior, though: I would like a core process and a downloader process that can both execute their own code and communicate with each other. Example:

        def main():
            jobQueue = Queue()
            jobQueue.put("http://google.com")
            d = Downloader(jobQueue)
            p = Process(target=d.start())
            p.start()

        if __name__ == '__main__':
            freeze_support()
            main()

    where Downloader's start() just takes a URL out of the queue and downloads it. To have the two processes unblocked, would I need to create two processes from the parent process and then share something between them?
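
    The blocking very likely comes from target=d.start(): the parentheses call start() immediately in the parent, and only its return value is handed to Process. A sketch with the call deferred (names kept from the question; the Downloader body is an assumption):

        from multiprocessing import Process, Queue, freeze_support

        class Downloader(object):
            def __init__(self, jobQueue):
                self.jobQueue = jobQueue
            def start(self):
                while True:
                    url = self.jobQueue.get()
                    if url is None:           # sentinel: no more work
                        break
                    print 'downloading', url

        def main():
            jobQueue = Queue()
            jobQueue.put("http://google.com")
            jobQueue.put(None)                # let the child exit cleanly
            d = Downloader(jobQueue)
            p = Process(target=d.start)       # no (): pass the method, don't call it
            p.start()
            # the core is free to keep working here while the child downloads
            p.join()

        if __name__ == '__main__':
            freeze_support()
            main()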

    Read the article

  • Python: Errors saving and loading objects with pickle module

    - by Peterstone
    I am trying to load and save objects with this piece of code, which I got from a question I asked a week ago: Python: saving and loading objects and using pickle. The piece of code is this:

        class Fruits:
            pass

        banana = Fruits()
        banana.color = 'yellow'
        banana.value = 30

        import pickle
        filehandler = open("Fruits.obj", "wb")
        pickle.dump(banana, filehandler)
        filehandler.close()

        file = open("Fruits.obj", 'rb')
        object_file = pickle.load(file)
        file.close()

        print(object_file.color, object_file.value, sep=', ')

    At first glance the code works well: I can load the object back and see the 'color' and 'value' of what I saved. But what I am after is closing a session, opening a new one, and loading what I saved in the past session. If I close the session after the filehandler.close() line, open a new one, and run the rest of the code, then at object_file = pickle.load(file) I get this error:

        Traceback (most recent call last):
          File "<pyshell#5>", line 1, in <module>
            object_file = pickle.load(file)
          File "C:\Python31\lib\pickle.py", line 1365, in load
            encoding=encoding, errors=errors).load()
        AttributeError: 'module' object has no attribute 'Fruits'

    Can anyone explain what this error message means and tell me how to solve the problem? Thanks so much, and happy new year!!
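
    pickle does not store the class itself, only a reference ("find Fruits in such-and-such module"), and it re-imports that module on load; in a fresh shell session the class no longer exists, so the lookup fails. A sketch of the usual fix: put the class in its own module and import it in both sessions, so the pickle records fruits.Fruits instead of __main__.Fruits (the file names here are assumptions):

        # fruits.py
        class Fruits:
            pass

        # saving session
        import pickle
        from fruits import Fruits
        banana = Fruits()
        banana.color, banana.value = 'yellow', 30
        with open("Fruits.obj", "wb") as f:
            pickle.dump(banana, f)

        # later, in a brand-new session
        import pickle
        with open("Fruits.obj", "rb") as f:
            banana = pickle.load(f)     # pickle imports fruits by itself
        print(banana.color, banana.value)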

    Read the article

  • python crc32 woes

    - by lazyr
    I'm writing a Python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decompressible blocks of data, so I only need to find a block (they are delimited by magic bits), create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no?

    The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait: the data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use binascii.crc32 on all but the last byte, then a function of my own to update the computed CRC with the last 1-7 bits. But hours of coding and testing have left me bewildered, and my puzzlement boils down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the Wikipedia article? You start with 0b00000000 and pad with 32 zeros, then do polynomial division by 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits are the checksum, and how can that not be all zeroes? I've searched Google for answers and looked at the code of several crc32 implementations without finding any clue to why this is so.
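
    The catch is that standard CRC-32 (what binascii implements) wraps the textbook division in pre- and post-conditioning: the shift register starts at 0xFFFFFFFF and the final remainder is XORed with 0xFFFFFFFF, precisely so that runs of zero bytes do not checksum to zero. A sketch showing both variants side by side (reflected form, polynomial 0xEDB88320):

        import binascii

        def crc32_raw(data, crc=0):
            # textbook division only: no initial inversion, no final XOR
            for byte in data:
                crc ^= ord(byte)
                for _ in range(8):
                    crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
            return crc

        print crc32_raw("\x00")                                 # 0, as the bare division predicts
        print hex(0xFFFFFFFF ^ crc32_raw("\x00", 0xFFFFFFFF))   # 0xd202ef8d
        print hex(binascii.crc32("\x00") & 0xFFFFFFFF)          # 0xd202ef8d, the same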

    Read the article

  • How do I override a python import?

    - by Evan Plaice
    So I'm working on pypreprocessor, which is a preprocessor that takes c-style directives, and I've been able to make it work like a traditional preprocessor (it's self-consuming and executes postprocessed code on-the-fly) except that it breaks library imports. The problem is this: the preprocessor runs through the file, processes it, outputs to a temp file, and exec()s the temp file. Libraries that are imported need to be handled a little differently, because they aren't executed but rather loaded and made accessible to the caller module. What I need to be able to do is interrupt the import (since the preprocessor is being run in the middle of the import), load the postprocessed code as a tempModule, and replace the original import with the tempModule, to trick the calling script into believing that the tempModule is the original module. I have searched everywhere and so far have no solution. This question is the closest I've seen so far to providing an answer: http://stackoverflow.com/questions/1096216/override-namespace-in-python. Here's what I have:

        # remove the bytecode file created by the first import
        os.remove(moduleName + '.pyc')

        # remove the first import
        del sys.modules[moduleName]

        # import the postprocessed module
        tmpModule = __import__(tmpModuleName)

        # set the first module's reference to point to the preprocessed module
        sys.modules[moduleName] = tmpModule

    moduleName is the name of the original module; tmpModuleName is the name of the postprocessed code file. The strange part is, this solution runs completely normally, as if the first module loaded normally; but if you remove the last line, you get a module-not-found error. Hopefully someone on SO knows a lot more about imports than I do, because this one has me stumped.
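
    An alternative sketch that skips the temp file on disk entirely: build a fresh module object, exec the postprocessed source into its namespace, and register it under the original name (postprocessed_source and moduleName are assumed to exist as in the snippet above):

        import sys, types

        tmpModule = types.ModuleType(moduleName)
        tmpModule.__file__ = moduleName + '.py'
        exec postprocessed_source in tmpModule.__dict__   # run the code inside the module

        sys.modules[moduleName] = tmpModule               # future imports get this object

    Anything that later does "import moduleName" receives the patched module straight from sys.modules, which is the same trick the last line of the original snippet relies on.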

    Read the article

  • Creating subtree from tree which is represented in xml - python

    - by Jay
    I have an XML document (in the form of a tree) and I need to create sub-trees out of it. For example:

        <a>
          <b>
            <c>Hello</c>
          </b>
          <d>
            <e>Hi</e>
          </d>
        </a>

    The subtree would be:

        <root>
          <a>
            <b>
              <c>Hello</c>
            </b>
          </a>
          <a>
            <d>
              <e>Hi</e>
            </d>
          </a>
        </root>

    What is the best XML library in Python to do this? Any algorithm that already does it would also be helpful. Note: the XML document won't be that big; it will easily fit in memory.
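
    A sketch with the stdlib's xml.etree.ElementTree, wrapping each child of the root in its own copy of the root tag (element names follow the example):

        import xml.etree.ElementTree as ET

        src = ET.fromstring("<a><b><c>Hello</c></b><d><e>Hi</e></d></a>")

        root = ET.Element('root')
        for child in src:
            wrapper = ET.SubElement(root, src.tag)  # one <a> per subtree
            wrapper.append(child)

        print ET.tostring(root)
        # <root><a><b><c>Hello</c></b></a><a><d><e>Hi</e></d></a></root>

    Note that ElementTree elements are plain objects with no parent pointer, so the children end up shared between src and root; use copy.deepcopy(child) if the two trees must stay independent.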

    Read the article

  • Python 2.7.3 memory error

    - by Tom Baker
    I have a specific case with Python code. Every time I run the code, the RAM usage keeps increasing until it reaches 1.8 GB and the program crashes.

        import itertools
        import csv
        import pokersleuth

        cards = ['2s', '3s', '4s', '5s', '6s', '7s', '8s', '9s', 'Ts', 'Js', 'Qs', 'Ks', 'As', '2h', '3h', '4h', '5h', '6h', '7h', '8h', '9h', 'Th', 'Jh', 'Qh', 'Kh', 'Ah', '2c', '3c', '4c', '5c', '6c', '7c', '8c', '9c', 'Tc', 'Jc', 'Qc', 'Kc', 'Ac', '2d', '3d', '4d', '5d', '6d', '7d', '8d', '9d', 'Td', 'Jd', 'Qd', 'Kd', 'Ad']

        flop = itertools.combinations(cards, 3)

        a1 = 'Ks'; a2 = 'Qs'
        b1 = 'Jc'; b2 = 'Jd'

        cards1 = a1 + a2
        cards2 = b1 + b2

        number = 0
        n = 0
        m = 0

        for row1 in flop:
            if (row1[0] <> a1 and row1[0] <> a2 and row1[0] <> b1 and row1[0] <> b2) and \
               (row1[1] <> a1 and row1[1] <> a2 and row1[1] <> b1 and row1[1] <> b2) and \
               (row1[2] <> a1 and row1[2] <> a2 and row1[2] <> b1 and row1[2] <> b2):
                for row2 in cards:
                    if (row2 <> a1 and row2 <> a2 and row2 <> b1 and row2 <> b2 and
                            row2 <> row1[0] and row2 <> row1[1] and row2 <> row1[2]):
                        s = pokersleuth.compute_equity(row1[0]+row1[1]+row1[2]+row2, (cards1, cards2))
                        if s[0] >= 0.5:
                            number += 1
                        del s[:]
                        del s[:]
                print number/45.0
                number = 0
                n += 1
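
    del s[:] only empties the result list; if the growth is inside the pokersleuth extension itself, recycling worker processes (multiprocessing.Pool with maxtasksperchild) is the usual containment. Unrelated to the crash, but the twelve comparisons also collapse neatly with sets; a sketch that slots into the loop above:

        dead = {a1, a2, b1, b2}
        number = 0

        for row1 in flop:
            if dead.isdisjoint(row1):             # no flop card is in either hand
                seen = dead.union(row1)
                for row2 in cards:
                    if row2 not in seen:          # turn card is still live
                        s = pokersleuth.compute_equity(''.join(row1) + row2, (cards1, cards2))
                        if s[0] >= 0.5:
                            number += 1
                print number / 45.0
                number = 0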

    Read the article

  • Efficient way to maintain a sorted list of access counts in Python

    - by David
    Let's say I have a list of objects. (All together now: "I have a list of objects.") In the web application I'm writing, each time a request comes in, I pick out up to one of these objects according to unspecified criteria and use it to handle the request. Basically like this:

        def handle_request(req):
            for h in handlers:
                if h.handles(req):
                    return h
            return None

    Assuming the order of the objects in the list is unimportant, I can cut down on unnecessary iterations by keeping the list sorted such that the most frequently used (or perhaps most recently used) objects are at the front. I know this isn't something to be concerned about; it'll make only a minuscule, undetectable difference in the app's execution time. But debugging the rest of the code is driving me crazy and I need a distraction :) so I'm asking out of curiosity: what is the most efficient way to maintain the list in sorted order, descending, by the number of times each handler is chosen? The obvious solution is to make handlers a list of [count, handler] pairs and, each time a handler is chosen, increment the count and re-sort the list:

        def handle_request(req):
            for h in handlers[:]:
                if h[1].handles(req):
                    h[0] += 1
                    handlers.sort(reverse=True)
                    return h[1]
            return None

    But since there's only ever going to be at most one element out of order, and I know which one it is, it seems like some sort of optimization should be possible. Is there something in the standard library, perhaps, that is especially well-suited to this task? Or some other data structure? (Even if it's not implemented in Python.) Or should/could I be doing something completely different?
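
    Since the freshly incremented entry can only need to move toward the front, a single transposition pass restores order in O(distance moved) instead of a full sort; a sketch, assuming the [count, handler] pairs are lists as above:

        def handle_request(req):
            for i, (count, handler) in enumerate(handlers):
                if handler.handles(req):
                    handlers[i][0] = count + 1
                    # bubble the entry forward while it now outranks its neighbor
                    while i > 0 and handlers[i - 1][0] < handlers[i][0]:
                        handlers[i - 1], handlers[i] = handlers[i], handlers[i - 1]
                        i -= 1
                    return handler
            return None

    With counts this is the classic transpose heuristic from self-organizing lists; collections.Counter (2.7+) gives the counting but not the ordering, so the list shuffle stays hand-rolled.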

    Read the article

  • Unable to retrieve search results from server side : Facebook Graph API using Python

    - by DjangoRocks
    Hi all, I'm doing some simple Python + FB Graph training on my own, and I faced a weird problem:

        import time
        import sys
        import urllib2
        import urllib
        from json import loads

        base_url = "https://graph.facebook.com/search?q="
        post_id = None
        post_type = None
        user_id = None
        message = None
        created_time = None

        def doit(hour):
            page = 1
            search_term = "\"Plastic Planet\""
            encoded_search_term = urllib.quote(search_term)
            print encoded_search_term
            type = "&type=post"
            url = "%s%s%s" % (base_url, encoded_search_term, type)
            print url
            while(1):
                try:
                    response = urllib2.urlopen(url)
                except urllib2.HTTPError, e:
                    print e
                finally:
                    pass
                content = response.read()
                content = loads(content)
                print "=================================="
                for c in content["data"]:
                    print c
                    print "****************************************"
                try:
                    content["paging"]
                    print "current URL"
                    print url
                    print "next page!------------"
                    url = content["paging"]["next"]
                    print url
                except:
                    pass
                finally:
                    pass
                """
                print "new URL is ======================="
                print url
                print "=================================="
                """
                print url

    What I'm trying to do here is automatically page through the search results by following content["paging"]["next"]. But the weird thing is that no data is returned; I received the following, even on the very first loop:

        {"data":[]}

    Yet when I copied the URL into a browser, a lot of results were returned. I've also tried a version with my access token, and the same thing happens. Can anyone enlighten me?

    +++++++++++++++++++ EDITED and SIMPLIFIED ++++++++++++++++++

    OK, thanks to TryPyPy, here's the simplified and edited version of my previous question. Why does this:

        import urllib2
        url = "https://graph.facebook.com/searchq=%22Plastic+Planet%22&type=post&limit=25&until=2010-12-29T19%3A54%3A56%2B0000"
        response = urllib2.urlopen(url)
        print response.read()

    result in {"data":[]}, while the same URL produces a lot of data in a browser? Anyone? Best regards.
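
    One detail worth checking in the simplified version: the URL reads .../searchq=... with no '?' between path and query string, so the server sees a path named /searchq=... and no parameters at all. A sketch that builds the query mechanically, making that class of typo impossible (parameter values kept from the question; current Graph API versions also require an access token):

        import urllib, urllib2

        params = urllib.urlencode({
            'q': '"Plastic Planet"',
            'type': 'post',
            'limit': 25,
            'until': '2010-12-29T19:54:56+0000',
        })
        url = 'https://graph.facebook.com/search?' + params
        print urllib2.urlopen(url).read()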

    Read the article

  • Why can't I handle a KeyboardInterrupt in python?

    - by Josh
    I'm writing Python 2.6.6 code on Windows that looks like this:

        try:
            dostuff()
        except KeyboardInterrupt:
            print "Interrupted!"
        except:
            print "Some other exception?"
        finally:
            print "cleaning up...."
            print "done."

    dostuff() is a function that loops forever, reading a line at a time from an input stream and acting on it. I want to be able to stop it and clean up when I hit Ctrl-C. What's happening instead is that the code under except KeyboardInterrupt: isn't running at all. The only thing that gets printed is "cleaning up...", and then a traceback is printed that looks like this:

        Traceback (most recent call last):
          File "filename.py", line 119, in <module>
            print 'cleaning up...'
        KeyboardInterrupt

    So the exception-handling code is NOT running, and the traceback claims that a KeyboardInterrupt occurred during the finally clause, which doesn't make sense, because hitting Ctrl-C is what caused that part to run in the first place! Even the generic except: clause isn't running.

    EDIT: Based on the comments, I replaced the contents of the try: block with sys.stdin.read(). The problem still occurs exactly as described: the first line of the finally: block runs, and then the same traceback is printed.
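
    The traceback is consistent with the interrupt being raised late (or twice) around the blocking read on Windows. A common workaround sketch is to take SIGINT away from Python's default KeyboardInterrupt machinery and set a flag instead; hedged, since signal handling in Windows consoles has quirks of its own, and this changes Ctrl-C semantics for the whole process:

        import signal, sys

        interrupted = {'flag': False}

        def on_sigint(signum, frame):
            interrupted['flag'] = True        # note it; don't raise mid-cleanup

        signal.signal(signal.SIGINT, on_sigint)

        try:
            while not interrupted['flag']:
                line = sys.stdin.readline()
                if not line:
                    break
                # ... act on the line ...
        finally:
            print "cleaning up...."
            print "done."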

    Read the article

  • Very simple Python functions spend a long time in the function itself, not in subfunctions

    - by John Salvatier
    I have spent many hours trying to figure out what is going on here. The function grad_logp in the code below is called many times in my program, and cProfile and RunSnakeRun (to visualize the results) reveal that grad_logp spends about .00004s 'locally' every call, not in any functions it calls, and the function n spends about .00006s locally every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem like this is function-call overhead, as other Python functions spend far less time 'locally', and merging grad_logp and n does not make my program faster, yet the operations these two functions perform seem rather trivial. Does anyone have any suggestions on what might be happening? Have I done something obviously inefficient? Am I misunderstanding how cProfile works?

        def grad_logp(self, variable, calculation_set):
            p = params(self.p, self.parents)
            return self.n(variable, self.p)

        def n(self, variable, p):
            gradient = self.gg(variable, p)
            return np.reshape(gradient, np.shape(variable.value))

        def gg(self, variable, p):
            if variable is self:
                gradient = self._grad_logps['x'](x=self.value, **p)
            else:
                gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p)
                                            for parameter, value in self.parents.iteritems()])
            return gradient
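
    One way to pin this down, as a sketch: compare tottime (time in the function body itself, what cProfile calls 'local' time) against cumtime (body plus callees), and micro-benchmark the suspect constructs in isolation; dict building in params(...), **p unpacking, and attribute lookups all count as local time even though they feel trivial. The 'main()' entry point below is a stand-in:

        import cProfile, pstats

        cProfile.run('main()', 'profile.out')
        stats = pstats.Stats('profile.out')
        stats.sort_stats('tottime').print_stats(10)   # top 10 by local time

        import timeit
        # benchmark a suspicious construct on representative data
        print timeit.timeit('f(**p)',
                            setup='f = lambda **kw: None; p = dict.fromkeys("abcdefgh", 1.0)',
                            number=100000)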

    Read the article

  • Dictionaries with more than one key per value in Python

    - by nickname
    I am attempting to create a nice interface to access a data set where each value has several possible keys. For example, suppose that I have both a number and a name for each value in the data set. I want to be able to access each value using either the number OR the name. I have considered several possible implementations:

        1. Using two separate dictionaries, one for the data values organized by number, and one for the data values organized by name.
        2. Simply assigning two keys to the same value in a dictionary.
        3. Creating dictionaries mapping each name to the corresponding number, and vice versa.
        4. Attempting to create a hash function that maps each name to a number, etc. (related to the above).
        5. Creating an object to encapsulate all three pieces of data, then using one key to map dictionary keys to the objects, and simply searching the dictionary to map the other key to the object.

    None of these seem ideal. The first seems ugly and unmaintainable. The second also seems fragile. The third and fourth seem plausible, but appear to require either a lot of manual specification or an overly complex implementation. Finally, the fifth loses constant-time performance for one of the lookups. In C/C++, I believe I would use pointers to reference the same piece of data from different keys. I know the problem is rather similar to a database lookup by a non-key column; however, I would like (if possible) to maintain the approximate O(1) performance of Python dictionaries. What is the most Pythonic way to achieve this data structure?
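
    Python names and containers already hold references, which is the pointer behavior being reached for, so two dictionaries cost only a second set of key slots, not a second copy of the data; a sketch (field names are illustrative):

        class Record(object):
            def __init__(self, number, name, value):
                self.number, self.name, self.value = number, name, value

        by_number = {}
        by_name = {}

        def add(record):
            by_number[record.number] = record    # both dicts alias the one object
            by_name[record.name] = record

        add(Record(7, 'seven', 'some payload'))
        assert by_name['seven'] is by_number[7]  # same object, two O(1) doors in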

    Read the article

  • "Only length-1 arrays can be converted to Python scalars" error in Python

    - by Randy
        from numpy import *
        from pylab import *
        from math import *

        def LogisticMap(a, x):
            return 4.*a*x*(1.-x)

        def CosineMap(a, x):
            return a*cos(x/(2.*pi))

        def TentMap(a, x):
            if x >= 0 or x < 0.5:
                return 2.*a*x
            if x >= 0.5 or x <= 1.:
                return 2.*a*(1.-x)

        a = 0.98
        N = 40
        xaxis = arange(0.0, N, 1.0)
        Func = CosineMap

        subplot(211)
        title(str(Func.func_name) + ' at a=%g and its second iterate' % a)
        ylabel('X(n+1)')  # set y-axis label
        plot(xaxis, Func(a, xaxis), 'g', antialiased=True)

        subplot(212)
        ylabel('X(n+1)')  # set y-axis label
        xlabel('X(n)')    # set x-axis label
        plot(xaxis, Func(a, Func(a, xaxis)), 'bo', antialiased=True)

    My program is supposed to take any of the three defined functions and plot it. They all take a value x from the array xaxis, which runs from 0 to N, and return a value. I want to plot xaxis vs f(xaxis), with f being any of the three functions above. The LogisticMap function works fine, but for CosineMap I get the error "only length-1 arrays can be converted to python scalars", and for TentMap I get the error "The truth value of an array with more than one element is ambiguous, use a.any() or a.all()". My tent map function is supposed to return 2*a*x if 0 <= x < 0.5, and 2*a*(1-x) if 0.5 <= x <= 1.
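
    Both errors come from feeding whole arrays to scalar-only constructs: from math import * (coming last) shadows NumPy's vectorized cos with math.cos, which accepts only single numbers, and the if tests in TentMap ask a whole array for one truth value. A vectorized sketch of the two broken maps; note the conditions are also corrected to 'and' ranges, since x >= 0 or x < 0.5 is true for every x:

        import numpy as np

        def CosineMap(a, x):
            return a * np.cos(x / (2.0 * np.pi))    # np.cos accepts arrays

        def TentMap(a, x):
            x = np.asarray(x)
            # elementwise piecewise: 2ax where x < 0.5, else 2a(1-x)
            return np.where(x < 0.5, 2.0*a*x, 2.0*a*(1.0 - x))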

    Read the article

  • Python: problem with tiny script to delete files

    - by Rosarch
    I have a project that used to be under SVN, but now I'm moving it to Mercurial, so I want to clear out all the .svn files. It's a little too big to do by hand, so I decided to write a Python script to do it. But it isn't working:

        def cleandir(dir_path):
            print "Cleaning %s\n" % dir_path
            toDelete = []
            files = os.listdir(dir_path)
            for filedir in files:
                print "considering %s" % filedir
                # continue
                if filedir == '.' or filedir == '..':
                    print "skipping %s" % filedir
                    continue
                path = dir_path + os.sep + filedir
                if os.path.isdir(path):
                    cleandir(path)
                else:
                    print "not dir: %s" % path
                if 'svn' in filedir:
                    toDelete.append(path)
            print "Files to be deleted:"
            for candidate in toDelete:
                print candidate
            print "Delete all? [y|n]"
            choice = raw_input()
            if choice == 'y':
                for filedir in toDelete:
                    if os.path.isdir(filedir):
                        os.rmdir(filedir)
                    else:
                        os.unlink(filedir)
            exit()

        if __name__ == "__main__":
            cleandir(dir)

    The print statements show that it's only "considering" the filedirs whose names start with ".". However, if I uncomment the continue statement, all the filedirs are "considered". Why is this? Or is there some other utility that already exists to recursively de-SVN-ify a directory tree?
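
    For the stated goal, a much shorter sketch using os.walk and shutil.rmtree handles both the recursion and the fact that .svn directories are non-empty (os.rmdir refuses those); the root path is a placeholder:

        import os
        import shutil

        root = '/path/to/project'
        for dirpath, dirnames, filenames in os.walk(root):
            if '.svn' in dirnames:
                shutil.rmtree(os.path.join(dirpath, '.svn'))  # delete the whole subtree
                dirnames.remove('.svn')   # and don't descend into what's gone

    As for an existing utility: svn export produces a clean, .svn-free copy of a working tree, if keeping the original intact is acceptable.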

    Read the article

  • Python re module becomes 20 times slower when called on greater than 101 different regex

    - by Wiil
    My problem is about parsing log files and removing the variable parts on each line in order to group them. For instance:

        s = re.sub(r'(?i)User [_0-9A-z]+ is ', r"User .. is ", s)
        s = re.sub(r'(?i)Message rejected because : (.*?) \(.+\)', r'Message rejected because : \1 (...)', s)

    I have about 120+ matching rules like those above. I found no performance issue while searching successively through 100 different regexes, but a huge slowdown appears when applying the 101st. The exact same behavior happens when replacing my rule set with:

        for a in range(100):
            s = re.sub(r'(?i)caught here' + str(a) + ':.+', r'( ... )', s)

    It got 20 times slower when putting range(101) instead:

        # range(100)
        % ./dashlog.py file.bz2
        == Took 2.1 seconds. ==

        # range(101)
        % ./dashlog.py file.bz2
        == Took 47.6 seconds. ==

    Why is such a thing happening? And is there any known workaround? (Happens on Python 2.6.6/2.7.2 on Linux and Windows.)
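
    The threshold matches the re module's internal compiled-pattern cache, which in these versions holds 100 entries (re._MAXCACHE) and is purged wholesale when it overflows, so 101 distinct patterns recompile on every single call. The standard workaround sketch: compile each rule once and reuse the pattern objects, bypassing the cache entirely:

        import re

        RULES = [
            (re.compile(r'(?i)User [_0-9A-z]+ is '), r'User .. is '),
            (re.compile(r'(?i)Message rejected because : (.*?) \(.+\)'),
             r'Message rejected because : \1 (...)'),
            # ... the remaining rules, each compiled once at import time ...
        ]

        def scrub(line):
            for pattern, replacement in RULES:
                line = pattern.sub(replacement, line)
            return line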

    Read the article

  • Python logging in Django

    - by Jeff
    I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head. I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:

        sviewlog = logging.getLogger('MyApp.views.scans')
        view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
        view_log_handler.setLevel(logging.INFO)
        view_log_handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
        sviewlog.addHandler(view_log_handler)

    Seems pretty simple. Here's the problem, though: whatever I write to the sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see: the code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this? Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
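
    Django can end up importing settings.py more than once per process, but logging.getLogger returns the same logger object for the same name every time, so the handlers pile up. A guard sketch that makes the setup idempotent regardless of how many times the module runs:

        import logging

        sviewlog = logging.getLogger('MyApp.views.scans')
        if not sviewlog.handlers:   # only attach on the first import
            handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
            handler.setLevel(logging.INFO)
            handler.setFormatter(logging.Formatter(
                '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
            sviewlog.addHandler(handler)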

    Read the article

  • Python coin-toss

    - by Andy
    I am new to Python, and I can't wrap my head around this. I have the following function defined:

        def FlipCoins(num_flips):
            heads_rounds_won = 0
            for i in range(10000):
                heads = 0
                tails = 0
                for j in range(num_flips):
                    dice = random.randint(0, 1)
                    if dice == 1:
                        heads += 1
                    else:
                        tails += 1
                if heads > tails:
                    heads_rounds_won += 1
            return heads_rounds_won

    Here is what it should do (but apparently doesn't): flip a coin num_flips times, count heads and tails, and see if there are more heads than tails. If yes, increment heads_rounds_won by 1. Repeat 10000 times. I would assume that heads_rounds_won will approximate 5000 (50%). And it does that for odd inputs: for example, 3, 5 or 7 will produce about 50%. However, even numbers produce much lower results, more like 34%, small numbers especially; with higher even numbers, like 800, the difference from 50% is much narrower. Why is this the case? Shouldn't any input produce about 50% heads/tails?
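
    The asymmetry is ties: with an even number of flips, heads == tails happens with probability C(n, n/2)/2^n (37.5% at n = 4, shrinking roughly like 1/sqrt(n), which is why 800 sits much closer to 50%), and the code counts those rounds as not-won; odd flip counts can never tie. A sketch that tallies ties separately:

        import random

        def flip_rounds(num_flips, rounds=10000):
            won, tied = 0, 0
            for _ in xrange(rounds):
                heads = sum(random.randint(0, 1) for _ in xrange(num_flips))
                if heads * 2 > num_flips:
                    won += 1
                elif heads * 2 == num_flips:
                    tied += 1                 # only possible for even num_flips
            return won, tied

        print flip_rounds(4)   # roughly (3125, 3750); wins plus half the ties is ~5000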

    Read the article

  • Global name not defined error in Django/Python trying to set foreignkey

    - by Mark
    Summary: I define a method createPage within a file called PageTree.py that takes a Source model object and a string. The method tries to generate a Page model object and to set the Page object's foreign key to refer to the Source object that was passed in. This throws a NameError exception!

    I'm trying to represent a website that is structured like a tree. I define the Django models Page and Source, Page representing a node on the tree and Source representing the contents of the page. (You can probably skip over these; this is a basic tree implementation using doubly linked nodes.)

        class Page(models.Model):
            name = models.CharField(max_length=50)
            parent = models.ForeignKey("self", related_name="children", null=True)
            firstChild = models.ForeignKey("self", related_name="origin", null=True)
            nextSibling = models.ForeignKey("self", related_name="prevSibling", null=True)
            previousSibling = models.ForeignKey("self", related_name="nxtSibling", null=True)
            source = models.ForeignKey("Source")

        class Source(models.Model):
            # A source that is non-dynamic will be referred to as a static source.
            # Dynamic sources contain locations that are names of functions.
            # Static sources contain locations that are places on disk.
            name = models.CharField(primary_key=True, max_length=50)
            isDynamic = models.BooleanField()
            location = models.CharField(max_length=100)

    I've written a Python program called PageTree.py which allows me to request nodes from the database and manipulate the structure of the tree. Here is the troublemaking method:

        def createPage(pageSource, pageName):
            page = Page()
            page.source = pageSource
            page.name = pageName
            page.save()
            return page

    I'm running this program in a shell through manage.py on Windows 7:

        manage.py shell
        from mysite.PageManager.models import Page, Source
        from mysite.PageManager.PageTree import *
        ... create someSource = Source(), populate the fields, and save it ...
        createPage(someSource, "test")
        ... NameError: global name 'source' is not defined

    When I type the function definition for createPage into the shell by hand, the call works without error. This is driving me bonkers, and help is appreciated.
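
    Since the hand-typed definition works, the failing name 'source' is plausibly coming from an older PageTree.py, or its stale .pyc, that still referenced a bare variable called source; reloading inside the shell is the quick check. A sketch of both the check and a compact equivalent of the method (hedged: the reload diagnosis is a guess from the symptoms):

        import mysite.PageManager.PageTree as PageTree
        reload(PageTree)   # pick up the current source, not stale bytecode

        def createPage(pageSource, pageName):
            # same behavior as the original, via the default manager
            return Page.objects.create(source=pageSource, name=pageName)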

    Read the article

  • Nested <ul><li> navigation menu using a recursive Python function

    - by Alex
    Hi. I want to render this data structure as an unordered list:

        menu = [
            [1, 0],
            [2, 1],
            [3, 1],
            [4, 3],
            [5, 3],
            [6, 5],
            [7, 1]
        ]

    where [n][0] is the key and [n][1] references the parent key. The desired output is:

        <ul>
            <li>Node 1</li>
            <ul>
                <li>Node 2</li>
                <li>Node 3</li>
                <ul>
                    <li>Node 4</li>
                    <li>Node 5</li>
                    <ul>
                        <li>Node 6</li>
                    </ul>
                </ul>
                <li>Node 7</li>
            </ul>
        </ul>

    I could probably do this without recursion, but that would be no fun. Unfortunately, I am having problems putting it all together. I don't have much experience with recursion, and this is proving to be very difficult for me to build and debug. What is the most efficient way to solve this problem in Python? Thanks!
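
    A recursive sketch matching the output above, with each level's <ul> emitted after the parent's <li> exactly as in the sample (nesting the list inside the <li> would be the more valid HTML):

        menu = [[1, 0], [2, 1], [3, 1], [4, 3], [5, 3], [6, 5], [7, 1]]

        def render(parent, depth=0):
            children = [key for key, par in menu if par == parent]
            if not children:
                return ''
            pad = '    ' * depth
            inner = ''
            for key in children:
                inner += '%s    <li>Node %d</li>\n' % (pad, key)
                inner += render(key, depth + 1)   # recurse: this node's own children
            return '%s<ul>\n%s%s</ul>\n' % (pad, inner, pad)

        print render(0)

    Each call collects the keys whose parent matches, emits their <li> items, and recurses to see whether any of them have children of their own.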

    Read the article

  • Image Gurus: Optimize my Python PNG transparency function

    - by ozone
    I need to replace all the white(ish) pixels in a PNG image with alpha transparency. I'm using Python on App Engine and so do not have access to libraries like PIL, ImageMagick, etc. App Engine does have an image library, but it is pitched mainly at image resizing. I found the excellent little pyPNG module and managed to knock up a little function that does what I need: make_transparent.py. Pseudo-code for the main loop would be something like:

        for each pixel:
            if pixel looks "quite white":
                set pixel values to transparent
            otherwise:
                keep existing pixel values

    and (assuming 8-bit values) "quite white" would be:

        where each r,g,b value is greater than "240"
        AND each r,g,b value is within "20" of each other

    This is the first time I've worked with raw pixel data in this way, and although it works, it also performs extremely poorly. It seems like there must be a more efficient way of processing the data without iterating over each pixel in this manner (matrices?). I was hoping someone with more experience in dealing with these things might be able to point out some of my more obvious mistakes/improvements in my algorithm. Thanks!
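
    In pure Python the big wins are hoisting work out of the inner loop and avoiding a function call per pixel; a sketch over pyPNG-style flat RGBA rows (the r,g,b,a,r,g,b,a,... row layout is the assumption here):

        def make_row_transparent(row):
            # row is a flat mutable sequence: r, g, b, a, r, g, b, a, ...
            for i in xrange(0, len(row), 4):
                r, g, b = row[i], row[i + 1], row[i + 2]
                if r > 240 and g > 240 and b > 240 and \
                   max(r, g, b) - min(r, g, b) < 20:
                    row[i + 3] = 0      # fully transparent; RGB left as-is
            return row

    If NumPy were available, the whole test would become one vectorized boolean mask over the pixel array, but on App Engine of that era a tight loop like the above is about as good as it gets.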

    Read the article

  • Faster quadrature decoder loops with Python code

    - by Kelei
    I'm working with a BeagleBone Black and using Adafruit's IO Python library. I wrote a simple quadrature-decoding function, and it works perfectly fine when the motor runs at about 1800 RPM. But when the motor runs at higher speeds, the code starts missing some of the interrupts and the encoder counts accumulate errors. Do you have any suggestions as to how I can make the code more efficient, or whether there are functions that can cycle the interrupts at a higher frequency? Thanks, Kel. Here's the code:

        # Define encoder count function
        def encodercount(term):
            global counts
            global Encoder_A
            global Encoder_A_old
            global Encoder_B
            global Encoder_B_old
            global error

            Encoder_A = GPIO.input('P8_7')  # store the encoder values at the time of the interrupt
            Encoder_B = GPIO.input('P8_8')

            if Encoder_A == Encoder_A_old and Encoder_B == Encoder_B_old:  # this will be an error
                error += 1
                print 'Error count is %s' % error
            elif (Encoder_A == 1 and Encoder_B_old == 0) or (Encoder_A == 0 and Encoder_B_old == 1):
                # this will be clockwise rotation
                counts += 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            elif (Encoder_A == 1 and Encoder_B_old == 1) or (Encoder_A == 0 and Encoder_B_old == 0):
                # this will be counter-clockwise rotation
                counts -= 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            else:  # this will be an error as well
                error += 1
                print 'Error count is %s' % error

            Encoder_A_old = Encoder_A  # store the current encoder values as old values
            Encoder_B_old = Encoder_B  # to be used as comparison in the next loop

        # Initialize the interrupts - these trigger on both the rising and falling edges
        GPIO.add_event_detect('P8_7', GPIO.BOTH, callback=encodercount)  # Encoder A
        GPIO.add_event_detect('P8_8', GPIO.BOTH, callback=encodercount)  # Encoder B

        # This part of the code runs normally in the background
        while True:
            time.sleep(1)
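
    Two software-side improvements help (a sketch below), though past a few kHz of edge rate the real fix is hardware: the BeagleBone's eQEP peripherals decode quadrature in silicon and expose the count through the kernel. In Python, drop the prints (console I/O inside the callback dominates its latency) and use a 4-bit state-transition table, which keeps the callback short and flags illegal transitions as missed edges:

        # Transition table: (old_A, old_B, new_A, new_B) packed into 4 bits -> delta
        QDELTA = {0b0001: +1, 0b0111: +1, 0b1110: +1, 0b1000: +1,
                  0b0010: -1, 0b1011: -1, 0b1101: -1, 0b0100: -1}

        state = 0
        counts = 0
        errors = 0

        def encodercount(channel):
            global state, counts, errors
            new = (GPIO.input('P8_7') << 1) | GPIO.input('P8_8')
            key = (state << 2) | new
            if key in QDELTA:
                counts += QDELTA[key]
            elif state != new:      # unclassifiable jump: an edge was missed
                errors += 1
            state = new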

    Read the article
