Search Results

Search found 19664 results on 787 pages for 'python for ever'.

Page 125 of 787

  • Python FTP grabbing and saving images issue

    - by PylonsN00b
    OK, so I have been messing with this all day long. I am fairly new to Python FTP. I searched through here and came up with this:

        images = notions_ftp.nlst()
        for image_name in image_names:
            if found_url == False:
                try:
                    for image in images:
                        ftp_image_name = "./%s" % image_name
                        if ftp_image_name == image:
                            found_url = True
                            image_name_we_want = image_name
                except:
                    pass

        # We failed to find an image for this product, it will have to be done manually
        if found_url == False:
            log.info("Image ain't there baby -- SKU: %s" % sku)
            return False

        # Hey we found something! Open the image....
        notions_ftp.retrlines("RETR %s" % image_name_we_want, open(image_name_we_want, "rb"))
        1/0

    I have narrowed the error down to the line before I divide by zero. Here is the error:

        Traceback (most recent call last):
          File "<console>", line 6, in <module>
          File "<console>", line 39, in insert_image
        IOError: [Errno 2] No such file or directory: '411483CC-IT,IM.jpg'

    If you follow the code you will see that the image IS in the directory, because image_name_we_want is only set if it is found in the directory listing taken on the first line of my code. And I KNOW it's there because I am looking at the FTP site myself and... it's freakin' there. At some point during all of this I got the image to save locally, which is what I really want, but I have long since forgotten what I used to make it do that. Either way, why does it think that the image isn't there when it clearly finds it in the listing?
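
    A minimal sketch of the download step that usually ends this IOError, assuming notions_ftp is a connected ftplib.FTP instance and image_name_we_want holds the remote filename. The local file has to be opened for writing ("wb", which creates it) rather than reading ("rb", which raises IOError when the file does not exist locally yet), and retrbinary is used so the JPEG bytes are not run through the line-oriented retrlines:

        local_file = open(image_name_we_want, "wb")            # "wb" creates the local file
        notions_ftp.retrbinary("RETR %s" % image_name_we_want, local_file.write)
        local_file.close()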

    Read the article

  • Python: Serial Transmission

    - by Silent Elektron
    I have an image stack of 500 JPEG images of 640x480. I intend to take 500 pixels (the first pixel of each image), make them into a list, and then send that via COM1 to an FPGA where I do my further processing. I have a couple of questions here:

    1. How do I import all 500 images into Python at once, and how do I store them?
    2. How do I send the 500-pixel list via COM1 to the FPGA?

    I tried the following: I converted the JPEG images to intensity values (each pixel is denoted by a number between 0 and 255) in MATLAB, saved the intensity values in a text file, and read that file using readlines(). But it became too cumbersome to make the intensity-value files for all 500 images! I then used NumPy to put the read files in a matrix and pick the first pixel of each image. But when I send it, it comes out like: [56, 61, 78, ... ,71, 91]. Is there a way to eliminate the [ ] and , while sending the data serially? Thanks in advance! :)
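
    A short sketch of both steps, assuming PIL (Pillow) is available for reading the JPEGs and pyserial for the COM port; the directory name and baud rate below are placeholders. Writing a bytearray sends the raw pixel values themselves, with no brackets or commas, which is normally what the FPGA side expects:

        import glob
        import serial
        from PIL import Image

        # Collect the first pixel (0-255 intensity) of every image in the stack.
        first_pixels = []
        for path in sorted(glob.glob("stack/*.jpg")):
            img = Image.open(path).convert("L")        # "L" = 8-bit grayscale
            first_pixels.append(img.getpixel((0, 0)))

        # Send raw bytes, not the printed representation of the list.
        port = serial.Serial("COM1", baudrate=115200)
        port.write(bytearray(first_pixels))
        port.close()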

    Read the article

  • Python Multiprocessing Question

    - by Avan
    My program has 2 parts, divided into the core and the downloader. The core handles all the app logic while the downloader just downloads URLs. Right now, I am trying to use the Python multiprocessing module to run the core as one process and the downloader as another. The first problem I noticed was that if I spawn the downloader process from the core process, so that the downloader is the child and the core is the parent, the core process (parent) is blocked until the child is finished. I do not want this behavior, though. I would like to have a core process and a downloader process that are both able to execute their code and communicate with each other. Example:

        def main():
            jobQueue = Queue()
            jobQueue.put("http://google.com")
            d = Downloader(jobQueue)
            p = Process(target=d.start())
            p.start()

        if __name__ == '__main__':
            freeze_support()
            main()

    where Downloader's start() just takes the URL out of the queue and downloads it. In order to have the 2 processes unblocked, would I need to create 2 processes from the parent process and then share something between them?
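
    A minimal sketch of the likely fix, keeping the Downloader class from the question: Process(target=d.start()) calls start() immediately in the parent, which is what blocks the core, whereas passing the bound method itself (no parentheses) hands it to the child process to run:

        from multiprocessing import Process, Queue, freeze_support

        def main():
            job_queue = Queue()
            job_queue.put("http://google.com")
            d = Downloader(job_queue)       # Downloader as defined in the question
            p = Process(target=d.start)     # note: no () -- pass the method, don't call it
            p.start()
            # ...the core keeps executing here and can keep feeding job_queue...
            p.join()                        # wait for the downloader once the core is done

        if __name__ == '__main__':
            freeze_support()
            main()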

    Read the article

  • Python: Errors saving and loading objects with pickle module

    - by Peterstone
    Hi, I am trying to load and save objects with this piece of code, which I got from a question I asked a week ago: Python: saving and loading objects and using pickle. The piece of code is this:

        class Fruits:
            pass

        banana = Fruits()
        banana.color = 'yellow'
        banana.value = 30

        import pickle
        filehandler = open("Fruits.obj", "wb")
        pickle.dump(banana, filehandler)
        filehandler.close()

        file = open("Fruits.obj", 'rb')
        object_file = pickle.load(file)
        file.close()

        print(object_file.color, object_file.value, sep=', ')

    At first glance the piece of code works well: I can load the object and see the 'color' and 'value' that were saved. But what I am after is to close a session, open a new one, and load what I saved in the past session. I close the session after the filehandler.close() line, open a new one, and run the rest of the code; then, after object_file = pickle.load(file), I get this error:

        Traceback (most recent call last):
          File "<pyshell#5>", line 1, in <module>
            object_file = pickle.load(file)
          File "C:\Python31\lib\pickle.py", line 1365, in load
            encoding=encoding, errors=errors).load()
        AttributeError: 'module' object has no attribute 'Fruits'

    Can anyone explain to me what this error message means and tell me how to solve this problem? Thanks so much and happy new year!!
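
    A minimal sketch of the usual fix, assuming a helper file named fruits.py next to the script (the filename is a placeholder): pickle stores a reference to the class (module name plus 'Fruits'), not the class body itself, so the class has to be importable in the new session before load() can rebuild the object. A class typed directly into the shell lives in a module that pickle cannot re-import later, which is what the AttributeError is complaining about:

        # fruits.py -- keep the class in an importable module
        class Fruits:
            pass

        # load_fruits.py -- run this in the new session
        import pickle
        from fruits import Fruits    # makes the class findable during unpickling

        with open("Fruits.obj", "rb") as f:
            banana = pickle.load(f)
        print(banana.color, banana.value)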

    Read the article

  • How do I override a python import?

    - by Evan Plaice
    So I'm working on pypreprocessor, which is a preprocessor that takes c-style directives, and I've been able to make it work like a traditional preprocessor (it's self-consuming and executes postprocessed code on-the-fly) except that it breaks library imports. The problem is this: the preprocessor runs through the file, processes it, outputs to a temp file, and exec()s the temp file. Libraries that are imported need to be handled a little differently, because they aren't executed but rather loaded and made accessible to the caller module. What I need to be able to do is: interrupt the import (since the preprocessor is being run in the middle of the import), load the postprocessed code as a tempModule, and replace the original import with the tempModule, to trick the calling script into believing that the tempModule is the original module. I have searched everywhere and so far have no solution. This question is the closest I've seen so far to providing an answer: http://stackoverflow.com/questions/1096216/override-namespace-in-python

    Here's what I have:

        # remove the bytecode file created by the first import
        os.remove(moduleName + '.pyc')
        # remove the first import
        del sys.modules[moduleName]
        # import the postprocessed module
        tmpModule = __import__(tmpModuleName)
        # set the first module's reference to point to the preprocessed module
        sys.modules[moduleName] = tmpModule

    moduleName is the name of the original module, tmpModuleName is the name of the postprocessed code file. The strange part is that this solution still runs completely normally, as if the first module had loaded normally; unless you remove the last line, in which case you get a module-not-found error. Hopefully someone on SO knows a lot more about imports than I do, because this one has me stumped.

    Read the article

  • Calculating a total cost in Python

    - by Sérgio Lourenço
    I'm trying to create a trip planner in Python, but after I defined all the functions I'm not able to call them and calculate the total in the last function, tripCost(). In tripCost, I want to enter the days and travel destination (city) and have the program run the functions and give me the combined result of the 3 functions previously defined. Code:

        def hotelCost():
            days = raw_input("How many nights will you stay at the hotel?")
            total = 140 * int(days)
            print "The total cost is", total, "dollars"

        def planeRideCost():
            city = raw_input("Wich city will you travel to\n")
            if city == 'Charlotte':
                return "The cost is 183$"
            elif city == 'Tampa':
                return "The cost is 220$"
            elif city == 'Pittsburgh':
                return "The cost is 222$"
            elif city == 'Los Angeles':
                return "The cost is 475$"
            else:
                return "That's not a valid destination"

        def rentalCarCost():
            rental_days = raw_input("How many days will you rent the car\n")
            discount_3 = 40 * int(rental_days) * 0.2
            discount_7 = 40 * int(rental_days) * 0.5
            total_rent3 = 40 * int(rental_days) - discount_3
            total_rent7 = 40 * int(rental_days) - discount_7
            cost_day = 40 * int(rental_days)
            if int(rental_days) >= 3:
                print "The total cost is", total_rent3, "dollars"
            elif int(rental_days) >= 7:
                print "The total cost is", total_rent7, "dollars"
            else:
                print "The total cost is", cost_day, "dollars"

        def tripCost():
            travel_city = raw_input("What's our destination\n")
            days_travel = raw_input("\nHow many days will you stay\n")
            total_trip_cost = hotelCost(int(day_travel)) + planeRideCost(str(travel_city)) + rentalCost(int(days_travel))
            return "The total cost with the trip is", total_trip_cost

        tripCost()
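
    A minimal sketch of one way to wire this together, using the prices from the question: each helper takes its inputs as parameters and returns a number, so tripCost() can simply add the three results. (The snippet above mixes printing with returning strings, calls hotelCost with an argument it doesn't accept, and misspells days_travel and rentalCarCost at the call site.)

        def hotel_cost(nights):
            return 140 * nights

        def plane_ride_cost(city):
            fares = {'Charlotte': 183, 'Tampa': 220, 'Pittsburgh': 222, 'Los Angeles': 475}
            return fares[city]               # raises KeyError for an unknown destination

        def rental_car_cost(days):
            cost = 40 * days
            if days >= 7:
                return cost - cost * 0.5     # checked before the 3-day case so it can apply
            elif days >= 3:
                return cost - cost * 0.2
            return cost

        def trip_cost(city, days):
            return hotel_cost(days) + plane_ride_cost(city) + rental_car_cost(days)

        city = raw_input("What's our destination\n")
        days = int(raw_input("How many days will you stay\n"))
        print "The total cost of the trip is", trip_cost(city, days), "dollars"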

    Read the article

  • python crc32 woes

    - by lazyr
    I'm writing a Python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decompressible blocks of data, so I only need to find a block (they are delimited by magic bits), then create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no? The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait. The data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use the binascii.crc32 function on all but the last byte, and then a function of my own to update the computed crc with the last 1-7 bits. But hours of coding and testing have left me bewildered, and my puzzlement can be boiled down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the Wikipedia article? You start with 0b00000000, pad with 32 zeros, then do polynomial division with 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits are the checksum, and how can that not be all zeroes? I've searched Google for answers and looked at the code of several crc32 implementations without finding any clue to why this is so.
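
    A short sketch of what binascii is actually computing, which explains the non-zero result: binascii.crc32 implements the zlib/PKZIP flavour of CRC-32, which starts the register at 0xFFFFFFFF, processes bits in reflected order, and XORs the final register with 0xFFFFFFFF, so a lone zero byte does not come out as zero the way the bare polynomial-division walk-through suggests. (One extra caveat worth verifying against the bzip2 spec: bzip2's own block CRC is, as far as I recall, the MSB-first variant of CRC-32, so binascii.crc32 may not reproduce it bit-for-bit even once the byte-boundary issue is solved.)

        import binascii

        # The zlib-style CRC-32 of a single NUL byte is not zero because of the
        # initial value and final XOR baked into the definition.
        print hex(binascii.crc32("\x00") & 0xffffffff)   # expected: 0xd202ef8d, not 0x0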

    Read the article

  • Python 2.7.3 memory error

    - by Tom Baker
    I have a specific case with Python code. Every time I run the code, the RAM usage keeps increasing until it reaches 1.8 GB and the program crashes.

        import itertools
        import csv
        import pokersleuth

        cards = ['2s', '3s', '4s', '5s', '6s', '7s', '8s', '9s', 'Ts', 'Js', 'Qs', 'Ks', 'As',
                 '2h', '3h', '4h', '5h', '6h', '7h', '8h', '9h', 'Th', 'Jh', 'Qh', 'Kh', 'Ah',
                 '2c', '3c', '4c', '5c', '6c', '7c', '8c', '9c', 'Tc', 'Jc', 'Qc', 'Kc', 'Ac',
                 '2d', '3d', '4d', '5d', '6d', '7d', '8d', '9d', 'Td', 'Jd', 'Qd', 'Kd', 'Ad']

        flop = itertools.combinations(cards, 3)
        a1 = 'Ks'; a2 = 'Qs'
        b1 = 'Jc'; b2 = 'Jd'
        cards1 = a1 + a2
        cards2 = b1 + b2
        number = 0
        n = 0
        m = 0
        for row1 in flop:
            if (row1[0] <> a1 and row1[0] <> a2 and row1[0] <> b1 and row1[0] <> b2) and \
               (row1[1] <> a1 and row1[1] <> a2 and row1[1] <> b1 and row1[1] <> b2) and \
               (row1[2] <> a1 and row1[2] <> a2 and row1[2] <> b1 and row1[2] <> b2):
                for row2 in cards:
                    if (row2 <> a1 and row2 <> a2 and row2 <> b1 and row2 <> b2 and
                            row2 <> row1[0] and row2 <> row1[1] and row2 <> row1[2]):
                        s = pokersleuth.compute_equity(row1[0]+row1[1]+row1[2]+row2, (cards1, cards2))
                        if s[0] >= 0.5:
                            number += 1
                        del s[:]
                        del s[:]
                print number/45.0
                number = 0
                n += 1

    Read the article

  • Unable to retrieve search results from server side: Facebook Graph API using Python

    - by DjangoRocks
    Hi all, I'm doing some simple Python + FB Graph training on my own, and I've hit a weird problem:

        import time
        import sys
        import urllib2
        import urllib
        from json import loads

        base_url = "https://graph.facebook.com/search?q="
        post_id = None
        post_type = None
        user_id = None
        message = None
        created_time = None

        def doit(hour):
            page = 1
            search_term = "\"Plastic Planet\""
            encoded_search_term = urllib.quote(search_term)
            print encoded_search_term
            type = "&type=post"
            url = "%s%s%s" % (base_url, encoded_search_term, type)
            print url
            while(1):
                try:
                    response = urllib2.urlopen(url)
                except urllib2.HTTPError, e:
                    print e
                finally:
                    pass
                content = response.read()
                content = loads(content)
                print "=================================="
                for c in content["data"]:
                    print c
                print "****************************************"
                try:
                    content["paging"]
                    print "current URL"
                    print url
                    print "next page!------------"
                    url = content["paging"]["next"]
                    print url
                except:
                    pass
                finally:
                    pass
                """
                print "new URL is ======================="
                print url
                print "=================================="
                """
                print url

    What I'm trying to do here is to automatically page through the search results by following content["paging"]["next"]. But the weird thing is that no data is returned; I received the following, even on the very first loop:

        {"data":[]}

    Yet when I copied the URL into a browser, a lot of results were returned. I've also tried a version with my access token and the same thing happens. Can anyone enlighten me?

    +++++++++++++++++++ EDITED and SIMPLIFIED ++++++++++++++++++

    OK, thanks to TryPyPy, here's the simplified and edited version of my previous question. Why does this:

        import urllib2
        url = "https://graph.facebook.com/searchq=%22Plastic+Planet%22&type=post&limit=25&until=2010-12-29T19%3A54%3A56%2B0000"
        response = urllib2.urlopen(url)
        print response.read()

    result in {"data":[]}, while the same URL produces a lot of data in a browser? Anyone? Best regards.

    Read the article

  • Why can't I handle a KeyboardInterrupt in python?

    - by Josh
    I'm writing Python 2.6.6 code on Windows that looks like this:

        try:
            dostuff()
        except KeyboardInterrupt:
            print "Interrupted!"
        except:
            print "Some other exception?"
        finally:
            print "cleaning up...."
        print "done."

    dostuff() is a function that loops forever, reading a line at a time from an input stream and acting on it. I want to be able to stop it and clean up when I hit ctrl-c. What's happening instead is that the code under except KeyboardInterrupt: isn't running at all. The only thing that gets printed is "cleaning up...", and then a traceback is printed that looks like this:

        Traceback (most recent call last):
          File "filename.py", line 119, in <module>
            print 'cleaning up...'
        KeyboardInterrupt

    So the exception-handling code is NOT running, and the traceback claims that a KeyboardInterrupt occurred during the finally clause, which doesn't make sense because hitting ctrl-c is what caused that part to run in the first place! Even the generic except: clause isn't running.

    EDIT: Based on the comments, I replaced the contents of the try: block with sys.stdin.read(). The problem still occurs exactly as described, with the first line of the finally: block running and then printing the same traceback.

    Read the article

  • Very simple Python functions spend a long time in the function itself and not in subfunctions

    - by John Salvatier
    I have spent many hours trying to figure out what is going on here. The function grad_logp in the code below is called many times in my program, and cProfile (visualized with RunSnakeRun) reveals that grad_logp spends about .00004s 'locally' on every call, not in any functions it calls, and the function n spends about .00006s locally on every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem like this is function-call overhead, as other Python functions spend far less time 'locally', and merging grad_logp and n does not make my program faster, but the operations these two functions do seem rather trivial. Does anyone have any suggestions on what might be happening? Have I done something obviously inefficient? Am I misunderstanding how cProfile works?

        def grad_logp(self, variable, calculation_set):
            p = params(self.p, self.parents)
            return self.n(variable, self.p)

        def n(self, variable, p):
            gradient = self.gg(variable, p)
            return np.reshape(gradient, np.shape(variable.value))

        def gg(self, variable, p):
            if variable is self:
                gradient = self._grad_logps['x'](x=self.value, **p)
            else:
                gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p)
                                            for parameter, value in self.parents.iteritems()])
            return gradient

    Read the article

  • "Only length-1 arrays can be converted to Python scalars" error in Python

    - by Randy
        from numpy import *
        from pylab import *
        from math import *

        def LogisticMap(a, x):
            return 4.*a*x*(1.-x)

        def CosineMap(a, x):
            return a*cos(x/(2.*pi))

        def TentMap(a, x):
            if x >= 0 or x < 0.5:
                return 2.*a*x
            if x >= 0.5 or x <= 1.:
                return 2.*a*(1.-x)

        a = 0.98
        N = 40
        xaxis = arange(0.0, N, 1.0)
        Func = CosineMap

        subplot(211)
        title(str(Func.func_name) + ' at a=%g and its second iterate' % a)
        ylabel('X(n+1)')  # set y-axis label
        plot(xaxis, Func(a, xaxis), 'g', antialiased=True)

        subplot(212)
        ylabel('X(n+1)')  # set y-axis label
        xlabel('X(n)')    # set x-axis label
        plot(xaxis, Func(a, Func(a, xaxis)), 'bo', antialiased=True)

    My program is supposed to take any of the three defined functions and plot it. They all take a value x from the array xaxis, which runs from 0 to N, and return a value. I want it to plot a graph of xaxis vs f(xaxis), with f being any of the three functions above. The LogisticMap function works fine, but for CosineMap I get the error "only length-1 arrays can be converted to Python scalars", and for TentMap I get the error "The truth value of an array with more than one element is ambiguous, use a.any() or a.all()". My TentMap function is supposed to return 2*a*x if 0 <= x < 0.5 and 2*a*(1-x) if 0.5 <= x <= 1.
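
    A sketch of array-friendly versions of the two failing maps, assuming numpy is imported as np: math.cos and the if x >= 0.5 test both expect a single scalar, which is what triggers the two error messages when x is a whole numpy array.

        import numpy as np

        def cosine_map(a, x):
            return a * np.cos(x / (2. * np.pi))    # np.cos works elementwise on arrays

        def tent_map(a, x):
            x = np.asarray(x)
            # elementwise branch instead of a scalar if/else
            return np.where(x < 0.5, 2. * a * x, 2. * a * (1. - x))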

    Read the article

  • Dictionaries with more than one key per value in Python

    - by nickname
    I am attempting to create a nice interface to access a data set where each value has several possible keys. For example, suppose that I have both a number and a name for each value in the data set. I want to be able to access each value using either the number OR the name. I have considered several possible implementations:

    1. Using two separate dictionaries, one for the data values organized by number, and one for the data values organized by name.
    2. Simply assigning two keys to the same value in a dictionary.
    3. Creating dictionaries mapping each name to the corresponding number, and vice versa.
    4. Attempting to create a hash function that maps each name to a number, etc. (related to the above).
    5. Creating an object to encapsulate all three pieces of data, then using one key to map dictionary keys to the objects and simply searching the dictionary to map the other key to the object.

    None of these seem ideal. The first seems ugly and unmaintainable. The second also seems fragile. The third and fourth seem plausible, but seem to require either a lot of manual specification or an overly complex implementation. Finally, the fifth loses constant-time performance for one of the lookups. In C/C++, I believe I would use pointers to reference the same piece of data from different keys. I know that the problem is rather similar to a database lookup by a non-key column; however, I would like (if possible) to maintain the approximate O(1) performance of Python dictionaries. What is the most Pythonic way to achieve this data structure?
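
    A minimal sketch of option 2 from the list above: dictionary values are references, so pointing both the number and the name at the same record costs no extra copies and keeps O(1) lookup for either key (the class and field names below are hypothetical).

        class Record(object):
            def __init__(self, number, name, value):
                self.number, self.name, self.value = number, name, value

        index = {}

        def add_record(record):
            index[record.number] = record    # both keys reference the *same* object
            index[record.name] = record

        add_record(Record(42, "answer", 3.14))
        assert index[42] is index["answer"]   # same underlying record, not a copy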

    Read the article

  • Python: problem with tiny script to delete files

    - by Rosarch
    I have a project that used to be under SVN, but now I'm moving it to Mercurial, so I want to clear out all the .svn files. It's a little too big to do by hand, so I decided to write a Python script to do it. But it isn't working.

        def cleandir(dir_path):
            print "Cleaning %s\n" % dir_path
            toDelete = []
            files = os.listdir(dir_path)
            for filedir in files:
                print "considering %s" % filedir
                # continue
                if filedir == '.' or filedir == '..':
                    print "skipping %s" % filedir
                    continue
                path = dir_path + os.sep + filedir
                if os.path.isdir(path):
                    cleandir(path)
                else:
                    print "not dir: %s" % path
                if 'svn' in filedir:
                    toDelete.append(path)
            print "Files to be deleted:"
            for candidate in toDelete:
                print candidate
            print "Delete all? [y|n]"
            choice = raw_input()
            if choice == 'y':
                for filedir in toDelete:
                    if os.path.isdir(filedir):
                        os.rmdir(filedir)
                    else:
                        os.unlink(filedir)
            exit()

        if __name__ == "__main__":
            cleandir(dir)

    The print statements show that it's only "considering" the filedirs whose names start with ".". However, if I uncomment the continue statement, all the filedirs are "considered". Why is this? Or is there some other utility that already exists to recursively de-SVN-ify a directory tree?
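
    A short sketch of the usual recipe for this particular job, using os.walk and shutil.rmtree so each .svn directory is removed together with its contents in one call:

        import os
        import shutil

        def remove_svn_dirs(root):
            for dirpath, dirnames, filenames in os.walk(root):
                if '.svn' in dirnames:
                    shutil.rmtree(os.path.join(dirpath, '.svn'))
                    dirnames.remove('.svn')    # don't descend into the tree we just deleted

        remove_svn_dirs('.')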

    Read the article

  • Python re module becomes 20 times slower when used with more than 100 different regexes

    - by Wiil
    My problem is about parsing log files and removing the variable parts on each line so that I can group them. For instance:

        s = re.sub(r'(?i)User [_0-9A-z]+ is ', r"User .. is ", s)
        s = re.sub(r'(?i)Message rejected because : (.*?) \(.+\)', r'Message rejected because : \1 (...)', s)

    I have about 120+ matching rules like those above. I found no performance issues while running through 100 different regexes in succession, but a huge slowdown appears when applying 101 regexes. Exactly the same behavior happens when replacing my rule set with

        for a in range(100):
            s = re.sub(r'(?i)caught here'+str(a)+':.+', r'( ... )', s)

    It got 20 times slower when using range(101) instead:

        # range(100)
        % ./dashlog.py file.bz2
        == Took 2.1 seconds. ==

        # range(101)
        % ./dashlog.py file.bz2
        == Took 47.6 seconds. ==

    Why is this happening? And is there any known workaround? (It happens with Python 2.6.6/2.7.2 on Linux and Windows.)
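
    A sketch of the usual workaround, assuming the 120+ rules can be listed up front: re.sub() on a pattern string goes through the module's internal compiled-pattern cache, which holds roughly 100 entries on these Python versions (an assumption worth checking against your exact release) and is flushed when it overflows, forcing recompilation on every call. Compiling each pattern once sidesteps the cache entirely:

        import re

        RULES = [
            (re.compile(r'(?i)User [_0-9A-z]+ is '), r'User .. is '),
            (re.compile(r'(?i)Message rejected because : (.*?) \(.+\)'),
             r'Message rejected because : \1 (...)'),
            # ... the other ~120 rules ...
        ]

        def normalize(line):
            for pattern, replacement in RULES:
                line = pattern.sub(replacement, line)
            return line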

    Read the article

  • Global name not defined error in Django/Python trying to set foreignkey

    - by Mark
    Summary: I define a method createPage within a file called PageTree.py that takes a Source model object and a string. The method tries to generate a Page model object and set the Page object's foreign key to refer to the Source object which was passed in. This throws a NameError exception!

    I'm trying to represent a website which is structured like a tree. I define the Django models Page and Source, Page representing a node on the tree and Source representing the contents of the page. (You can probably skip over these; this is a basic tree implementation using doubly linked nodes.)

        class Page(models.Model):
            name = models.CharField(max_length=50)
            parent = models.ForeignKey("self", related_name="children", null=True);
            firstChild = models.ForeignKey("self", related_name="origin", null=True);
            nextSibling = models.ForeignKey("self", related_name="prevSibling", null=True);
            previousSibling = models.ForeignKey("self", related_name="nxtSibling", null=True);
            source = models.ForeignKey("Source");

        class Source(models.Model):
            # A source that is non dynamic will be referred to as a static source
            # Dynamic sources contain locations that are names of functions
            # Static sources contain locations that are places on disk
            name = models.CharField(primary_key=True, max_length=50)
            isDynamic = models.BooleanField()
            location = models.CharField(max_length=100);

    I've coded a Python program called PageTree.py which allows me to request nodes from the database and manipulate the structure of the tree. Here is the troublemaking method:

        def createPage(pageSource, pageName):
            page = Page()
            page.source = pageSource
            page.name = pageName
            page.save()
            return page

    I'm running this program in a shell through manage.py on Windows 7:

        manage.py shell
        from mysite.PageManager.models import Page, Source
        from mysite.PageManager.PageTree import *
        ... create someSource = Source(), populate the fields, and save it ...
        createPage(someSource, "test")
        ...
        NameError: global name 'source' is not defined

    When I type the function definition for createPage into the shell by hand, the call works without error. This is driving me bonkers and help is appreciated.

    Read the article

  • Python coin-toss

    - by Andy
    I am new to Python, and I can't wrap my head around this. I have the following function defined:

        def FlipCoins(num_flips):
            heads_rounds_won = 0
            for i in range(10000):
                heads = 0
                tails = 0
                for j in range(num_flips):
                    dice = random.randint(0,1)
                    if dice == 1:
                        heads += 1
                    else:
                        tails += 1
                if heads > tails:
                    heads_rounds_won += 1
            return heads_rounds_won

    Here is what it should do (but apparently doesn't): flip a coin num_flips times, count heads and tails, and see if there are more heads than tails. If yes, increment heads_rounds_won by 1. Repeat 10000 times. I would assume that heads_rounds_won would approximate 5000 (50%). And it does that for odd numbers as input. For example, 3, 5 or 7 will produce about 50%. However, even numbers will produce much lower results, more like 34%, small numbers especially; with higher even numbers, like 800, the difference from 50% is much narrower. Why is this the case? Shouldn't any input produce about 50% heads/tails?
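
    A quick sketch that makes the missing case visible: with an even number of flips a round can end exactly tied, and tied rounds count as "not won" in the code above, which is why even inputs land well under 50% (small even numbers most of all, since ties are relatively common there).

        import random

        def flip_rounds(num_flips, rounds=10000):
            wins = ties = 0
            for _ in range(rounds):
                heads = sum(random.randint(0, 1) for _ in range(num_flips))
                tails = num_flips - heads
                if heads > tails:
                    wins += 1
                elif heads == tails:
                    ties += 1
            return wins, ties

        print flip_rounds(2)   # roughly (2500, 5000): wins ~25%, ties ~50%
        print flip_rounds(3)   # roughly (5000, 0): no ties possible with an odd count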

    Read the article

  • Python logging in Django

    - by Jeff
    I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like to have different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head. I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:

        sviewlog = logging.getLogger('MyApp.views.scans')
        view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
        view_log_handler.setLevel(logging.INFO)
        view_log_handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
        sviewlog.addHandler(view_log_handler)

    Seems pretty simple. Here's the problem, though: whatever I write to the sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see. The code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this? Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
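
    A sketch of a common guard for this double-configuration symptom: since the settings module can end up being executed more than once in the same process, only attach the handler if this named logger does not already have one.

        import logging

        sviewlog = logging.getLogger('MyApp.views.scans')
        if not sviewlog.handlers:                          # skip if already configured
            handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
            handler.setLevel(logging.INFO)
            handler.setFormatter(logging.Formatter(
                '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
            sviewlog.addHandler(handler)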

    Read the article

  • Faster quadrature decoder loops with Python code

    - by Kelei
    I'm working with a BeagleBone Black and using Adafruit's IO Python library. I wrote a simple quadrature decoding function and it works perfectly fine when the motor runs at about 1800 RPM. But when the motor runs at higher speeds, the code starts missing some of the interrupts and the encoder counts start to accumulate errors. Do you have any suggestions as to how I can make the code more efficient, or are there functions which can service the interrupts at a higher frequency? Thanks, Kel. Here's the code:

        # Define encoder count function
        def encodercount(term):
            global counts
            global Encoder_A
            global Encoder_A_old
            global Encoder_B
            global Encoder_B_old
            global error
            Encoder_A = GPIO.input('P8_7')  # stores the value of the encoders at time of interrupt
            Encoder_B = GPIO.input('P8_8')
            if Encoder_A == Encoder_A_old and Encoder_B == Encoder_B_old:
                # this will be an error
                error += 1
                print 'Error count is %s' % error
            elif (Encoder_A == 1 and Encoder_B_old == 0) or (Encoder_A == 0 and Encoder_B_old == 1):
                # this will be clockwise rotation
                counts += 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            elif (Encoder_A == 1 and Encoder_B_old == 1) or (Encoder_A == 0 and Encoder_B_old == 0):
                # this will be counter-clockwise rotation
                counts -= 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            else:
                # this will be an error as well
                error += 1
                print 'Error count is %s' % error
            Encoder_A_old = Encoder_A  # store the current encoder values as old values
            Encoder_B_old = Encoder_B  # to be used as comparison in the next loop

        # Initialize the interrupts - these trigger on both the rising and falling edges
        GPIO.add_event_detect('P8_7', GPIO.BOTH, callback=encodercount)  # Encoder A
        GPIO.add_event_detect('P8_8', GPIO.BOTH, callback=encodercount)  # Encoder B

        # This is the part of the code which runs normally in the background
        while True:
            time.sleep(1)

    Read the article

  • importing symbols from python package into caller's namespace

    - by Paul C
    I have a little internal DSL written in a single Python file that has grown to the point where I would like to split the contents across a number of different directories and files. The new directory structure currently looks like this:

        dsl/
            __init__.py
            types/
                __init__.py
                type1.py
                type2.py

    and each type file contains a class (e.g. Type1). My problem is that I would like to keep the implementation of code that uses this DSL as simple as possible, something like:

        import dsl

        x = Type1()
        ...

    This means that all of the important symbols should be available directly in the user's namespace. I have tried updating the top-level __init__.py file to import the relevant symbols:

        from types.type1 import Type1
        from types.type2 import Type2
        ...
        print globals()

    The output shows that the symbols are imported correctly, but they still aren't present in the caller's code (the code that's doing the import dsl). I think the problem is that the symbols are actually being imported into the 'dsl' namespace. How can I change this so that the classes are also directly available in the caller's namespace?
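
    A sketch of the usual pattern: a plain import dsl only binds the name dsl in the caller's namespace, so the caller has to pull the names in explicitly, typically with a star import driven by __all__ in the package's __init__.py (or an explicit from dsl import Type1).

        # dsl/__init__.py
        from dsl.types.type1 import Type1    # absolute imports; `from .types.type1 import Type1`
        from dsl.types.type2 import Type2    # is the relative-import spelling
        __all__ = ['Type1', 'Type2']

        # user code
        from dsl import *                    # or: from dsl import Type1, Type2
        x = Type1()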

    Read the article

  • Nested <ul><li> navigation menu using a recursive Python function

    - by Alex
    Hi. I want to render this data structure as an unordered list:

        menu = [
            [1, 0],
            [2, 1],
            [3, 1],
            [4, 3],
            [5, 3],
            [6, 5],
            [7, 1],
        ]

    [n][0] is the key, and [n][1] references the parent key. The desired output is:

        <ul>
          <li>Node 1</li>
          <ul>
            <li>Node 2</li>
            <li>Node 3</li>
            <ul>
              <li>Node 4</li>
              <li>Node 5</li>
              <ul>
                <li>Node 6</li>
              </ul>
            </ul>
            <li>Node 7</li>
          </ul>
        </ul>

    I could probably do this without recursion, but that would be no fun. Unfortunately, I am having problems putting it all together. I don't have much experience with recursion, and this is proving to be very difficult for me to build and debug. What is the most efficient way to solve this problem in Python? Thanks!
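
    A recursive sketch that produces the nesting shown above from the [key, parent] pairs (it follows the question's output shape, where a child <ul> comes after its parent's <li> as a sibling rather than being nested inside it):

        menu = [[1, 0], [2, 1], [3, 1], [4, 3], [5, 3], [6, 5], [7, 1]]

        def render(parent=0, indent=0):
            children = [key for key, par in menu if par == parent]
            if not children:
                return ''
            pad = '  ' * indent
            out = pad + '<ul>\n'
            for key in children:
                out += pad + '  <li>Node %d</li>\n' % key
                out += render(key, indent + 1)     # recurse for this node's own children
            out += pad + '</ul>\n'
            return out

        print render()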

    Read the article

  • List modification in Python

    - by user2945143
    We are given an algorithm that modifies a list of the numbers from 1 to 28. There are 5 steps in the algorithm, and we have written functions for each step (all correct). We now need to write a function that combines all 5 steps. The algorithm modifies the list to get a value, and each time you get a new value, you use the list created by the algorithm in the previous step. This is what we have so far:

        get_card_at_top_index(insert_top_to_bottom(triple_cut((move_joker_2(move_joker_1(deck))))))

    When we run the code to generate get_card_at_top_index, the first answer is correct. However, the rest are not: instead of using the new list, Python uses the value generated from the last step. What did we do wrong?

    UPDATE: The other 5 functions passed their tests; they are correct.

        Code 1 (list)  = list1
        Code 2 (list1) = list2
        Code 3 (list2) = list3
        Code 4 (list3) = list4
        Code 5 (list4) = list5

    We generate a number from list5. We then need to run the algorithm again to generate 25 more numbers, and we use list5 when starting again from step 1.

    Read the article

  • Image Gurus: Optimize my Python PNG transparency function

    - by ozone
    I need to replace all the white(ish) pixels in a PNG image with alpha transparency. I'm using Python on App Engine and so do not have access to libraries like PIL, ImageMagick, etc. App Engine does have an image library, but it is aimed mainly at image resizing. I found the excellent little pyPNG module and managed to knock up a little function that does what I need: make_transparent.py. Pseudo-code for the main loop would be something like:

        for each pixel:
            if pixel looks "quite white":
                set pixel values to transparent
            otherwise:
                keep existing pixel values

    and (assuming 8-bit values) "quite white" would be: each r,g,b value is greater than "240" AND each r,g,b value is within "20" of each other. This is the first time I've worked with raw pixel data in this way, and although it works, it also performs extremely poorly. It seems like there must be a more efficient way of processing the data without iterating over each pixel in this manner? (Matrices?) I was hoping someone with more experience in dealing with these things might be able to point out some of my more obvious mistakes/improvements in my algorithm. Thanks!
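
    A vectorized sketch of the same rule, assuming the pixel rows can be stacked into a numpy array of shape (height, width, 4) with RGBA values; whether numpy is importable on your App Engine runtime is an assumption worth checking before relying on this.

        import numpy as np

        def make_white_transparent(rgba):
            rgb = rgba[:, :, :3].astype(np.int16)
            bright = (rgb > 240).all(axis=2)                    # every channel "quite white"
            close = (rgb.max(axis=2) - rgb.min(axis=2)) < 20    # channels within 20 of each other
            rgba[bright & close, 3] = 0                         # zero the alpha channel in bulk
            return rgba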

    Read the article

  • Python list "index out of range" error

    - by dman762000
    I am working on a Python Tetris game that my professor assigned for the final project of a concepts-of-programming class. I have got just about everything he wanted working at this point, but I am having a slight problem with one part of it. Whenever I start moving pieces left and right I keep getting an "index out of range" error, but only when a piece is up against another piece. Here are the culprits that are giving me grief:

        def clearRight(block=None):
            global board, activeBlock, stackedBlocks
            isClear = True
            if(block == None):
                block = activeBlock
            if(block != None):
                for square in block['squares']:
                    row = square[1]
                    col = square[0]+1
                    if(col >= 0 and stackedBlocks[row][col] != None):
                        isClear = False
            return isClear

        def clearLeft(block=None):
            global board, activeBlock, stackedBlocks
            isClear = True
            if(block == None):
                block = activeBlock
            if(block != None):
                for square in block['squares']:
                    row = square[1]
                    col = square[0]-1
                    if(col >= 0 and stackedBlocks[row][col] != None):
                        isClear = False
            return isClear

    I am not looking for anyone to fix it for me, only for tips on how to fix it myself. Thanks in advance for any help.
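
    Since the question asks for a hint rather than a fix, here is a small hedged sketch of the kind of bounds check worth considering: on the right edge, square[0] + 1 can equal the board width, which is one past the last valid column, and on the left edge square[0] - 1 becomes -1, which Python silently wraps to the last column instead of raising. The grid name below is an assumption.

        def is_blocked(row, col, grid):
            if col < 0 or col >= len(grid[row]):    # treat anything off the board as blocked
                return True
            return grid[row][col] is not None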

    Read the article

  • Python-based password tracker (or dictionary)

    - by Arrieta
    Hello: Where we work, we need to remember about 10 long passwords, which need to change every so often. I would like to create a utility which can save these passwords in an encrypted file so that we can keep track of them. I can think of some sort of dictionary, passwd = {'host1':'pass1', 'host2':'pass2'}, etc., but I don't know what to do about encryption (I have absolutely zero experience with the topic). So my question is really two questions:

    1. Is there a Linux-based utility which lets you do that?
    2. If you were to program it in Python, how would you go about it?

    A perk of approach two would be for the software to update the SSH public keys after the password has been changed (you know the pain of updating ~15 tokens once you change your password). As you might expect, I have zero control over the actual network configuration and the management of scp keys. I can only hope to provide a simple utility to me and my very few coworkers so that, if we need to, we can retrieve a password on demand. Cheers.

    Read the article
