Search Results

Search found 14486 results on 580 pages for 'python idle'.

Page 77 of 580

  • Python template engine

    - by jturo
    Hello there, could somebody help me get started writing a Python template engine? I'm new to Python, and while learning the language I've managed to write a little MVC framework running in its own lightweight WSGI-like server. I've also written a script that finds and replaces keys with values (obviously this is not how my script is actually structured or implemented; it's just an example):

        from string import Template

        html = '<html>\n'
        html += ' <head>\n'
        html += ' <title>This is so Cool : In Controller HTML</title>\n'
        html += ' </head>\n'
        html += ' <body>\n'
        html += ' Home | <a href="/hi">Hi ${name}</a>\n'
        html += ' </body>\n'
        html += '<html>'

        Template(html).safe_substitute(dict(name='Arturo'))

    My next goal is to implement custom statements, modifiers, functions, etc. (like a 'for' loop), but I don't really know whether I should use another module that I'm not aware of. I thought of regular expressions, but I kind of feel that wouldn't be an efficient way of doing it. Any help is appreciated, and I'm sure this will be of use to other people too. Thank you
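
    Not part of the original question - a rough, assumption-heavy sketch of the regex route the poster is wondering about, handling one made-up {% for %} tag on top of string.Template (the tag syntax and names are invented for the example):

        import re
        from string import Template

        # Hypothetical tag syntax: {% for item in items %} ... {% endfor %}
        FOR_BLOCK = re.compile(r'{% for (\w+) in (\w+) %}(.*?){% endfor %}', re.S)

        def render(text, context):
            """Expand for-blocks, then fill in ${...} placeholders."""
            def expand(match):
                var, seq, body = match.group(1), match.group(2), match.group(3)
                out = []
                for value in context[seq]:
                    scope = dict(context)
                    scope[var] = value
                    out.append(Template(body).safe_substitute(scope))
                return ''.join(out)
            text = FOR_BLOCK.sub(expand, text)
            return Template(text).safe_substitute(context)

        print(render('<ul>{% for name in names %}<li>${name}</li>{% endfor %}</ul>',
                     {'names': ['Arturo', 'Ana']}))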

    Read the article

  • Python byte per byte XOR decryption

    - by neurino
    I have a file XOR-encrypted by a VB.NET program using this function to scramble:

        Public Class Crypter
            ...
            'This will convert String to bytes, then call the other function.
            Public Function Crypt(ByVal Data As String) As String
                Return Encoding.Default.GetString(Crypt(Encoding.Default.GetBytes(Data)))
            End Function

            'This calls XorCrypt giving Key converted to bytes
            Public Function Crypt(ByVal Data() As Byte) As Byte()
                Return XorCrypt(Data, Encoding.Default.GetBytes(Me.Key))
            End Function

            'Xor Encryption.
            Private Function XorCrypt(ByVal Data() As Byte, ByVal Key() As Byte) As Byte()
                Dim i As Integer
                If Key.Length <> 0 Then
                    For i = 0 To Data.Length - 1
                        Data(i) = Data(i) Xor Key(i Mod Key.Length)
                    Next
                End If
                Return Data
            End Function
        End Class

    and saved this way:

        Dim Crypter As New Cryptic(Key)
        'open destination file
        Dim objWriter As New StreamWriter(fileName)
        'write crypted content
        objWriter.Write(Crypter.Crypt(data))

    Now I have to reopen the file with Python, but I have trouble getting single bytes. This is the XOR function in Python:

        def crypto(self, data):
            'crypto(self, data) -> str'
            return ''.join(chr((ord(x) ^ ord(y)) % 256)
                           for (x, y) in izip(data.decode('utf-8'), cycle(self.key)))

    I had to add the % 256 since sometimes x is more than a single byte (ord(x) comes out at 256 or above). This thing of two bytes being passed does not break the decryption, because the key stays "paired" with the following data. The problem is that some decrypted characters in the conversion are wrong. These chars are all accented letters like à, è, ì, but only a few of the overall accented letters; the others are all correctly restored. I guess it could be due to the mod 256, but without it I of course get a chr exception... Thanks for your support
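
    Not part of the original post: a rough sketch of the direction a fix might take, on the assumption that StreamWriter re-encoded the ANSI-decoded ciphertext as UTF-8 on the way out. The 'cp1252' codepage is only a guess at what Encoding.Default was on the Windows machine that wrote the file; it is an assumption, not a known fact.

        # Sketch only: reverses an assumed UTF-8 layer, then XORs raw bytes.
        from itertools import izip, cycle

        def decrypt(path, key, codepage='cp1252'):  # codepage is a guess
            raw = open(path, 'rb').read()
            # Undo the encoding StreamWriter (assumed UTF-8) applied when the
            # ANSI-decoded ciphertext was written back to disk.
            cipher_bytes = raw.decode('utf-8').encode(codepage)
            return ''.join(chr(ord(x) ^ ord(y))
                           for x, y in izip(cipher_bytes, cycle(key)))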

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting, with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is this: dump the database, tar.gz all the files into one backup named with the correct date, and upload to S3 using the Python library provided by Amazon.

    Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup), it is using about 320MB just for the backup. This causes WebFaction to quit all our processes, meaning the backup doesn't happen and our site goes down.

    So this is the question: is there any way to not load the whole file into memory, or are there any other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help.

    This is the section of code (in my backup script) that caused the processes to be quit:

        filedata = open(filename, 'rb').read()
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME,
                                  'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read', 'Content-Type': content_type})
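
    Not from the original question - a minimal sketch of the "split the file and upload separate parts" fallback the poster mentions, reusing the connection.put(...) pattern, BUCKET_NAME and S3 names from the snippet above; the part-naming scheme is made up here:

        def upload_in_chunks(connection, filename, chunk_size=40 * 1024 * 1024):
            part = 0
            with open(filename, 'rb') as f:
                while True:
                    chunk = f.read(chunk_size)   # at most chunk_size bytes in memory
                    if not chunk:
                        break
                    key = 'daily/%s.part-%03d' % (filename, part)
                    connection.put(BUCKET_NAME, key, S3.S3Object(chunk),
                                   {'x-amz-acl': 'public-read',
                                    'Content-Type': 'application/octet-stream'})
                    part += 1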

    Read the article

  • deepcopy and python - tips to avoid using it?

    - by blackkettle
    Hi, I have a very simple python routine that involves cycling through a list of roughly 20,000 latitude,longitude coordinates and calculating the distance of each point to a reference point. def compute_nearest_points( lat, lon, nPoints=5 ): """Find the nearest N points, given the input coordinates.""" points = session.query(PointIndex).all() oldNearest = [] newNearest = [] for n in xrange(nPoints): oldNearest.append(PointDistance(None,None,None,99999.0,99999.0)) newNearest.append(obj2) #This is almost certainly an inappropriate use of deepcopy # but how SHOULD I be doing this?!?! for point in points: distance = compute_spherical_law_of_cosines( lat, lon, point.avg_lat, point.avg_lon ) k = 0 for p in oldNearest: if distance < p.distance: newNearest[k] = PointDistance( point.point, point.kana, point.english, point.avg_lat, point.avg_lon, distance=distance ) break else: newNearest[k] = deepcopy(oldNearest[k]) k += 1 for j in range(k,nPoints-1): newNearest[j+1] = deepcopy(oldNearest[j]) oldNearest = deepcopy(newNearest) #We're done, now print the result for point in oldNearest: print point.station, point.english, point.distance return I initially wrote this in C, using the exact same approach, and it works fine there, and is basically instantaneous for nPoints<=100. So I decided to port it to python because I wanted to use SqlAlchemy to do some other stuff. I first ported it without the deepcopy statements that now pepper the method, and this caused the results to be 'odd', or partially incorrect, because some of the points were just getting copied as references(I guess? I think?) -- but it was still pretty nearly as fast as the C version. Now with the deepcopy calls added, the routine does it's job correctly, but it has incurred an extreme performance penalty, and now takes several seconds to do the same job. This seems like a pretty common job, but I'm clearly not doing it the pythonic way. How should I be doing this so that I still get the correct results but don't have to include deepcopy everywhere?

    Read the article

  • python 'with' statement

    - by Stephen
    Hi, I'm using Python 2.5. I'm trying to use this 'with' statement:

        from __future__ import with_statement
        a = []
        with open('exampletxt.txt','r') as f:
            while True:
                a.append(f.next().strip().split())
        print a

    The contents of 'exampletxt.txt' are simple: a b

    In this case, I get the error:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/tmp/python-7036sVf.py", line 5, in <module>
            a.append(f.next().strip().split())
        StopIteration

    And if I replace f.next() with f.read(), it seems to be caught in an infinite loop. I wonder if I have to write a decorator class that accepts the iterator object as an argument, and define an __exit__ method for it? I know it's more pythonic to use a for-loop for iterators, but I wanted to implement a while loop within a generator that's called by a for-loop... something like:

        def g(f):
            while True:
                x = f.next()
                if test(x):
                    a = x
                elif test(x):
                    b = f.next()
                yield [a,x,b]

        a = []
        with open(filename) as f:
            for x in g(f):
                a.append(x)
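
    For illustration (not part of the question): the StopIteration just means the file ran out of lines inside the while loop; a sketch of catching it explicitly looks like this:

        from __future__ import with_statement  # Python 2.5

        a = []
        with open('exampletxt.txt', 'r') as f:
            while True:
                try:
                    line = f.next()      # next(f) from 2.6 onwards
                except StopIteration:    # end of file reached
                    break
                a.append(line.strip().split())
        print a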

    Read the article

  • [Python] name 'OptionGroup' is not defined

    - by Cawas
    Ok, so I made this rookie mistake below, but in my defense I was led to it by how the help on this subject reads in the Python docs, which state how to use optparse. It is actually an error made under the gigantic tutorial section. In contrast, and to my own discredit, I may be one of the very few people who can't read very well and pay close attention to what I do. But since this took me so long to discover, I wanted to "document" it here:

        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        >>> from optparse import OptionParser
        >>> outputGroup = OptionGroup(parser, 'Output handling')
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        NameError: name 'OptionGroup' is not defined

    This is strictly done with examples found in the docs, and you can't find anything about it anywhere, be it that long, long docs page, Google or Stack Overflow. Plus, reading optparse.py shows OptionGroup is there, so that adds to the confusion. I bet it will take less than 1 minute for someone to spot my error. For that I'll only add proper tags and/or modify the title later on. :)
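
    For what it's worth, the NameError above goes away once OptionGroup itself is imported; a minimal sketch (the option names here are invented):

        from optparse import OptionParser, OptionGroup

        parser = OptionParser()
        output_group = OptionGroup(parser, 'Output handling')
        output_group.add_option('-q', '--quiet', action='store_true', dest='quiet')
        parser.add_option_group(output_group)
        options, args = parser.parse_args()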

    Read the article

  • How to write a custom solution using a python package, modules etc

    - by morpheous
    I am writing a package, foobar, which consists of the modules alice, bob, charles and david. From my understanding of Python packages and modules, this means I will create a folder foobar with the following subdirectories and files (please correct me if I am wrong):

        foobar/
            __init__.py
            alice/alice.py
            bob/bob.py
            charles/charles.py
            david/david.py

    The package should be executable, so that in addition to making the modules alice, bob etc. available as 'libraries', I should also be able to use foobar in a script like this:

        python foobar --args=someargs

    Question 1: Can a package be made executable and used in a script like I described above?

    Question 2: The various modules will use code that I want to refactor into a common library. Does that mean creating a new subdirectory 'foobar/common' and placing common.py in that folder?

    Question 3: How will the modules import the common module? Is it 'from foobar import common', or can I not use this since these modules are part of the package?

    Question 4: I want to add logic for when the foobar package is being used in a script (assuming this can be done - I have only seen it done for modules). The code used is something like:

        if __name__ == "__main__":
            dosomething()

    Where (in which file) would I put this logic?
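
    One common layout, offered as a sketch rather than a definitive answer: keep alice.py, bob.py, etc. directly inside foobar/ next to a shared common.py, import it with 'from foobar import common', and put the command-line entry point in foobar/__main__.py so that 'python -m foobar --args=someargs' works (Python 2.7/3.1 and later). The setup() and run() calls below are hypothetical names used only to show the wiring.

        # foobar/__main__.py
        import sys
        from foobar import common, alice   # absolute imports of sibling modules

        def main(argv):
            common.setup()   # hypothetical
            alice.run(argv)  # hypothetical

        if __name__ == '__main__':
            main(sys.argv[1:])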

    Read the article

  • python histogram one-liner

    - by mykhal
    There are many ways to code a histogram in Python. By histogram, I mean a function counting objects in an iterable, resulting in a count table (i.e. a dict). E.g.:

        >>> L = 'abracadabra'
        >>> histogram(L)
        {'a': 5, 'b': 2, 'c': 1, 'd': 1, 'r': 2}

    It can be written like this:

        def histogram(L):
            d = {}
            for x in L:
                if x in d:
                    d[x] += 1
                else:
                    d[x] = 1
            return d

    ...however, there are far fewer ways to do this in a single expression. If we had "dict comprehensions" in Python, we would write:

        >>> { x: L.count(x) for x in set(L) }

    but we don't have them, so we have to write:

        >>> dict([(x, L.count(x)) for x in set(L)])

    However, while this approach may be readable, it is not efficient - L is walked through multiple times, so this won't work for single-life generators. The function should also iterate well through gen(), where:

        def gen():
            for x in L:
                yield x

    We can go with reduce (R.I.P.):

        >>> reduce(lambda d,x: dict(d, x=d.get(x,0)+1), L, {})  # wrong!

    Oops, that does not work - the key name is 'x', not x :( I ended up with:

        >>> reduce(lambda d,x: dict(d.items() + [(x, d.get(x, 0)+1)]), L, {})

    (In py3k, we would have to write list(d.items()) instead of d.items(), but it's hypothetical, since there is no reduce there.) Please beat me with a better one-liner, more readable! ;)
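
    Not from the original post: collections.defaultdict gets close to the one-liner spirit while walking the iterable only once (so it also works on one-shot generators); a sketch:

        from collections import defaultdict

        def histogram(iterable):
            counts = defaultdict(int)
            for x in iterable:
                counts[x] += 1
            return dict(counts)

        # counts for 'abracadabra': a=5, b=2, r=2, c=1, d=1
        print histogram('abracadabra')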

    Read the article

  • Python OOP and lists

    - by Mikk
    Hi, I'm new to Python and its OOP stuff and can't get this to work. Here's my code:

        class Tree:
            root = None;
            data = [];

            def __init__(self, equation):
                self.root = equation;

            def appendLeft(self, data):
                self.data.insert(0, data);

            def appendRight(self, data):
                self.data.append(data);

            def calculateLeft(self):
                result = [];
                for item in (self.getLeft()):
                    if (type(item) == type(self)):
                        data = item.calculateLeft();
                    else:
                        data = item;
                    result.append(item);
                return result;

            def getLeft(self):
                return self.data;

            def getRight(self):
                data = self.data;
                data.reverse();
                return data;

        tree2 = Tree("*");
        tree2.appendRight(44);
        tree2.appendLeft(20);

        tree = Tree("+");
        tree.appendRight(4);
        tree.appendLeft(10);
        tree.appendLeft(tree2);

        print(tree.calculateLeft());

    It looks like tree2 and tree are sharing the list "data"? At the moment I'd like it to output something like [[20,44], 10, 4], but when I tree.appendLeft(tree2) I get RuntimeError: maximum recursion depth exceeded, and when I don't appendLeft(tree2) it outputs [10, 20, 44, 4] (!!!). What am I missing here? I'm using Portable Python 3.0.1. Thank you
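
    For illustration (not part of the question): because root and data are defined at class level, every Tree instance shares the same list, which is also where the infinite recursion comes from once a tree ends up inside the list it iterates. A sketch of the usual fix is to create them per instance in __init__:

        class Tree:
            def __init__(self, equation):
                self.root = equation
                self.data = []          # a fresh list per instance, not shared

            def appendLeft(self, data):
                self.data.insert(0, data)

            def appendRight(self, data):
                self.data.append(data)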

    Read the article

  • Python DictReader - Skipping rows with missing columns?

    - by victorhooi
    heya,

    I have an Excel .CSV file I'm attempting to read in with DictReader. All seems to be well, except it seems to omit rows, specifically those with missing columns. Our input looks like:

        mail,givenName,sn,lorem,ipsum,dolor,telephoneNumber
        [email protected],ian,bay,3424,8403,2535,+65(2)34523534545
        [email protected],mike,gibson,3424,8403,2535,+65(2)34523534545
        [email protected],ross,martin,,,,+65(2)34523534545
        [email protected],david,connor,,,,+65(2)34523534545
        [email protected],chris,call,3424,8403,2535,+65(2)34523534545

    So some of the rows have missing lorem/ipsum/dolor columns, and it's just a string of commas for those. We're reading it in with:

        def read_gd_dump(input_file="blah 20100423.csv"):
            gd_extract = csv.DictReader(open('blah 20100423.csv'), restval='missing', dialect='excel')
            return dict([(row['something'], row) for row in gd_extract])

    And I checked that "something" (the key for our dict) isn't one of the missing columns; I had originally suspected it might be. It's one of the columns after that. However, DictReader seems to completely skip over those rows. I tried setting restval to something, which didn't seem to make any difference. I can't seem to find anything in Python's CSV docs (http://docs.python.org/library/csv.html) that would explain this behaviour, but I may have misread something. Any ideas?

    Thanks, Victor
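
    As a side note (not from the post): a tiny self-contained check suggests DictReader does not drop rows like these on its own - empty fields come back as empty strings - so the cause is probably elsewhere in the pipeline:

        import csv
        from StringIO import StringIO

        sample = ("mail,givenName,sn,lorem\n"
                  "[email protected],ross,martin,\n")   # empty last field, commas present
        for row in csv.DictReader(StringIO(sample), restval='missing', dialect='excel'):
            print row   # the row is still yielded; 'lorem' is just ''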

    Read the article

  • Speed vs security vs compatibility over methods to do string concatenation in Python

    - by Cawas
    Similar questions have been brought up (with a good speed comparison) on this same subject. Hopefully this question is different and updated for Python 2.6 and 3.0.

    So far I believe the fastest and most compatible method (among different Python versions) is the plain simple + sign:

        text = "whatever" + " you " + SAY

    But I keep hearing and reading that it's not secure and/or advisable. I'm not even sure how many methods there are to manipulate strings! I could count only about 4: there's interpolation and all its sub-options such as % and format, and then there are the simple ones, join and +.

    Finally, the new approach to string formatting, which is format, is certainly not good for backwards compatibility, at the same time making % not good for forward compatibility. But should it be used for every string manipulation, including every concatenation, whenever we restrict ourselves to 3.x only?

    Well, maybe this is more of a wiki than a question, but I do wish to have an answer on what the proper usage of each string manipulation method is, and which one could generally be used with each focus in mind (best all around for compatibility, for speed and for security). Thanks.
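
    For reference (not part of the question), the approaches being compared look like this side by side; which one is "best" is exactly what is being asked:

        name = "you"
        a = "whatever" + " " + name + " say"        # plain + concatenation
        b = "whatever %s say" % name                # %-interpolation
        c = "whatever {0} say".format(name)         # str.format (2.6+ and 3.x)
        d = " ".join(["whatever", name, "say"])     # join, handy for many pieces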

    Read the article

  • spam and dirty words comment post filtering in python (django)

    - by sintaloo
    Hi All,

    My basic question is how to filter spam and dirty words in a comment post system under Python (Django). I have a collection of phrases (approximately 3000 phrases) to be filtered.

    Question (1): Are there any existing open source Python (or Django) packages/modules/plugins which can handle this job? I know there is one called Akismet, but from what I understand it will not solve my problem. Akismet is just a web service and filters against the word dictionary defined by Akismet, whereas I have my own collection of words. Please correct me if I am wrong.

    Question (2): If there is no such open source package I can use, how do I create my own? The only thing I can think of is to use a regular expression and join all the word phrases with 'or' in a regular expression. But I have 3000 phrases, and I think that won't work in terms of performance when filtering every comment post. Any suggestions where I should start from?

    Thank you very much for your help and time.
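
    Not from the original question - a rough sketch of the home-grown route in (2): escape each phrase, join them into a single compiled alternation (longest first so longer phrases win), and run one search per comment. Whether this is fast enough for 3000 phrases is an open question, not a claim:

        import re

        def build_filter(phrases):
            escaped = sorted((re.escape(p) for p in phrases), key=len, reverse=True)
            return re.compile('|'.join(escaped), re.IGNORECASE)

        banned = build_filter([u'first bad phrase', u'another bad phrase'])  # ~3000 in practice

        def is_clean(comment):
            return banned.search(comment) is None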

    Read the article

  • use proxy in python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to use a public anonymous proxy and fetch a webpage, but I get a rather strange error. The code (I have Python 2.4):

        import urllib2

        def get_source_html_proxy(url, pip, timeout):
            # timeout in seconds (maximum number of seconds willing for the code to wait in
            # case there is a proxy that is not working, then it gives up)
            proxy_handler = urllib2.ProxyHandler({'http': pip})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            urllib2.install_opener(opener)
            req = urllib2.Request(url)
            sock = urllib2.urlopen(req)
            timp = 0  # a counter that is going to measure the time until the result (webpage) is returned
            while 1:
                data = sock.read(1024)
                timp = timp + 1
                if len(data) < 1024:
                    break
                timpLimita = 50000000 * timeout
                if timp == timpLimita:  # 5 millions is about 1 second
                    break
            if timp == timpLimita:
                print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data)
                return str(data)
            else:
                print "This proxy " + IPul + "= good proxy. " + "It returns the following IP: " + str(data)
                return str(data)

        # Now, I call the function to test it for one single proxy (IP:port) that does not support
        # user and password (a public high anonymity proxy).
        # (I put a proxy that I know is working - slow, but working.)
        rez = get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
        print rez

    The error:

        Traceback (most recent call last):
          File "./public_html/cgi-bin/teste5.py", line 43, in ?
            rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
          File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy
            sock=urllib2.urlopen(req)
          File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
            return _opener.open(url, data)
          File "/usr/lib64/python2.4/urllib2.py", line 358, in open
            response = self._open(req, data)
          File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
            '_open', req)
          File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
            result = func(*args)
          File "/usr/lib64/python2.4/urllib2.py", line 573, in lambda
            r, proxy=url, type=type, meth=self.proxy_open: \
          File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
            if '@' in host:
        TypeError: iterable argument required

    I do not know why the character "@" is an issue (I have nothing like that in my code. Should I have?) Thanks in advance for your valuable help.
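
    Not a diagnosis of the Python 2.4 traceback above, but one cheap thing to rule out is the missing scheme on the proxy address; a minimal sketch using the same proxy and URL as in the question:

        import urllib2

        def fetch_via_proxy(url, proxy):
            # Spelling out the http:// scheme removes any ambiguity about the proxy URL.
            handler = urllib2.ProxyHandler({'http': 'http://' + proxy})
            opener = urllib2.build_opener(handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            return opener.open(url).read()

        print fetch_via_proxy('http://www.whatismyip.com/automation/n09230945.asp',
                              '93.84.221.248:3128')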

    Read the article

  • Ignore case in Python strings

    - by Paul Oyster
    What is the easiest way to compare strings in Python, ignoring case?

    Of course one can do (str1.lower() <= str2.lower()), etc., but this creates two additional temporary strings (with the obvious alloc/g-c overheads). I guess I'm looking for an equivalent to C's stricmp().

    [Some more context was requested, so I'll demonstrate with a trivial example:] Suppose you want to sort a looong list of strings. You simply do theList.sort(). This is O(n * log(n)) string comparisons and no memory management (since all strings and list elements are some sort of smart pointers). You are happy.

    Now, you want to do the same, but ignore the case (let's simplify and say all strings are ascii, so locale issues can be ignored). You can do theList.sort(key=lambda s: s.lower()), but then you cause two new allocations per comparison, plus burden the garbage collector with the duplicated (lowered) strings. Each such bit of memory-management noise is orders of magnitude slower than a simple string comparison.

    Now, with an in-place stricmp()-like function, you do theList.sort(cmp=stricmp) and it is as fast and as memory-friendly as theList.sort(). You are happy again.

    The problem is that any Python-based case-insensitive comparison involves implicit string duplications, so I was expecting to find a C-based comparison (maybe in module string). I could not find anything like that, hence the question here. (Hope this clarifies the question.)

    Read the article

  • Speedup writing C programs using a subset of the Python syntax

    - by psihodelia
    I am constantly trying to optimize my time. Writing C code takes a lot of time and requires many more keystrokes than, say, writing a Python program. However, in order to speed up the time required to create a C program, one can automate many things. I'd like to write my programs using something like Python, but with C semantics. That means all keywords are C keywords, but the syntax is optimized. For example, this C code:

        #include "dsplib.h"
        #include "coeffs.h"

        #define MODULENAME "dsplib"
        #define NUM_SAMPLES 320

        typedef float t_Vec;

        typedef struct s_Inter {
            char *pc_Name;
            struct s_Inter *px_Next;
        } t_Inter;

        typedef struct s_DspLibControl {
            t_Vec f_Y;
        } t_DspLibControl;

        void v_DspLibName(void)
        {
            printf("Module: %s", MODULENAME);
            printf("\n");
        }

        int v_DspLibInitInterControl(t_DspLibControl *px_Con)
        {
            int y;
            px_Con->f_Y = 0.0;
            for(int i=0;i<10;i++)
            {
                y += i * i;
            }
            return y;
        }

    in an optimized, pythonized version could look like:

        include dsplib, coeffs
        define MODULENAME="dsplib", NUM_SAMPLES=320
        typedef float t_Vec
        typedef struct s_Inter:
            char *pc_Name
            struct s_Inter *px_Next
        t_Inter
        typedef struct s_DspLibControl:
            t_Vec f_Y
        t_DspLibControl
        v_DspLibName():
            printf("Module: %s", MODULENAME); printf("\n")
        int v_DspLibInitInterControl(t_DspLibControl *px_Con):
            int y
            px_Con->f_Y = 0.0
            for int i=0;i<10;i++:
                y += i * i
            return y

    My question is: do you know of any VIM script which can translate an original pythonized C code into standard C code? For example, someone writes C code using the pythonized syntax; once she decides to translate pythonized blocks into standard C, she selects such blocks and presses some key. She doesn't save such pythonized code, of course; VIM translates it into standard C.

    Read the article

  • Magic Methods in Python

    - by dArignac
    Howdy, I'm kind of new to Python and I wonder if there is a way to create something like the magic methods in PHP (http://www.php.net/manual/en/language.oop5.overloading.php#language.oop5.overloading.methods).

    My aim is to ease access to the child classes in my model. I basically have a parent class that has n child classes. These child classes have three values: a language key, a translation key and a translation value. They describe a kind of generic translation handling. The parent class can have translations for different translation keys, each in different languages. E.g. the key "title" can be translated into German and English, and the key "description" too (and so forth).

    I don't want to get the child classes and filter by the set values (at least not explicitly; the concrete implementation behind the magic method would do this). I want to call

        parent_class.title['de']
        # or also possible maybe
        parent_class.title('de')

    to get the translation of title in German (de). So there has to be a magic method that takes the name of the called method and its params (as in PHP). As far as I have dug into Python, this is only possible with simple attributes (__getattr__, __setattr__) or with setting/getting directly within the class (__getitem__, __setitem__), which both do not fit my needs. Maybe there is a solution for this? Please help! Thanks in advance!
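
    For illustration only (the class and field names below are made up, not the poster's model): __getattr__ is enough to get the parent_class.title['de'] spelling, because it receives the name of any attribute that normal lookup cannot find.

        class Translatable(object):
            def __init__(self):
                self._translations = {}   # {'title': {'de': '...', 'en': '...'}, ...}

            def add_translation(self, key, lang, value):
                self._translations.setdefault(key, {})[lang] = value

            def __getattr__(self, key):
                # Only called when normal attribute lookup fails.
                try:
                    return self._translations[key]   # so obj.title['de'] works
                except KeyError:
                    raise AttributeError(key)

        page = Translatable()
        page.add_translation('title', 'de', 'Der Titel')
        print page.title['de']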

    Read the article

  • Python line file iteration and strange characters

    - by muckabout
    I have a huge gzipped text file which I need to read, line by line. I go with the following:

        for i, line in enumerate(codecs.getreader('utf-8')(gzip.open('file.gz'))):
            print i, line

    At some point late in the file, the Python output diverges from the file. This is because lines are getting broken due to weird special characters that Python thinks are newlines. When I open the file in 'vim', they are correct, but the suspect characters are formatted weirdly. Is there something I can do to fix this? I've tried other codecs, including utf-16 and latin-1. I've also tried with no codec.

    I looked at the file using 'od'. Sure enough, there are \n characters where they shouldn't be. But the "wrong" ones are preceded by a weird character. I think there's some encoding here with some characters being 2 bytes, but the trailing byte being a \n if not viewed properly.

    If I replace:

        gzip.open('file.gz')

    with:

        os.popen('zcat file.gz')

    it works fine (and is actually quite a bit faster). But I'd like to know where I'm going wrong.

    Read the article

  • Coding the Python way

    - by Aaron Moodie
    I've just spent the last half semester at uni learning Python. I've really enjoyed it, and was hoping for a few tips on how to write more 'pythonic' code. This is the __init__ method from a recent assignment I did. At the time I wrote it, I was trying to work out how I could rewrite it using lambdas, or in a neater, more efficient way, but ran out of time.

        def __init__(self, dir):
            def _read_files(_, dir, files):
                for file in files:
                    if file == "classes.txt":
                        class_list = readtable(dir+"/"+file)
                        for item in class_list:
                            Enrol.class_info_dict[item[0]] = item[1:]
                            if item[1] in Enrol.classes_dict:
                                Enrol.classes_dict[item[1]].append(item[0])
                            else:
                                Enrol.classes_dict[item[1]] = [item[0]]
                    elif file == "subjects.txt":
                        subject_list = readtable(dir+"/"+file)
                        for item in subject_list:
                            Enrol.subjects_dict[item[0]] = item[1]
                    elif file == "venues.txt":
                        venue_list = readtable(dir+"/"+file)
                        for item in venue_list:
                            Enrol.venues_dict[item[0]] = item[1:]
                    elif file.endswith('.roll'):
                        roll_list = readlines(dir+"/"+file)
                        file = os.path.splitext(file)[0]
                        Enrol.class_roll_dict[file] = roll_list
                        for item in roll_list:
                            if item in Enrol.enrolled_dict:
                                Enrol.enrolled_dict[item].append(file)
                            else:
                                Enrol.enrolled_dict[item] = [file]
            try:
                os.path.walk(dir, _read_files, None)
            except:
                print "There was a problem reading the directory"

    As you can see, it's a little bulky. If anyone has the time or inclination, I'd really appreciate a few tips on some Python best practices. Thanks.
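
    Not part of the question - one sketch of a tidier shape: a table mapping a filename test to a small handler function, walked with os.walk. Enrol, readtable and readlines are the assignment's own names; only two handlers are shown here.

        import os

        def _load_subjects(path):
            for item in readtable(path):
                Enrol.subjects_dict[item[0]] = item[1]

        def _load_venues(path):
            for item in readtable(path):
                Enrol.venues_dict[item[0]] = item[1:]

        # filename test -> handler; classes.txt and *.roll would follow the same shape
        HANDLERS = [
            (lambda name: name == 'subjects.txt', _load_subjects),
            (lambda name: name == 'venues.txt', _load_venues),
        ]

        def read_files(top):
            for root, dirs, files in os.walk(top):
                for name in files:
                    for matches, handler in HANDLERS:
                        if matches(name):
                            handler(os.path.join(root, name))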

    Read the article

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of your Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    One way to control it would be to have a module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    Or have every compiled function that does logging accept optional log parameters:

        # foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    Some other way?

    The second one seems somewhat cleaner, but it requires altering function signatures (or using kwargs and parsing them). The first one is... probably somewhat awkward, but sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks,

    Read the article

  • Downloading a picture via urllib and python.

    - by Mike
    So I'm trying to make a Python script that downloads webcomics and puts them in a folder on my desktop. I've found a few programs on here that do something similar, but nothing quite like what I need. The one I found most similar is right here (http://bytes.com/topic/python/answers/850927-problem-using-urllib-download-images). I tried using this code:

        >>> import urllib
        >>> image = urllib.URLopener()
        >>> image.retrieve("http://www.gunnerkrigg.com//comics/00000001.jpg", "00000001.jpg")
        ('00000001.jpg', <httplib.HTTPMessage instance at 0x1457a80>)

    I then searched my computer for a file "00000001.jpg", but all I found was the cached picture of it. I'm not even sure it saved the file to my computer. Once I understand how to get the file downloaded, I think I know how to handle the rest. Essentially just use a for loop, split the string at the '00000000'.'jpg', and increment the '00000000' up to the largest number, which I would have to somehow determine. Any recommendations on the best way to do this or how to download the file correctly? Thanks!
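
    Not from the original post - a small sketch that saves each image to an explicit folder with urllib.urlretrieve, so there is no doubt where the file ends up. The upper bound of the loop is a placeholder; detecting the real last comic is left open, as in the question.

        import os
        import urllib

        target_dir = os.path.join(os.path.expanduser('~'), 'Desktop', 'comics')
        if not os.path.isdir(target_dir):
            os.makedirs(target_dir)

        for i in range(1, 11):                      # upper bound is made up here
            name = '%08d.jpg' % i                   # 00000001.jpg, 00000002.jpg, ...
            url = 'http://www.gunnerkrigg.com//comics/' + name
            urllib.urlretrieve(url, os.path.join(target_dir, name))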

    Read the article

  • Python Ephem / Datetime calculation

    - by dassouki
    The output should process the first date as "day" and the second as "night". I've been playing with this for a few hours now and can't figure out what I'm doing wrong. Any ideas?

    Edit: I assume the problem is due to my date comparison implementation.

    Output:

        $ python time_of_day.py
        * should be day:
          event date:   2010/4/6 16:00:59
          prev rising:  2010/4/6 09:24:24
          prev setting: 2010/4/5 23:33:03
          next rise:    2010/4/7 09:22:27
          next set:     2010/4/6 23:34:27
          day
        * should be night:
          event date:   2010/4/6 00:01:00
          prev rising:  2010/4/5 09:26:22
          prev setting: 2010/4/5 23:33:03
          next rise:    2010/4/6 09:24:24
          next set:     2010/4/6 23:34:27
          day

    time_of_day.py:

        import datetime
        import ephem  # install from http://pypi.python.org/pypi/pyephem/

        # event_time is just a date time corresponding to an sql timestamp
        def type_of_light(latitude, longitude, event_time, utc_time, horizon):
            o = ephem.Observer()
            o.lat, o.long, o.date, o.horizon = latitude, longitude, event_time, horizon
            print "event date ", o.date
            print "prev rising: ", o.previous_rising(ephem.Sun())
            print "prev setting: ", o.previous_setting(ephem.Sun())
            print "next rise: ", o.next_rising(ephem.Sun())
            print "next set: ", o.next_setting(ephem.Sun())
            if o.previous_rising(ephem.Sun()) <= o.date <= o.next_setting(ephem.Sun()):
                return "day"
            elif o.previous_setting(ephem.Sun()) <= o.date <= o.next_rising(ephem.Sun()):
                return "night"
            else:
                return "error"

        print "should be day: ", type_of_light('45.959', '-66.6405', '2010/4/6 16:01', '-4', '-6')
        print "should be night: ", type_of_light('45.959', '-66.6405', '2010/4/6 00:01', '-4', '-6')
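
    Not a verified fix, just a sketch of an alternative test: instead of comparing against previous/next event pairs, ask which event comes next - if the sun's next event is a setting, it is currently up. The unused utc_time parameter is dropped here.

        import ephem

        def type_of_light(latitude, longitude, event_time, horizon):
            o = ephem.Observer()
            o.lat, o.long, o.date, o.horizon = latitude, longitude, event_time, horizon
            sun = ephem.Sun()
            # Sun sets before it rises again -> it must be up right now.
            if o.next_setting(sun) < o.next_rising(sun):
                return "day"
            return "night"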

    Read the article

  • Python raw strings and trailing back slashes.

    - by dash-tom-bang
    I ran across something once upon a time and wondered if it was a Python "bug" or at least a misfeature. I'm curious if anyone knows of any justifications for this behavior. I thought of it just now while reading "Code Like a Pythonista," which has been enjoyable so far. I'm only familiar with the 2.x line of Python.

    Raw strings are strings that are prefixed with an r. This is great because I can use backslashes in regular expressions and I don't need to double everything everywhere. It's also handy for writing throwaway scripts on Windows, so I can use backslashes there too. (I know I can also use forward slashes, but throwaway scripts often contain content cut & pasted from elsewhere in Windows.)

    So great! Unless, of course, you really want your string to end with a backslash. There's no way to do that in a 'raw' string.

        In [9]: r'\n'
        Out[9]: '\\n'

        In [10]: r'abc\n'
        Out[10]: 'abc\\n'

        In [11]: r'abc\'
        ------------------------------------------------
           File "<ipython console>", line 1
             r'abc\'
                   ^
        SyntaxError: EOL while scanning string literal

        In [12]: r'abc\\'
        Out[12]: 'abc\\\\'

    So one slash before the closing quote is an error, but two slashes give you two slashes! Certainly I'm not the only one who is bothered by this? Thoughts on why 'raw' strings are 'raw, except for slash-quote'? I mean, if I wanted to embed a single quote in there I'd just use double quotes around the string, and vice versa. If I wanted both, I'd just triple quote. If I really wanted three quotes in a row in a raw string, well, I guess I'd have to deal, but is this considered "proper behavior"?

    Read the article

  • Getting registry information using Python

    - by Willy
    I am trying to pull registry info from many servers and put it all into one txt file. I got the code working fine in a .bat file. I hear that there is a way simpler way to do this in Python. I am intrigued and delighted to hear this. Can anyone help finish my code?

    My working bat file:

        echo rfsqlcl01app >> foo.txt
        reg query "\\rfsqlcl01app\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo GLADGSQL01 >> foo.txt
        reg query "\\GLADGSQL01\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo GLADGWEB01 >> foo.txt
        reg query "\\GLADGWEB01\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo PAPERVISION >> foo.txt

    My Python code structure:

        >>> server_list = open('server_test.txt', 'r')
        >>> for line in server_list:
        ...     print r'reg query \\%s\blah\blah\blah' % line.strip()
        ...
        reg query \\foo\blah\blah\blah
        reg query \\moo\blah\blah\blah
        reg query \\boo\blah\blah\blah
        >>> server_list.close()
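
    Not part of the question - a sketch of one way to drive the same reg query from Python with subprocess, looping over the server list file; the key path is the one from the .bat file above.

        import subprocess

        KEY = (r'\\%s\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD'
               r'\Shared Components\On Access Scanner\McShield\Configuration\Default')

        out = open('foo.txt', 'w')
        for line in open('server_test.txt'):
            server = line.strip()
            if not server:
                continue
            out.write(server + '\n')
            # Same "reg query" the .bat file runs, just driven from Python.
            reg = subprocess.Popen(['reg', 'query', KEY % server],
                                   stdout=subprocess.PIPE)
            out.write(reg.communicate()[0])
        out.close()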

    Read the article

  • Python - Checking for membership inside nested dict

    - by victorhooi
    heya,

    This is a follow-up question to this one: http://stackoverflow.com/questions/2901422/python-dictreader-skipping-rows-with-missing-columns. Turns out I was being silly and using the wrong ID field. I'm using Python 3.x here.

    I have a dict of employees, indexed by a string, "directory_id". Each value is a nested dict with employee attributes (phone number, surname etc.). One of these values is a secondary ID, say "internal_id", and another is their manager, call it "manager_internal_id". The "internal_id" field is non-mandatory, and not every employee has one. (I've simplified the fields a little, both to make it easier to read and also for privacy/compliance reasons.)

    The issue here is that we index (key) each employee by their directory_id, but when we look up their manager, we need to find managers by their "internal_id". Before, when employees.keys() was a list of internal_ids, I was using a membership check on that. Now, the last part of my if statement won't work, since the internal_ids are part of the dict values instead of the key itself.

        def lookup_supervisor(manager_internal_id, employees):
            if manager_internal_id is not None and manager_internal_id != "" and manager_internal_id in employees.keys():
                return (employees[manager_internal_id]['mail'],
                        employees[manager_internal_id]['givenName'],
                        employees[manager_internal_id]['sn'])
            else:
                return ('Supervisor Not Found', 'Supervisor Not Found', 'Supervisor Not Found')

    So the first question is: how do I check whether manager_internal_id is present in the dict's values? I've tried substituting employees.keys() with employees.values(); that didn't work. Also, I'm hoping for something a little more efficient - not sure if there's a way to get a subset of the values, specifically all the entries for employees[directory_id]['internal_id']. Hopefully there's some Pythonic way of doing this without a massive heap of nested for/if loops.

    My second question is: how do I then cleanly return the required employee attributes (mail, givenName, surname etc.)? My for loop is iterating over each employee and calling lookup_supervisor. I'm feeling a bit stupid/stumped here.

        def tidy_data(employees):
            for directory_id, data in employees.items():
                # We really shouldn't be passing employees back and forth like this - hmm, classes?
                data['SupervisorEmail'], data['SupervisorFirstName'], data['SupervisorSurname'] = \
                    lookup_supervisor(data['manager_internal_id'], employees)

    Thanks in advance =),
    Victor
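
    Not from the original post - a sketch of one answer-shaped idea: build a one-off index from internal_id back to directory_id, so lookups stay dict lookups instead of scans of values().

        def build_internal_index(employees):
            # Maps internal_id -> directory_id, skipping employees without one.
            return {data['internal_id']: directory_id
                    for directory_id, data in employees.items()
                    if data.get('internal_id')}

        def lookup_supervisor(manager_internal_id, employees, internal_index):
            directory_id = internal_index.get(manager_internal_id)
            if directory_id is None:
                return ('Supervisor Not Found',) * 3
            manager = employees[directory_id]
            return (manager['mail'], manager['givenName'], manager['sn'])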

    Read the article

  • using an alternative string quotation syntax in python

    - by Cawas
    Just wondering... I find using escape characters too distracting. I'd rather do something like this:

        print ^'Let's begin and end with sets of unlikely 2 chars and bingo!'^
        Let's begin and end with sets of unlikely 2 chars and bingo!

    Note the ' inside the string, and how this syntax would have no issue with it, or with whatever else inside, for basically all cases. Too bad markdown can't properly colorize it (yet), so I decided to <pre> it. Sure, the ^ could be any other char; I'm not sure what would look or work better. That sounds good enough to me, though. Probably some other language already has a similar solution. And, just maybe, Python already has such a feature and I overlooked it. I hope this is the case.

    But if it isn't, would it be too hard to, somehow, change Python's interpreter and be able to select an arbitrary (or even standardized) syntax for notating strings? I realize there are many ways to change statements and the whole syntax in general by using pre-compilers, but this is far more specific. And going any of those routes is what I call "too hard". I'm not really needing to do this so, again, I'm just wondering.

    Read the article
