Search Results

Search found 13956 results on 559 pages for 'python memcached'.

Page 70/559 | < Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >

  • Why the "mutable default argument fix" syntax is so ugly, asks python newbie

    - by Cawas
    Following my series of "Python newbie questions", and based on another question: go to http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables and scroll down to "Default Parameter Values". There you can find the following:

        def bad_append(new_item, a_list=[]):
            a_list.append(new_item)
            return a_list

        def good_append(new_item, a_list=None):
            if a_list is None:
                a_list = []
            a_list.append(new_item)
            return a_list

    So the question is: why is the "good" workaround for a known issue as ugly as that, in a programming language that promotes "elegant syntax" and being "easy to use"? Why not have something in the definition itself that attaches the argument name to a "localized" mutable object, like:

        def better_append(new_item, a_list=[].local):
            a_list.append(new_item)
            return a_list

    I'm sure there would be a better way to design this syntax, but I'm also almost positive there's a good reason why it hasn't been done. So, does anyone happen to know why? (A short demonstration of the underlying behaviour follows this entry.)

    Read the article
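
    A short demonstration of why the workaround is needed at all (plain CPython, no extra assumptions): the default value is evaluated once, when the def statement runs, and the resulting list object is stored on the function and reused by every call that omits the argument. A syntax like a_list=[].local would require defaults to be re-evaluated on every call, which is the opposite of the single-evaluation rule Python actually uses.

        def bad_append(new_item, a_list=[]):
            # The [] above is created once, at definition time, and shared.
            a_list.append(new_item)
            return a_list

        print(bad_append(1))              # [1]
        print(bad_append(2))              # [1, 2] - same list as the first call
        print(bad_append.__defaults__)    # ([1, 2],) - the default lives on the function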

  • Python: How to make a cross-module function?

    - by Evan
    I want to be able to call a global function from an imported class. For example, in the file PetStore.py:

        class AnimalSound(object):
            def __init__(self):
                if 'makenoise' in globals():
                    self.makenoise = globals()['makenoise']
                else:
                    self.makenoise = lambda: 'meow'

            def __str__(self):
                return self.makenoise()

    Then, when I test in the Python interpreter:

        >>> def makenoise():
        ...     return 'bark'
        ...
        >>> from PetStore import AnimalSound
        >>> sound = AnimalSound()
        >>> sound.makenoise()
        'meow'

    I get 'meow' instead of 'bark'. I have tried using the solutions provided in python-how-to-make-a-cross-module-variable with no luck. (A sketch of one alternative follows this entry.)

    Read the article
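
    The class only sees its own module's globals(), not the interpreter session that imported it, which is why the lambda fallback wins. A sketch of one alternative that avoids globals() entirely by passing the function in; everything below mirrors the question's names but is illustrative, and is shown as a single runnable file:

        class AnimalSound(object):          # lives in PetStore.py in the real layout
            def __init__(self, makenoise=None):
                # Take the noise function as an argument instead of probing globals().
                self.makenoise = makenoise if makenoise is not None else (lambda: 'meow')

            def __str__(self):
                return self.makenoise()

        def makenoise():
            return 'bark'

        print(AnimalSound(makenoise))   # bark
        print(AnimalSound())            # meow

    Another common pattern is a small shared module (say, settings.py) that both the caller and PetStore.py import and assign to, rather than relying on builtins tricks.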

  • What is the proper way to do logging to a CSV file?

    - by user2003548
    I want to log some information about every single request sent to a busy HTTP server, in a structured form. Using the logging module produces a prefix I don't want:

        [I 131104 15:31:29 Sys:34]

    I'm thinking of CSV format, but I don't know how to customize it. Python has a csv module, and the manual shows:

        import csv
        with open('some.csv', 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerows(someiterable)

    Since this would open and close the file on every request, I'm afraid it would slow down the whole server. What should I do? (A sketch of one approach follows this entry.)

    Read the article
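
    A sketch of one way to avoid both problems, assuming the stdlib logging module: open the file once for the life of the server process and emit comma-separated fields through a plain Formatter, so there is no per-request open/close and no "[I 131104 ...]" prefix. The field names below are made up for illustration.

        import logging

        handler = logging.FileHandler('requests.csv')          # opened once, kept open
        handler.setFormatter(logging.Formatter('%(asctime)s,%(message)s'))

        access_log = logging.getLogger('access')
        access_log.setLevel(logging.INFO)
        access_log.addHandler(handler)

        def log_request(method, path, status, millis):
            # The caller must ensure fields don't themselves contain commas/quotes,
            # or pre-format the row with csv.writer writing into an io.StringIO.
            access_log.info('%s,%s,%d,%.1f', method, path, status, millis)

        log_request('GET', '/index.html', 200, 12.3)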

  • Multiple classes in a Python module

    - by ralphL
    I'm very new to Python (I'm coming from a Java background) and I'm wondering if anyone could help me with some of the Python conventions. Is it normal or "proper" practice to put multiple classes in a module? I have been working with Django, and in the tutorials they place their database model classes in the same module. Is this something that is normally done, or should I stick with one class per module? Is there a reason to do one over the other? Hope I'm being clear and not too generic. Thanks to everyone in advance!

    Read the article

  • Python urllib3 and how to handle cookie support?

    - by bigredbob
    So I'm looking into urllib3 because it has connection pooling and is thread-safe (so performance is better, especially for crawling), but the documentation is... minimal, to say the least. urllib2 has build_opener, so you can do something like:

        #!/usr/bin/python
        import cookielib, urllib2
        cj = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        r = opener.open("http://example.com/")

    But urllib3 has no build_opener method, so the only way I have figured out so far is to manually put the cookie in the header:

        #!/usr/bin/python
        import urllib3
        http_pool = urllib3.connection_from_url("http://example.com")
        myheaders = {'Cookie': 'some cookie data'}
        r = http_pool.get_url("http://example.org/", headers=myheaders)

    I am hoping there is a better way, and that one of you can tell me what it is. Also, can someone tag this with "urllib3", please? (A sketch of the manual approach against current urllib3 follows this entry.)

    Read the article
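
    urllib3 deliberately leaves cookie policy to the caller, so the manual header really is the usual route; what changes in current versions is only the API surface (PoolManager and request() instead of connection_from_url and get_url). A rough sketch that carries a single cookie forward by hand; it ignores multiple Set-Cookie headers, cookie attributes, and expiry:

        import urllib3

        http = urllib3.PoolManager()

        # First request: capture whatever the server asks us to store.
        r1 = http.request('GET', 'http://example.com/')
        cookie = r1.headers.get('Set-Cookie')

        # Later requests: send it back by hand.
        headers = {'Cookie': cookie} if cookie else {}
        r2 = http.request('GET', 'http://example.com/some/page', headers=headers)
        print(r2.status)

    For real cookie handling on top of connection pooling, the requests library (which is built on urllib3) manages a cookie jar automatically via requests.Session().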

  • python interactive web data/forms/interface communicating with remote server

    - by decipher
    What's an efficient (and preferably simple) way to communicate with a remote server and let the user interact with it via the web browser, i.e. submit commands through a text box and see the output in a text area, or through more abstracted, command-less interfaces? I have the standalone Python code for the communication finished and working (terminal/console based right now). My main concern is refactoring that code to suit the web, which involves establishing a connection (Python sockets) and maintaining it while the user is logged on. Some further details: I'm currently using the Django framework for the basic back end and templates. (A sketch of one pattern follows this entry.)

    Read the article
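
    One minimal pattern, sketched under heavy assumptions (a line-oriented text protocol, an invented backend host and port, and one connection per worker process rather than per logged-in user): keep the socket at module level so it outlives individual requests, and expose a small Django view that relays a command and returns the output.

        # views.py (sketch)
        import socket
        from django.http import HttpResponse

        _conn = None   # one persistent connection per worker process

        def _get_connection():
            global _conn
            if _conn is None:
                _conn = socket.create_connection(('backend.example.com', 9000), timeout=5)
            return _conn

        def run_command(request):
            cmd = request.POST.get('cmd', '')
            conn = _get_connection()
            conn.sendall(cmd.encode('utf-8') + b'\n')
            reply = conn.recv(65536)   # naive: assumes one recv() returns the whole answer
            return HttpResponse(reply)

    If each browser user needs their own live session, the usual next step is to key connections by session id, or to move the socket into a separate long-running process that the web tier talks to.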

  • CGI, python, and setgid

    - by user331398
    I'm running a compiled Python CGI script (built with cx_Freeze) under Apache. The script, among other things, calls:

        os.setuid(some_uid)
        os.setgid(some_gid)

    Obviously some_uid/some_gid are legal, and I set the setuid/setgid bits for both user and group and verified they are indeed set. However, on every call I get an error:

        os.setgid(int(self.gid))
        OSError: [Errno 1] Operation not permitted

    As you may notice, setuid() succeeds but setgid() does not, which is very weird, at least to me, though I admit I have little experience with permissions on Linux. Any thoughts/ideas are welcome. I'm using Apache 2.2.15, Python 2.6.5, RHEL 5.4 (kernel 2.6.18). Thank you. (The usual cause and a sketch follow this entry.)

    Read the article
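
    The usual cause of exactly this symptom is ordering: once os.setuid() has dropped the privileged user, the process no longer has permission to change its group. A sketch of the conventional drop order (supplementary groups and group first, user last); some_uid and some_gid are placeholders as in the question:

        import os

        def drop_privileges(some_uid, some_gid):
            # Give up group identities while we still have the privilege to do so,
            # and only then give up the user id.
            os.setgroups([])
            os.setgid(some_gid)
            os.setuid(some_uid)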

  • OptionParser python module - multiple entries of same variable?

    - by jduncan
    I'm writing a little Python script to gather stats from several servers, or from a single server, and I'm using OptionParser to parse the command-line input:

        #!/usr/bin/python
        import sys
        from optparse import OptionParser
        ...
        parser.add_option("-s", "--server", dest="server", metavar="SERVER",
                          type="string",
                          help="server(s) to gather stats [default: localhost]")
        ...

    My goal is to be able to do something like:

        test.py -s server1 -s server2

    and have both of those values appended to the options.server object in some way, so that I can iterate through them whether there is one value or ten. Any thoughts or help is appreciated. Thanks. (A sketch using action="append" follows this entry.)

    Read the article
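
    optparse covers this directly with action="append", which collects every occurrence of the option into a list. A sketch (option names mirror the question; the localhost default is applied after parsing, since a default list would otherwise be appended to):

        #!/usr/bin/python
        from optparse import OptionParser

        parser = OptionParser()
        parser.add_option("-s", "--server", dest="servers", metavar="SERVER",
                          action="append", type="string",
                          help="server(s) to gather stats from [default: localhost]")

        (options, args) = parser.parse_args()
        servers = options.servers or ["localhost"]

        for server in servers:
            print("gathering stats from %s" % server)

    Running ./test.py -s server1 -s server2 then yields options.servers == ['server1', 'server2'].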

  • Repeatedly querying xml using python

    - by Jack
    I have some XML documents I need to run queries on. I've created some Python scripts (using ElementTree) to do this, since I'm vaguely familiar with it. The way it works is that I run the scripts several times with different arguments, depending on what I want to find out. These files can be relatively large (10 MB+), so parsing them takes rather a long time. On my system, just running:

        tree = ElementTree.parse(document)

    takes around 30 seconds, with a subsequent findall query only adding around a second to that. Seeing as my approach requires me to repeatedly parse the file, I was wondering whether there is some sort of caching mechanism I can use so that the ElementTree.parse computation can be reduced on subsequent queries. I realise the smart thing to do here may be to batch as many queries as possible together in the Python script, but I was hoping there might be another way. Thanks. (A sketch of the batching approach follows this entry.)

    Read the article
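
    A sketch of the batching idea, so the 30-second parse is paid once per invocation instead of once per query: accept any number of findall expressions on the command line and run them all against a single parsed tree. (Where it is still available, the C-accelerated cElementTree also parses noticeably faster than the pure-Python module.)

        import sys
        try:
            import xml.etree.cElementTree as ElementTree   # faster parser if present
        except ImportError:
            import xml.etree.ElementTree as ElementTree

        def main():
            # usage: python query.py document.xml QUERY [QUERY ...]
            document, queries = sys.argv[1], sys.argv[2:]
            tree = ElementTree.parse(document)             # the expensive step, done once
            for query in queries:
                matches = tree.findall(query)
                print("%s: %d match(es)" % (query, len(matches)))

        if __name__ == '__main__':
            main()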

  • Are Python properties broken?

    - by jacob
    How can it be that this test case:

        import unittest

        class PropTest(unittest.TestCase):
            def test(self):
                class C():
                    val = 'initial val'

                    def get_p(self):
                        return self.val

                    def set_p(self, prop):
                        if prop == 'legal val':
                            self.val = prop

                    prop = property(fget=get_p, fset=set_p)

                c = C()
                self.assertEqual('initial val', c.prop)
                c.prop = 'legal val'
                self.assertEqual('legal val', c.prop)
                c.prop = 'illegal val'
                self.assertNotEqual('illegal val', c.prop)

    fails as below?

        Failure
        Traceback (most recent call last):
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/unittest.py", line 279, in run
            testMethod()
          File "/Users/jacob/aau/admissions_proj/admissions/plain_old_unit_tests.py", line 24, in test
            self.assertNotEqual('illegal val', c.prop)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/unittest.py", line 358, in failIfEqual
            (msg or '%r == %r' % (first, second))
        AssertionError: 'illegal val' == 'illegal val'

    (The reason and a sketch follow this entry.)

    Read the article
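
    Nothing is broken in property itself; under Python 2, class C(): creates an old-style class, and descriptor setters are only honoured on new-style classes, so c.prop = 'illegal val' simply stores the string on the instance instead of calling set_p. Deriving from object is enough. A sketch:

        class C(object):                       # new-style: the property's setter now runs
            val = 'initial val'

            def get_p(self):
                return self.val

            def set_p(self, value):
                if value == 'legal val':
                    self.val = value

            prop = property(fget=get_p, fset=set_p)

        c = C()
        c.prop = 'legal val'
        print(c.prop)    # legal val
        c.prop = 'illegal val'
        print(c.prop)    # still 'legal val' - the setter rejected the assignment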

  • The "correct" way to define an exception in Python without PyLint complaining

    - by Evgeny
    I'm trying to define my own (very simple) exception class in Python 2.6, but no matter how I do it I get some warning. First, the simplest way:

        class MyException(Exception):
            pass

    This works, but prints a warning at runtime:

        DeprecationWarning: BaseException.message has been deprecated as of Python 2.6

    OK, so that's not the way. I then tried:

        class MyException(Exception):
            def __init__(self, message):
                self.message = message

    This also works, but PyLint reports a warning:

        W0231: MyException.__init__: __init__ method from base class 'Exception' is not called.

    So I tried calling it:

        class MyException(Exception):
            def __init__(self, message):
                super(Exception, self).__init__(message)
                self.message = message

    This works, too! But now PyLint reports an error:

        E1003: MyException.__init__: Bad first argument 'Exception' given to super class

    How the hell do I do such a simple thing without any warnings? (A sketch follows this entry.)

    Read the article
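
    The E1003 error is about passing Exception, rather than the class being defined, as the first argument to super(); with that fixed, the PyLint complaints go away. A sketch targeting Python 2.6 as in the question (whether keeping a separate self.message attribute is worth it at all is a different debate, given the 2.6 deprecation):

        class MyException(Exception):
            def __init__(self, message):
                # Pass the subclass itself to super(), not the base class.
                super(MyException, self).__init__(message)
                self.message = message

        try:
            raise MyException("something went wrong")
        except MyException as exc:
            print(str(exc))    # something went wrong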

  • Monitor and Terminate Python script based on system resource use

    - by Vincent
    What is the "right" or "best" way to monitor the system resources a Python script is using, and to terminate it if the resource use exceeds some predetermined values? In my case memory usage is the concern. I am not asking how to measure the system resource use, although I am open to suggestions. As a simple example, let's assume I have a function that finds prime numbers less than some large number and adds them to a list based on some condition. I don't know ahead of time how many prime numbers will satisfy the condition, so I want to be sure to terminate the function if I use up too much system memory (8 GB, let's say). I know that there are ways to monitor the size of Python objects. What I don't know is whether the proper approach is to just include a size test in the prime-function loop and exit if it exceeds 8 GB, or whether there is an "external" way to monitor and exit. (A sketch of an in-loop check follows this entry.)

    Read the article
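
    One self-contained option (a sketch, Unix-only, with thresholds invented for illustration): check the process's own peak memory via the resource module every so often inside the loop and raise once it passes the limit. A harder "external" guarantee is resource.setrlimit(resource.RLIMIT_AS, ...), which makes the operating system enforce the cap and deliver a MemoryError for you.

        import resource

        MEMORY_LIMIT_KB = 8 * 1024 * 1024   # ~8 GB; ru_maxrss is kB on Linux, bytes on macOS
        CHECK_EVERY = 10000                 # don't pay the getrusage() cost on every iteration

        class MemoryLimitExceeded(Exception):
            pass

        def is_prime(n):
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 1
            return n >= 2

        def primes_satisfying(limit, condition):
            found = []
            for i, candidate in enumerate(range(2, limit)):
                if is_prime(candidate) and condition(candidate):
                    found.append(candidate)
                if i % CHECK_EVERY == 0:
                    usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
                    if usage > MEMORY_LIMIT_KB:
                        raise MemoryLimitExceeded("stopped after %d primes" % len(found))
            return found

        print(len(primes_satisfying(100000, lambda p: p % 10 == 7)))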

  • Recommended Python publish/subscribe/dispatch module ?

    - by Eli Bendersky
    From PyPubSub:

        Pypubsub provides a simple way for your Python application to decouple its components: parts of your application can publish messages (with or without data) and other parts can subscribe/receive them. This allows message "senders" and message "listeners" to be unaware of each other:

        - one doesn't need to import the other
        - a sender doesn't need to know "who" gets the messages, what the listeners will do with the data, or even if any listener will get the message data
        - similarly, listeners don't need to worry about where messages come from

        This is a great tool for implementing a Model-View-Controller architecture or any similar architecture that promotes decoupling of its components.

    There seem to be quite a few Python modules for publishing/subscribing floating around the web, from PyPubSub to PyDispatcher to simple "home-cooked" classes. Can you recommend a module that works well in most cases? Which modules have you had positive experience with? Negative? Thanks in advance. (A minimal home-cooked sketch follows this entry.)

    Read the article
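
    For cases where a dependency feels like overkill, the "home-cooked" variant can be as small as the sketch below. It is shown for comparison only, not as a recommendation over PyPubSub or PyDispatcher, which add things like weak references to listeners and more careful error handling that this toy omits.

        from collections import defaultdict

        class Dispatcher(object):
            """Minimal publish/subscribe hub: senders and listeners never meet."""

            def __init__(self):
                self._listeners = defaultdict(list)

            def subscribe(self, topic, callback):
                self._listeners[topic].append(callback)

            def publish(self, topic, **data):
                for callback in self._listeners[topic]:
                    callback(**data)

        def on_model_changed(field, value):
            print("view sees %s=%r" % (field, value))

        bus = Dispatcher()
        bus.subscribe('model.changed', on_model_changed)
        bus.publish('model.changed', field='name', value='Eli')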

  • Python/C "defs" file - what is it?

    - by detly
    In the nautilus-python bindings there is a file "nautilus.defs". It contains stanzas like:

        (define-interface MenuProvider
          (in-module "Nautilus")
          (c-name "NautilusMenuProvider")
          (gtype-id "NAUTILUS_TYPE_MENU_PROVIDER")
        )

    or

        (define-method get_mime_type
          (of-object "NautilusFileInfo")
          (c-name "nautilus_file_info_get_mime_type")
          (return-type "char*")
        )

    Now, I can see what most of these do (e.g. that last one means I can call the method get_mime_type on a FileInfo object). But I'd like to know: what is this file, exactly (i.e. what do I search the web for to find out more)? Is it a common thing to find in Python/C bindings? What is the format, and where is it documented? What program actually processes it? (So far, I've managed to glean that it gets transformed into a C source file, and it looks a bit like Lisp to me.)

    Read the article

  • GIS: line_locate_point() in Python

    - by miracle2k
    I'm pretty much a beginner when it comes to GIS, but I think I understand the basics; it doesn't seem too hard. But all these acronyms and different libraries (GEOS, GDAL, PROJ, PCL, Shapely, OpenGEO, OGR, OGC, OWS and what not, each seemingly depending on any number of the others) are slightly overwhelming me. Here's what I would like to do: given a number of points and a linestring, I want to determine the location on the line closest to a certain point. In other words, what PostGIS's line_locate_point() does: http://postgis.refractions.net/documentation/manual-1.3/ch06.html#line%5Flocate%5Fpoint Except I want to use plain Python. Which library or libraries should I look at generally for doing these kinds of spatial calculations in Python, and is there one that specifically supports a line_locate_point() equivalent? (A sketch using Shapely follows this entry.)

    Read the article
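
    Shapely, which wraps GEOS (the same geometry engine PostGIS builds on), covers this particular case: LineString.project() is the line_locate_point() equivalent, and interpolate() goes back from a position along the line to a point. A sketch:

        from shapely.geometry import LineString, Point

        line = LineString([(0, 0), (10, 0)])
        point = Point(3, 4)

        # Fraction of the line's length at the closest point, like ST_Line_Locate_Point.
        fraction = line.project(point, normalized=True)
        print(fraction)     # 0.3

        # And the actual closest point on the line.
        print(line.interpolate(fraction, normalized=True))    # POINT (3 0)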

  • Reading a Delphi binary file in Python

    - by Brendan
    I have a file that was written with the following Delphi declaration:

        Type
          Tfulldata = Record
            dpoints, dloops : integer;
            dtime, bT, sT, hI, LI : real;
            tm : real;
            data : array[1..armax] Of Real;
          End;
        ...
        Var
          fh: File Of Tfulldata;

    I want to analyse the data in these files (many MB in size) using Python if possible. Is there an easy way to read in the data and cast it into Python objects similar in form to the Delphi records? Does anyone know of a library that does this? (A sketch using the struct module follows this entry.)

    Read the article
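
    If those real fields are the modern Delphi Real (an 8-byte double) rather than the legacy 6-byte Real48, the stdlib struct module can unpack each fixed-size record directly. A sketch; ARMAX is a placeholder that must be set to the Delphi armax constant, and little-endian layout with no record padding is assumed:

        import struct
        from collections import namedtuple

        ARMAX = 100   # must match the Delphi 'armax' constant
        # two 4-byte integers, six doubles (dtime, bT, sT, hI, LI, tm), then the data array
        RECORD = struct.Struct('<2i6d%dd' % ARMAX)

        FullData = namedtuple('FullData', 'dpoints dloops dtime bT sT hI LI tm data')

        def read_records(path):
            with open(path, 'rb') as fh:
                while True:
                    chunk = fh.read(RECORD.size)
                    if len(chunk) < RECORD.size:
                        break
                    fields = RECORD.unpack(chunk)
                    yield FullData(*fields[:8], data=fields[8:])

        for record in read_records('fulldata.dat'):
            print(record.dpoints, record.dtime, len(record.data))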

  • What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?

    - by Ivan
    Criteria for "better": fast at math and at simple (few fields, many records) DB transactions, convenient to develop/read/extend, flexible, connectible. The task is to use a common web-development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting/inserting sets of floats and doing maths with them). The choice is between Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, and JavaScript (node.js). All the data is to be stored in a relational database (due to its heavily multidimensional nature), and all communication with the outside world is to be done by means of web services.

    Read the article

  • Python: how to tell if screen is running

    - by Joe Spoon
    Hello, I am very new to programming and am trying to write a Python script that checks whether the screen program is running and, if it is, does not run the rest of the code. This is what I have, and it's not working:

        #!/usr/bin/python
        import os

        var1 = os.system('screen -r /root/screenlog/screen.log')
        fd = open("/root/screenlog/screen.log")
        content = fd.readline()
        while content:
            if content == "There is no screen to be resumed.":
                os.system('/etc/init.d/tunnel.sh')
                print "The tunnel is now active."
            else:
                print "The tunnel is running."
        fd.close()

    I know there are probably several things here that don't need to be, and quite a few that I'm missing. I will be running this program from cron. Thanks for any help. (A sketch using subprocess follows this entry.)

    Read the article
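
    A sketch using subprocess instead of scraping a log file: screen -ls prints "No Sockets found" when nothing is running, and since its exit code is not reliable across versions, the output text is the safer thing to test. The tunnel script path mirrors the question.

        import subprocess

        def screen_is_running():
            proc = subprocess.Popen(['screen', '-ls'],
                                    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
            output = proc.communicate()[0]
            return b'No Sockets found' not in output

        if screen_is_running():
            print("The tunnel is running.")
        else:
            subprocess.call(['/etc/init.d/tunnel.sh'])
            print("The tunnel is now active.")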

  • Hashing a python method to regenerate output when method is modified

    - by Seth Johnson
    I have a Python method that has a deterministic result. It takes a long time to run and generates a large output:

        def time_consuming_method():
            # lots_of_computing_time to come up with the_result
            return the_result

    I modify time_consuming_method from time to time, but I would like to avoid having it run again while it's unchanged. [time_consuming_method only depends on functions that are immutable for the purposes considered here; i.e. it might use functions from Python libraries, but not other pieces of my code that I'd change.] The solution that suggests itself to me is to cache the output and also cache some "hash" of the function. If the hash changes, the function will have been modified, and we have to re-generate the output. Is this possible, or a ridiculous idea? If this isn't a terrible idea, is the best implementation to write:

        f = """
        def ridiculous_method():
            a = # lots_of_computing_time
            return a
        """

    then use the hashlib module to compute a hash for f, and use compile or eval to run it as code? (A sketch using inspect.getsource follows this entry.)

    Read the article
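
    A sketch of the caching idea without the eval/compile detour: hash the function's own source text with inspect.getsource() and hashlib, and reuse a pickled result while the hash is unchanged. As in the question, this only notices edits to the function itself, not to library code it calls; file names below are illustrative.

        import hashlib
        import inspect
        import os
        import pickle

        def cached_call(func, cache_path='result.pickle'):
            """Re-run func only when its source text has changed since the last run."""
            digest = hashlib.sha1(inspect.getsource(func).encode('utf-8')).hexdigest()
            if os.path.exists(cache_path):
                with open(cache_path, 'rb') as f:
                    saved = pickle.load(f)
                if saved['digest'] == digest:
                    return saved['result']     # unchanged source: reuse cached output
            result = func()
            with open(cache_path, 'wb') as f:
                pickle.dump({'digest': digest, 'result': result}, f)
            return result

        def time_consuming_method():
            return sum(i * i for i in range(10 ** 6))   # stand-in for the real computation

        print(cached_call(time_consuming_method))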
