Search Results

Search found 13542 results on 542 pages for 'python socketserver'.


  • How do I use Django to insert a Geometry Field into the database?

    - by alex
    class LocationLog(models.Model): user = models.ForeignKey(User) utm = models.GeometryField(spatial_index=True) This is my database model. I would like to insert a row: a circle at point (-55, 333) with a radius of 10. How can I put this circle into the geometry field? Of course, I would then want to check which circles overlap a given circle (my select statement).

    Read the article
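
    One approach, sketched below on the assumption that GeoDjango (django.contrib.gis) is the backend behind GeometryField, is to build the circle as a buffered point with GEOS and save it like any other field value; the overlap check then becomes a spatial lookup. Model and field names follow the question; some_user is a placeholder.

        from django.contrib.gis.geos import Point

        # a "circle" stored as a polygon: a point buffered by the radius
        circle = Point(-55, 333).buffer(10)
        log = LocationLog.objects.create(user=some_user, utm=circle)

        # all rows whose stored geometry overlaps a given circle
        other = Point(-50, 330).buffer(10)
        overlapping = LocationLog.objects.filter(utm__intersects=other)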

  • How do I store multiple copies of the same field in Django?

    - by Alistair
    I'm storing OLAC metadata which describes linguistic resources. Many of the elements of the metadata are repeatable -- for example, a resource can have two languages, three authors and four dates associated with it. Is there any way of storing this in one model? It seems like overkill to define a model for each repeatable metadata element -- especially since the models will only have one field: its value.

    Read the article
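
    One common way to avoid a model per element is a single generic element model keyed by the element name, so each repeatable value becomes one row. A rough sketch of that shape (model and field names here are invented, not from the question):

        from django.db import models

        class Resource(models.Model):
            title = models.CharField(max_length=255)

        class MetadataElement(models.Model):
            # one row per value, so a resource can have any number of
            # languages, authors, dates, and so on
            resource = models.ForeignKey(Resource, related_name='elements')
            element = models.CharField(max_length=50)   # e.g. 'language', 'author', 'date'
            value = models.CharField(max_length=255)

        # resource.elements.filter(element='language') then returns every language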

  • Import boto from local library

    - by ensnare
    I'm trying to use boto as a downloaded library, rather than installing it globally on my machine. I'm able to import boto, but when I run boto.connect_dynamodb() I get an error: ImportError: No module named dynamodb.layer2 Here's my file structure: project/ project/ __init__.py libraries/ __init__.py flask/ boto/ views/ .... modules/ __init__.py db.py .... templates/ .... static/ .... runserver.py And the contents of the relevant files as follows: project/project/modules/db.py from project.libraries import boto conn = boto.connect_dynamodb( aws_access_key_id='<YOUR_AWS_KEY_ID>', aws_secret_access_key='<YOUR_AWS_SECRET_KEY>') What am I doing wrong? Thanks in advance.

    Read the article
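
    boto's own sub-modules import each other as boto.xxx, so importing it as project.libraries.boto breaks those internal imports (hence the missing dynamodb.layer2). A sketch of one workaround is to put the libraries directory itself on sys.path before importing, so a plain "import boto" works; the path below is an assumption based on the layout shown.

        import os
        import sys

        # make project/libraries importable as a top-level location
        LIB_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', 'libraries')
        sys.path.insert(0, LIB_DIR)

        import boto  # boto can now resolve boto.dynamodb.layer2 internally

        conn = boto.connect_dynamodb(
            aws_access_key_id='<YOUR_AWS_KEY_ID>',
            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')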

  • Condition checking vs. Exception handling

    - by Aidas Bendoraitis
    When is exception handling preferable to condition checking? There are many situations where I can choose between using one or the other. For example, this is a summing function which uses a custom exception: # module mylibrary class WrongSummand(Exception): pass def sum_(a, b): """ returns the sum of two summands of the same type """ if type(a) != type(b): raise WrongSummand("given arguments are not of the same type") return a + b # module application using mylibrary from mylibrary import sum_, WrongSummand try: print sum_("A", 5) except WrongSummand: print "wrong arguments" And this is the same function, which avoids using exceptions: # module mylibrary def sum_(a, b): """ returns the sum of two summands if they are both of the same type """ if type(a) == type(b): return a + b # module application using mylibrary from mylibrary import sum_ c = sum_("A", 5) if c is not None: print c else: print "wrong arguments" I think that using conditions is always more readable and manageable. Or am I wrong? What are the proper cases for defining APIs which raise exceptions and why?

    Read the article
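
    A middle ground often suggested for cases like the one above is to raise one of Python's built-in exceptions rather than a custom one: callers that care can catch it, and callers that don't get a loud failure instead of a silent None. A minimal sketch of that variant:

        def sum_(a, b):
            """Return the sum of two summands of the same type."""
            if type(a) is not type(b):
                # built-in TypeError: nothing extra to import on the caller's side
                raise TypeError("summands must be the same type, got %s and %s"
                                % (type(a).__name__, type(b).__name__))
            return a + b

        try:
            print sum_("A", 5)
        except TypeError:
            print "wrong arguments"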

  • Duplicate django query set?

    - by Piotr Czapla
    I have a simple Django queryset like: qs = AModel.objects.exclude(state="F").order_by("order") I'd like to use it as follows: qs[0:3].update(state='F') expected = qs[3] # throws error here But the last statement throws: "Cannot update a query once a slice has been taken." How can I duplicate the query set?

    Read the article
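
    Querysets are lazy and cheap to construct, so one way around the restriction is not to reuse the sliced object at all: take the primary keys first, update through a fresh queryset, and then re-evaluate. A sketch, using the model and field names from the question:

        qs = AModel.objects.exclude(state="F").order_by("order")

        # grab the ids of the first three rows before slicing for the update
        first_ids = list(qs.values_list("pk", flat=True)[:3])
        AModel.objects.filter(pk__in=first_ids).update(state="F")

        # build the queryset again instead of reusing the sliced one;
        # what used to be qs[3] is now the first remaining non-"F" row
        expected = AModel.objects.exclude(state="F").order_by("order")[0]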

  • Twisted - how to create a multi-protocol process and send data between the protocols

    - by SpankMe
    Hey, I'm trying to write a program that listens for data (simple text messages) on some port (say TCP 6666) and then passes them on to one or more different protocols - IRC, XMPP and so on. I've tried many approaches and dug through the Internet, but I can't find an easy, working solution for this task. The code I am currently fighting with is here: http://pastebin.com/ri7caXih I would like to know how, from an object like ircf = ircFactory('asdfasdf', '#asdf666'), to get access to that protocol's methods, because self.protocol.dupa1(msg) returns an error about self not being passed to the active protocol object. Or maybe there is another, better, easier and more kosher way to create a single reactor with multiple protocols, have actions triggered when a message arrives on any of them, and then pass that message to the other protocols for handling/processing/sending? Any help will be highly appreciated!

    Read the article
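
    The usual pattern in Twisted is to let each factory keep a reference to its live protocol instance (set in connectionMade) and give the factories references to each other; a message arriving on one connection can then be pushed out through the others. A rough sketch along those lines - names are illustrative, not the code from the pastebin:

        from twisted.internet import reactor, protocol
        from twisted.protocols.basic import LineReceiver

        class RelayProtocol(LineReceiver):
            def connectionMade(self):
                self.factory.instance = self          # factory now knows its live protocol

            def lineReceived(self, line):
                for peer in self.factory.peers:        # forward to the other factories
                    peer.send(line)

        class RelayFactory(protocol.Factory):
            protocol = RelayProtocol

            def __init__(self):
                self.instance = None
                self.peers = []

            def send(self, line):
                if self.instance is not None:
                    self.instance.sendLine(line)

        listener = RelayFactory()
        ircf = RelayFactory()                          # stand-in for the IRC-facing side
        listener.peers.append(ircf)
        ircf.peers.append(listener)
        reactor.listenTCP(6666, listener)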

  • SelfReferenceProperty vs. ListProperty Google App Engine

    - by John
    Hi All, I am experimenting with the Google App Engine and have a question. For the sake of simplicity, let's say my app is modeling a computer network (a fairly large corporate network with 10,000 nodes). I am trying to model my Node class as follows: class Node(db.Model): name = db.StringProperty() neighbors = db.SelfReferenceProperty() Let's suppose, for a minute, that I cannot use a ListProperty(). Based on my experiments to date, I can assign only a single entity to 'neighbors' - and I cannot use the "virtual" collection (node_set) to access the list of Node neighbors. So... my questions are: Does SelfReferenceProperty limit you to a single entity that you can reference? If I instead use a ListProperty, I believe I am limited to 5,000 keys, which I need to exceed. Thoughts? Thanks, John

    Read the article
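
    For what it's worth, SelfReferenceProperty behaves like ReferenceProperty pointing at the same kind, so it holds a single entity; a neighbour list usually ends up as a ListProperty of keys, and if that list would grow too large, the edges can be split into their own small entities. A sketch of both shapes, as an assumption about how the model could be laid out:

        from google.appengine.ext import db

        class Node(db.Model):
            name = db.StringProperty()
            # neighbour keys kept on the node itself
            neighbor_keys = db.ListProperty(db.Key)

        # alternative: one small entity per edge, so no per-node list limit
        class Edge(db.Model):
            src = db.ReferenceProperty(Node, collection_name='outgoing')
            dst = db.ReferenceProperty(Node, collection_name='incoming')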

  • Is using os.path.abspath to validate an untrusted filename's location secure?

    - by mcmt
    I don't think I'm missing anything. Then again I'm kind of a newbie. def GET(self, filename): name = urllib.unquote(filename) full = path.abspath(path.join(STATIC_PATH, filename)) #Make sure request is not tricksy and tries to get out of #the directory, e.g. filename = "../.ssh/id_rsa". GET OUTTA HERE assert full[:len(STATIC_PATH)] == STATIC_PATH, "bad path" return open(full).read() Edit: I realize this will return the wrong HTTP error code if the file doesn't exist (at least under web.py). I will fix this.

    Read the article
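
    The prefix check above mostly works, but comparing against the bare prefix can still accept sibling directories such as STATIC_PATH + '_private'; appending the path separator before comparing tightens it. A small sketch of that variant, still assuming STATIC_PATH is itself an absolute, normalised path:

        import os

        def safe_static_path(static_root, untrusted_name):
            static_root = os.path.abspath(static_root)
            full = os.path.abspath(os.path.join(static_root, untrusted_name))
            # require the resolved path to live strictly inside the static root
            if not full.startswith(static_root + os.sep):
                raise ValueError("bad path: %r escapes the static directory" % untrusted_name)
            return full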

  • Find subset with K elements that are closest to each other

    - by Nima
    Given an array of N integers, how can you efficiently find a subset of size K with elements that are closest to each other? Let the closeness for a subset (x1, x2, x3, ..., xk) be defined as the sum of the pairwise absolute differences |xi - xj|. Constraints: 2 <= N <= 10^5, 2 <= K <= N. The array may contain duplicates and is not guaranteed to be sorted. My brute force solution is very slow for large N, and it doesn't check if there's more than 1 solution: N = input() K = input() assert 2 <= N <= 10**5 assert 2 <= K <= N a = [] for i in xrange(0, N): a.append(input()) a.sort() minimum = sys.maxint startindex = 0 for i in xrange(0,N-K+1): last = i + K tmp = 0 for j in xrange(i, last): for l in xrange(j+1, last): tmp += abs(a[j]-a[l]) if(tmp > minimum): break if(tmp < minimum): minimum = tmp startindex = i #end index = startindex + K? Examples: N = 7 K = 3 array = [10,100,300,200,1000,20,30] result = [10,20,30] N = 10 K = 4 array = [1,2,3,4,10,20,30,40,100,200] result = [1,2,3,4]

    Read the article
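
    Because the cost is the sum of pairwise absolute differences, the optimal subset is always a window of K consecutive values in the sorted array, and each window's cost can be computed in O(1) from two prefix sums, giving O(N log N) overall. A sketch of that idea (input reading left out):

        def closest_k(a, K):
            a = sorted(a)
            N = len(a)
            # prefix sums of a[j] and of j * a[j]
            P = [0] * (N + 1)
            W = [0] * (N + 1)
            for j in xrange(N):
                P[j + 1] = P[j] + a[j]
                W[j + 1] = W[j] + j * a[j]

            best_cost, best_l = None, 0
            for l in xrange(N - K + 1):
                s = P[l + K] - P[l]              # sum of the window
                w = W[l + K] - W[l] - l * s      # sum of (index within window) * value
                cost = 2 * w - (K - 1) * s       # sum of |a[i] - a[j]| over pairs in the window
                if best_cost is None or cost < best_cost:
                    best_cost, best_l = cost, l
            return a[best_l:best_l + K], best_cost

        # closest_k([10, 100, 300, 200, 1000, 20, 30], 3) -> ([10, 20, 30], 40)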

  • How can I set the 'blob-key' key with the Blobstore?

    - by pyleaf
    I use the jQuery plugin "uploadify" to upload multiple files to my app (GAE) and then save them with the blobstore, but it fails. I debugged the code down into get_uploads; it seems field.type_options is empty and so has no 'blob-key'. Q: where does the key 'blob-key' come from? Thank you!

    Read the article
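
    The 'blob-key' type option is only added by App Engine when the POST goes through a URL generated by blobstore.create_upload_url(); a form (or uploadify) posting straight to your own handler never gets it. A rough sketch of the upload side; the handler path and field name here are assumptions:

        from google.appengine.ext import blobstore
        from google.appengine.ext.webapp import blobstore_handlers

        class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
            def post(self):
                uploads = self.get_uploads('Filedata')   # uploadify's default field name
                blob_info = uploads[0]
                self.response.out.write(str(blob_info.key()))

        # the client must post to a freshly generated upload URL, e.g.
        # upload_url = blobstore.create_upload_url('/upload')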

  • Generating two thumbnails from the same image in Django

    - by Titus
    Hello, this seems like quite an easy problem but I can't figure out what is going on here. Basically, what I'd like to do is create two different thumbnails from one image on a Django model. What ends up happening is that it seems to be looping and recreating the same image (while appending an underscore to it each time) until it throws an error that the filename is too long. So, you end up with something like: OSError: [Errno 36] File name too long: 'someimg________________etc.jpg' Here is the code: def save(self, *args, **kwargs): if self.image: iname = os.path.split(self.image.name)[-1] fname, ext = os.path.splitext(iname) tlname, tsname = fname + '_thumb_l' + ext, fname + '_thumb_s' + ext self.thumb_large.save(tlname, make_thumb(self.image, size=(250,250))) self.thumb_small.save(tsname, make_thumb(self.image, size=(100,100))) super(Artist, self).save(*args, **kwargs) def make_thumb(infile, size=(100,100)): infile.seek(0) image = Image.open(infile) if image.mode not in ('L', 'RGB'): image.convert('RGB') image.thumbnail(size, Image.ANTIALIAS) temp = StringIO() image.save(temp, 'png') return ContentFile(temp.getvalue()) I didn't show imports for the sake of brevity. Assume there are two ImageFields on the Artist model: thumb_large, and thumb_small. If this isn't the correct way to do it, I'd appreciate any feedback. Thanks!

    Read the article
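
    A likely cause of the loop is that FieldFile.save() defaults to save=True, so each thumb_*.save() call re-enters the model's save() and regenerates the thumbnails with another suffix appended. Passing save=False and letting the final super().save() persist everything is the usual fix; a sketch of just the changed method:

        def save(self, *args, **kwargs):
            if self.image:
                iname = os.path.split(self.image.name)[-1]
                fname, ext = os.path.splitext(iname)
                tlname, tsname = fname + '_thumb_l' + ext, fname + '_thumb_s' + ext
                # save=False keeps the FieldFile from calling the model's save() again
                self.thumb_large.save(tlname, make_thumb(self.image, size=(250, 250)), save=False)
                self.thumb_small.save(tsname, make_thumb(self.image, size=(100, 100)), save=False)
            super(Artist, self).save(*args, **kwargs)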

  • Unable to control requests for static files on Google App Engine

    - by dan
    My simple GAE app is not redirecting requests to the /static directory when the URL is multiple levels deep. Dir structure: /app/static/css/main.css App: I have two handlers, one for /app and one for /app/new. app.yaml: handlers: - url: /static static_dir: static - url: /app/static/(.*) static_dir: static\1 - url: /app/.* script: app.py login: required HTML: Description: When the page is loaded from /app, the HTTP request for main.css is successful: GET /static/css/main.css But when the page is loaded from /app/new I see the following request: GET /app/static/css/main.css That's when I tried adding the /app/static/(.*) entry to the app.yaml, but it is not having any effect.

    Read the article

  • How to create instances of related models in Django

    - by sevennineteen
    I'm working on a CMSy app for which I've implemented a set of models which allow for creation of custom Template instances, made up of a number of Fields and tied to a specific Customer. The end-goal is that one or more templates with a set of custom fields can be defined through the Admin interface and associated to a customer, so that customer can then create content objects in the format prescribed by the template. I seem to have gotten this hooked up such that I can create any number of Template objects, but I'm struggling with how to create instances - actual content objects - in those templates. For example, I can define a template "Basic Page" for customer "Acme" which has the fields "Title" and "Body", but I haven't figured out how to create Basic Page instances where these fields can be filled in. Here are my (somewhat elided) models... class Customer(models.Model): ... class Field(models.Model): ... class Template(models.Model): label = models.CharField(max_length=255) clients = models.ManyToManyField(Customer, blank=True) fields = models.ManyToManyField(Field, blank=True) class ContentObject(models.Model): label = models.CharField(max_length=255) template = models.ForeignKey(Template) author = models.ForeignKey(User) customer = models.ForeignKey(Customer) mod_date = models.DateTimeField('Modified Date', editable=False) def __unicode__(self): return '%s (%s)' % (self.label, self.template) def save(self): self.mod_date = datetime.datetime.now() super(ContentObject, self).save() Thanks in advance for any advice!

    Read the article
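
    One way to get actual content into those templates is an extra "value" model that ties a ContentObject to one of its template's Fields, entity-attribute-value style; creating a Basic Page then means creating one ContentObject plus one value row per field. A sketch using the models from the question (FieldValue, basic_page, some_user and acme are made-up names):

        class FieldValue(models.Model):
            content_object = models.ForeignKey(ContentObject, related_name='values')
            field = models.ForeignKey(Field)
            value = models.TextField(blank=True)

        # creating a "Basic Page" instance for a customer
        page = ContentObject(label='Welcome', template=basic_page,
                             author=some_user, customer=acme)
        page.save()
        for f in basic_page.fields.all():
            FieldValue.objects.create(content_object=page, field=f, value='...')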

  • Serve external template in Django

    - by AlexeyMK
    Hey, I want to do something like return render_to_response("http://docs.google.com/View?id=bla", args) and serve an external page with Django arguments. Django doesn't like this (it looks for templates in very particular places). What's the easiest way to make this work? Right now I'm thinking of using urllib to save the page somewhere locally on my server and then serving it with the templates pointing there. Note: I'm not looking for anything particularly scalable here, I realize my proposal above is a little dirty.

    Read the article
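
    render_to_response only consults the configured template loaders, but Django will happily render a template built from any string, so one option is to fetch the page and feed it to Template directly. A sketch of that shape (Python 2 urllib2, no caching; the context contents are a placeholder):

        import urllib2
        from django.http import HttpResponse
        from django.template import Template, Context

        def external(request):
            raw = urllib2.urlopen('http://docs.google.com/View?id=bla').read()
            rendered = Template(raw).render(Context({'args': 'whatever you were passing'}))
            return HttpResponse(rendered)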

  • Deterministic key serialization

    - by Mike Boers
    I'm writing a mapping class which uses SQLite as the storage backend. I am currently allowing only basestring keys but it would be nice if I could use a couple more types hopefully up to anything that is hashable (ie. same requirements as the builtin dict). To that end I would like to derive a deterministic serialization scheme. Ideally, I would like to know if any implementation/protocol combination of pickle is deterministic for hashable objects (e.g. can only use cPickle with protocol 0). I noticed that pickle and cPickle do not match: >>> import pickle >>> import cPickle >>> def dumps(x): ... print repr(pickle.dumps(x)) ... print repr(cPickle.dumps(x)) ... >>> dumps(1) 'I1\n.' 'I1\n.' >>> dumps('hello') "S'hello'\np0\n." "S'hello'\np1\n." >>> dumps((1, 2, 'hello')) "(I1\nI2\nS'hello'\np0\ntp1\n." "(I1\nI2\nS'hello'\np1\ntp2\n." Another option is to use repr to dump and ast.literal_eval to load. This would only be valid for builtin hashable types. I have written a function to determine if a given key would survive this process (it is rather conservative on the types it allows): def is_reprable_key(key): return type(key) in (int, str, unicode) or (type(key) == tuple and all( is_reprable_key(x) for x in key)) The question for this method is if repr itself is deterministic for the types that I have allowed here. I believe this would not survive the 2/3 version barrier due to the change in str/unicode literals. This also would not work for integers where 2**32 - 1 < x < 2**64 jumping between 32 and 64 bit platforms. Are there any other conditions (ie. do strings serialize differently under different conditions)? (If this all fails miserably then I can store the hash of the key along with the pickle of both the key and value, then iterate across rows that have a matching hash looking for one that unpickles to the expected key, but that really does complicate a few other things and I would rather not do it.) Any insights?

    Read the article
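
    If the allowed key types really are just int/long, str, unicode and tuples of those, a small hand-rolled, length-prefixed encoding sidesteps both repr's version quirks and pickle's protocol differences. A sketch of what such a canonical encoder could look like (the format is invented here, not a standard):

        def canonical(key):
            """Deterministically encode a restricted set of hashable keys to a str."""
            if isinstance(key, bool):              # check before int: True is an int
                return 'b:%d' % key
            if isinstance(key, (int, long)):
                return 'i:%d' % key
            if isinstance(key, str):
                return 's:%d:%s' % (len(key), key)
            if isinstance(key, unicode):
                raw = key.encode('utf-8')
                return 'u:%d:%s' % (len(raw), raw)
            if isinstance(key, tuple):
                parts = [canonical(x) for x in key]
                return 't:%d:%s' % (len(parts), ''.join(parts))
            raise TypeError('unsupported key type: %r' % type(key))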

  • How to add a context processor from a Django app

    - by Edan Maor
    Say I'm writing a Django app, and all the templates in the app require a certain variable. The "classic" way to deal with this, afaik, is to write a context processor and add it to TEMPLATE_CONTEXT_PROCESSORS in the settings.py. My question is, is this the right way to do it, considering that apps are supposed to be "independent" from the actual project using them? In other words, when deploying that app to a new project, is there any way to avoid the project having to explicitly mess around with its settings?

    Read the article
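
    The context processor itself can live inside the app; the only project-level step left is the one line that registers it, which is hard to avoid because Django reads TEMPLATE_CONTEXT_PROCESSORS from settings. A minimal sketch of that shape, with placeholder module and variable names:

        # myapp/context_processors.py
        def common_vars(request):
            # whatever every template in the app needs
            return {'SITE_NAME': 'example'}

        # project settings.py -- the one unavoidable line:
        # TEMPLATE_CONTEXT_PROCESSORS += ('myapp.context_processors.common_vars',)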

  • How to make scipy.interpolate give an extrapolated result beyond the input range?

    - by Salim Fadhley
    I'm trying to port a program which uses a hand-rolled interpolator (developed by a mathematician colleague) over to use the interpolators provided by scipy. I'd like to use or wrap the scipy interpolator so that its behavior is as close as possible to the old interpolator. A key difference between the two functions is that if the input value is above or below the input range, our original interpolator will extrapolate the result. If you try this with the scipy interpolator it raises a ValueError. Consider this program as an example: import numpy as np from scipy import interpolate x = np.arange(0,10) y = np.exp(-x/3.0) f = interpolate.interp1d(x, y) print f(9) print f(11) # Causes ValueError, because it's greater than max(x) Is there a sensible way to make it so that instead of crashing, the final line will simply do a linear extrapolation, continuing the gradients defined by the first and last two points to infinity? Note that in the real software I'm not actually using the exp function - that's here for illustration only!

    Read the article
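
    One way to keep scipy's interpolator but recover the old behaviour is to wrap it: inside the input range delegate to interp1d, outside it continue the slope of the nearest two points. A sketch along those lines (it uses the .x/.y arrays stored on the interp1d object):

        import numpy as np
        from scipy import interpolate

        def extrap1d(interpolator):
            xs, ys = interpolator.x, interpolator.y

            def pointwise(x):
                if x < xs[0]:
                    return ys[0] + (x - xs[0]) * (ys[1] - ys[0]) / (xs[1] - xs[0])
                elif x > xs[-1]:
                    return ys[-1] + (x - xs[-1]) * (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])
                return interpolator(x)

            return pointwise

        x = np.arange(0, 10)
        f = extrap1d(interpolate.interp1d(x, np.exp(-x / 3.0)))
        print f(9), f(11)   # f(11) now extrapolates instead of raising ValueError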

  • How small is *too small* for an open-source project?

    - by Adam Lewis
    I have a fair number of smaller projects / libraries that I have been using over the past 2 years. I am thinking about moving them to Google Code to make it easier to share with co-workers and easier to import them into new projects in my own environments. They are things like simple FSMs, CAN (Controller Area Network) drivers, and GPIB drivers. Most of them are small (less than 500 lines), so it makes me wonder: are these types of things too small for a stand-alone open-source project? Note that I would like to make them open source because they do not give me, or my company, any real advantage.

    Read the article

  • How can I draw a log-normalized imshow plot with a colorbar representing the raw data in matplotlib

    - by Adam Fraser
    I'm using matplotlib to plot log-normalized images but I would like the original raw image data to be represented in the colorbar rather than the [0-1] interval. I get the feeling there's a more matplotlib'y way of doing this by using some sort of normalization object and not transforming the data beforehand... in any case, there could be negative values in the raw image. import matplotlib.pyplot as plt import numpy as np def log_transform(im): '''returns log(image) scaled to the interval [0,1]''' try: (min, max) = (im[im > 0].min(), im.max()) if (max > min) and (max > 0): return (np.log(im.clip(min, max)) - np.log(min)) / (np.log(max) - np.log(min)) except: pass return im a = np.ones((100,100)) for i in range(100): a[i] = i f = plt.figure() ax = f.add_subplot(111) res = ax.imshow(log_transform(a)) # the colorbar drawn shows [0-1], but I want to see [0-99] cb = f.colorbar(res) I've tried using cb.set_array, but that didn't appear to do anything, and cb.set_clim, but that rescales the colors completely. Thanks in advance for any help :)

    Read the article
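
    matplotlib's normalization objects do exactly this: pass norm=LogNorm(...) to imshow and the colorbar is labelled in the raw data units, with no manual transform of the image. LogNorm only handles strictly positive values, so negative pixels would still need clipping or a different norm; a sketch of the positive case:

        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.colors import LogNorm

        a = np.ones((100, 100))
        for i in range(100):
            a[i] = i
        a = a.clip(min=0.01)   # LogNorm needs strictly positive data

        f = plt.figure()
        ax = f.add_subplot(111)
        res = ax.imshow(a, norm=LogNorm(vmin=a.min(), vmax=a.max()))
        cb = f.colorbar(res)   # tick labels show the raw values, not [0, 1]
        plt.show()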

  • twisted reactor stops too early

    - by pygabriel
    I'm writing a batch script that connects to a TCP server and then exits. My problem is that I can't stop the reactor at the right time, for example: cmd = raw_input("Command: ") # custom factory, the protocol just sends a line reactor.connectTCP(HOST, PORT, CommandClientFactory(cmd)) d = defer.Deferred() d.addCallback(lambda x: reactor.stop()) reactor.callWhenRunning(d.callback, None) reactor.run() In this code the reactor stops before the TCP connection is done and the cmd is sent. How can I stop the reactor after all the operations have finished?

    Read the article
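
    callWhenRunning fires the deferred as soon as the reactor starts, which is why it stops before the connection has done anything. The usual shape is to let the factory own a deferred that the protocol fires once the command has been sent or the connection is lost, and chain reactor.stop onto that. A sketch, assuming the custom factory can be handed such a deferred:

        from twisted.internet import reactor, defer

        d = defer.Deferred()
        d.addBoth(lambda _: reactor.stop())           # stop only once the work reports done

        factory = CommandClientFactory(cmd, done=d)   # the protocol/factory must fire d,
        reactor.connectTCP(HOST, PORT, factory)       # e.g. in connectionLost after sending
        reactor.run()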

  • How to get bit rotation function to accept any bit size?

    - by calccrypto
    I have these two functions I got from some other code: def ROR(x, n): mask = (2L**n) - 1 mask_bits = x & mask return (x >> n) | (mask_bits << (32 - n)) def ROL(x, n): return ROR(x, 32 - n) and I wanted to use them in a program where 16-bit rotations are required. However, there are also other functions that require 32-bit rotations, so I wanted to leave the 32 in the equation as a parameter, so I got: def ROR(x, n, bits = 32): mask = (2L**n) - 1 mask_bits = x & mask return (x >> n) | (mask_bits << (bits - n)) def ROL(x, n, bits = 32): return ROR(x, bits - n) However, the answers came out wrong when I tested this set. Yet, the values came out correctly when the code is: def ROR(x, n): mask = (2L**n) - 1 mask_bits = x & mask return (x >> n) | (mask_bits << (16 - n)) def ROL(x, n,bits): return ROR(x, 16 - n) What is going on and how do I fix this?

    Read the article
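
    The problem in the parameterised version is that ROL calls ROR without passing bits through, so the rotation falls back to the 32-bit default even when 16 was asked for (and results wider than the bit width are never masked off). A sketch of a version that threads the width through and masks the result:

        def ROR(x, n, bits=32):
            mask = (1 << bits) - 1
            x &= mask
            return ((x >> n) | (x << (bits - n))) & mask

        def ROL(x, n, bits=32):
            return ROR(x, bits - n, bits)   # forward bits, don't let it default to 32

        # ROR(0x1234, 4, 16) == 0x4123, ROL(0x1234, 4, 16) == 0x2341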
