Search Results

Search found 34110 results on 1365 pages for 'gdata python client'.

Page 493/1365

  • Find subset with K elements that are closest to each other

    - by Nima
    Given an array of integers of size N, how can you efficiently find a subset of size K whose elements are closest to each other? Let the closeness of a subset (x1, x2, ..., xk) be the sum of |xi - xj| over all pairs (this is what the brute force below computes), with 2 <= N <= 10^5 and 2 <= K <= N. Constraints: the array may contain duplicates and is not guaranteed to be sorted. My brute-force solution is very slow for large N, and it doesn't check whether there is more than one solution:

        import sys

        N = input()
        K = input()
        assert 2 <= N <= 10**5
        assert 2 <= K <= N
        a = []
        for i in xrange(0, N):
            a.append(input())
        a.sort()

        minimum = sys.maxint
        startindex = 0
        for i in xrange(0, N - K + 1):
            last = i + K
            tmp = 0
            for j in xrange(i, last):
                for l in xrange(j + 1, last):
                    tmp += abs(a[j] - a[l])
                    if tmp > minimum:
                        break
            if tmp < minimum:
                minimum = tmp
                startindex = i
        # end index = startindex + K?

    Examples:

        N = 7, K = 3
        array  = [10, 100, 300, 200, 1000, 20, 30]
        result = [10, 20, 30]

        N = 10, K = 4
        array  = [1, 2, 3, 4, 10, 20, 30, 40, 100, 200]
        result = [1, 2, 3, 4]
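
    One way to speed this up, sketched below on the question's examples: after sorting, an optimal subset is always K consecutive elements, so a single pass over the length-K windows is enough (O(N log N) overall). For brevity this sketch scores a window by its spread (max minus min) instead of the question's pairwise-difference sum; the pairwise sum can be maintained incrementally in the same loop if the original measure is required.

        import sys

        def closest_subset(a, K):
            # after sorting, the candidates are the contiguous windows of length K
            a = sorted(a)
            best_start, best_spread = 0, sys.maxint
            for i in xrange(len(a) - K + 1):
                spread = a[i + K - 1] - a[i]      # width of this window
                if spread < best_spread:
                    best_spread, best_start = spread, i
            return a[best_start:best_start + K]

        print closest_subset([10, 100, 300, 200, 1000, 20, 30], 3)       # [10, 20, 30]
        print closest_subset([1, 2, 3, 4, 10, 20, 30, 40, 100, 200], 4)  # [1, 2, 3, 4]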

    Read the article

  • Twisted - how to create a multi-protocol process and send data between the protocols

    - by SpankMe
    Hey, I'm trying to write a program that listens for data (simple text messages) on some port (say TCP 6666) and then passes them on to one or more different protocols - IRC, XMPP and so on. I've tried many approaches and dug through the Internet, but I can't find an easy, working solution for this task. The code I am currently fighting with is here: http://pastebin.com/ri7caXih

    I would like to know how, from an object like:

        ircf = ircFactory('asdfasdf', '#asdf666')

    I can get access to the protocol's methods, because this:

        self.protocol.dupa1(msg)

    returns an error about self not being passed to the active protocol object. Or maybe there is another, better, easier and more kosher way to create a single reactor with multiple protocols, have actions triggered when a message arrives on any of them, and then pass that message to the other protocols for handling/processing/sending? Any help will be highly appreciated!
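
    A minimal sketch of the "single reactor, multiple protocols" approach; the Hub object and all names below are mine, not the pastebin code. Each protocol registers itself with a shared hub when its connection is made, and incoming lines are handed to the hub, which relays them to every other registered protocol. An IRC or XMPP factory could register its protocol instances with the same hub in exactly the same way.

        from twisted.internet import reactor, protocol
        from twisted.protocols.basic import LineReceiver

        class Hub(object):
            """Holds live protocol instances and relays lines between them."""
            def __init__(self):
                self.listeners = []

            def register(self, proto):
                self.listeners.append(proto)

            def unregister(self, proto):
                if proto in self.listeners:
                    self.listeners.remove(proto)

            def dispatch(self, sender, line):
                for proto in self.listeners:
                    if proto is not sender:
                        proto.sendLine(line)

        class RelayProtocol(LineReceiver):
            def connectionMade(self):
                self.factory.hub.register(self)

            def connectionLost(self, reason):
                self.factory.hub.unregister(self)

            def lineReceived(self, line):
                self.factory.hub.dispatch(self, line)

        class RelayFactory(protocol.Factory):
            protocol = RelayProtocol

            def __init__(self, hub):
                self.hub = hub

        hub = Hub()
        reactor.listenTCP(6666, RelayFactory(hub))
        reactor.run()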

    Read the article

  • How does jQuery .data() work?

    - by kazanaki
    My JavaScript knowledge is pretty limited. Instead of asking several JavaScript questions, I got the "message" from Stack Overflow and started using jQuery right away in order to save myself some time. However, several times I do not understand the "magic" behind jQuery, and I would love to learn the details. I want to use .data() in my application. The examples are very helpful. I do not understand, however, WHERE these values are stored. I inspect the webpage with Firebug, and as soon as .data() saves an object to a DOM element, I do not see any change in Firebug (in either the HTML or DOM tabs). I tried to look at the jQuery source, but it is very advanced for my JavaScript knowledge and I lost myself. So the question is: where do the values stored by jQuery.data() actually go? Can I inspect/locate/list/debug them using a tool?

    Read the article

  • How to get a bit rotation function to accept any bit size?

    - by calccrypto
    I have these two functions I got from some other code:

        def ROR(x, n):
            mask = (2L**n) - 1
            mask_bits = x & mask
            return (x >> n) | (mask_bits << (32 - n))

        def ROL(x, n):
            return ROR(x, 32 - n)

    I wanted to use them in a program where 16-bit rotations are required. However, there are also other functions that require 32-bit rotations, so I wanted to leave the 32 in the equation as a default, and I got:

        def ROR(x, n, bits = 32):
            mask = (2L**n) - 1
            mask_bits = x & mask
            return (x >> n) | (mask_bits << (bits - n))

        def ROL(x, n, bits = 32):
            return ROR(x, bits - n)

    However, the answers came out wrong when I tested this set out. Yet the values come out correctly when the code is:

        def ROR(x, n):
            mask = (2L**n) - 1
            mask_bits = x & mask
            return (x >> n) | (mask_bits << (16 - n))

        def ROL(x, n, bits):
            return ROR(x, 16 - n)

    What is going on, and how do I fix this?
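
    A likely culprit, as a hedged guess: the generalized ROL never forwards bits to ROR, so a 16-bit rotate-left still rotates inside a 32-bit word. A version that passes it through:

        def ROR(x, n, bits=32):
            # rotate x right by n within a word of the given size
            mask = (1L << n) - 1
            mask_bits = x & mask
            return (x >> n) | (mask_bits << (bits - n))

        def ROL(x, n, bits=32):
            # forward bits, otherwise ROR falls back to its 32-bit default
            return ROR(x, bits - n, bits)

        print hex(ROL(0x1234, 4, 16))   # -> 0x2341, rotated within 16 bits
        print hex(ROR(0x12345678, 8))   # rotated within the default 32 bits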

    Read the article

  • Duplicate django query set?

    - by Piotr Czapla
    I have a simple Django queryset like:

        qs = AModel.objects.exclude(state="F").order_by("order")

    I'd like to use it as follows:

        qs[0:3].update(state='F')
        expected = qs[3]  # throws error here

    But the last statement throws: "Cannot update a query once a slice has been taken." How can I duplicate the queryset?
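
    One way around it, as a sketch rather than the only fix: evaluate the ordered primary keys once into a plain Python list, then update and fetch through fresh querysets, so no sliced queryset is ever reused.

        qs = AModel.objects.exclude(state="F").order_by("order")

        # pull the ids out once; slicing a plain list is always safe
        ids = list(qs.values_list('pk', flat=True))

        AModel.objects.filter(pk__in=ids[0:3]).update(state='F')
        expected = AModel.objects.get(pk=ids[3])   # assumes at least 4 rows

    Calling qs.all() (or any further filter()) also returns an independent clone of a queryset, which is the usual way to "duplicate" one before doing something destructive with it.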

    Read the article

  • Are the Django ORM & templates thread safe?

    - by Piotr Czapla
    I'm using the Django ORM and templates to create a background service that is run as a management command. Do you know if Django is thread safe? I'd like to use threads to speed up processing. The processing is blocked by I/O, not CPU, so I don't care about the performance hit caused by the GIL.

    Read the article

  • Trouble with encoding and urllib

    - by Ockonal
    Hello, I'm loading a web page using urllib. There are Russian symbols on it, and the page encoding is 'utf-8'.

    1.

        pageData = unicode(requestHandler.read()).decode('utf-8')

        UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 262: ordinal not in range(128)

    2.

        pageData = requestHandler.read()
        soupHandler = BeautifulSoup(pageData)
        print soupHandler.findAll(...)

        UnicodeEncodeError: 'ascii' codec can't encode characters in position 340-345: ordinal not in range(128)
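
    A hedged sketch of what usually fixes both tracebacks: decode the bytes exactly once with the page's real charset (unicode(...) on a byte string implies an ASCII decode first, which is what raises case 1), and encode explicitly before printing to a terminal that may not accept UTF-8 (case 2). The URL below is a placeholder.

        import urllib2
        from BeautifulSoup import BeautifulSoup

        requestHandler = urllib2.urlopen('http://example.com/')  # hypothetical URL

        pageData = requestHandler.read().decode('utf-8')   # bytes -> unicode, once

        soupHandler = BeautifulSoup(pageData)
        title = soupHandler.find('title')
        print title.string.encode('utf-8')                 # unicode -> bytes for output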

    Read the article

  • Import boto from local library

    - by ensnare
    I'm trying to use boto as a downloaded library, rather than installing it globally on my machine. I'm able to import boto, but when I run boto.connect_dynamodb() I get an error:

        ImportError: No module named dynamodb.layer2

    Here's my file structure:

        project/
            project/
                __init__.py
                libraries/
                    __init__.py
                    flask/
                    boto/
                views/
                    ....
                modules/
                    __init__.py
                    db.py
                    ....
                templates/
                    ....
                static/
                    ....
            runserver.py

    And the contents of the relevant file, project/project/modules/db.py:

        from project.libraries import boto

        conn = boto.connect_dynamodb(
            aws_access_key_id='<YOUR_AWS_KEY_ID>',
            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')

    What am I doing wrong? Thanks in advance.
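
    A guess at the cause, with a sketch: boto's own code does absolute imports such as from boto.dynamodb.layer2 import ..., so the package needs to be importable as top-level boto, not as project.libraries.boto. Putting the libraries directory on sys.path before importing usually gets around that; the paths below assume the layout shown above.

        # project/project/modules/db.py
        import os
        import sys

        # make the bundled packages importable as top-level modules
        LIB_DIR = os.path.abspath(
            os.path.join(os.path.dirname(__file__), '..', 'libraries'))
        if LIB_DIR not in sys.path:
            sys.path.insert(0, LIB_DIR)

        import boto

        conn = boto.connect_dynamodb(
            aws_access_key_id='<YOUR_AWS_KEY_ID>',
            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')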

    Read the article

  • Condition checking vs. Exception handling

    - by Aidas Bendoraitis
    When is exception handling preferable to condition checking? There are many situations where I can choose one or the other. For example, this is a summing function which uses a custom exception:

        # module mylibrary
        class WrongSummand(Exception):
            pass

        def sum_(a, b):
            """ returns the sum of two summands of the same type """
            if type(a) != type(b):
                raise WrongSummand("given arguments are not of the same type")
            return a + b

        # module application using mylibrary
        from mylibrary import sum_, WrongSummand

        try:
            print sum_("A", 5)
        except WrongSummand:
            print "wrong arguments"

    And this is the same function, which avoids using exceptions:

        # module mylibrary
        def sum_(a, b):
            """ returns the sum of two summands if they are both of the same type """
            if type(a) == type(b):
                return a + b

        # module application using mylibrary
        from mylibrary import sum_

        c = sum_("A", 5)
        if c is not None:
            print c
        else:
            print "wrong arguments"

    I think that using conditions is always more readable and manageable. Or am I wrong? What are the proper cases for defining APIs which raise exceptions, and why?

    Read the article

  • Unable to control requests for static files on Google App Engine

    - by dan
    My simple GAE app is not redirecting to the /static directory for requests when the URL is multiple levels deep.

    Dir structure:

        /app/static/css/main.css

    App: I have two handlers, one for /app and one for /app/new.

    app.yaml:

        handlers:
        - url: /static
          static_dir: static
        - url: /app/static/(.*)
          static_dir: static\1
        - url: /app/.*
          script: app.py
          login: required

    HTML:

    Description: When the page is loaded from /app, the HTTP request for main.css is successful:

        GET /static/css/main.css

    But when the page is loaded from /app/new, I see the following request:

        GET /app/static/css/main.css

    That's when I tried adding the /app/static/(.*) entry in app.yaml, but it is not having any effect.
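
    A hedged guess at a fix: static_dir handlers treat the url as a plain prefix, so the static\1 backreference is not doing what the regex suggests. Mapping the /app/static prefix straight onto the same static directory (App Engine appends whatever follows the matched prefix), or simply using absolute /static/... URLs in the HTML, usually covers both /app and /app/new. A sketch of the first option:

        handlers:
        - url: /static
          static_dir: static
        - url: /app/static
          static_dir: static
        - url: /app/.*
          script: app.py
          login: required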

    Read the article

  • How to make scipy.interpolate give an extrapolated result beyond the input range?

    - by Salim Fadhley
    I'm trying to port a program which uses a hand-rolled interpolator (developed by a mathematician colleague) over to the interpolators provided by scipy. I'd like to use or wrap the scipy interpolator so that its behavior is as close as possible to the old interpolator.

    A key difference between the two functions is that our original interpolator will extrapolate the result if the input value is above or below the input range. If you try this with the scipy interpolator, it raises a ValueError. Consider this program as an example:

        import numpy as np
        from scipy import interpolate

        x = np.arange(0, 10)
        y = np.exp(-x / 3.0)
        f = interpolate.interp1d(x, y)

        print f(9)
        print f(11)  # Causes ValueError, because it's greater than max(x)

    Is there a sensible way to make it so that, instead of crashing, the final line will simply do a linear extrapolation, continuing the gradients defined by the first and last two points to infinity? Note that in the real software I'm not actually using the exp function - that's here for illustration only!
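
    A small wrapper along these lines (my own sketch, not part of the scipy API) defers to interp1d inside the known range and continues the end segments' slopes outside it; it assumes xs is sorted ascending.

        import numpy as np
        from scipy import interpolate

        def extrap1d(xs, ys):
            """interp1d inside [xs[0], xs[-1]], linear extrapolation outside."""
            xs = np.asarray(xs, dtype=float)
            ys = np.asarray(ys, dtype=float)
            f = interpolate.interp1d(xs, ys)

            def wrapped(x):
                if x < xs[0]:
                    return ys[0] + (x - xs[0]) * (ys[1] - ys[0]) / (xs[1] - xs[0])
                if x > xs[-1]:
                    return ys[-1] + (x - xs[-1]) * (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])
                return float(f(x))

            return wrapped

        x = np.arange(0, 10)
        y = np.exp(-x / 3.0)
        f = extrap1d(x, y)
        print f(9)
        print f(11)   # extrapolated instead of raising ValueError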

    Read the article

  • How do I store multiple copies of the same field in Django?

    - by Alistair
    I'm storing OLAC metadata which describes linguistic resources. Many of the elements of the metadata are repeatable - for example, a resource can have two languages, three authors and four dates associated with it. Is there any way of storing this in one model? It seems like overkill to define a model for each repeatable metadata element - especially since the models would only have one field: its value.
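
    One common shape for this, sketched below with hypothetical names: a single related model that stores one value per row together with which element it belongs to, so every repeatable element shares the same table instead of getting its own model.

        from django.db import models

        class Resource(models.Model):
            title = models.CharField(max_length=255)

        class MetadataElement(models.Model):
            # one row per value; a resource can have any number of each element
            ELEMENT_CHOICES = (
                ('language', 'Language'),
                ('author', 'Author'),
                ('date', 'Date'),
            )
            resource = models.ForeignKey(Resource, related_name='metadata')
            element = models.CharField(max_length=50, choices=ELEMENT_CHOICES)
            value = models.CharField(max_length=255)

    Queries then stay simple, e.g. resource.metadata.filter(element='author') returns every author row for that resource.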

    Read the article

  • Making all variables accessible to namespace

    - by Gökhan Sever
    Hello, say I have a simple function:

        def myfunc():
            a = 4.2
            b = 5.5
            # ... many similar variables ...

    I use this function only once, and I am wondering what is the easiest way to make all the variables inside the function accessible to my main namespace. Do I have to declare global for each item, or is there another suggested method? Thanks.
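
    A sketch of one way to do it without a global declaration per name: have the function hand back its locals as a dict and merge that into the module namespace (reading the values out of the returned dict is usually the tidier choice, though).

        def myfunc():
            a = 4.2
            b = 5.5
            # ... many similar variables ...
            return locals()          # everything defined above, as a dict

        globals().update(myfunc())   # merge into the module namespace
        print a, b                   # 4.2 5.5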

    Read the article

  • How to create instances of related models in Django

    - by sevennineteen
    I'm working on a CMSy app for which I've implemented a set of models which allow for creation of custom Template instances, made up of a number of Fields and tied to a specific Customer. The end goal is that one or more templates with a set of custom fields can be defined through the Admin interface and associated with a customer, so that the customer can then create content objects in the format prescribed by the template.

    I seem to have gotten this hooked up such that I can create any number of Template objects, but I'm struggling with how to create instances - actual content objects - in those templates. For example, I can define a template "Basic Page" for customer "Acme" which has the fields "Title" and "Body", but I haven't figured out how to create Basic Page instances where these fields can be filled in. Here are my (somewhat elided) models...

        class Customer(models.Model):
            ...

        class Field(models.Model):
            ...

        class Template(models.Model):
            label = models.CharField(max_length=255)
            clients = models.ManyToManyField(Customer, blank=True)
            fields = models.ManyToManyField(Field, blank=True)

        class ContentObject(models.Model):
            label = models.CharField(max_length=255)
            template = models.ForeignKey(Template)
            author = models.ForeignKey(User)
            customer = models.ForeignKey(Customer)
            mod_date = models.DateTimeField('Modified Date', editable=False)

            def __unicode__(self):
                return '%s (%s)' % (self.label, self.template)

            def save(self):
                self.mod_date = datetime.datetime.now()
                super(ContentObject, self).save()

    Thanks in advance for any advice!
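
    One way to model the instances, sketched with a hypothetical ContentValue model added alongside the models above: store each filled-in field as its own row, keyed by both the content object and the Field it answers, so a "Basic Page" instance is a ContentObject plus one value row per template field.

        class ContentValue(models.Model):
            content_object = models.ForeignKey(ContentObject, related_name='values')
            field = models.ForeignKey(Field, related_name='values')
            value = models.TextField(blank=True)

            class Meta:
                unique_together = ('content_object', 'field')

    Creating a page then means creating the ContentObject and one ContentValue per field of its template, e.g. one for "Title" and one for "Body" on "Basic Page".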

    Read the article

  • How to add a context processor from a Django app

    - by Edan Maor
    Say I'm writing a Django app, and all the templates in the app require a certain variable. The "classic" way to deal with this, afaik, is to write a context processor and add it to TEMPLATE_CONTEXT_PROCESSORS in the settings.py. My question is, is this the right way to do it, considering that apps are supposed to be "independent" from the actual project using them? In other words, when deploying that app to a new project, is there any way to avoid the project having to explicitly mess around with its settings?

    Read the article

  • twisted reactor stops too early

    - by pygabriel
    I'm writing a batch script to connect to a TCP server and then exit. My problem is with stopping the reactor at the right moment; for example:

        cmd = raw_input("Command: ")
        # custom factory, the protocol just sends a line
        reactor.connectTCP(HOST, PORT, CommandClientFactory(cmd))
        d = defer.Deferred()
        d.addCallback(lambda x: reactor.stop())
        reactor.callWhenRunning(d.callback, None)
        reactor.run()

    In this code the reactor stops before the TCP connection is made and the cmd is passed. How can I stop the reactor after all the operations have finished?
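
    A sketch of one way to sequence this (the class bodies and the HOST/PORT placeholders are assumptions, not the question's factory): give the factory a Deferred that fires once the connection is finished, and only chain reactor.stop onto that, instead of firing the stop callback as soon as the reactor starts.

        from twisted.internet import reactor, defer, protocol
        from twisted.protocols.basic import LineReceiver

        HOST, PORT = 'localhost', 6666   # placeholders

        class CommandProtocol(LineReceiver):
            def connectionMade(self):
                self.sendLine(self.factory.cmd)   # the one line we came to send
                self.transport.loseConnection()   # closes after pending writes flush

        class CommandClientFactory(protocol.ClientFactory):
            protocol = CommandProtocol

            def __init__(self, cmd):
                self.cmd = cmd
                self.done = defer.Deferred()      # fires once we are finished

            def clientConnectionLost(self, connector, reason):
                self.done.callback(None)

            def clientConnectionFailed(self, connector, reason):
                self.done.errback(reason)

        cmd = raw_input("Command: ")
        factory = CommandClientFactory(cmd)
        reactor.connectTCP(HOST, PORT, factory)
        factory.done.addBoth(lambda _: reactor.stop())
        reactor.run()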

    Read the article

  • Execute function without sending 'self' to it

    - by Sergey
    Is it possible to define a function this way, without referencing self:

        def myfunc(var_a, var_b)

    but so that it could also get the sender's data, as if I had defined it like this:

        def myfunc(self, var_a, var_b)

    That self is always the same, so it looks a little redundant to always call the function as myfunc(self, 'data_a', 'data_b'). I would then like to get its data in the function via something like sender.fields.

    UPDATE: Here is some code to understand better what I mean. The class below is used to show a page, based on the Jinja2 template engine, for users to sign up:

        class SignupHandler(webapp.RequestHandler):
            def get(self, *args, **kwargs):
                utils.render_template(self, 'signup.html')

    And this code below is the render_template that I created as a wrapper around the Jinja2 functions, to use them more conveniently in my project:

        def render_template(response, template_name, vars=dict(), is_string=False):
            template_dirs = [os.path.join(root(), 'templates')]
            logging.info(template_dirs[0])
            env = Environment(loader=FileSystemLoader(template_dirs))
            try:
                template = env.get_template(template_name)
            except TemplateNotFound:
                raise TemplateNotFound(template_name)
            content = template.render(vars)
            if is_string:
                return content
            else:
                response.response.out.write(content)

    As I use this render_template function very often in my project, and usually in the same way, just with different template files, I wondered if there was a way to get rid of having to call it the way I do now, with self as the first argument, while still having access to that object.
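
    A small sketch of one way to hide that first argument (the BaseHandler class and its render name are mine; utils is assumed to be the module holding render_template above): put a thin method on a shared handler base class, so self is bound automatically and call sites shrink to self.render(...).

        from google.appengine.ext import webapp
        import utils   # assumed: the module that defines render_template

        class BaseHandler(webapp.RequestHandler):
            def render(self, template_name, vars=None, is_string=False):
                # self is the handler, so render_template still gets its "sender"
                return utils.render_template(self, template_name, vars or {}, is_string)

        class SignupHandler(BaseHandler):
            def get(self, *args, **kwargs):
                self.render('signup.html')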

    Read the article

  • initialize a numpy array

    - by Curious2learn
    Is there a way to initialize a numpy array of a given shape and add to it? I will explain what I need with a list example. If I want to create a list of objects generated in a loop, I can do:

        a = []
        for i in range(5):
            a.append(i)

    I want to do something similar with a numpy array. I know about vstack, concatenate, etc. However, it seems these require two numpy arrays as inputs. What I need is:

        big_array  # Initially empty. This is where I don't know what to specify
        for i in range(5):
            # array i of shape (2,4) created
            # add to big_array

    The big_array should have a shape (10,4). How do I do this? Thanks for your help.
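
    Two common patterns, sketched on made-up (2, 4) blocks: collect the pieces in a Python list and stack once at the end, or preallocate when the final shape is known up front.

        import numpy as np

        # 1) collect, then stack once
        rows = []
        for i in range(5):
            block = np.zeros((2, 4)) + i     # stand-in for "array i of shape (2,4)"
            rows.append(block)
        big_array = np.vstack(rows)          # shape (10, 4)

        # 2) preallocate and fill slices
        big_array2 = np.empty((10, 4))
        for i in range(5):
            big_array2[2 * i:2 * i + 2, :] = np.zeros((2, 4)) + i

        print big_array.shape, big_array2.shape   # (10, 4) (10, 4)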

    Read the article

  • How can I draw a log-normalized imshow plot with a colorbar representing the raw data in matplotlib

    - by Adam Fraser
    I'm using matplotlib to plot log-normalized images, but I would like the original raw image data to be represented in the colorbar rather than the [0-1] interval. I get the feeling there's a more matplotlib'y way of doing this by using some sort of normalization object and not transforming the data beforehand... in any case, there could be negative values in the raw image.

        import matplotlib.pyplot as plt
        import numpy as np

        def log_transform(im):
            '''returns log(image) scaled to the interval [0,1]'''
            try:
                (min, max) = (im[im > 0].min(), im.max())
                if (max > min) and (max > 0):
                    return (np.log(im.clip(min, max)) - np.log(min)) / (np.log(max) - np.log(min))
            except:
                pass
            return im

        a = np.ones((100,100))
        for i in range(100):
            a[i] = i

        f = plt.figure()
        ax = f.add_subplot(111)
        res = ax.imshow(log_transform(a))
        # the colorbar drawn shows [0-1], but I want to see [0-99]
        cb = f.colorbar(res)

    I've tried using cb.set_array, but that didn't appear to do anything, and cb.set_clim, but that rescales the colors completely. Thanks in advance for any help :)
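
    A sketch of the normalization-object route, assuming the data of interest is positive (LogNorm masks values <= 0, so negative raw values would need a different treatment): pass a LogNorm to imshow and let the colorbar label the raw range itself.

        import matplotlib.pyplot as plt
        import matplotlib.colors as colors
        import numpy as np

        a = np.ones((100, 100))
        for i in range(100):
            a[i] = i

        f = plt.figure()
        ax = f.add_subplot(111)
        # LogNorm scales the colors logarithmically but keeps the raw values,
        # so the colorbar reads ~1..99 instead of [0, 1]; zeros are masked out
        res = ax.imshow(a, norm=colors.LogNorm(vmin=a[a > 0].min(), vmax=a.max()))
        cb = f.colorbar(res)
        plt.show()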

    Read the article

  • Generating two thumbnails from the same image in Django

    - by Titus
    Hello, this seems like quite an easy problem, but I can't figure out what is going on here. Basically, what I'd like to do is create two different thumbnails from one image on a Django model. What ends up happening is that it seems to loop and recreate the same image (while appending an underscore to it each time) until it throws an error that the filename is too long. So you end up with something like:

        OSError: [Errno 36] File name too long: 'someimg________________etc.jpg'

    Here is the code:

        def save(self, *args, **kwargs):
            if self.image:
                iname = os.path.split(self.image.name)[-1]
                fname, ext = os.path.splitext(iname)
                tlname, tsname = fname + '_thumb_l' + ext, fname + '_thumb_s' + ext
                self.thumb_large.save(tlname, make_thumb(self.image, size=(250,250)))
                self.thumb_small.save(tsname, make_thumb(self.image, size=(100,100)))
            super(Artist, self).save(*args, **kwargs)

        def make_thumb(infile, size=(100,100)):
            infile.seek(0)
            image = Image.open(infile)
            if image.mode not in ('L', 'RGB'):
                image.convert('RGB')
            image.thumbnail(size, Image.ANTIALIAS)
            temp = StringIO()
            image.save(temp, 'png')
            return ContentFile(temp.getvalue())

    I didn't show imports for the sake of brevity. Assume there are two ImageFields on the Artist model: thumb_large and thumb_small. If this isn't the correct way to do it, I'd appreciate any feedback. Thanks!
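
    A hedged guess at the loop: FieldFile.save() defaults to save=True, so each thumb_*.save(...) call inside Artist.save() saves the model again, which re-enters this method and renames the file once more. Passing save=False and letting the final super().save() persist everything usually stops the recursion; a sketch of the adjusted method:

        def save(self, *args, **kwargs):
            if self.image:
                iname = os.path.split(self.image.name)[-1]
                fname, ext = os.path.splitext(iname)
                tlname, tsname = fname + '_thumb_l' + ext, fname + '_thumb_s' + ext
                # save=False: attach the files without triggering Artist.save() again
                self.thumb_large.save(tlname, make_thumb(self.image, size=(250, 250)), save=False)
                self.thumb_small.save(tsname, make_thumb(self.image, size=(100, 100)), save=False)
            super(Artist, self).save(*args, **kwargs)

    A smaller point: Image.convert('RGB') returns a new image rather than converting in place, so that line in make_thumb probably wants image = image.convert('RGB').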

    Read the article
