Search Results

Search found 13683 results on 548 pages for 'python sphinx'.

  • python destructuring-bind dictionary contents

    - by Stephen
    Hi, I am trying to 'destructure' a dictionary and bind its values to variables named after its keys. Something like:

        params = {'a': 1, 'b': 2}
        a, b = params.values()

    But since dictionaries are not ordered, there is no guarantee that params.values() will return the values in the order (a, b). Is there a nice way to do this? Thanks
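
    One explicit option (a sketch, not from the question) is operator.itemgetter, which makes the order independent of how the dictionary happens to iterate:

        from operator import itemgetter

        params = {'a': 1, 'b': 2}
        # itemgetter returns the values in the order the keys are listed,
        # so the binding below does not depend on dict ordering.
        a, b = itemgetter('a', 'b')(params)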

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question. I have templates where I fetch many records from a few related tables. In my view I get records from one table, and in the templates I show them together with related info from the others. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries issued from templates? Or do I have to build a bigger structure in my view (with all the related tables), cache it, and send that to the template?
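
    For the last part, one approach (a sketch under assumptions, reusing the SocialNetworkProfile model name from above and an assumed 'user' relation) is to pull the related rows with select_related, materialise the queryset, and cache that list so the template only reads from memory:

        from django.core.cache import cache

        def get_profiles_with_networks():
            cache_key = 'profiles_with_networks'
            profiles = cache.get(cache_key)
            if profiles is None:
                # select_related does the join in one query; list() forces
                # evaluation so the cached object is plain data.
                profiles = list(SocialNetworkProfile.objects.select_related('user'))
                cache.set(cache_key, profiles, 600)
            return profiles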

  • building a pairwise matrix in scipy/numpy in Python from dictionaries

    - by user248237
    I have a dictionary whose keys are strings and whose values are numpy arrays, e.g.:

        data = {'a': array([1,2,3]), 'b': array([4,5,6]), 'c': array([7,8,9])}

    I want to compute a statistic between all pairs of values in 'data' and build an n by n matrix that stores the result. Assume that I know the order of the keys, i.e. I have a list of "labels":

        labels = ['a', 'b', 'c']

    What's the most efficient way to compute this matrix? I can compute the statistic for all pairs like this:

        result = []
        for elt1, elt2 in itertools.product(labels, labels):
            result.append(compute_statistic(data[elt1], data[elt2]))

    But I want the result to be an n by n matrix, indexed "labels" by "labels". How can I record the results as this matrix? Thanks.
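
    A minimal sketch (assuming data, labels, and compute_statistic as above) that fills a square array directly instead of a flat list:

        import itertools
        import numpy as np

        n = len(labels)
        matrix = np.empty((n, n))
        # product over the enumerated labels yields both the index and the key
        # for each pair, so each result lands in the right cell.
        for (i, l1), (j, l2) in itertools.product(enumerate(labels), repeat=2):
            matrix[i, j] = compute_statistic(data[l1], data[l2])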

  • Surface Area of a Spheroid in Python

    - by user3678321
    I'm trying to write a function that calculates the surface area of a prolate or oblate spheroid. Here's a link to where I got the formulas (http://en.wikipedia.org/wiki/Prolate_spheroid & http://en.wikipedia.org/wiki/Oblate_spheroid). I think I've written them wrong, but here is my code so far:

        from math import pi, sqrt, asin, degrees, tanh

        def checkio(height, width):
            height = float(height)
            width = float(width)
            lst = []
            if height == width:
                r = 0.5 * width
                surface_area = 4 * pi * r**2
                surface_area = round(surface_area, 2)
                lst.append(surface_area)
            elif height > width:  # if spheroid is prolate
                a = 0.5 * width
                b = 0.5 * height
                e = 1 - a / b
                surface_area = 2 * pi * a**2 * (1 + b / a * e * degrees(asin**-1(e)))
                surface_area = round(surface_area, 2)
                lst.append(surface_area)
            elif height < width:  # if spheroid is oblate
                a = 0.5 * height
                b = 0.5 * width
                e = 1 - b / a
                surface_area = 2 * pi * a**2 * (1 + 1 - e**2 / e * tanh**-1(e))
                surface_area = round(surface_area, 2)
                lst.append(surface_area, 2)
            return lst
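
    For comparison, here is a sketch of the standard formulas (my own reading of the linked Wikipedia pages, not the poster's intended code), assuming height is the polar diameter and width the equatorial diameter:

        from math import pi, sqrt, asin, atanh

        def spheroid_surface_area(height, width):
            c = float(height) / 2  # polar semi-axis
            a = float(width) / 2   # equatorial semi-axis
            if c == a:             # sphere
                return 4 * pi * a ** 2
            if c > a:              # prolate
                e = sqrt(1 - a ** 2 / c ** 2)
                return 2 * pi * a ** 2 * (1 + (c / (a * e)) * asin(e))
            e = sqrt(1 - c ** 2 / a ** 2)  # oblate
            return 2 * pi * a ** 2 * (1 + ((1 - e ** 2) / e) * atanh(e))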

  • importing classes python

    - by Richard
    Just wondering why

        import sys
        exit(0)

    gives me this error:

        Traceback (most recent call last):
          File "<pyshell#1>", line 1, in ?
            exit(0)
        TypeError: 'str' object is not callable

    but

        from sys import exit
        exit(0)

    works fine?
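
    A likely reading (not stated in the question): import sys only binds the name sys, so a bare exit still refers to whatever the shell put in the builtins, which in some shells is a help string rather than a callable, while from sys import exit binds the real function. A quick check:

        import sys

        print type(sys.exit)  # <type 'builtin_function_or_method'>
        sys.exit(0)           # the qualified form always works after 'import sys'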

  • Python + Expat: Error on &#0; entities

    - by clacke
    I have written a small function which uses ElementTree and XPath to extract the text contents of certain elements in an XML file:

        #!/usr/bin/env python2.5
        import doctest
        from xml.etree import ElementTree
        from StringIO import StringIO

        def parse_xml_etree(sin, xpath):
            """
            Takes as input a stream containing XML and an XPath expression.
            Applies the XPath expression to the XML and returns a generator
            yielding the text contents of each element returned.

            >>> parse_xml_etree(
            ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
            ...     '//elem1').next()
            'one'
            >>> parse_xml_etree(
            ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
            ...     '//elem2').next()
            'two'
            >>> parse_xml_etree(
            ...     StringIO('<test><null>&#0;</null><elem3>three</elem3></test>'),
            ...     '//elem3').next()
            'three'
            """
            tree = ElementTree.parse(sin)
            for element in tree.findall(xpath):
                yield element.text

        if __name__ == '__main__':
            doctest.testmod(verbose=True)

    The third test fails with the following exception:

        ExpatError: reference to invalid character number: line 1, column 13

    Is the &#0; entity illegal XML? Regardless of whether it is or not, the files I want to parse contain it, and I need some way to parse them. Any suggestions for another parser than Expat, or settings for Expat, that would allow me to do that?
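
    One workaround sketch (not from the question, and assuming the input is available as a string): strip numeric character references to the control characters XML 1.0 forbids before handing the text to ElementTree.

        import re
        from StringIO import StringIO
        from xml.etree import ElementTree

        ILLEGAL_CHAR_REF = re.compile(r'&#(?:[0-9]+|[xX][0-9a-fA-F]+);')

        def _drop_if_illegal(match):
            ref = match.group(0)
            # decode the numeric reference and keep it only if XML 1.0 allows it
            code = int(ref[3:-1], 16) if ref[2] in 'xX' else int(ref[2:-1])
            return ref if code in (0x9, 0xA, 0xD) or code >= 0x20 else ''

        def parse_lenient(raw_xml, xpath):
            cleaned = ILLEGAL_CHAR_REF.sub(_drop_if_illegal, raw_xml)
            for element in ElementTree.parse(StringIO(cleaned)).findall(xpath):
                yield element.text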

  • printing dynamically string in one line in python

    - by EngHamoud
    I'm trying to print strings on one line. I've found solutions, but they don't work correctly on Windows. I have a text file containing names and I want to print them like this: name=john, then change john to the next name while keeping the name= prefix. I've made this code, but it didn't work correctly on Windows:

        op = open('names.txt', 'r')
        print 'name=',
        for i in op.readlines():
            print '\r' + i.strip('\n')

    Thank you for your time
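
    A sketch of one common approach (not from the question): write the carriage return and the whole line together with sys.stdout.write and flush, so the console overwrites the previous name in place on Windows as well.

        import sys
        import time

        with open('names.txt') as names:
            for line in names:
                # pad the name so a shorter one fully overwrites a longer one
                sys.stdout.write('\rname=' + line.strip().ljust(30))
                sys.stdout.flush()
                time.sleep(0.5)  # just to make the updates visible
        print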

  • Using Python read specific (column, row) values from a txt file

    - by user2955708
    I would like to read values from a text file for as long as they are float values. Let's say I have the following file:

        14:53:55  Load 300  Speed 200
        Distance,m  Fz   Fx   Speed
        0.0000      249  4    6.22
        0.0002      247  33   16.29
        0.0004      246  49   21.02
        0.2492      240  115  26.97
        0.2494      241  112  21.78
        0.2496      240  109  13.09
        0.2498      169  79   0.27
        Distance,m  Fz   Fx   Speed
        0.0000      249  4    7.22
        0.0002      247  33   1.29
        0.0004      246  49   271.02
        0.2492      240  115  26.97
        0.2494      241  112  215.78
        0.2496      240  109  13.09
        0.2498      169  79   0.27

    And I need only the values under the first Distance column. So something like: skip the first few rows, then read values from the first column while they are floats. Thanks for your help in advance
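
    A sketch of that logic (assuming the layout above: a time/load line, then a "Distance,m ..." header, then whitespace-separated rows until the block repeats):

        def first_distance_column(path):
            values = []
            started = False
            with open(path) as fh:
                for line in fh:
                    fields = line.split()
                    if not started:
                        # skip everything up to and including the first column header
                        started = fields[:1] == ['Distance,m']
                        continue
                    try:
                        values.append(float(fields[0]))
                    except (IndexError, ValueError):
                        break  # second header or blank line: stop reading
            return values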

  • python list/dict property best practice

    - by jterrace
    I have a class object that stores some properties that are lists of other objects. Each of the items in the list has an identifier that can be accessed with the id property. I'd like to be able to read and write from these lists but also be able to access a dictionary keyed by their identifier. Let me illustrate with an example:

        class Child(object):
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Teacher(object):
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Classroom(object):
            def __init__(self, children, teachers):
                self.children = children
                self.teachers = teachers

        classroom = Classroom([Child('389', 'pete')], [Teacher('829', 'bob')])

    This is a silly example, but it illustrates what I'm trying to do. I'd like to be able to interact with the classroom object like this:

        # access like a list
        print classroom.children[0]

        # append like it's a list
        classroom.children.append(Child('2344', 'joe'))

        # delete from it like it's a list
        classroom.children.pop(0)

    But I'd also like to be able to access it like it's a dictionary, and the dictionary should be automatically updated when I modify the list:

        # access like a dict
        print classroom.childrenById['389']

    I realize I could just make it a dict, but I want to avoid code like this:

        classroom.childrendict[child.id] = child

    I also might have several of these properties, so I don't want to add functions like addChild, which feels very un-pythonic anyway. Is there a way to somehow subclass dict and/or list and provide all of these functions easily with my class's properties? I'd also like to write as little extra code as possible.
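
    One direction (a sketch, not a complete answer): subclass list so it maintains an id-keyed dict alongside itself, and give Classroom instances of it. Names like IndexedList and by_id are made up for the illustration.

        class IndexedList(list):
            """A list that also keeps a dict of its items keyed by item.id."""

            def __init__(self, items=()):
                super(IndexedList, self).__init__(items)
                self.by_id = dict((item.id, item) for item in self)

            def append(self, item):
                super(IndexedList, self).append(item)
                self.by_id[item.id] = item

            def pop(self, index=-1):
                item = super(IndexedList, self).pop(index)
                del self.by_id[item.id]
                return item

    Used as classroom.children = IndexedList([Child('389', 'pete')]), list access works as before and classroom.children.by_id['389'] gives the dict view; other mutating methods (insert, remove, slice assignment) would need the same treatment.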

  • Python If Statement Defaults to an elif

    - by Brad Carvalho
    Not sure why my code is defaulting to this elif, but it's never getting to the else statement, even going as far as throwing index-out-of-bounds errors in the last elif. Please disregard my non-use of regex; it wasn't allowed for this homework assignment. The problem is the last elif before the else statement. Cheers, Brad

        if item == '':
            print ("%s\n" % item).rstrip('\n')
        elif item.startswith('MOVE') and not item.startswith('MOVEI'):
            print 'Found MOVE'
        elif item.startswith('MOVEI'):
            print 'Found MOVEI'
        elif item.startswith('BGT'):
            print 'Found BGT'
        elif item.startswith('ADD'):
            print 'Found ADD'
        elif item.startswith('INC'):
            print 'Found INC'
        elif item.startswith('SUB'):
            print 'Found SUB'
        elif item.startswith('DEC'):
            print 'Found DEC'
        elif item.startswith('MUL'):
            print 'Found MUL'
        elif item.startswith('DIV'):
            print 'Found DIV'
        elif item.startswith('BEQ'):
            print 'Found BEQ'
        elif item.startswith('BLT'):
            print 'Found BLT'
        elif item.startswith('BR'):
            print 'Found BR'
        elif item.startswith('END'):
            print 'Found END'
        elif item.find(':') and item[(item.find(':') - 1)].isalpha():
            print 'May have found a label'
        else:
            print 'Not sure what I found'
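
    A note on that last elif (my reading, not from the question): str.find returns -1 when the character is missing, and -1 is truthy, so the branch runs for items with no colon and then indexes item[-2], which can also raise for very short items. A sketch of a test that avoids both problems:

        colon = item.find(':')
        if colon > 0 and item[colon - 1].isalpha():
            print 'May have found a label'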

  • encapsulation in python list (want to use " instead of ')

    - by Codehai
    I have a list of users, users["pirates"], stored in the format ['pirate1','pirate2']. If I hand the list over to a function and query for it in MongoDB, it returns data based on the first element (e.g. pirate1) only. If I hand over a list in the format ["pirate1","pirate2"], it returns data based on all the elements in the list. So I think there's something wrong with the quoting of the elements in the list. My question: can I change the quoting from ' to " without manually replacing every ' on every element in a loop?

    Short example:

        aList = list()

        # get pirate stuff
        # users["pirates"] is a list returned by a former query,
        # so e.g. users["pirates"][0] may be peter without any quotes
        for pirate in users["pirates"]:
            aList.append(pirate)

        aVar = pirateDef(aList)
        print(aVar)

    The definition:

        def pirateDef(inputList=list()):
            # prepare query
            col = mongoConnect().MYCOL

            # query for pirates Arrrr
            pirates = col.find({"_id": {"$in": inputList}}).sort("_id", 1).limit(50)

            # loop over users
            userList = list()
            for person in pirates:
                # do stuff that has nothing to do with the problem
                # append user to userlist
                userList.append(person)
            return userList

    If the given list has ' quoting it returns:

        'pirates': [{'pirate': 'Arrr', '_id': 'blabla'}]

    If quoted with " it returns:

        'pirates': [{'_id': 'blabla', 'pirate': 'Arrr'}, {'_id': 'blabla2', 'pirate': 'cheers'}]

    EDIT: I tried to figure out whether the problem is in the MongoDB query. The list is handed over to the def correctly, but after querying, pirates consists of only one element... Thanks for helping me, Codehai
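
    For reference (a general Python fact, not from the question): single and double quotes produce identical str objects, so the quoting itself cannot be what MongoDB sees differently; the difference more likely comes from the data itself, e.g. unicode vs byte strings or stray whitespace in the stored ids.

        >>> ['pirate1', 'pirate2'] == ["pirate1", "pirate2"]
        True
        >>> u'pirate1' == 'pirate1', u'pirate1 ' == 'pirate1'
        (True, False)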

  • Extract anything that looks like links from large amount of data in python

    - by Riz
    Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be badly formed in many ways (like "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc.). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but hope to get some advice :) Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app which doesn't use regexp but a simple search to find the start and end of every link), and then using regexp to match the ones I need.
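
    As a point of comparison (a sketch with an illustrative site list, not the poster's code), a single combined pattern lets one pass over the data replace the per-site loop:

        import re

        SITES = ['example.com', 'example.org']  # placeholder domains
        LINK_RE = re.compile(
            r'https?://(?:www\.)?(?:%s)[^\s"\'<>]*' % '|'.join(map(re.escape, SITES)),
            re.IGNORECASE)

        def find_links(html):
            # one scan instead of one scan per site
            return LINK_RE.findall(html)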

  • python function that returns a function from list of functions

    - by thkang
    I want to make the following function: (1) the input is a number; (2) the functions are indexed, and it returns the function whose index matches the given number. Here's what I came up with:

        def foo_selector(whatfoo):
            def foo1():
                return
            def foo2():
                return
            def foo3():
                return
            ...
            def foo999():
                return
            # something like
            return foo[whatfoo]

    The problem is, how can I index the functions (foo#)? I can see functions foo1 to foo999 with dir(); however, dir() returns the names of such functions, not the functions themselves. In this example the foo-functions aren't doing anything, but in my program they perform different tasks and I can't generate them automatically. I write them myself and have to return them by name.
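
    One sketch of what that lookup could be (illustrative bodies, not the poster's real functions): the local namespace already maps each name to its function, so locals() can serve as the index.

        def foo_selector(whatfoo):
            def foo1():
                return 'one'

            def foo2():
                return 'two'

            # locals() maps 'foo1', 'foo2', ... to the function objects,
            # so the name built from the number selects the right one.
            return locals()['foo%d' % whatfoo]

        print foo_selector(2)()  # prints two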

  • optimize python code

    - by user283405
    I have code that uses the BeautifulSoup library for parsing, but it is very slow. The code is written in such a way that threads cannot be used. Can anyone help me with this? I am using the BeautifulSoup library for parsing and then saving to a DB. If I comment out the save statement, it still takes a long time, so there is no problem with the database.

        def parse(self, text):
            soup = BeautifulSoup(text)
            arr = soup.findAll('tbody')
            for i in range(0, len(arr) - 1):
                data = Data()
                soup2 = BeautifulSoup(str(arr[i]))
                arr2 = soup2.findAll('td')
                c = 0
                for j in arr2:
                    if str(j).find("<a href=") > 0:
                        data.sourceURL = self.getAttributeValue(str(j), '<a href="')
                    else:
                        if c == 2:
                            data.Hits = j.renderContents()
                        # and a few others...
                        # ...
                    c = c + 1
                data.save()

    Any suggestions? Note: I already asked this question here, but that was closed due to incomplete information.
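
    One likely hot spot (my reading, not confirmed by the poster): each tbody is serialised with str() and re-parsed by a fresh BeautifulSoup, and cells are matched by string searching. A sketch that walks the already-parsed tree instead:

        def parse(self, text):
            soup = BeautifulSoup(text)
            for tbody in soup.findAll('tbody'):
                data = Data()
                for c, cell in enumerate(tbody.findAll('td')):
                    link = cell.find('a', href=True)
                    if link is not None:
                        data.sourceURL = link['href']
                    elif c == 2:
                        data.Hits = cell.renderContents()
                    # and a few others...
                data.save()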

  • Custom constructors for models in Google App Engine (python)

    - by Nikhil Chelliah
    I'm getting back to programming for Google App Engine, and I've found, in old, unused code, instances in which I wrote constructors for models. It seems like a good idea, but there's no mention of it online and I can't test to see if it works. Here's a contrived example, with no error checking, etc.:

        class Dog(db.Model):
            name = db.StringProperty(required=True)
            breeds = db.StringListProperty()
            age = db.IntegerProperty(default=0)

            def __init__(self, name, breed_list, **kwargs):
                db.Model.__init__(self, **kwargs)
                self.name = name
                self.breeds = breed_list.split()

        rufus = Dog('Rufus', 'spaniel terrier labrador')
        rufus.put()

    The **kwargs are passed on to the Model constructor in case the model is constructed with a specified parent or key_name, or in case other properties (like age) are specified. This constructor differs from the default in that it requires that a name and breed_list be specified (although it can't ensure that they're strings), and it parses breed_list in a way that the default constructor could not. Is this a legitimate form of instantiation, or should I just use functions or static/class methods? And if it works, why aren't custom constructors used more often?
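
    For contrast, a sketch of the classmethod-factory alternative the poster mentions (my own illustration, not App Engine documentation): the datastore keeps rebuilding entities through the stock __init__, while callers get the convenient signature.

        class Dog(db.Model):
            name = db.StringProperty(required=True)
            breeds = db.StringListProperty()
            age = db.IntegerProperty(default=0)

            @classmethod
            def create(cls, name, breed_list, **kwargs):
                # parent, key_name, age, etc. still pass through **kwargs
                return cls(name=name, breeds=breed_list.split(), **kwargs)

        rufus = Dog.create('Rufus', 'spaniel terrier labrador')
        rufus.put()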

  • Python finding substring between certain characters using regex and replace()

    - by jCuga
    Suppose I have a string with lots of random stuff in it, like the following:

        strJunk = "asdf2adsf29Value=five&lakl23ljk43asdldl"

    I'm interested in obtaining the substring sitting between 'Value=' and '&', which in this example would be 'five'. I can use a regex like the following:

        >>> match = re.search(r'Value=?([^&>]+)', strJunk)
        >>> print match.group(0)
        Value=five
        >>> print match.group(1)
        five

    How come match.group(0) is the whole thing, 'Value=five', and group(1) is just 'five'? And is there a way for me to get just 'five' as the only result? (This question stems from me having only a tenuous grasp of regex.) I am also going to have to make a substitution in this string, such as the following:

        val1 = match.group(1)
        strJunk.replace(val1, "six", 1)

    Which yields:

        'asdf2adsf29Value=six&lakl23ljk43asdldl'

    Considering that I plan on performing the above two tasks (finding the string between 'Value=' and '&' and replacing that value) over and over, I was wondering if there are more efficient ways of looking for the substring and replacing it in the original string. I'm fine sticking with what I've got, but I want to make sure I'm not spending more time than I have to if better methods are out there.
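
    On the group numbering: group(0) is always the entire text the pattern matched, while group(1) is the first parenthesised capture group, which is why only it holds 'five'. A sketch of doing both tasks with one compiled pattern (a lookbehind keeps the whole match equal to the value, so no capture group is needed):

        import re

        VALUE_RE = re.compile(r'(?<=Value=)[^&]+')

        strJunk = "asdf2adsf29Value=five&lakl23ljk43asdldl"
        current = VALUE_RE.search(strJunk).group(0)      # 'five'
        updated = VALUE_RE.sub('six', strJunk, count=1)  # '...Value=six&...'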

  • Convert a GTK python script to C

    - by Jessica
    The following script will take a screenshot on a GNOME desktop.

        import gtk.gdk

        w = gtk.gdk.get_default_root_window()
        sz = w.get_size()
        pb = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, sz[0], sz[1])
        pb = pb.get_from_drawable(w, w.get_colormap(), 0, 0, 0, 0, sz[0], sz[1])
        if pb != None:
            pb.save("screenshot.png", "png")
            print "Screenshot saved to screenshot.png."
        else:
            print "Unable to get the screenshot."

    Now, I've been trying to convert this to C and use it in one of the apps I am writing, but so far I've been unsuccessful. Is there any way to do this in C (on Linux)? Thanks! Jess.

  • Join a list of lists together into 1 list in Python

    - by dotty
    Hey all. I have a list which consists of many lists; here is an example:

        [[Obj, Obj, Obj, Obj], [Obj], [Obj], [[Obj, Obj], [Obj, Obj, Obj]]]

    Is there a way to join all these items together into one list, so the output will be something like:

        [Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj]

    Thanks
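
    Because the example nests lists more than one level deep, a one-level flatten (e.g. itertools.chain) is not enough; a recursive sketch:

        def flatten(items):
            result = []
            for item in items:
                if isinstance(item, list):
                    result.extend(flatten(item))  # flatten nested lists of any depth
                else:
                    result.append(item)
            return result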

  • How to find links and modify an Html using BeautifulSoup in Python

    - by systempuntoout
    Starting from an HTML input like this:

        <p>
           <a href="http://www.foo.com">this if foo</a>
           <a href="http://www.bar.com">this if bar</a>
        </p>

    using BeautifulSoup, I would like to change this HTML into:

        <p>
           <a href="http://www.foo.com">this if foo[1]</a>
           <a href="http://www.bar.com">this if bar[2]</a>
        </p>

    saving the parsed links in a dictionary, with a result like this:

        links_dict = {"1": "http://www.foo.com", "2": "http://www.bar.com"}

    Is it possible to do this using BeautifulSoup? Any valid alternative?
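
    A sketch of one way to do it with BeautifulSoup 3 (the function name is illustrative): enumerate the anchors, append the index to each link's text, and record the hrefs.

        from BeautifulSoup import BeautifulSoup

        def number_links(html):
            soup = BeautifulSoup(html)
            links_dict = {}
            for i, a in enumerate(soup.findAll('a', href=True), start=1):
                links_dict[str(i)] = a['href']
                a.append('[%d]' % i)  # becomes part of the anchor text
            return str(soup), links_dict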
