Search Results

Search found 16680 results on 668 pages for 'python datetime'.


  • building a pairwise matrix in scipy/numpy in Python from dictionaries

    - by user248237
    I have a dictionary whose keys are strings and values are numpy arrays, e.g.:

        data = {'a': array([1,2,3]), 'b': array([4,5,6]), 'c': array([7,8,9])}

    I want to compute a statistic between all pairs of values in data and build an n by n matrix that stores the result. Assume that I know the order of the keys, i.e. I have a list of "labels":

        labels = ['a', 'b', 'c']

    What's the most efficient way to compute this matrix? I can compute the statistic for all pairs like this:

        result = []
        for elt1, elt2 in itertools.product(labels, labels):
            result.append(compute_statistic(data[elt1], data[elt2]))

    But I want result to be an n by n matrix, corresponding to "labels" by "labels". How can I record the results as this matrix? Thanks.
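
    One possible approach, sketched using only names from the question (compute_statistic, data, labels): allocate an n by n numpy array up front and fill it by label position.

        import numpy as np

        n = len(labels)
        matrix = np.zeros((n, n))
        for i, l1 in enumerate(labels):
            for j, l2 in enumerate(labels):
                # row i, column j holds the statistic for (labels[i], labels[j])
                matrix[i, j] = compute_statistic(data[l1], data[l2])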


  • importing classes python

    - by Richard
    Just wondering why

        import sys
        exit(0)

    gives me this error:

        Traceback (most recent call last):
          File "<pyshell#1>", line 1, in ?
            exit(0)
        TypeError: 'str' object is not callable

    but

        from sys import exit
        exit(0)

    works fine?
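
    import sys binds only the module name, so a bare exit still refers to whatever the shell has bound to that name; in older interactive setups like the pyshell shown above, exit was a hint string, hence 'str' object is not callable. Qualifying the name, or importing the function itself, avoids this. A minimal illustration:

        import sys
        sys.exit(0)          # the real function, reached through the module

        from sys import exit
        exit(0)              # binds sys.exit directly into this namespace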


  • How to replace only part of the match with python re.sub

    - by Arty
    I need to match two cases with one regular expression and do a replacement:

        'long.file.name.jpg'   -> 'long.file.name_suff.jpg'
        'long.file.name_a.jpg' -> 'long.file.name_suff.jpg'

    I'm trying to do the following:

        re.sub('(\_a)?\.[^\.]*$', '_suff.', "long.file.name.jpg")

    But this cuts off the extension '.jpg', and I get long.file.name_suff. instead of long.file.name_suff.jpg. I understand that this is because of the [^.]*$ part, but I can't exclude it, because I have to find the last occurrence of '_a' to replace, or the last '.'. Is there a way to replace only part of the match?
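
    One way, sketched: capture the extension in a group and write it back with a backreference, so only the optional '_a' part is actually replaced.

        import re

        def add_suffix(name):
            # group 2 captures the extension; \2 reinserts it after '_suff'
            return re.sub(r'(_a)?(\.[^.]*)$', r'_suff\2', name)

        add_suffix('long.file.name.jpg')    # -> 'long.file.name_suff.jpg'
        add_suffix('long.file.name_a.jpg')  # -> 'long.file.name_suff.jpg'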


  • using Python reduce over a list of pairs

    - by user248237
    I'm trying to pair together a bunch of elements in a list to create a final object, in a way that's analogous to making a sum of objects. I'm trying to use a simple variation on reduce where you consider a list of pairs rather than a flat list. I want to do something along the lines of:

        nums = [1, 2, 3]
        reduce(lambda x, y: x + y, nums)

    except I'd like to add additional information to the sum that is specific to each element in the list nums. For example, for each pair (a, b) in the list, run the sum as (a + b):

        nums = [(1, 0), (2, 5), (3, 10)]
        reduce(lambda x, y: (x[0]+x[1]) + (y[0]+y[1]), nums)

    This does not work:

        >>> reduce(lambda x, y: (x[0]+x[1]) + (y[0]+y[1]), nums)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "<stdin>", line 1, in <lambda>
        TypeError: 'int' object is unsubscriptable

    Why does it not work? I know I can encode nums as a flat list - that is not the point - I just want to be able to create a reduce operation that can iterate over a list of pairs, or over two lists of the same length simultaneously, and pool information from both lists. Thanks.
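
    The first call of that lambda returns a plain int, so on the next step x is no longer a pair and x[0] fails. One fix, sketched: give reduce an explicit initial accumulator and only unpack the right-hand element.

        nums = [(1, 0), (2, 5), (3, 10)]
        # acc stays an int throughout; each pair contributes a + b
        total = reduce(lambda acc, pair: acc + pair[0] + pair[1], nums, 0)
        # total == 21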


  • How to find links and modify an Html using BeautifulSoup in Python

    - by systempuntoout
    Starting from HTML input like this:

        <p>
        <a href="http://www.foo.com">this if foo</a>
        <a href="http://www.bar.com">this if bar</a>
        </p>

    using BeautifulSoup, I would like to change this HTML into:

        <p>
        <a href="http://www.foo.com">this if foo[1]</a>
        <a href="http://www.bar.com">this if bar[2]</a>
        </p>

    saving the parsed links in a dictionary, with a result like this:

        links_dict = {"1": "http://www.foo.com", "2": "http://www.bar.com"}

    Is it possible to do this using BeautifulSoup? Any valid alternative?
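
    A sketch with the modern bs4 package (the original BeautifulSoup 3 spelling would be findAll instead of find_all); html stands for the input markup above:

        from bs4 import BeautifulSoup

        soup = BeautifulSoup(html, 'html.parser')
        links_dict = {}
        for i, a in enumerate(soup.find_all('a'), start=1):
            links_dict[str(i)] = a['href']
            a.string = '%s[%d]' % (a.get_text(), i)   # append the number

        result = str(soup)   # the rewritten markup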


  • Help with python list-comprehension

    - by leChuck
    A simplified version of my problem: I have a list comprehension that I use to set bit flags on a two-dimensional list, like so:

        s = FLAG1 | FLAG2 | FLAG3
        [[c.set_state(s) for c in row] for row in self.__map]

    All set_state does is:

        self.state |= f

    This works fine, but it means I have to have this set_state function on every cell in __map. Every cell in __map has a .state, so what I'm trying to do is something like:

        [[c.state |= s for c in row] for row in self.__map]

    or:

        map(lambda c: c.state |= s, [c for row in self.__map for c in row])

    Neither works (syntax error). Perhaps I'm barking up the wrong tree with map/lambda, but I would like to get rid of set_state, and I'd like to know why assignment does not work in a list comprehension.
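
    Assignment, including the augmented |=, is a statement in Python rather than an expression, so it cannot appear inside a lambda or a comprehension. A plain nested loop expresses the mutation directly, and avoids building a throwaway list of None results. A sketch:

        for row in self.__map:
            for c in row:
                c.state |= s    # ordinary statement context: |= is allowed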


  • Python method to remove iterability

    - by Debilski
    Suppose I have a function which can take either an iterable/iterator or a non-iterable as an argument. Iterability is checked with try: iter(arg). Depending on whether the input is iterable or not, the outcome of the method will be different. Now, when I want to pass a non-iterable as iterable input, it is easy to do: I'll just wrap it in a tuple. But what do I do when I want to pass an iterable (a string, for example) and have the function take it as if it's non-iterable? I.e. make iter(arg) fail.
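
    iter() succeeds for any object that defines __iter__ or __getitem__, so one hedged sketch is a wrapper that defines neither: the function's iter() probe then raises TypeError, and it can unwrap the value in its non-iterable branch (Opaque is a hypothetical name, not a stdlib class):

        class Opaque(object):
            """Hides iterability: defines neither __iter__ nor __getitem__."""
            def __init__(self, value):
                self.value = value

        def func(arg):
            try:
                items = iter(arg)
                # ... iterable branch ...
            except TypeError:
                arg = getattr(arg, 'value', arg)  # unwrap if it was Opaque
                # ... non-iterable branch ...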


  • How to break the following line of python

    - by FrederikNS
    Hello Stack Overflow, I have come upon a couple of lines of code similar to this one, but I'm unsure how I should break it:

        blueprint = Blueprint(self.blueprint_map[str(self.ui.blueprint_combo.currentText())], runs=self.ui.runs_spin.text(), me=self.ui.me_spin.text(), pe=self.ui.pe_skill_combo.currentIndex())

    Thanks in advance.
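
    One conventional layout, per PEP 8: rely on the implicit line continuation inside the call's parentheses and put one argument per line.

        blueprint = Blueprint(
            self.blueprint_map[str(self.ui.blueprint_combo.currentText())],
            runs=self.ui.runs_spin.text(),
            me=self.ui.me_spin.text(),
            pe=self.ui.pe_skill_combo.currentIndex(),
        )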


  • python destructuring-bind dictionary contents

    - by Stephen
    Hi, I am trying to 'destructure' a dictionary and associate values with variable names after its keys. Something like:

        params = {'a': 1, 'b': 2}
        a, b = params.values()

    but since dictionaries are not ordered, there is no guarantee that params.values() will return values in the order of (a, b). Is there a nice way to do this? Thanks.
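
    One order-independent sketch: name the keys explicitly, either directly or via operator.itemgetter, which returns the values as a tuple in exactly the order the keys are given.

        from operator import itemgetter

        params = {'a': 1, 'b': 2}
        a, b = itemgetter('a', 'b')(params)
        # equivalently: a, b = params['a'], params['b']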


  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few related tables. In my view I get records from one table, and in the templates I show them together with related info from the others. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries issued from templates? Or do I have to build a big structure in my view (with all the related tables), cache it, and send it to the template?
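
    On the last point, one common pattern, sketched with hypothetical names: evaluate the queryset once in the view with select_related (one JOINed query instead of one query per related row) and cache the concrete list, so the template only iterates over already-fetched objects.

        from django.core.cache import cache

        def get_report_rows():
            key = 'report_rows'                       # hypothetical cache key
            rows = cache.get(key)
            if rows is None:
                # list() forces evaluation, so a real list is cached rather
                # than a lazy queryset that would re-query on every use
                rows = list(Record.objects.select_related())
                cache.set(key, rows, 600)
            return rows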


  • Optimizing python link matching regular expression

    - by Matt
    I have a regular expression for finding links in some HTML:

        links = re.compile(
            '<a(.+?)href=(?:"|\')?((?:https?://|/)[^\'"]+)(?:"|\')?(.*?)>(.+?)</a>',
            re.I).findall(data)

    It is taking a long time on certain HTML. Any optimization advice? One page that it chokes on is http://freeyourmindonline.net/Blog/
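
    The (.+?) segments invite heavy backtracking on pathological pages. One hedged sketch: replace them with [^>]*, which can never run past the end of the tag, and compile the pattern once outside any loop.

        import re

        LINK_RE = re.compile(
            r'<a[^>]*href=["\']?((?:https?://|/)[^\'" >]+)["\']?[^>]*>(.*?)</a>',
            re.I | re.S)

        links = LINK_RE.findall(data)   # data: the HTML, as in the question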


  • Extract anything that looks like links from large amount of data in python

    - by Riz
    Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and malformed in many ways (like a "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc.). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but I hope to get some advice :)

    Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app which doesn't use regexps but a simple search for the start and end of every link) and then using a regexp to match the ones I need.
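
    One incremental speedup, sketched: fold the per-site patterns into a single alternation and precompile it, so the 5 GB is scanned once rather than once per site (sites is a hypothetical placeholder for the real target list).

        import re

        sites = ['example.com', 'example.org']   # the target sites
        combined = re.compile(
            r'https?://(?:www\.)?(?:%s)[^\s<>"\']*'
            % '|'.join(re.escape(s) for s in sites))

        def find_links(text):
            return combined.findall(text)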


  • Python: Comparing specific columns in two csv files

    - by coder999
    Say that I have two CSV files (file1 and file2) with contents as shown below:

        file1: fred,43,Male,"23,45",blue,"1, bedrock avenue"
        file2: fred,39,Male,"23,45",blue,"1, bedrock avenue"

    I would like to compare these two CSV records to see if columns 0, 2, 3, 4, and 5 are the same. I don't care about column 1. What's the most pythonic way of doing this?

    EDIT: Some example code would be appreciated.
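
    A sketch using the csv module, which correctly handles the quoted, comma-containing fields, and a fixed tuple of the columns that matter:

        import csv

        COLUMNS = (0, 2, 3, 4, 5)   # column 1 (the age) is deliberately ignored

        def records_match(path1, path2):
            with open(path1) as f1, open(path2) as f2:
                for row1, row2 in zip(csv.reader(f1), csv.reader(f2)):
                    if any(row1[i] != row2[i] for i in COLUMNS):
                        return False
            return True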


  • speed up calling lot of entities, and getting unique values, google app engine python

    - by user291071
    OK, this is a two-part question. I've seen and searched several methods to get a list of unique values for a class, and I haven't been practically happy with any of them so far. Does anyone have simple example code for getting unique values, for instance for this model? Here is my (super slow) version:

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        def uniqueLinkGet(tabl):
            start = time.time()
            dic = {}
            query = tabl.all()
            for obj in query:
                dic[obj.link] = 1
            end = time.time()
            print end-start
            return dic

    My second question: is iterating over a query slower than calling fetch? Is there a faster way to do the code below, especially if the number of elements can be larger than 1000?

        query = LinkRating2.all()
        link1 = 'some random string'
        a = query.filter('link = ', link1)
        adic = {}
        for itema in a:
            adic[itema.user] = itema.rating2
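
    One common datastore pattern for the first part, sketched: make uniqueness a write-time property instead of a read-time scan by keeping a small side model whose key_name is the link itself; listing the unique links is then a cheap keys-only scan of that model (UniqueLink is a hypothetical name).

        from google.appengine.ext import db

        class UniqueLink(db.Model):
            pass  # the key_name itself carries the link

        def record_link(link):
            # idempotent: get_or_insert creates the entity only once per link
            UniqueLink.get_or_insert(link)

        def unique_links():
            return [k.name() for k in UniqueLink.all(keys_only=True)]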


  • Python: eliminating stack traces into library code?

    - by Mark Harrison
    When I get a runtime exception from the standard library, it's almost always a problem in my code and not in the library code. Is there a way to truncate the exception stack trace so that it doesn't show the guts of the library package?

    For example, I would like to get this:

        Traceback (most recent call last):
          File "./lmd3-mkhead.py", line 71, in <module>
            main()
          File "./lmd3-mkhead.py", line 66, in main
            create()
          File "./lmd3-mkhead.py", line 41, in create
            headver1[depotFile]=rev
        TypeError: Data values must be of type string or None.

    and not this:

        Traceback (most recent call last):
          File "./lmd3-mkhead.py", line 71, in <module>
            main()
          File "./lmd3-mkhead.py", line 66, in main
            create()
          File "./lmd3-mkhead.py", line 41, in create
            headver1[depotFile]=rev
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 276, in __setitem__
            _DeadlockWrap(wrapF)  # self.db[key] = value
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/dbutils.py", line 68, in DeadlockWrap
            return function(*_args, **_kwargs)
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 275, in wrapF
            self.db[key] = value
        TypeError: Data values must be of type string or None.
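
    One hedged sketch: install a sys.excepthook that drops frames whose file path lies outside the script's own directory. This is a rough heuristic, not a general-purpose filter, shown Python 2 style to match the traceback above.

        import os
        import sys
        import traceback

        def short_excepthook(exc_type, exc_value, tb):
            here = os.path.dirname(os.path.abspath(sys.argv[0]))
            # keep only frames from files under the script's directory
            frames = [f for f in traceback.extract_tb(tb)
                      if os.path.abspath(f[0]).startswith(here)]
            sys.stderr.write('Traceback (most recent call last):\n')
            sys.stderr.writelines(traceback.format_list(frames))
            sys.stderr.writelines(
                traceback.format_exception_only(exc_type, exc_value))

        sys.excepthook = short_excepthook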


  • Python finding substring between certain characters using regex and replace()

    - by jCuga
    Suppose I have a string with lots of random stuff in it, like the following:

        strJunk = "asdf2adsf29Value=five&lakl23ljk43asdldl"

    I'm interested in obtaining the substring sitting between 'Value=' and '&', which in this example is 'five'. I can use a regex like the following:

        match = re.search(r'Value=?([^&>]+)', strJunk)
        >>> print match.group(0)
        Value=five
        >>> print match.group(1)
        five

    How come match.group(0) is the whole thing, 'Value=five', while group(1) is just 'five'? And is there a way for me to get just 'five' as the only result? (This question stems from my having only a tenuous grasp of regex.)

    I am also going to have to make a substitution in this string, such as the following:

        val1 = match.group(1)
        strJunk.replace(val1, "six", 1)

    which yields:

        'asdf2adsf29Value=six&lakl23ljk43asdldl'

    Considering that I plan on performing the above two tasks (finding the string between 'Value=' and '&', and replacing that value) over and over, I was wondering if there are more efficient ways of looking for the substring and replacing it in the original string. I'm fine sticking with what I've got, but I just want to make sure I'm not taking up more time than I have to if better methods are out there.
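
    group(0) is always the entire text the pattern matched, while group(1) is what the first parenthesised group captured, so group(1) is the way to get just 'five'. For the repeated find-and-replace, one sketch does both in a single pass with re.sub, keeping the 'Value=' prefix via a backreference:

        import re

        VALUE_RE = re.compile(r'(Value=)[^&]+')   # compiled once, reused

        new_junk = VALUE_RE.sub(r'\g<1>six', strJunk)
        # 'asdf2adsf29Value=six&lakl23ljk43asdldl'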


  • Convert a GTK python script to C

    - by Jessica
    The following script will take a screenshot on a GNOME desktop:

        import gtk.gdk

        w = gtk.gdk.get_default_root_window()
        sz = w.get_size()
        pb = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, sz[0], sz[1])
        pb = pb.get_from_drawable(w, w.get_colormap(), 0, 0, 0, 0, sz[0], sz[1])
        if pb != None:
            pb.save("screenshot.png", "png")
            print "Screenshot saved to screenshot.png."
        else:
            print "Unable to get the screenshot."

    Now, I've been trying to convert this to C to use it in one of the apps I am writing, but so far I've been unsuccessful. Is there any way to do this in C (on Linux)? Thanks! Jess.
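
    A rough GTK 2 C translation, sketched from the same GDK calls the Python version uses; treat it as an untested starting point (build with the pkg-config gtk+-2.0 flags):

        #include <gdk/gdk.h>

        int main(int argc, char **argv)
        {
            gdk_init(&argc, &argv);

            GdkWindow *w = gdk_get_default_root_window();
            gint width, height;
            gdk_drawable_get_size(GDK_DRAWABLE(w), &width, &height);

            /* grab the root window contents into a fresh pixbuf */
            GdkPixbuf *pb = gdk_pixbuf_get_from_drawable(
                NULL, GDK_DRAWABLE(w),
                gdk_drawable_get_colormap(GDK_DRAWABLE(w)),
                0, 0, 0, 0, width, height);

            if (pb != NULL) {
                gdk_pixbuf_save(pb, "screenshot.png", "png", NULL, NULL);
                g_print("Screenshot saved to screenshot.png.\n");
            } else {
                g_print("Unable to get the screenshot.\n");
            }
            return 0;
        }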


  • Join a list of lists together into 1 list in Python

    - by dotty
    Hey all. I have a list which consists of many lists; here is an example:

        [ [Obj, Obj, Obj, Obj], [Obj], [Obj], [ [Obj, Obj], [Obj, Obj, Obj] ] ]

    Is there a way to join all these items together into one list, so the output would be something like:

        [Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj]

    Thanks
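
    Because the example nests lists inside lists at more than one depth, a recursive flatten is one sketch that covers it (a single-level flatten such as itertools.chain would leave the inner [Obj, Obj] lists intact):

        def flatten(items):
            flat = []
            for item in items:
                if isinstance(item, list):
                    flat.extend(flatten(item))   # descend into nested lists
                else:
                    flat.append(item)
            return flat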


  • Easy Python input question

    - by Josh K
    I'd like to have something similar to the following pseudocode:

        while input is not None and timer < 5:
            input = getChar()
            timer = time.time() - start

        if timer >= 5:
            print "took too long"
        else:
            print input

    Is there any way to do this without threading? I would like an input method that returns whatever has been entered since the last time it was called, or None (null) if nothing was entered.
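
    On Unix, select.select can poll stdin with a timeout and no threads; a sketch (on Windows the rough equivalent would be msvcrt.kbhit):

        import select
        import sys
        import time

        start = time.time()
        entered = None
        while time.time() - start < 5:
            # returns after at most 0.1s, telling us if stdin has data waiting
            ready, _, _ = select.select([sys.stdin], [], [], 0.1)
            if ready:
                entered = sys.stdin.readline().rstrip('\n')
                break

        if entered is None:
            print "took too long"
        else:
            print entered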


  • Python OpenGL Can't Redraw Scene

    - by RobbR
    I'm getting started with OpenGL and shaders using GLUT and PyOpenGL. I can draw a basic scene, but for some reason I can't get it to update: any changes I make during idle(), display(), or reshape() are not reflected. Here are the methods:

        def display(self):
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
            glMatrixMode(GL_MODELVIEW)
            glLoadIdentity()
            glUseProgram(self.shader_program)
            self.m_vbo.bind()
            glEnableClientState(GL_VERTEX_ARRAY)
            glVertexPointerf(self.m_vbo)
            glDrawArrays(GL_TRIANGLES, 0, len(self.m_vbo))
            glutSwapBuffers()
            glutReportErrors()

        def idle(self):
            test_change += .1
            self.m_vbo = vbo.VBO(
                array([
                    [ test_change, 1, 0 ],   # triangle
                    [ -1, -1, 0 ],
                    [  1, -1, 0 ],
                    [  2, -1, 0 ],           # square
                    [  4, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2, -1, 0 ],
                    [  4,  1, 0 ],
                    [  2,  1, 0 ],
                ], 'f'))
            glutPostRedisplay()

        def begin(self):
            glutInit()
            glutInitWindowSize(400, 400)
            glutCreateWindow("Simple OpenGL")
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
            glutDisplayFunc(self.display)
            glutReshapeFunc(self.reshape)
            glutMouseFunc(self.mouse)
            glutMotionFunc(self.motion)
            glutIdleFunc(self.idle)
            self.define_shaders()
            glutMainLoop()

    I'd like to implement a time step in idle(), but even basic changes to the vertices or translations and rotations on the MODELVIEW matrix don't display. It just shows the initial state and never updates. Am I missing something?
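
    Two things stand out (hedged, since only these fragments are visible): glutInitDisplayMode is called after glutCreateWindow, so the GLUT_SINGLE request never applies to the window that was created, and a single-buffered mode does not match the glutSwapBuffers() call in display(); also, as quoted, test_change in idle() is a local name, so it cannot accumulate across calls. A sketch of the reordered setup:

        def begin(self):
            glutInit()
            # choose double buffering *before* creating the window,
            # so glutSwapBuffers() in display() has a back buffer to flip
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
            glutInitWindowSize(400, 400)
            glutCreateWindow("Simple OpenGL")
            # ... register callbacks as before ...
            self.test_change = 0.0   # keep the time step on self, not a local

        def idle(self):
            self.test_change += .1   # persists between idle() calls
            # ... rebuild the VBO using self.test_change ...
            glutPostRedisplay()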

