Search Results

Search found 15224 results on 609 pages for 'parallel python'.

Page 431 of 609

  • Converting Numpy Lstsq residual value to R^2

    - by whatnick
    I am performing a least squares regression as below (univariate). I would like to express the significance of the result in terms of R^2, but numpy only returns the unscaled residual. What would be a sensible way of normalizing it?

        field_clean, back_clean = rid_zeros(backscatter, field_data)
        num_vals = len(field_clean)
        x = field_clean[:, row:row+1]
        y = 10 * log10(back_clean)
        A = hstack([x, ones((num_vals, 1))])
        soln = lstsq(A, y)
        m, c = soln[0]
        residues = soln[1]
        print residues
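
    A minimal sketch of one way to get R^2 from the residual sum of squares that lstsq returns (not from the original post; it assumes y is one-dimensional and A is the design matrix built above):

        import numpy as np

        coeffs, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
        # lstsq's "residuals" is already the residual sum of squares (it may be
        # empty for rank-deficient problems, so fall back to computing it directly).
        ss_res = residuals[0] if residuals.size else np.sum((y - A.dot(coeffs)) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares about the mean
        r_squared = 1.0 - ss_res / ss_tot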

  • Why is numpy c extension slow?

    - by Bitwise
    I am working with large numpy arrays, and some native numpy operations are too slow for my needs (for example, a simple elementwise bitwise operation such as A & B). I started looking into writing C extensions to try to improve performance. As a test case, I tried the example given here, implementing a simple trace calculation. I was able to get it to work, but was surprised by the performance: for a (1000, 1000) numpy array, numpy.trace() was about 1000 times faster than the C extension! This happens whether I run it once or many times. Is this expected? Is the C extension overhead really that bad? Any ideas how to speed things up?
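
    One way to quantify the gap (a sketch, not from the original post) is to time both implementations with timeit; "mytrace" below stands in for the hypothetical compiled extension and is not a real module:

        import timeit
        import numpy as np

        a = np.random.rand(1000, 1000)
        print(timeit.timeit(lambda: np.trace(a), number=100))
        # print(timeit.timeit(lambda: mytrace.trace(a), number=100))  # the C extension under test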

  • nested list comprehension using intermediate result

    - by KentH
    I am trying to grok the output of a function which doesn't have the courtesy of setting a result code. I can tell it failed by the "error:" string which is mixed into the stderr stream, often in the middle of a different conversion status message. I have the following list comprehension which works, but scans for the "error:" string twice. Since it only rescans the actual error lines it works fine, but it annoys me that I can't figure out how to use a single scan. Here's the working code:

        errors = [e[e.find('error:'):] for e in err.splitlines() if 'error:' in e]

    The obvious (and wrong) way to simplify is to save the find result:

        errors = [e[i:] for i in e.find('error:') if i != -1 for e in err.splitlines()]

    However, I get "UnboundLocalError: local variable 'e' referenced before assignment". Blindly reversing the 'for's in the comprehension also fails. How is this done? Thanks, Kent
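
    A sketch of one single-scan version (not from the original post): compute find() once per line in a nested generator and reuse the index in the outer comprehension. It assumes err holds the captured stderr text:

        errors = [e[i:]
                  for e, i in ((e, e.find('error:')) for e in err.splitlines())
                  if i != -1]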

  • How to optimize this script

    - by marks34
    I have written the following script. It opens a file, reads it line by line, and deletes the first character of each line; if what remains is non-empty it is added to an array. Then each element of the array is split on whitespace, sorted alphabetically, and joined again. Every line is printed, because the script is run from the console and writes everything to a file via standard output. I'd like to make this code more pythonic. Any ideas?

        import sys

        def main():
            filename = sys.argv[1]
            file = open(filename)
            arr = []
            for line in file:
                line = line[1:].replace("\n", "")
                if line:
                    arr.append(line)
            for line in arr:
                lines = line.split(" ")
                lines.sort(key=str.lower)
                line = ''.join(lines)
                print line

        if __name__ == '__main__':
            main()
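
    A more compact sketch of the same pipeline (not from the original post): read with a context manager, skip empty lines, and sort each line's words case-insensitively. The space in the final join is an assumption; the original used ''.join:

        import sys

        def main():
            with open(sys.argv[1]) as f:
                for line in f:
                    line = line[1:].rstrip("\n")
                    if line:
                        print(" ".join(sorted(line.split(" "), key=str.lower)))

        if __name__ == '__main__':
            main()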

  • list in loop, Nonetype errors

    - by user2926755
    Here is my code:

        def printList(stringlist):
            empty = []
            if stringlist is None:
                print empty
            else:
                print stringlist

        def add(stringlist, string):
            string = [] if string is None else string
            if stringlist is not None:
                stringlist.insert(0, string)
            else:
                stringlist.append(1)

    Somehow it raises "AttributeError: 'NoneType' object has no attribute 'append'". I was originally expecting the code to run like this:

        >>> myList = None
        >>> printList(myList)
        []
        >>> for word in ['laundry', 'homework', 'cooking', 'cleaning']:
        ...     myList = add(myList, word)
        ...     printList(myList)
        [laundry]
        [homework, laundry]
        [cooking, homework, laundry]
        [cleaning, cooking, homework, laundry]
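
    A sketch of one way to get the behaviour shown above (not from the original post): have add() always return a list, treating None as empty, so the caller can rebind myList each time:

        def add(stringlist, string):
            stringlist = [] if stringlist is None else stringlist
            return [string] + stringlist

        def printList(stringlist):
            print([] if stringlist is None else stringlist)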

  • Is a string formatter that pulls variables from its calling scope bad practice?

    - by Eric
    I have some code that does an awful lot of string formatting. Often I end up with code along the lines of:

        "...".format(x=x, y=y, z=z, foo=foo, ...)

    where I'm trying to interpolate a large number of variables into a large string. Is there a good reason not to write a function like this, which uses the inspect module to find the variables to interpolate?

        import inspect

        def interpolate(s):
            return s.format(**inspect.currentframe().f_back.f_locals)

        def generateTheString(x):
            y = foo(x)
            z = x + y
            # more calculations go here
            return interpolate("{x}, {y}, {z}")
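
    For comparison (an editorial aside, not part of the original post): on Python 3.6+, f-strings interpolate names from the enclosing scope directly, which covers many of these cases without frame inspection:

        def generateTheString(x):
            y = foo(x)      # foo is the question's helper, assumed defined elsewhere
            z = x + y
            return f"{x}, {y}, {z}"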

  • overriding callbacks avoiding attribute pollution

    - by pygabriel
    I have a class that has some callbacks and its own interface, something like:

        class Service:
            def __init__(self):
                connect("service_resolved", self.service_resolved)

            def service_resolved(self, a, b, c):
                '''This function is called when the service_resolved
                signal is triggered, and it has a lot of parameters.'''

    The connect function is, for example, gtkwidget.connect, but I want this connection to be something more general, so I've decided to use a "twisted-like" approach:

        class MyService(Service):
            def my_on_service_resolved(self, little_param):
                '''A decorated version of service_resolved.'''

            def service_resolved(self, a, b, c):
                super(MyService, self).service_resolved(a, b, c)
                little_param = "something that's obtained from a, b, c"
                self.my_on_service_resolved(little_param)

    So I can use MyService by overriding my_on_service_resolved. The problem is attribute pollution: in the real implementation, Service has some attributes that can accidentally be overridden in MyService and in anything that subclasses MyService. How can I avoid attribute pollution? What I've thought of is a wrapper-like approach, but I don't know if it's a good solution:

        class WrapperService():
            def __init__(self):
                self._service = service_resolved
                # how to override self._service.service_resolved callback?

            def my_on_service_resolved(self, param):
                ''' '''
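
    A sketch of one standard way to reduce accidental attribute clashes (not from the original post): prefix internal attributes with double underscores so Python name-mangles them per class, keeping subclass attributes in separate slots. The connect stub stands in for the gtkwidget.connect-style helper mentioned above:

        def connect(signal, callback):
            '''Stand-in for the question's signal-connection helper.'''
            pass

        class Service(object):
            def __init__(self):
                self.__state = {}          # stored as _Service__state
                connect("service_resolved", self.service_resolved)

            def service_resolved(self, a, b, c):
                pass

        class MyService(Service):
            def __init__(self):
                super(MyService, self).__init__()
                self.__state = "mine"      # stored as _MyService__state; no clash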

  • Is it possible in SQLAlchemy to filter by a database function or stored procedure?

    - by Rico Suave
    We're using SQLAlchemy in a project with a legacy database. The database has functions/stored procedures. In the past we used raw SQL and we could use these functions as filters in our queries; I would like to do the same for SQLAlchemy queries if possible. I have read about @hybrid_property, but some of these functions need one or more parameters. For example, I have a User model that has a JOIN to a bunch of historical records. These historical records have a date, a debit field, and a credit field, so we can look up the balance of a user at a specific point in time by doing SUM(credit) - SUM(debit) up to the given date. We have a database function for that, called dbo.Balance(user_id, date_time). I can use it to check the balance of a user at a given point in time, and I would like to use it as a criterion in a query to select only users that have a negative balance at a specific date/time:

        selection = users.filter(
            coalesce(Users.status, 0) == 1,
            coalesce(Users.no_reminders, 0) == 0,
            dbo.pplBalance(Users.user_id, datetime.datetime.now()) < -0.01
        ).all()

    This is of course a non-working example, just for you to get the gist of what I'd like to do. The solution looks to be hybrid properties, but as I mentioned above, these only work without parameters (as they are properties, not methods). Any suggestions on how to implement something like this (if it's even possible) are welcome. Thanks.
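
    A sketch of calling a database function directly inside a filter via SQLAlchemy's func construct, which supports dotted (schema-qualified) names (not from the original post; the model and column names are taken from the question, and session is assumed to be an open Session):

        import datetime
        from sqlalchemy import func

        selection = (
            session.query(Users)
            .filter(
                func.coalesce(Users.status, 0) == 1,
                func.coalesce(Users.no_reminders, 0) == 0,
                func.dbo.Balance(Users.user_id, datetime.datetime.now()) < -0.01,
            )
            .all()
        )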

  • How can I retrieve all the returned variables from a function?

    - by user1447941
        import random

        def some_function():
            example = random.randint(0, 1)
            if example == 1:
                other_example = 2
            else:
                return False
            return example, other_example

    With this example, there is a chance that either one or two values will be returned. Usually, for one value I'd use var = some_function(), while for two I'd use var, var2 = some_function(). How can I tell how many values are being returned by the function?
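
    A sketch of one way to consume a function that may return either False or a tuple (not from the original post): inspect the result before unpacking it:

        result = some_function()
        if result is False:
            print("no values returned")
        else:
            example, other_example = result
            print(example, other_example)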

  • How does git fetch commits associated with a file?

    - by liadan
    I'm writing a simple parser of the .git/* files. I have covered almost everything: objects, refs, pack files, etc. But I have a problem. Let's say I have a big 300M repository (in a pack file) and I want to find all the commits which changed /some/deep/inside/file. What I'm doing now is:

    - fetch the last commit
    - find the file in it by:
      - fetching the parent tree
      - finding the subtree inside it
      - repeating recursively until I get to the file
      - additionally checking the hash of each subfolder on the way to the file; if one of them is the same as in the previous commit, I assume the file was not changed (because its parent directory didn't change)
    - then store the hash of the file and fetch the parent commit
    - find the file again and check whether its hash changed
    - if it did, the original commit (i.e. the one before the parent) changed the file

    And I repeat this over and over until I reach the very first commit. This solution works, but it sucks: in the worst case the first search can take as long as 3 minutes (for the 300M pack). Is there any way to speed it up? I tried to avoid loading such large objects into memory, but right now I don't see any other way, and even then the initial load would take forever. Thanks for any help!
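
    Not a replacement for the custom parser, but a handy cross-check while optimizing it (an editorial sketch, not from the original post): ask git itself which commits touched the path and compare that against the parser's output:

        import subprocess

        def commits_touching(repo_path, file_path):
            out = subprocess.check_output(
                ["git", "-C", repo_path, "rev-list", "HEAD", "--", file_path])
            return out.decode().split()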

  • Related to lists and file handling

    - by kaushik
    I have a file whose contents are in list form, such as:

        [1,'ab','fgf','ssd']
        [2,'eb','ghf','hhsd']
        [3,'ag','rtf','ssfdd']

    I want to read the file line by line using f.readline() and assign each line to a list, so that I can work with it in the program using normal list operations. I tried:

        k = []
        k = f.readline()
        print k[1]

    I expected this to show the second element of the list on the first line, but it treated the line as a plain string and printed '1'. How do I get the expected output? Please suggest.
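
    A sketch using ast.literal_eval to turn each line of text into an actual Python list (not from the original post; "data.txt" is a placeholder filename, and literal_eval is safer than eval for untrusted input):

        import ast

        with open("data.txt") as f:
            rows = [ast.literal_eval(line) for line in f if line.strip()]

        print(rows[0][1])   # 'ab' -- the second element of the first line's list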

  • Django template Path

    - by user74283
    Hi, I'm following the tutorial at http://docs.djangoproject.com/en/dev/intro/tutorial02/#intro-tutorial02 in a Windows 7 environment. My settings file has:

        TEMPLATE_DIRS = (
            'C:/django-project/myapp/mytemplates/admin'
        )

    I copied the base template admin/base_site.html from the default Django admin template directory in the Django source itself (django/contrib/admin/templates) into an admin subdirectory of the myapp directory, as the tutorial instructs. It doesn't seem to take effect for some reason. Any clue what the problem might be? Do I have to run a syncdb?
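
    One thing worth double-checking (an editorial aside, not from the original post): without a trailing comma the parentheses above do not create a tuple, and the tutorial points TEMPLATE_DIRS at the mytemplates directory, with the copied file living at admin/base_site.html beneath it:

        TEMPLATE_DIRS = (
            'C:/django-project/myapp/mytemplates',
        )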

  • Detecting and interacting with long running process

    - by jacquesb
    I want a script to start and interact with a long-running process. The process is started the first time the script is executed; after that the script can be executed repeatedly, but it will detect that the process is already running. The script should be able to interact with the process, and I would like this to work on both Unix and Windows. I am unsure how to do this. Specifically, how do I detect whether the process is already running and open a pipe to it? Should I use sockets (e.g. registering the server process on a known port and then checking whether it responds), or should I use named pipes? Or is there some easier way?
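
    A sketch of the socket-based option mentioned above (not from the original post): try to connect to a well-known localhost port, and start the process if nothing answers. The port number and server script name are purely illustrative:

        import socket
        import subprocess
        import time

        PORT = 48765

        def connect_or_start():
            '''Connect to the long-running process, starting it first if needed.'''
            try:
                return socket.create_connection(("127.0.0.1", PORT), timeout=1)
            except OSError:
                subprocess.Popen(["python", "long_running_server.py"])
                time.sleep(1)  # give the server a moment to bind its port
                return socket.create_connection(("127.0.0.1", PORT), timeout=5)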

  • Increment number in string

    - by iform
    Hi, I am stumped... I am trying to generate the following sequence of names until a certain condition is met:

        test_1.jpg
        test_2.jpg
        ..
        test_50.jpg

    The solution (if you could remotely call it that) that I have is:

        fileCount = 0
        while os.path.exists(dstPath):
            fileCount += 1
            parts = os.path.splitext(dstPath)
            dstPath = "%s_%d%s" % (parts[0], fileCount, parts[1])

    However, this produces the following output:

        test_1.jpg
        test_1_2.jpg
        test_1_2_3.jpg
        .....etc

    The question: how do I change the number in its current place (without appending numbers to the end)? P.S. I'm using this for a file-renaming tool.
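
    A sketch that formats each candidate from the original base name (not from the original post), so the counter replaces itself instead of accumulating suffixes:

        import os

        def next_free_path(dst_path):
            base, ext = os.path.splitext(dst_path)
            candidate, count = dst_path, 0
            while os.path.exists(candidate):
                count += 1
                candidate = "%s_%d%s" % (base, count, ext)
            return candidate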

  • Pickled my dictionary from ZODB but got a smaller one?

    - by Someone Someoneelse
    I use ZODB and I want to copy my 'database_1.fs' file to another 'database_2.fs', so I opened the root dictionary of 'database_1.fs' and pickle.dump-ed it into a text file. Then I pickle.load-ed it into a dictionary variable, and finally I updated the root dictionary of the other 'database_2.fs' with that variable. It works, but I wonder why the size of 'database_1.fs' is not equal to the size of 'database_2.fs', even though they are copies of each other.

        def openstorage(store):  # opens the database
            data = {}
            data['file'] = filestorage
            data['db'] = DB(data['file'])
            data['conn'] = data['db'].open()
            data['root'] = data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):  # close the database after saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then this is what I do:

        import pickle

        loc1 = 'G:\\database_1.fs'
        op1 = openstorage(loc1)
        root1 = getroot(op1)

        loc2 = 'G:database_2.fs'
        op2 = openstorage(loc2)
        root2 = getroot(op2)

        >>> len(root1)
        215
        >>> len(root2)
        0

        pickle.dump(root1, open("save.txt", "wb"))
        item = pickle.load(open("save.txt", "rb"))  # now item is a dictionary
        root2.update(item)

        closestorage(op1)
        closestorage(op2)

        # after I open both of the databases
        # I get the same keys in both databases,
        # but database_2.fs is smaller than database_1.fs in size.

        >>> len(root2) == len(root1) == 215  # they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs; (2) both databases have the same length and the same indexes.
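
    One detail worth knowing here (an editorial aside, not from the original post): a FileStorage keeps old object revisions until it is packed, so a freshly written copy is usually smaller than the database it was copied from. Packing the original is a rough sketch of how to compare like with like:

        op1['db'].pack()   # removes old object revisions from database_1.fs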

  • Search one element of a list in another list recursively

    - by androidnoob
    I have two lists:

        old_name_list = [a-1234, a-1235, a-1236]
        new_name_list = [(a-1235, a-5321), (a-1236, a-6321), (a-1234, a-4321), ... ]

    I want to check, for each element of old_name_list, whether it exists in new_name_list and return the associated value: for example, the first element of old_name_list should return a-4321, the second a-5321, and so on until old_name_list is exhausted. I have tried the following and it doesn't work:

        for old_name, new_name in zip(old_name_list, new_name_list):
            if old_name in new_name[0]:
                print new_name[1]

    Is my approach wrong, or do I just need to make some minor changes to it? Thank you in advance.
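
    A sketch using a dict built from the pairs (not from the original post), so each old name is looked up directly instead of being zipped positionally against the pair list; it assumes the names are strings:

        mapping = dict(new_name_list)       # {'a-1235': 'a-5321', ...}
        for old_name in old_name_list:
            if old_name in mapping:
                print(mapping[old_name])    # a-4321 for a-1234, and so on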

  • Trying to create a group of button sprites

    - by user1449653
    Good day. I have about 15 images that I need to turn into buttons. I have buttons working with a Box class, which looks like this:

        class Box(pygame.sprite.Sprite):
            def __init__(self):
                pygame.sprite.Sprite.__init__(self)
                self.image = pygame.Surface((35, 30))
                self.image = self.image.convert()
                self.image.fill((255, 0, 0))
                self.rect = self.image.get_rect()
                self.rect.centerx = 25
                self.rect.centery = 505
                self.dx = 10
                self.dy = 10

    I am trying to make the buttons work with image sprites, so I attempted to copy the class style of the box for my icons. The code looks like this:

        class Icons(pygame.sprite.Sprite):
            def __init__(self):
                pygame.sprite.Sprite.__init__(self)
                self.image = pygame.image.load("images/airbrushIC.gif").convert()
                self.rect = self.image.get_rect()
                self.rect.x = 25
                self.rect.y = 550

    and the code in main():

        rect = image.get_rect()
        rect.x = 25
        rect.y = 550
        ic1 = Icons((screen.get_rect().x, screen.get_rect().y))
        screen.blit(ic1.image, ic1.rect)
        pygame.display.update()

    This code produces either a positional error (__init__ accepts 1 argument but 2 are given) or an "image is not referenced" error inside the Icons class. I'm unsure whether this is the right way to go about it anyway. I know for sure that I need to load all the images (as sprites), store them in an array, and then have the mouse check whether it is clicking one of the items in the array using a for loop. Thanks.

    EDIT, QUESTION 2:

        class Icons(pygame.sprite.Sprite):
            def __init__(self, *args):
                pygame.sprite.Sprite.__init__(self, *args)
                self.image = pygame.image.load("images/airbrushIC.gif").convert()
                self.rect = self.image.get_rect()
                ic1 = self.image
                self.rect.x = 10
                self.rect.y = 490
                self.image = pygame.image.load("images/fillIC.gif").convert()
                self.rect = self.image.get_rect()
                ic2 = self.image
                self.rect.x = 10
                self.rect.y = 540

    Thanks to your help I got the Icons class loading ONE image, but not both, obviously because the first is overwritten by the second. It seems that a class as written isn't quite what I need, which begs the question of how to make sprites outside of a class. If there is a way to make the class work, please let me know.
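
    A sketch of one common pattern for this (not from the original post): give each icon its own image path and position, and collect them in a sprite Group that can be drawn and hit-tested in one place. The paths and positions are illustrative:

        import pygame

        class Icon(pygame.sprite.Sprite):
            def __init__(self, image_path, pos):
                pygame.sprite.Sprite.__init__(self)
                self.image = pygame.image.load(image_path).convert()
                self.rect = self.image.get_rect(topleft=pos)

        icons = pygame.sprite.Group(
            Icon("images/airbrushIC.gif", (10, 490)),
            Icon("images/fillIC.gif", (10, 540)),
        )

        # In the event loop, a click can be matched against every icon:
        # for icon in icons:
        #     if icon.rect.collidepoint(event.pos):
        #         ...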

  • Filtering SQLAlchemy query on attribute_mapped_collection field of relationship

    - by bsa
    I have two classes, Tag and Hardware, defined with a simple parent-child relationship (see the full definitions at the end). Now I want to filter a query on Tag using the version field in Hardware through an attribute_mapped_collection, e.g.:

        def get_tags(order_code=None, hardware_filters=None):
            session = Session()
            query = session.query(Tag)
            if order_code:
                query = query.filter(Tag.order_code == order_code)
            if hardware_filters:
                for k, v in hardware_filters.iteritems():
                    query = query.filter(getattr(Tag.hardware, k).version == v)
            return query.all()

    But I get:

        AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with Tag.hardware has an attribute 'baseband'

    The same thing happens if I strip it back by hard-coding the attribute, e.g.:

        query.filter(Tag.hardware.baseband.version == v)

    I can do it this way:

        query = query.filter(Tag.hardware.any(artefact=k, version=v))

    But why can't I filter directly through the attribute?

    Class definitions:

        class Tag(Base):
            __tablename__ = 'tag'

            tag_id = Column(Integer, primary_key=True)
            order_code = Column(String, nullable=False)
            version = Column(String, nullable=False)
            status = Column(String, nullable=False)
            comments = Column(String)

            hardware = relationship(
                "Hardware",
                backref="tag",
                collection_class=attribute_mapped_collection('artefact'),
            )

            __table_args__ = (
                UniqueConstraint('order_code', 'version'),
            )

        class Hardware(Base):
            __tablename__ = 'hardware'

            hardware_id = Column(Integer, primary_key=True)
            tag_id = Column(String, ForeignKey('tag.tag_id'))
            product_id = Column(String, nullable=True)
            artefact = Column(String, nullable=False)
            version = Column(String, nullable=False)

  • Prevent web2py from caching?

    - by Joe
    Hi! I'm working with web2py, and for some reason web2py seems to fail to notice when code has changed in certain cases. I can't really narrow it down, but from time to time changes in the code are not reflected; web2py obviously has the old version cached somewhere. The only thing that helps is quitting web2py and restarting it (I'm using the internal server). Any hints? Thank you!

  • How to quickly parse a list of strings

    - by math
    If I want to split a list of words separated by a delimiter character, I can use:

        >>> 'abc,foo,bar'.split(',')
        ['abc', 'foo', 'bar']

    But how can I easily and quickly do the same thing if I also want to handle quoted strings which can contain the delimiter character?

        In:  'abc,"a string, with a comma","another, one"'
        Out: ['abc', 'a string, with a comma', 'another, one']

    Related question: How can I parse a comma delimited string into a list (caveat)?
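
    A sketch using the csv module (not from the original post), which already understands quoted fields containing the delimiter:

        import csv

        line = 'abc,"a string, with a comma","another, one"'
        print(next(csv.reader([line])))
        # ['abc', 'a string, with a comma', 'another, one']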

  • What do I do with a Concrete Syntax Tree?

    - by Cap
    I'm using pyPEG to create a parse tree for a simple grammar. The tree is represented using lists and tuples. Here's an example:

        [('command',
          [('directives',
            [('directive', [('name', 'retrieve')]),
             ('directive', [('name', 'commit')])]),
           ('filename', [('name', 'f30502')])])]

    My question is what do I do with it at this point? I know a lot depends on what I am trying to do, but I haven't been able to find much about consuming/using parse trees, only creating them. Does anyone have any pointers to references I might use? Thanks for your help.
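
    A minimal sketch of walking such a (tag, children) tree and printing its leaves (not from the original post), as a starting point for turning the parse tree into actions:

        def walk(node, depth=0):
            tag, value = node
            if isinstance(value, list):
                print("  " * depth + tag)
                for child in value:
                    walk(child, depth + 1)
            else:
                print("  " * depth + "%s = %s" % (tag, value))

        tree = [('command',
                 [('directives',
                   [('directive', [('name', 'retrieve')]),
                    ('directive', [('name', 'commit')])]),
                  ('filename', [('name', 'f30502')])])]

        for node in tree:
            walk(node)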

  • Choosing randomly all the elements in the list just once

    - by Dalek
    How is it possible to randomly choose a number from a list with n elements, n times, without picking the same element of the list twice? I wrote code to choose the sequence numbers of the elements in the list, but it is slow:

        >>> redshift = np.array([0.92, 0.17, 0.51, 1.33, ...., 0.41, 0.82])
        >>> redshift.shape
        (1225,)

        exclude = []
        k = 0
        ng = 1225
        while (k < ng):
            flag1 = 0
            sq = random.randint(0, ng)
            while (flag1 < 1):
                if sq in exclude:
                    flag1 = 1
                    sq = random.randint(0, ng)
                else:
                    print sq
                    exclude.append(sq)
                    flag1 = 0
                    z = redshift[sq]
                    k += 1

    It doesn't choose all the sequence numbers of the elements in the list.
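
    A sketch of the standard approach (not from the original post): shuffle the index array once, then visit each index exactly once in the shuffled order:

        import numpy as np

        order = np.random.permutation(len(redshift))   # every index 0..1224 exactly once
        for sq in order:
            z = redshift[sq]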
