Search Results

Search found 15224 results on 609 pages for 'parallel python'.

  • varargs in lambda functions in Python

    - by brain_damage
    Is it possible for a lambda function to have a variable number of
    arguments? For example, I want to write a metaclass which creates a
    method for every method of some other class; each newly created method
    returns the opposite value of the original method and has the same
    number of arguments. And I want to do this with lambda functions. How
    to pass the arguments? Is it possible?

        class Negate(type):
            def __new__(mcs, name, bases, _dict):
                extended_dict = _dict.copy()
                for (k, v) in _dict.items():
                    if hasattr(v, '__call__'):
                        extended_dict["not_" + k] = lambda s, *args, **kw: not v(s, *args, **kw)
                return type.__new__(mcs, name, bases, extended_dict)

        class P(metaclass=Negate):
            def __init__(self, a):
                self.a = a
            def yes(self):
                return True
            def maybe(self, you_can_chose):
                return you_can_chose

    But the result is totally wrong:

        >>> p = P(0)
        >>> p.yes()
        True
        >>> p.not_yes()  # should be False
        Traceback (most recent call last):
          File "<pyshell#150>", line 1, in <module>
            p.not_yes()
          File "C:\Users\Nona\Desktop\p10.py", line 51, in <lambda>
            extended_dict["not_" + k] = lambda s, *args, **kw: not v(s, *args, **kw)
        TypeError: __init__() takes exactly 2 positional arguments (1 given)
        >>> p.maybe(True)
        True
        >>> p.not_maybe(True)  # should be False
        True
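    The varargs part already works; what bites here is that the lambda
    closes over the loop variable v, so by the time any not_* method runs,
    v refers to whichever method the loop saw last (here __init__), hence
    the TypeError. A sketch of one common fix: freeze each v in its own
    scope with a small factory function (a default argument works too),
    and skip dunder methods so __init__ isn't wrapped:

        class Negate(type):
            def __new__(mcs, name, bases, _dict):
                def negated(f):
                    # each call gives the lambda its own f, fixed at
                    # definition time rather than at call time
                    return lambda s, *args, **kw: not f(s, *args, **kw)
                extended_dict = _dict.copy()
                for (k, v) in _dict.items():
                    if hasattr(v, '__call__') and not k.startswith('__'):
                        extended_dict["not_" + k] = negated(v)
                return type.__new__(mcs, name, bases, extended_dict)

    With this, p.not_yes() returns False and p.not_maybe(True) returns False.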

  • Python/YACC Lexer: Token priority?

    - by Rosarch
    I'm trying to use reserved words in my grammar:

        reserved = {
            'if': 'IF',
            'then': 'THEN',
            'else': 'ELSE',
            'while': 'WHILE',
        }

        tokens = [
            'DEPT_CODE',
            'COURSE_NUMBER',
            'OR_CONJ',
            'ID',
        ] + list(reserved.values())

        t_DEPT_CODE = r'[A-Z]{2,}'
        t_COURSE_NUMBER = r'[0-9]{4}'
        t_OR_CONJ = r'or'

        t_ignore = ' \t'

        def t_ID(t):
            r'[a-zA-Z_][a-zA-Z_0-9]*'
            if t.value in reserved.values():
                t.type = reserved[t.value]
                return t
            return None

    However, the t_ID rule somehow swallows up DEPT_CODE and OR_CONJ. How
    can I get around this? I'd like those two to take higher precedence
    than the reserved words.
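    This is PLY's documented matching order: rules defined as functions are
    tried first, in the order they appear in the file, and only then the
    string rules (sorted by decreasing pattern length). Since t_ID is a
    function, it always outranks the string rules t_DEPT_CODE and t_OR_CONJ.
    (There is also a separate bug above: the membership test should be
    t.value in reserved, i.e. against the keys, for reserved[t.value] to
    succeed.) One way around it, sketched: make the higher-priority rule a
    function placed before t_ID, and fold the 'or' case into t_ID, since a
    bare r'or' would be shadowed by ID anyway:

        def t_DEPT_CODE(t):
            r'[A-Z]{2,}'
            return t

        def t_ID(t):
            r'[a-zA-Z_][a-zA-Z_0-9]*'
            if t.value == 'or':
                t.type = 'OR_CONJ'
                return t
            if t.value in reserved:          # keys, not values()
                t.type = reserved[t.value]
                return t
            return None                      # discard plain identifiers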

  • Python loop | "do-while" over a tree

    - by johannix
    Is there a more Pythonic way to put this loop together?

        while True:
            children = tree.getChildren()
            if not children:
                break
            tree = children[0]

    UPDATE: I think this syntax is probably what I'm going to go with:

        while tree.getChildren():
            tree = tree.getChildren()[0]
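    If calling getChildren() twice per level is a concern (it may do real
    work), a middle ground that keeps the flat shape is to prime the
    variable once and reassign it at the bottom of the loop:

        children = tree.getChildren()
        while children:
            tree = children[0]
            children = tree.getChildren()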

  • Proper structure for many test cases in Python with unittest

    - by mellort
    I am looking into the unittest package, and I'm not sure of the proper
    way to structure my test cases when writing a lot of them for the same
    method. Say I have a fact function which calculates the factorial of a
    number; would this testing file be OK?

        import unittest

        class functions_tester(unittest.TestCase):
            def test_fact_1(self):
                self.assertEqual(1, fact(1))
            def test_fact_2(self):
                self.assertEqual(2, fact(2))
            def test_fact_3(self):
                self.assertEqual(6, fact(3))
            def test_fact_4(self):
                self.assertEqual(24, fact(4))
            def test_fact_5(self):
                self.assertFalse(1 == fact(5))
            def test_fact_6(self):
                self.assertRaises(RuntimeError, fact, -1)  # fact(-1)

        if __name__ == "__main__":
            unittest.main()

    It seems sloppy to have so many test methods for one method. I'd like
    to just have one testing method and put a ton of basic test cases
    (i.e. 4! == 24, 3! == 6, 5! == 120, and so on), but unittest doesn't
    let you do that. What is the best way to structure a testing file in
    this scenario? Thanks in advance for the help.
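    unittest does let you loop inside a single test method; the trade-off
    is that the first failing case stops the method, so it helps to pass a
    message identifying the case. A sketch (where fact is imported from is
    an assumption):

        import unittest
        from mymodule import fact   # wherever fact() actually lives

        class FactTest(unittest.TestCase):
            def test_known_values(self):
                cases = [(1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]
                for n, expected in cases:
                    self.assertEqual(expected, fact(n), "fact(%d)" % n)

            def test_negative_input_raises(self):
                self.assertRaises(RuntimeError, fact, -1)

        if __name__ == "__main__":
            unittest.main()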

  • Regex replace (in Python) - a more simple way?

    - by Evan Fosmark
    Any time I want to replace a piece of text that is part of a larger
    piece of text, I always have to do something like:

        "(?P<start>some_pattern)(?P<replace>foo)(?P<end>end)"

    And then concatenate the start group with the new data for the replace
    part, and then the end group. Is there a better method for this?
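    re.sub does the reassembly for you: capture the context and refer to
    it with backreferences in the replacement, or use lookarounds so the
    context is never consumed in the first place. A small sketch:

        import re

        text = "start foo end"
        # backreferences splice the captured context back in
        print(re.sub(r'(start )foo( end)', r'\1bar\2', text))
        # lookarounds match the context without consuming it
        # (the lookbehind must be fixed-width)
        print(re.sub(r'(?<=start )foo(?= end)', 'bar', text))

    Both print "start bar end".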

  • Python - Compress ASCII String

    - by n0idea
    I'm looking for a way to compress an ASCII-based string -- any help? I
    also need to decompress it. I tried zlib, but with no luck. What can I
    do to compress the string into a shorter length? Code:

        def compress(request):
            if request.POST:
                data = request.POST.get('input')
                if is_ascii(data):
                    result = zlib.compress(data)
                    return render_to_response('index.html',
                                              {'result': result, 'input': data},
                                              context_instance=RequestContext(request))
                else:
                    result = "Error, the string is not ascii-based"
                    return render_to_response('index.html', {'result': result},
                                              context_instance=RequestContext(request))
            else:
                return render_to_response('index.html', {},
                                          context_instance=RequestContext(request))
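    Two things commonly go wrong here: zlib.compress returns raw binary,
    which cannot be rendered into a template as-is, and very short inputs
    often come out *longer* than they went in because of zlib's header and
    checksum overhead. A sketch that keeps the result text-safe by
    base64-encoding it (the helper names are made up):

        import base64
        import zlib

        def compress_text(s):
            # zlib works on bytes; base64 makes the binary result safe
            # to embed in HTML or pass through a form
            return base64.b64encode(zlib.compress(s.encode('ascii')))

        def decompress_text(b64):
            return zlib.decompress(base64.b64decode(b64)).decode('ascii')

    Note that base64 adds roughly a third back, so this only wins on
    strings long or repetitive enough for zlib to earn its overhead.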

  • Python Game using pyGame with Window Menu elements

    - by Zoja
    Here's the deal: I'm trying to write an arkanoid clone, and I need a
    window menu like you get in pyGTK -- for example File > (Open/Save/Exit),
    and an "About" dialog where the author is listed. I'm already using
    pygame for the game logic. I've tried pgu to write the GUI, but that
    doesn't help me: although it has the menu elements I'm talking about,
    you can't include the screen of the game in its container. Does anybody
    know how to include such window menus when using pygame?
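    One long-standing workaround (fragile and platform-dependent, from the
    SDL 1.2 era) is to embed the pygame display inside a regular GUI
    toolkit window by pointing SDL at an existing widget via the
    SDL_WINDOWID environment variable; the toolkit then supplies the menu
    bar. A sketch with Tkinter:

        import os
        try:
            import tkinter as tk       # Python 3
        except ImportError:
            import Tkinter as tk       # Python 2
        import pygame

        root = tk.Tk()
        menubar = tk.Menu(root)
        filemenu = tk.Menu(menubar, tearoff=0)
        filemenu.add_command(label="Exit", command=root.quit)
        menubar.add_cascade(label="File", menu=filemenu)
        root.config(menu=menubar)

        embed = tk.Frame(root, width=640, height=480)
        embed.pack()
        root.update()                  # the frame needs a real window id
        os.environ['SDL_WINDOWID'] = str(embed.winfo_id())
        pygame.display.init()
        screen = pygame.display.set_mode((640, 480))

    This works on X11 and Windows with SDL 1.2; it does not work
    everywhere, so treat it as a starting point rather than a recipe.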

  • Python and sqlite3 - importing and exporting databases

    - by JPC
    I'm trying to write a script to import a database file. I wrote the
    script to export the file like so:

        import sqlite3

        con = sqlite3.connect('../sqlite.db')
        with open('../dump.sql', 'w') as f:
            for line in con.iterdump():
                f.write('%s\n' % line)

    Now I want to be able to import that database. I tried:

        import sqlite3

        con = sqlite3.connect('../sqlite.db')
        f = open('../dump.sql', 'r')
        str = f.read()
        con.execute(str)

    but I'm not allowed to execute more than one statement. Is there a way
    to get it to run a .sql script directly?
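    sqlite3 has exactly this: executescript(), which accepts a string
    containing any number of statements. (As a side note, naming the
    variable str shadows the builtin.) A sketch:

        import sqlite3

        con = sqlite3.connect('../sqlite.db')
        with open('../dump.sql', 'r') as f:
            sql = f.read()
        con.executescript(sql)   # unlike execute(), allows many statements
        con.close()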

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients
    provide us data daily and/or weekly, which we process and merge into
    their existing database. During business hours, load on the servers is
    pretty minimal, as it's mostly users running simple pre-defined queries
    via the website, or running drill-through reports that mostly hit the
    SSAS OLAP cube.

    I manage the IT Operations team, and so far this has presented an
    interesting scaling issue for us. For our daily-refreshed clients, the
    server is only "busy" for about 4-6 hours at night. For our
    weekly-refresh clients, the server is only "busy" for maybe 8-10 hours
    per week! We've done our best to use some simple methods of
    distributing the load, spreading the daily clients evenly among the
    servers so that we're not trying to process daily clients back-to-back
    overnight. But long-term this scaling strategy creates two notable
    issues. First, it consumes a pretty immense amount of hardware that
    sits idle for large periods of time. Second, it takes significant
    Production Support overhead to basically "schedule" the ETL so that
    jobs don't overlap, and to move clients/schedules around if they
    outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple
    SSIS packages in parallel, but in most cases this has yielded VERY
    inconsistent results. The most common failures are DTExec, SQL, and
    SSAS fighting for physical memory and throwing out-of-memory errors,
    and ETLs running 3-5x longer than expected. So from my practical
    experience thus far, it seems like running multiple ETL packages on
    the same hardware isn't a good idea, but I can't be the first person
    who doesn't want to scale multiple ETLs around manual scheduling and
    sequential processing.

    One option we've considered is virtualizing the servers, which
    obviously doesn't give you any additional resources, but moves the
    resource contention onto the hypervisor, which (from my experience)
    seems to manage simultaneous CPU/RAM/disk I/O a little more gracefully
    than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: are we missing something obvious here? Are
    there tools out there that can help manage running multiple SSIS
    packages on the same hardware? Would it be more "efficient" in terms
    of parallel execution if, instead of running DTExec, SQL, and SSAS on
    the same machine (with every machine running that configuration), we
    ran on sets of three machines, with SSIS on one machine, SQL on
    another, and SSAS on a third? Obviously that would only make sense if
    we could process more than the three ETLs we were able to process on
    one machine independently.

    Another option we've considered is completely re-architecting our SSIS
    packages to have one "master" package for all clients that attempts to
    intelligently choose a server based on how "busy" it already is in
    terms of CPU/memory/disk utilization, but that would be a herculean
    effort, and it seems like we're trying to reinvent something that you
    would think someone would sell (although I haven't had any luck
    finding it).

    So in summary: are we missing an obvious solution for this, and does
    anyone know of any tools (free or for purchase, doesn't matter) that
    facilitate running multiple SSIS ETL packages in parallel and on
    multiple servers? (What I would call a "queue & node based" system,
    but that's not an official term.)

    Ultimately, VMware's Distributed Resource Scheduler addresses this:
    you simply run a consistent number of clients per VM that you know
    will never conflict scheduling-wise, then leave it up to VMware to
    move the VMs around to balance out hardware usage. I'm definitely not
    against using VMware to do this, but since we're a 100% Microsoft app
    stack, it seems like -someone- out there would have solved this
    problem at the application layer instead of the hypervisor layer, by
    checking resource utilization at the OS, SQL, and SSAS levels. I'm
    open to ANY discussion on this, and remember no suggestion is too
    crazy or radical! :-) Right now, VMware is the only option we've found
    to get away from "manually" balancing our resources, so any
    suggestions that leave us on a pure Microsoft stack would be great.
    Thanks guys, Jeff

  • Python string formatting too slow

    - by wich
    I use the following code to log a map; it is fast when the map contains
    only zeroes, but as soon as there is actual data in the map it becomes
    unbearably slow... Is there any way to do this faster?

        log_file = open('testfile', 'w')
        for i, x in ((i, start + i * interval) for i in range(length)):
            log_file.write('%-5d %8.3f %13g %13g %13g %13g %13g %13g\n'
                           % (i, x, map[0][i], map[1][i], map[2][i],
                              map[3][i], map[4][i], map[5][i]))
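    Worth profiling before guessing: %g formatting of ordinary floats is
    cheap, so if real data is dramatically slower than zeroes, the six
    per-row index lookups into map (which also shadows the builtin) are a
    likely suspect, especially if its columns are not plain lists. A
    sketch, reusing the question's names, that walks each column once with
    zip and batches the writes:

        with open('testfile', 'w') as log_file:
            fmt = '%-5d %8.3f %13g %13g %13g %13g %13g %13g\n'
            columns = zip(map[0], map[1], map[2], map[3], map[4], map[5])
            log_file.writelines(
                fmt % ((i, start + i * interval) + row)
                for i, row in enumerate(columns))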

  • Implement loops for python 3

    - by Alex
    Implement this loop: total up the product of the numbers from 1 to x.
    Implement this loop: total up the product of the numbers from a to b.
    Implement this loop: total up the sum of the numbers from a to b.
    Implement this loop: total up the sum of the numbers from 1 to x.
    Implement this loop: count the number of characters in a string s.

    I'm very lost on implementing loops. These are just some examples that
    I am having trouble with -- if someone could help me understand how to
    do them, that would be awesome.
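    All five follow the same accumulator pattern: start from the identity
    value (1 for a product, 0 for a sum or a count) and fold each item in.
    Sketches of three of them; the others are the same shape:

        def product_1_to_x(x):
            total = 1                      # multiplicative identity
            for n in range(1, x + 1):
                total = total * n
            return total

        def sum_a_to_b(a, b):              # the a-to-b product is this
            total = 0                      # shape with * and total = 1
            for n in range(a, b + 1):
                total = total + n
            return total

        def count_chars(s):
            count = 0
            for ch in s:
                count = count + 1
            return count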

  • PYTHON: Look for match in a nested list

    - by elfuego1
    Hello everybody, I have two nested lists of different sizes:

        A = [[1, 7, 3, 5], [5, 5, 14, 10]]
        B = [[1, 17, 3, 5], [1487, 34, 14, 74], [1487, 34, 3, 87], [141, 25, 14, 10]]

    I'd like to gather into a list L every nested list from B whose last
    two elements match those of some list in A (i.e. A_row[2:4] == B_row[2:4]):

        L = [[1, 17, 3, 5], [141, 25, 14, 10]]

    Would you help me with this?
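    Collect the [2:4] tails of A into a set (as tuples, since lists aren't
    hashable), then filter B against it:

        tails = set(tuple(row[2:4]) for row in A)   # {(3, 5), (14, 10)}
        L = [row for row in B if tuple(row[2:4]) in tails]
        # L == [[1, 17, 3, 5], [141, 25, 14, 10]]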

  • Dynamic Operator Overloading on dict classes in Python

    - by Ishpeck
    I have a class that dynamically overloads basic arithmetic operators,
    like so...

        import operator

        class IshyNum:
            def __init__(self, n):
                self.num = n
                self.buildArith()

            def arithmetic(self, other, o):
                return o(self.num, other)

            def buildArith(self):
                map(lambda o: setattr(self, "__%s__" % o,
                                      lambda f: self.arithmetic(f, getattr(operator, o))),
                    ["add", "sub", "mul", "div"])

        if __name__ == "__main__":
            number = IshyNum(5)
            print number + 5
            print number / 2
            print number * 3
            print number - 3

    But if I change the class to inherit from the dictionary
    (class IshyNum(dict):) it doesn't work. I need to explicitly
    def __add__(self, other) or whatever in order for this to work. Why?
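    Because for new-style classes, implicit special-method lookups (what
    number + 5 triggers) go through the *type*, not the instance. The
    original IshyNum is an old-style Python 2 class, where instance lookup
    happens to work; dict is new-style, so the per-instance __add__ set by
    buildArith is never consulted. Installing the methods on the class
    works for both -- a sketch (note "div" is Python 2's operator name;
    Python 3 would use "truediv"):

        import operator

        class IshyNum(dict):
            def __init__(self, n):
                dict.__init__(self)
                self.num = n

        def install_arith(cls):
            for name in ["add", "sub", "mul", "div"]:
                op = getattr(operator, name)
                # the default argument pins op per iteration; the method
                # lives on the class, where implicit lookup finds it
                setattr(cls, "__%s__" % name,
                        lambda self, other, op=op: op(self.num, other))

        install_arith(IshyNum)
        print IshyNum(5) + 5   # 10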

  • Dynamically adding @property in python

    - by rz
    I know that I can dynamically add an instance method to an object by
    doing something like:

        import types

        def my_method(self):
            # logic of method
            # ...

        # instance is some instance of some class
        instance.my_method = types.MethodType(my_method, instance)

    Later on I can call instance.my_method() and self will be bound
    correctly and everything works. Now, my question: how to do the exact
    same thing to obtain the behavior that decorating the new method with
    @property would give? I would guess something like:

        instance.my_method = types.MethodType(my_method, instance)
        instance.my_method = property(instance.my_method)

    But, doing that, instance.my_method returns a property object.
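    A property is a descriptor, and the descriptor protocol only fires for
    attributes found on the class, so a property stored on the instance is
    just an ordinary object -- which is exactly what comes back. Install it
    on the class instead (or, for one instance only, on a throwaway
    subclass):

        class Thing(object):
            pass

        def my_method(self):
            return 42

        instance = Thing()

        # affects every instance of Thing:
        type(instance).my_method = property(my_method)
        print(instance.my_method)          # 42

        # per-instance variant: re-class into a one-off subclass
        instance.__class__ = type('ThingWithProp', (Thing,),
                                  {'my_method': property(my_method)})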

  • Python and classes

    - by Artyom
    Hello, I have two classes. How do I call First.TQ inside Second,
    without creating a First object inside Second?

        class First:
            def __init__(self):
                self.str = ""

            def TQ(self):
                pass

            def main(self):
                T = Second(self.str)   # called here

        class Second():
            def __init__(self):
                # dict of funcs that may be called in First
                list = {u"RANDINT": first.TQ}
                .....
                .....
                return data
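    Methods are objects, so First can hand its own bound method (or itself)
    to Second; Second then stores the callable without ever constructing a
    First. A sketch of that shape:

        class First(object):
            def TQ(self):
                return "tq result"

            def main(self):
                t = Second(self.TQ)        # pass the bound method itself
                return t.run()

        class Second(object):
            def __init__(self, tq):
                self.dispatch = {u"RANDINT": tq}   # stored, not called yet

            def run(self):
                # invokes First.TQ on the instance that created us
                return self.dispatch[u"RANDINT"]()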

  • python: problem with dictionary get method default value

    - by goutham
    I'm having a new problem here.

    Code 1:

        try:
            urlParams += "%s=%s&" % (val['name'],
                                     data.get(val['name'], serverInfo_D.get(val['name'])))
        except KeyError:
            print "expected parameter not provided - " + val["name"] + " is missing"
            exit(0)

    Code 2:

        try:
            urlParams += "%s=%s&" % (val['name'],
                                     data.get(val['name'], serverInfo_D[val['name']]))
        except KeyError:
            print "expected parameter not provided - " + val["name"] + " is missing"
            exit(0)

    See the difference between serverInfo_D[val['name']] and
    serverInfo_D.get(val['name']): code 2 fails but code 1 works. The data:

        serverInfo_D: {'user': 'usr', 'pass': 'pass'}
        data: {'par1': 9995, 'extraparam1': 22}
        val: {'par1', 'user', 'pass', 'extraparam1'}

    The exception is raised for the data dict, and all this code is in a
    for loop that iterates over val.
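    The catch is that function arguments are evaluated before the call, so
    in code 2 the fallback expression serverInfo_D[val['name']] runs first
    and raises KeyError whenever the key is missing there -- even when
    data.get() would never have needed the default. A minimal demonstration:

        data = {'par1': 9995}
        serverInfo_D = {'user': 'usr', 'pass': 'pass'}

        try:
            # fails although data has 'par1': the default is built eagerly
            value = data.get('par1', serverInfo_D['par1'])
        except KeyError:
            print("default expression evaluated eagerly and failed")

        # chained .get() stays safe; None if both dicts miss the key:
        value = data.get('par1', serverInfo_D.get('par1'))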

  • Django models & Python class attributes

    - by Geo
    The tutorial on the Django website shows this code for the models:

        from django.db import models

        class Poll(models.Model):
            question = models.CharField(max_length=200)
            pub_date = models.DateTimeField('date published')

        class Choice(models.Model):
            poll = models.ForeignKey(Poll)
            choice = models.CharField(max_length=200)
            votes = models.IntegerField()

    Now, each of those attributes is a class attribute, right? So the same
    attribute should be shared by all instances of the class. A bit later,
    they present this code:

        class Poll(models.Model):
            # ...
            def __unicode__(self):
                return self.question

        class Choice(models.Model):
            # ...
            def __unicode__(self):
                return self.choice

    How did they turn from class attributes into instance attributes? Did
    I get class attributes wrong?
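    Class attributes are understood correctly -- what's missing is that
    assigning through an instance creates a new instance attribute that
    shadows the class one, and (roughly) Django's Model metaclass consumes
    the Field objects as column *definitions* and arranges for __init__ to
    set plain per-instance values. So self.question inside __unicode__ is
    the instance's data, not the CharField. The shadowing rule in
    isolation:

        class C(object):
            x = "class value"              # shared, lives on the class

        c = C()
        c.x = "instance value"             # new attribute on c only
        print(C.x)                         # "class value" -- untouched
        print(c.x)                         # "instance value" shadows it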

  • Python/Django Concatenate a string depending on whether that string exists

    - by Douglas Meehan
    I'm creating a property on a Django model called "address". I want
    address to consist of the concatenation of a number of fields I have
    on my model. The problem is that not all instances of this model will
    have values for all of these fields. So, I want to concatenate only
    those fields that have values. What is the best/most Pythonic way to
    do this? Here are the relevant fields from the model:

        house = models.IntegerField('House Number', null=True, blank=True)
        suf = models.CharField('House Number Suffix', max_length=1, null=True, blank=True)
        unit = models.CharField('Address Unit', max_length=7, null=True, blank=True)
        stex = models.IntegerField('Address Extention', null=True, blank=True)
        stdir = models.CharField('Street Direction', max_length=254, null=True, blank=True)
        stnam = models.CharField('Street Name', max_length=30, null=True, blank=True)
        stdes = models.CharField('Street Designation', max_length=3, null=True, blank=True)
        stdessuf = models.CharField('Street Designation Suffix', max_length=1, null=True, blank=True)

    I could just do something like this:

        def _get_address(self):
            return "%s %s %s %s %s %s %s %s" % (self.house, self.suf,
                                                self.unit, self.stex,
                                                self.stdir, self.stnam,
                                                self.stdes, self.stdessuf)

    but then there would be extra blank spaces in the result. I could do a
    series of if statements and concatenate within each, but that seems
    ugly. What's the best way to handle this situation? Thanks.
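    Gather the candidate fields into a list, drop the empty ones, and let
    join handle the spacing -- a sketch:

        def _get_address(self):
            parts = [self.house, self.suf, self.unit, self.stex,
                     self.stdir, self.stnam, self.stdes, self.stdessuf]
            # keep only fields that actually hold something; str()
            # covers the IntegerFields
            return " ".join(str(p) for p in parts if p not in (None, ""))
        address = property(_get_address)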

  • Extra characters Extracted with XPath and Python (html)

    - by Nacari
    I have been using XPath with Scrapy to extract text from HTML tags
    online, but when I do, I get extra characters attached. An example is
    trying to extract a number, like "204", from a <td> tag and getting
    [u'204']. In some cases it's much worse. For instance, trying to
    extract "1 - Mathoverflow" and instead getting
    [u'\r\n\t\t 1 \u2013 MathOverflow\r\n\t\t ']. Is there a way to
    prevent this, or to trim the strings so that the extra characters
    aren't part of the string? (I'm using items to store the data.) It
    looks like it has something to do with formatting, so how do I get
    XPath to not pick up that stuff?
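    Most of that is not garbage: extract() always returns a *list* of
    unicode strings (hence the brackets and u''), \u2013 is the page's
    actual en-dash, and the \r\n\t runs are literal whitespace from the
    HTML source, which text() nodes preserve. Either collapse it in Python
    or push the cleanup into the XPath with normalize-space(). A sketch
    against the old HtmlXPathSelector API (the XPath itself is made up):

        raw = hxs.select('//td/text()').extract()       # e.g. [u'\r\n\t 204 ']
        cleaned = [u' '.join(s.split()) for s in raw]   # collapse all runs
        value = cleaned[0] if cleaned else None

        # or let XPath do the trimming:
        value = hxs.select('normalize-space(//td)').extract()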

  • Python: Unpack arbitrary-length bits for database storage

    - by sberry2A
    I have a binary data format consisting of 18,000+ packed int64s, ints,
    shorts, bytes and chars. The data is packed to minimize its size, so
    the fields don't always use byte-sized chunks. For example, a number
    whose min and max values are 31 and 32 respectively might be stored
    with a single bit, where the actual value is bitvalue + min, so 0 is
    31 and 1 is 32. I am looking for the most efficient way to unpack all
    of these for subsequent processing and database storage. Right now I
    am able to read any value by using either struct.unpack or BitBuffer.
    I use struct.unpack for any data that starts on a bit where
    (bit-offset % 8 == 0 and data-length % 8 == 0), and I use BitBuffer
    for anything else. I know the offset and size of every packed piece of
    data, so what is going to be the fastest way to completely unpack
    them? Many thanks.
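    One approach that avoids per-field buffer juggling: read the whole
    record into a single Python integer and peel every field off with
    shifts and masks, which are cheap next to object-heavy bit-buffer
    calls. A sketch -- the (name, bit_offset, bit_width, min_value) layout
    tuples are an assumed encoding of the format spec, and the bit order
    here is MSB-first, so adjust to the real format:

        def unpack_fields(data, fields):
            value = int.from_bytes(data, 'big')   # Python 3; on Python 2
            total_bits = len(data) * 8            # use int(data.encode('hex'), 16)
            out = {}
            for name, offset, width, minimum in fields:
                shift = total_bits - offset - width
                out[name] = ((value >> shift) & ((1 << width) - 1)) + minimum
            return out

    For the 31/32 example, a field (u'flag', 0, 1, 31) over the byte
    b'\x80' decodes to 32.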

  • Sorting Python list based on the length of the string

    - by prosseek
    I want to sort a list of strings based on the string length. I tried
    to use sort as follows, but it doesn't seem to give me the correct
    result:

        xs = ['dddd', 'a', 'bb', 'ccc']
        print xs
        xs.sort(lambda x, y: len(x) < len(y))
        print xs

        ['dddd', 'a', 'bb', 'ccc']
        ['dddd', 'a', 'bb', 'ccc']

    What might be wrong?
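    A cmp function must return a negative, zero, or positive number;
    len(x) < len(y) returns only True (1) or False (0), so sort never sees
    "less than" and leaves the list alone. The idiomatic fix is a key
    function:

        xs.sort(key=len)                    # ['a', 'bb', 'ccc', 'dddd']
        # or, if a cmp function is really wanted (Python 2 only):
        xs.sort(lambda x, y: cmp(len(x), len(y)))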

  • python: strip comma end of string

    - by krisdigitx
    How do I strip a comma from the end of a string? I tried:

        awk = subprocess.Popen([r"awk", "{print $10}"], stdin=subprocess.PIPE)
        awk_stdin = awk.communicate(uptime_stdout)[0]
        print awk_stdin
        temp = awk_stdin
        t = temp.strip(",")

    I also tried t = temp.rstrip(","); both don't work.
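    communicate() only returns the output if the process was started with
    stdout=subprocess.PIPE (otherwise awk_stdin is None), and that output
    ends with a newline -- so the comma is not the last character and
    rstrip(",") has nothing to remove. Strip the trailing whitespace too:

        t = awk_stdin.strip().rstrip(',')   # newline first, then the comma
        # or in a single pass:
        t = awk_stdin.rstrip(' \t\n,')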
