Search Results

Search found 13693 results on 548 pages for 'python metaprogramming'.

  • [Python] XML: add a node from another XML document

    - by michele
    Hi, I have two XML files: 1) model.xml and 2) projectionParametersTemplate.xml. I want to extract the Algorithm node (with its children) from 1) and put it into 2). I wrote this code, but it doesn't work:

        from xml.dom.minidom import Document
        from xml.dom import minidom

        xmlmodel = minidom.parse("/home/michele/Scrivania/d/model.xml")
        xmltemplate = minidom.parse("/home/michele/Scrivania/d/projectionParametersTemplate.xml")

        for Node in xmlmodel.getElementsByTagName("Algorithm"):
            print "\nNode: " + str(Node)
            for Node2 in xmltemplate.getElementsByTagName("ProjectionParameters"):
                print "\nNode2: " + str(Node2)
                Node2.appendChild(Node)

    This is model.xml: link text. This is projectionParametersTemplate.xml: link text. Thanks a lot.
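
    One likely issue, and a hedged sketch of a fix: minidom nodes belong to the document that created them, so a node taken from model.xml usually has to be imported into the template document with importNode before it can be appended. A minimal sketch under that assumption (same file paths as in the question; the output path is illustrative):

        from xml.dom import minidom

        xmlmodel = minidom.parse("/home/michele/Scrivania/d/model.xml")
        xmltemplate = minidom.parse("/home/michele/Scrivania/d/projectionParametersTemplate.xml")

        for node in xmlmodel.getElementsByTagName("Algorithm"):
            for target in xmltemplate.getElementsByTagName("ProjectionParameters"):
                # Deep-copy the node into the template document before appending it.
                target.appendChild(xmltemplate.importNode(node, True))

        with open("/home/michele/Scrivania/d/projectionParametersTemplate.out.xml", "w") as f:
            f.write(xmltemplate.toxml())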

  • Multiple range product in Python

    - by Tyr
    Is there a better way to do this:

        perms = product(range(1,7), range(1,7), range(1,7))

    so that I can choose how many ranges I use? I want it to be equivalent to this, but scalable:

        def dice(num):
            if num == 1:
                perms = ((i,) for i in range(1,7))
            elif num == 2:
                perms = product(range(1,7), range(1,7))
            elif num == 3:
                perms = product(range(1,7), range(1,7), range(1,7))
            # ... and so on

    but I know there has to be a better way. I'm using it for counting dice outcomes. The actual code:

        def dice(selection=lambda d: d[2]):
            perms = itertools.product(range(1,7), range(1,7), range(1,7))
            return collections.Counter(selection(sorted(i)) for i in perms)

    where I can call it with a variety of selectors, like sum(d[0:2]) for the sum of the lowest two dice, or d[1] to get the middle die.
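
    itertools.product takes a repeat keyword argument, which turns the number of ranges into a parameter. A minimal sketch of the dice counter rewritten that way (the default selector is the one from the question):

        import collections
        import itertools

        def dice(num=3, selection=lambda d: d[2]):
            # One range, repeated `num` times, instead of `num` literal ranges.
            perms = itertools.product(range(1, 7), repeat=num)
            return collections.Counter(selection(sorted(p)) for p in perms)

        print dice(3)  # counts for the default selector (highest of three sorted dice)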

  • Python regex help

    - by Dormish
    I am trying to make a regex that finds all names, URLs and phone numbers in an HTML page, but I'm having trouble with the phone number part. I think the problem is that it searches until it finds the </strong>, and in that process it skips people instead of producing an empty string when a person has no phone number. (Simply put: instead of a list like url1+name1+num1 | url2+name2+"" | url3+name3+num3, it returns a list like url1+name1+num1 | url2+name2+num3, with url3+name3 lost in the process.)

        for url, name, pnumber in re.findall('Name"><div>(?:<a href="/si([^">]*)"> )?([^<]*)(?:.*?</strong>([^<]*))?', page):

    I am searching for people in a single very long line. A person may or may not have a URL or phone number. An example of a person with a URL and a phone number:

        <tr>
          <td class="lablinksName"><div><a href="/si/ivan-bratko/default.html"> dr. Ivan Bratko akad. prof.</a></div></td>
          <td class="lablinksMail"><a href="javascript:void(cmPopup('sendMessage', '/si/ivan-bratko/mailer.html', true, 350, 350));"><img src="/Static/images/gui/mail.gif" height="8" width="11"></a></td>
          <td class="lablinksPhone"><div><strong>T:</strong> +386 1 4768 393 </div></td>
        </tr>

    And an example of a person with no URL or phone number:

        <tr>
          <td class="lablinksName"><div> dr. Branko Matjaž Juric prof.</div></td>
          <td class="lablinksMail"><a href="javascript:void(cmPopup('sendMessage', '/si/branko-matjaz-juric/mailer.html', true, 350, 350));"><img src="/Static/images/gui/mail.gif" height="8" width="11"></a></td>
          <td class="lablinksPhone"><div> </div></td>
        </tr>

    I hope I was clear enough; I'd appreciate any help.
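
    One way to keep the unbounded .*? from running across rows is to split the page into <tr>...</tr> chunks first and then pull each field out of its own cell, so a missing phone number simply yields an empty string. A hedged sketch of that approach (column class names are taken from the sample HTML above; the file name is illustrative):

        import re

        page = open('people.html').read()  # the HTML from the question

        people = []
        for row in re.findall(r'<tr>.*?</tr>', page, re.S):
            url = re.search(r'class="lablinksName"><div><a href="/si([^">]*)"', row)
            name = re.search(r'class="lablinksName"><div>(?:<a[^>]*>)?\s*([^<]+)', row)
            phone = re.search(r'class="lablinksPhone"><div>(?:<strong>T:</strong>)?\s*([^<]*)', row)
            people.append((url.group(1) if url else '',
                           name.group(1).strip() if name else '',
                           phone.group(1).strip() if phone else ''))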

  • Python slicing a string using space characters and a maximum length

    - by chrism
    I'd like to slice a string up in a similar way to .split() (resulting in a list), but in a more intelligent way: I'd like it split into chunks that are up to 15 characters long but are not split mid-word, so:

        string = 'A string with words'
        # [splitting process takes place]
        list = ('A string with','words')

    The string in this example is split between 'with' and 'words' because that's the last place it can be split so that the first piece is 15 characters or less.
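
    The standard library's textwrap module already does this kind of word-respecting wrapping; textwrap.wrap returns a list of chunks no longer than the given width without breaking words. A minimal sketch:

        import textwrap

        s = 'A string with words'
        chunks = textwrap.wrap(s, width=15)
        print chunks  # ['A string with', 'words']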

  • Downloading RSS using python

    - by Vojtech R.
    Hi, I have a list of 200 RSS feeds which I have to download. It's a continuous process: I have to download every post, nothing can be missing, but there must also be no duplicates. So is the best practice to remember the last update of each feed and check it for changes at some x-hour interval? And how should I handle the downloader being restarted? It should remember what has already been downloaded and not download it again. Is this implemented somewhere already, or are there any articles you can point me to? Thanks
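
    One common pattern, sketched here as an assumption rather than a known best practice, is to keep a persistent set of entry IDs (or links) that have already been seen, so restarts and duplicates are handled by the same mechanism. This sketch uses the feedparser library and a plain JSON file for persistence; the file name, feed URL and polling interval are illustrative:

        import json
        import time

        import feedparser

        SEEN_FILE = 'seen_ids.json'
        FEEDS = ['http://example.com/feed1.rss']  # the 200 feed URLs go here

        try:
            with open(SEEN_FILE) as fh:
                seen = set(json.load(fh))
        except IOError:
            seen = set()

        while True:
            for url in FEEDS:
                feed = feedparser.parse(url)
                for entry in feed.entries:
                    uid = entry.get('id') or entry.get('link')
                    if uid and uid not in seen:
                        seen.add(uid)
                        # process the new post here (store it, index it, ...)
                with open(SEEN_FILE, 'w') as fh:
                    json.dump(list(seen), fh)
            time.sleep(3600)  # poll hourly; tune to taste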

  • Python optimization

    - by Rami Jarrar
    Hi, I'm doing this:

        f = open('wl4.txt', 'w')
        hh = 0
        ######################################
        for n in range(1,5):
            for l in range(33,127):
                if n==1:
                    b = chr(l) + '\n'
                    f.write(b)
                    hh += 1
                elif n==2:
                    for s0 in range(33, 127):
                        b = chr(l) + chr(s0) + '\n'
                        f.write(b)
                        hh += 1
                elif n==3:
                    for s0 in range(33, 127):
                        for s1 in range(33, 127):
                            b = chr(l) + chr(s0) + chr(s1) + '\n'
                            f.write(b)
                            hh += 1
                elif n==4:
                    for s0 in range(33, 127):
                        for s1 in range(33, 127):
                            for s2 in range(33,127):
                                b = chr(l) + chr(s0) + chr(s1) + chr(s2) + '\n'
                                f.write(b)
                                hh += 1
        ######################################
        print "We Made %d Words." % (hh)
        ######################################
        f.close()

    So, is there any method to make it faster?
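
    A hedged sketch of one way to speed this up: let itertools.product generate the combinations (same output, same order), and buffer the lines so the file is written in large chunks rather than one word at a time. The roughly 78 million four-character words still dominate the run time, so this is a constant-factor improvement, not an algorithmic one:

        import itertools

        chars = [chr(c) for c in range(33, 127)]
        hh = 0

        with open('wl4.txt', 'w') as f:
            for n in range(1, 5):
                batch = []
                for combo in itertools.product(chars, repeat=n):
                    batch.append(''.join(combo) + '\n')
                    hh += 1
                    if len(batch) == 100000:   # write in chunks to cut down on write calls
                        f.writelines(batch)
                        batch = []
                f.writelines(batch)

        print "We Made %d Words." % hh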

  • Python mistaking float for string

    - by wrongusername
    I receive

        TypeError: Can't convert 'float' object to str implicitly

    while using

        Gambler.pot += round(self.bet + self.money * 0.1)

    where pot, bet, and money are all doubles (or at least are supposed to be). I'm not sure if this is yet another Eclipse thing, but how do I get the line to run? Code where bet and money are initialized:

        money = 0
        bet = 0
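
    That error message usually means one of the operands is actually a string, not a number; values read from user input or from a file stay strings until converted, even if the variable is "supposed to be" a double. A small self-contained sketch, offered as a likely explanation rather than a confirmed diagnosis of the code above:

        # Minimal reproduction of the error: mixing str and float.
        bet = "100"                 # looks like a number, but it's a string
        money = 50.0
        # pot = bet + money * 0.1   # -> TypeError: Can't convert 'float' object to str implicitly
        pot = round(float(bet) + money * 0.1)   # convert first, then do the arithmetic
        print(pot)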

  • finding the missing values in a range using any scripting language - perl, python or shell script

    - by manu
    Hi everyone, I'm stuck on the problem of finding the missing values in a range, where the range also varies between successive rows. Example input:

        673 673 673 676 676 680
        2667 2667 2668 2670 2671 2674

    The output should be like this:

        674 675 677 678 679
        2669 2672 2673

    This is just one part, and the rows can contain more values too. If any clarification is needed, please let me know. Thanks in advance, manu
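
    Since any scripting language is fine, here is a minimal Python sketch: for each row, take the full range from the smallest to the largest value and drop the values that are present. It reads from a hypothetical input.txt with one row per line:

        with open('input.txt') as f:
            for line in f:
                nums = [int(x) for x in line.split()]
                if not nums:
                    continue
                present = set(nums)
                missing = [n for n in range(min(nums), max(nums) + 1) if n not in present]
                print ' '.join(str(n) for n in missing)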

  • serializing JSON files with newlines in Python

    - by user248237
    I am using json and sometimes jsonpickle to serialize objects to files, using the following function:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f)
            f.close()

    The problem is that if I serialize a dictionary, for example with json_serialize(mydict, myfilename), then the entire serialization gets put on one line. This means that I can't grep the file for entries to be inspected by hand, as I would a CSV file. Is there a way to make each element of an object (e.g. each entry in a dict, or each element in a list) be placed on a separate line in the JSON output file? Thanks.
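
    Both the standard json module and simplejson accept an indent argument to dump, which pretty-prints the output across multiple lines and makes the file greppable (jsonpickle has its own options, which are not assumed here). A minimal sketch of the non-jsonpickle branch rewritten that way:

        import json

        def json_serialize(obj, filename):
            with open(filename, 'w') as f:
                # indent=4 puts each dict entry / list element on its own line
                json.dump(obj, f, indent=4)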

  • Overlapping matches with finditer() in Python

    - by Raphink
    Hi there, I'm using a regex to match Bible verse references in a text. The current regex is:

        REF_REGEX = re.compile(r'(?<!\w)((?i)q(?:uote)?\s+)?((?:(?:[1-3]|I{1,3})\s*)?[A-Za-z]+)\.?(?:\s*(\d+)(?:[:.](\d+)(?:-(\d+))?)?)(?:\s+(?:(?i)(?:from\s+)|(?:in\s+)|(?P<lbrace>\())\s*(\w+)(?(lbrace)\)))?', re.UNICODE)

    This matches the following expressions fine:

        "jn 3:16": (None, 'jn', '3', '16', None, None, None),
        "matt. 18:21-22": (None, 'matt', '18', '21', '22', None, None),
        "q matt. 18:21-22": ('q ', 'matt', '18', '21', '22', None, None),
        "QuOTe jn 3:16": ('QuOTe ', 'jn', '3', '16', None, None, None),
        "q 1co13:1": ('q ', '1co', '13', '1', None, None, None),
        "q 1 co 13:1": ('q ', '1 co', '13', '1', None, None, None),
        "quote 1 co 13:1": ('quote ', '1 co', '13', '1', None, None, None),
        "quote 1co13:1": ('quote ', '1co', '13', '1', None, None, None),
        "jean 3:18 (PDV)": (None, 'jean', '3', '18', None, '(', 'PDV'),
        "quote malachie 1.1-2 fRom Colombe": ('quote ', 'malachie', '1', '1', '2', None, 'Colombe'),
        "quote malachie 1.1-2 In Colombe": ('quote ', 'malachie', '1', '1', '2', None, 'Colombe'),
        "cinq jn 3:16 (test)": (None, 'jn', '3', '16', None, '(', 'test'),
        "Q IIKings5.13-58 from wolof": ('Q ', 'IIKings', '5', '13', '58', None, 'wolof'),
        "This text is about lv5.4-6 in KJV only": (None, 'lv', '5', '4', '6', None, 'KJV'),

    but it fails to parse:

        "Found in 2 Cor. 5:18-21 ( Ministers": (None, '2 Cor', '5', '18', '21', None, None),

    because it returns (None, 'in', '2', None, None, None, None) instead. Is there a way to get finditer() to return all matches, even if they overlap, or is there a way to improve my regex so it matches this last bit properly? Thanks.
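
    finditer() only returns non-overlapping matches, but the standard trick for overlapping matches is to wrap the whole pattern in a zero-width lookahead, (?=(...)), so every starting position is tested without consuming text; the result then has to be read from the captured group. A small hedged illustration on a toy pattern (applying it to the verse regex above would shift all the group numbers, so it is shown generically):

        import re

        text = "ababa"
        # Non-overlapping: ['aba'] -- the second occurrence is skipped.
        print re.findall(r'aba', text)
        # Overlapping via a lookahead: ['aba', 'aba']
        print [m.group(1) for m in re.finditer(r'(?=(aba))', text)]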

  • Python date time, get date 6 months from now

    - by Eef
    Hey, I am using the datetime module. I am looking to calculate the date 6 months from the current date. Could someone give me a little help doing this? Edit: The reason I want to generate a date 6 months from the current date is to produce a Review Date. If the user enters data into the system, it will have a review date of 6 months from the date they entered the data. Does this help? Cheers, Eef
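
    The standard-library datetime module has no notion of calendar months, so the usual answer is relativedelta from the third-party python-dateutil package, which handles month arithmetic (including short months). A minimal sketch, assuming dateutil is available:

        import datetime
        from dateutil.relativedelta import relativedelta

        review_date = datetime.date.today() + relativedelta(months=6)
        print review_date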

  • abstract test case using python unittest

    - by gruszczy
    Is it possible to create an abstract TestCase that has some test_* methods, but where this TestCase isn't run itself and those methods are only used in subclasses? I think I am going to have one abstract TestCase in my test suite, and it will be subclassed for a few different implementations of a single interface. This is why all the test methods are the same; only one internal method changes. How can I do this in an elegant way?
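
    One common pattern, offered as a sketch rather than the one canonical answer, is to put the shared test_* methods on a mixin that does not itself inherit from unittest.TestCase; the runner then only collects the concrete subclasses that combine the mixin with TestCase. The stack classes here are purely illustrative stand-ins for the interface and its implementations:

        import unittest

        class StackInterfaceTests(object):
            """Shared tests; not a TestCase itself, so the runner never collects it."""

            def make_stack(self):
                raise NotImplementedError

            def test_push_then_pop(self):
                s = self.make_stack()
                s.push(1)
                self.assertEqual(s.pop(), 1)

        class ListStack(object):
            def __init__(self):
                self._items = []
            def push(self, x):
                self._items.append(x)
            def pop(self):
                return self._items.pop()

        class ListStackTest(StackInterfaceTests, unittest.TestCase):
            def make_stack(self):
                return ListStack()

        if __name__ == '__main__':
            unittest.main()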

  • python: sorting

    - by nabizan
    Hi, I'm building a dict of data in a loop, but since it's a dict it comes out sorted alphabetically and not in the order I pushed items in through the loop... Is it possible to somehow turn off the alphabetical sorting? Here is how I do it:

        data = {}
        for item in container:
            data[item] = {}
            ...
            for key, val in item_container.iteritems():
                ...
                data[item][key] = val

    which gives me something like this:

        data = {
            A : {K1 : V1, K2 : V2, K3 : V3},
            B : {K1 : V1, K2 : V2, K3 : V3},
            C : {K1 : V1, K2 : V2, K3 : V3}
        }

    and I want it to be in the order I went through the loop, e.g.

        data = {
            B : {K2 : V2, K3 : V3, K1 : V1},
            A : {K1 : V1, K2 : V2, K3 : V3},
            C : {K3 : V3, K1 : V1, K2 : V2}
        }
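
    Regular dicts don't actually sort their keys; they simply don't preserve insertion order (before Python 3.7), so the order you see is arbitrary. If insertion order is what matters, collections.OrderedDict (Python 2.7+) remembers it. A minimal sketch of the same loop with ordered dicts; the container and item_container values are toy stand-ins for the question's iterables:

        from collections import OrderedDict

        container = ['B', 'A', 'C']
        item_container = OrderedDict([('K2', 'V2'), ('K3', 'V3'), ('K1', 'V1')])

        data = OrderedDict()
        for item in container:
            data[item] = OrderedDict()
            for key, val in item_container.iteritems():
                data[item][key] = val

        print data.keys()   # ['B', 'A', 'C'] -- insertion order is preserved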

  • problem plotting on logscale in matplotlib in python

    - by user248237
    I am trying to plot the following numbers on a log scale as a scatter plot in matplotlib. The quantities on the x and y axes have very different scales, and one of the variables has a huge dynamic range (roughly 0 to 12 million) while the other is between roughly 0 and 2, so I think it might be good to plot both on a log scale. I tried the following, for a subset of the values of the two variables:

        fig = plt.figure(figsize=(8, 8))
        ax = fig.add_subplot(1, 1, 1)
        ax.set_yscale('log')
        ax.set_xscale('log')
        plt.scatter([1.341, 0.1034, 0.6076, 1.4278, 0.0374],
                    [0.37, 0.12, 0.22, 0.4, 0.08])

    The axes appear log scaled but the points do not all appear -- only two points do. Any idea how to fix this? Also, how can I make this log scale appear on square axes, so that the correlation between the two variables can be interpreted from the scatter plot? Thanks.
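
    A hedged sketch rather than a confirmed diagnosis: plotting on the axes object first and switching to log scale afterwards avoids some autoscaling quirks with scatter on log axes in older matplotlib versions, and explicit equal limits plus set_aspect('equal') give a square log-log plot. The axis limits below are illustrative, chosen to cover the sample points:

        import matplotlib.pyplot as plt

        fig = plt.figure(figsize=(8, 8))
        ax = fig.add_subplot(1, 1, 1)
        ax.scatter([1.341, 0.1034, 0.6076, 1.4278, 0.0374],
                   [0.37, 0.12, 0.22, 0.4, 0.08])
        ax.set_xscale('log')
        ax.set_yscale('log')
        ax.set_xlim(1e-2, 1e1)
        ax.set_ylim(1e-2, 1e1)
        ax.set_aspect('equal')   # same decades per unit length on both axes -> square plot
        plt.show()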

  • Converting a Doc object into a string in python

    - by Sam
    I'm using minidom to parse through an XML document. I took the data inside the yum tags, stored it in a list, and calculated the frequency of the words. However, it's not storing or reading them as strings in the list. Is there another way to do it? Right now this is what I have:

        yumNodes = [node for node in doc.getElementsByTagName("yum")]
        for node in yumNodes:
            yumlist.append(t.data for t in node.childNodes if t.nodeType == t.TEXT_NODE)

        for ob in yumlist:
            for o in ob:
                if word not in freqDict:
                    freqDict[word] = 1
                else:
                    freqDict[word] += 1
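
    Part of the problem is that the append call stores a generator object rather than the text itself (and the counting loop refers to word, which is never assigned). A hedged sketch of the same idea with the text pulled out as real strings and counted word by word; the input file name is illustrative:

        from xml.dom import minidom

        doc = minidom.parse('data.xml')  # the document from the question

        freqDict = {}
        for node in doc.getElementsByTagName("yum"):
            # Join the text nodes into one string, then split it into words.
            text = ''.join(t.data for t in node.childNodes if t.nodeType == t.TEXT_NODE)
            for word in text.split():
                freqDict[word] = freqDict.get(word, 0) + 1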

  • Python datetime not including DST when using pytz timezone

    - by Jesper
    If I convert a UTC datetime to Swedish local time, summer time is included (CEST). However, when creating a datetime with Sweden as the timezone, it gets CET instead of CEST. Why is this?

        >>> # Modified for readability
        >>> import pytz
        >>> import datetime
        >>> sweden = pytz.timezone('Europe/Stockholm')
        >>>
        >>> datetime.datetime(2010, 4, 20, 16, 20, tzinfo=pytz.utc).astimezone(sweden)
        datetime(2010, 4, 20, 18, 20, tzinfo=<... 'Europe/Stockholm' CEST+2:00:00 DST>)
        >>>
        >>> datetime.datetime(2010, 4, 20, 18, 20, tzinfo=sweden)
        datetime(2010, 4, 20, 18, 20, tzinfo=<... 'Europe/Stockholm' CET+1:00:00 STD>)
        >>>
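
    This is documented pytz behaviour: passing a pytz timezone as tzinfo= to the datetime constructor attaches a fixed offset (here plain CET), because the constructor never gives pytz a chance to look at the date. The pytz way is to call localize() on a naive datetime, which picks CET or CEST as appropriate. A minimal sketch:

        import datetime
        import pytz

        sweden = pytz.timezone('Europe/Stockholm')
        dt = sweden.localize(datetime.datetime(2010, 4, 20, 18, 20))
        print dt.tzinfo  # Europe/Stockholm CEST+2:00:00 DST for a date in late April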

  • Python Expand Tabs Length Calculation

    - by Mithrill
    I'm confused by how the length of a string is calculated when expandtabs is used. I thought expandtabs replaces tabs with the appropriate number of spaces (with the default number of spaces per tab being 8). However, when I ran the commands on strings of varying lengths and varying numbers of tabs, the length calculation was different than I expected (i.e., each tab didn't always increase the string length by 8 for each instance of "\t"). Below is a detailed script output with comments explaining what I thought the result of each command should be. Would someone please explain how the length is calculated when expandtabs is used?

        IDLE 2.6.5
        >>> s = '\t'
        >>> print len(s)
        1
        >>> # the length of the string without expandtabs was one (1 tab counted as a single space), as expected.
        >>> print len(s.expandtabs())
        8
        >>> # the length of the string with expandtabs was eight (1 tab counted as eight spaces).
        >>> s = '\t\t'
        >>> print len(s)
        2
        >>> # the length of the string without expandtabs was 2 (2 tabs, each counted as a single space).
        >>> print len(s.expandtabs())
        16
        >>> # the length of the string with expandtabs was 16 (2 tabs counted as 8 spaces each).
        >>> s = 'abc\tabc'
        >>> print len(s)
        7
        >>> # the length of the string without expandtabs was seven (6 characters and 1 tab counted as a single space).
        >>> print len(s.expandtabs())
        11
        >>> # the length of the string with expandtabs was NOT 14 (6 characters and one 8-space tab).
        >>> s = 'abc\tabc\tabc'
        >>> print len(s)
        11
        >>> # the length of the string without expandtabs was 11 (9 characters and 2 tabs counted as a single space).
        >>> print len(s.expandtabs())
        19
        >>> # the length of the string with expandtabs was NOT 25 (9 characters and two 8-space tabs).
        >>>
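
    expandtabs doesn't insert a fixed 8 spaces per tab; it pads up to the next tab stop, i.e. the next column that is a multiple of the tab size. So in 'abc\tabc' the tab only adds 5 spaces (column 3 to column 8), giving 3 + 5 + 3 = 11, and in 'abc\tabc\tabc' each tab adds 5, giving 19. A small sketch that reproduces the rule (ignoring newlines, which reset the column):

        def expanded_length(s, tabsize=8):
            col = 0
            for ch in s:
                if ch == '\t':
                    col += tabsize - (col % tabsize)  # jump to the next tab stop
                else:
                    col += 1
            return col

        for s in ['\t', '\t\t', 'abc\tabc', 'abc\tabc\tabc']:
            assert expanded_length(s) == len(s.expandtabs())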

  • Python: load variables in a dict into namespace

    - by celil
    I want to use a bunch of local variables defined in a function outside of that function, so I am returning x = locals() from it. How can I load all the variables defined in that dictionary into the namespace outside the function, so that instead of accessing a value with x['variable'] I can simply use variable?
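
    One way to do this at module level, sketched here with the caveat that injecting names dynamically makes code harder to read and confuses linters, is to update the caller's global namespace from the returned dict:

        def f():
            a = 1
            b = 'two'
            return locals()

        x = f()
        globals().update(x)   # now `a` and `b` are plain names in this module
        print a, b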

  • Iterating through String word at a time in Python

    - by AlgoMan
    I have a string buffer of a huge text file, and I have to search for given words/phrases in the string buffer. What's the efficient way to do it? I tried using matches from the re module, but as I have a huge text corpus to search through, this takes a large amount of time. Given a dictionary of words and phrases, I iterate through each file, read it into a string, search for all the words and phrases from the dictionary, and increment the count in the dictionary whenever the keys are found. One small optimization we thought of was to sort the dictionary of phrases/words from the highest number of words to the lowest, and then compare the list of phrases against each word start position in the string buffer. If one phrase is found, we don't search for the other phrases (since it matched the longest phrase, which is what we want). Can someone suggest how to go word by word through the string buffer (iterate the string buffer word by word)? Also, is there any other optimization that can be done on this?
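
    A hedged sketch of the word-by-word iteration described above: re.finditer with a simple word pattern yields each word's start position lazily, without building a list of all words up front, and at each position the phrases can be tried longest-first. The buffer and phrase list here are toy stand-ins for the question's data:

        import re

        buffer_string = "I moved to New York City to learn Python in New York"  # stands in for the huge buffer
        phrases = ['new york city', 'new york', 'python']
        phrases.sort(key=lambda p: len(p.split()), reverse=True)   # longest phrases first
        counts = dict.fromkeys(phrases, 0)

        text = buffer_string.lower()
        for match in re.finditer(r'\S+', text):       # one word start position at a time
            start = match.start()
            for phrase in phrases:
                if text.startswith(phrase, start):    # no end-of-word check here; a real version would add one
                    counts[phrase] += 1
                    break                             # longest match wins; skip the shorter phrases

        print counts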

  • python urllib2 thread safety

    - by jldupont
    Is urllib2 thread-safe? I find that using urllib2.urlopen sometimes causes the thread issuing the call to stop functioning. Yes, I have used the timeout option, to no avail. Is there thread-safe HTTP GET functionality I could use? NOTE: I am not interested in using Twisted to solve this problem. I have used Twisted in the past and I love it, but this time I need a simpler solution. NOTE2: I also tried httplib, with the same result (blocking).

  • Python Socket Getting Connection Reset

    - by Ian
    I created a threaded socket listener that stores newly accepted connections in a queue. The socket threads then read from the queue and respond. For some reason, when benchmarking with 'ab' (apache benchmark) using a concurrency of 2 or more, I always get a connection reset before it's able to complete the benchmark (this is taking place locally, so there's no external connection issue).

        class server:
            _ip = ''
            _port = 8888

            def __init__(self, ip=None, port=None):
                if ip is not None:
                    self._ip = ip
                if port is not None:
                    self._port = port
                self.server_listener(self._ip, self._port)

            def now(self):
                return time.ctime(time.time())

            def http_responder(self, conn, addr):
                httpobj = http_builder()
                httpobj.header('HTTP/1.1 200 OK')
                httpobj.header('Content-Type: text/html; charset=UTF-8')
                httpobj.header('Connection: close')
                httpobj.body("Everything looks good")
                data = httpobj.generate()
                sent = conn.sendall(data)

            def http_thread(self, id):
                self.log("THREAD %d: Starting Up..." % id)
                while True:
                    conn, addr = self.q.get()
                    ip, port = addr
                    self.log("THREAD %d: responding to request: %s:%s - %s" % (id, ip, port, self.now()))
                    self.http_responder(conn, addr)
                    self.q.task_done()
                    conn.close()

            def server_listener(self, host, port):
                self.q = Queue.Queue(0)
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.bind((host, port))
                sock.listen(5)
                for i in xrange(4):  # thread count
                    thread.start_new(self.http_thread, (i+1, ))
                while True:
                    self.q.put(sock.accept())
                sock.close()

        server('', 9999)

    When running the benchmark, I get a totally random number of good requests before it errors out, usually between 4 and 500.

    Edit: Took me a while to figure it out, but the problem was in sock.listen(5). Because I was using apache benchmark with a higher concurrency (5 and up), it was causing the backlog of connections to pile up, at which point connections started getting dropped by the socket.
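
    Given the diagnosis in the edit above, the usual remedy is simply a larger listen backlog so bursts of concurrent connections queue up instead of being reset; the value below is an illustrative choice, not the one the author used:

        # In server_listener(): a larger backlog absorbs ab's concurrency bursts
        # (socket.SOMAXCONN is another common choice).
        sock.listen(128)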
