Search Results

Search found 13815 results on 553 pages for 'gae python'.

Page 154 of 553

  • Python: import the containing package

    - by guy
    In a module that lives inside a package, I need to use a function defined in that package's __init__.py. How can I import the package itself from within one of its own modules, so I can use that function? Importing __init__ inside the module will not import the package, but instead a module named __init__, leading to two copies of things with different names... Is there a Pythonic way to do this?
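    One way, sketched below, is a relative import, which lets a module pull names straight out of its own package's __init__.py; the package and function names here are hypothetical.

        # mypkg/__init__.py  (hypothetical package)
        def shared_helper():
            return "defined in the package __init__"

        # mypkg/worker.py -- a module inside the same package
        from . import shared_helper          # resolves against mypkg/__init__.py
        # equivalently, an absolute import: from mypkg import shared_helper

        def do_work():
            return shared_helper()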

  • Python datetime to Unix timestamp

    - by Off Rhoden
    I have to create an "Expires" value 5 minutes in the future, but I have to supply it in Unix timestamp format. I have this so far, but it seems like a hack:

        def expires():
            '''return a UNIX style timestamp representing 5 minutes from now'''
            epoch = datetime.datetime(1970, 1, 1)
            seconds_in_a_day = 60 * 60 * 24
            five_minutes = datetime.timedelta(seconds=5*60)
            five_minutes_from_now = datetime.datetime.now() + five_minutes
            since_epoch = five_minutes_from_now - epoch
            return since_epoch.days * seconds_in_a_day + since_epoch.seconds

    Is there a module or function that does the timestamp conversion for me?
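    One shorter route, assuming Python 3.3 or later, is datetime's own timestamp() method; a minimal sketch:

        import datetime

        def expires():
            """Return a Unix timestamp representing 5 minutes from now."""
            five_minutes_from_now = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=5)
            return int(five_minutes_from_now.timestamp())

    On Python 2, calendar.timegm(dt.utctimetuple()) gives the same integer for a UTC datetime.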

  • Algorithm to detect repeating/similar strings in a corpus of data -- say email subjects, in Python

    - by RizwanK
    I'm downloading a long list of my email subject lines, with the intent of finding email lists that I was a member of years ago and purging them from my Gmail account (which is getting pretty slow). I'm specifically thinking of newsletters that often come from the same address and repeat the product/service/group's name in the subject. I'm aware that I could search/sort by the common occurrence of items from a particular email address (and I intend to), but I'd like to correlate that data with repeating subject lines. Now, many subject lines would fail an exact string match, but

      "Google Friends : Our latest news"
      "Google Friends : What we're doing today"

    are more similar to each other than a random subject line, as are:

      "Virgin Airlines has a great sale today"
      "Take a flight with Virgin Airlines"

    So -- how can I start to automagically extract trends/examples of strings that may be more similar? Approaches I've considered and discarded ('because there must be some better way'):

      - extracting all the possible substrings, ordering them by how often they show up, and manually selecting relevant ones
      - stripping off the first word or two and then counting the occurrence of each substring
      - comparing the Levenshtein distance between entries
      - some sort of string similarity index

    Most of these were rejected for massive inefficiency or the likelihood that a vast amount of manual intervention would be required. I guess I need some sort of fuzzy string matching? In the end, I can think of kludgy ways of doing this, but I'm looking for something more generic, so it gets added to my set of tools rather than being a special case for this data set. After this, I'd be matching the occurrence of particular subject strings with 'From' addresses -- I'm not sure if there's a good way of building a data structure that represents how likely two messages are to be part of the same email list, or of filtering all my email subjects/from addresses into pools of likely 'related' emails and not -- but that's a problem to solve after this one. Any guidance would be appreciated.
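    One lightweight starting point, sketched below, is difflib from the standard library: greedily bucket subjects whose similarity ratio clears a threshold (the 0.6 cutoff is an arbitrary assumption):

        import difflib

        def group_similar_subjects(subjects, threshold=0.6):
            """Greedily bucket subject lines whose similarity to a group's first member exceeds the threshold."""
            groups = []
            for subject in subjects:
                for group in groups:
                    if difflib.SequenceMatcher(None, subject.lower(), group[0].lower()).ratio() >= threshold:
                        group.append(subject)
                        break
                else:
                    groups.append([subject])
            return groups

    Token-set normalization or a real clustering pass would scale better, but this is usually enough to surface the obvious newsletter families.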

  • Python: For loop problem

    - by Yasmin
    I have a simple for loop problem. When I run the code below it prints out a series of 'blue green' sequences and then a series of 'green' sequences. I want the output to be: if row[4] is equal to 1, print blue, else print green.

        for row in rows:
            for i in `row[4]`:
                if i == `1`:
                    print 'blue '
                else:
                    print 'green '

    Any help would be appreciated. Thanks, Yas
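    A minimal sketch of the intended logic: compare row[4] directly instead of iterating over it (the backticks convert the value to its string form, which is what produces the per-character output); Python 3 print() is assumed here.

        for row in rows:
            if row[4] == 1:          # compare the value itself; no inner loop, no backticks
                print('blue')
            else:
                print('green')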

  • Python GUI does not update until entire process is finished

    - by ccwhite1
    I have a process that gets files from a directory and puts them in a list. It then iterates that list in a loop. The last line of the loop is where it should update my GUI display, then it begins the loop again with the next item in the list. My problem is that it does not actually update the GUI until the entire process is complete, which depending on the size of the list could be 30 seconds to over a minute. This gives the feeling of the program being 'hung'. What I wanted it to do was to process one line in the list, update the GUI and then continue. Where did I go wrong? The line that updates the list is marked "# Populate listview with drive contents". The print statements are just for debugging.

        def populateList(self):
            print "populateList"
            sSource = self.txSource.Value
            sDest = self.txDest.Value
            # re-initialize listview and validated list
            self.listView1.DeleteAllItems()
            self.validatedMove = None
            self.validatedMove = []
            # Create list of files
            listOfFiles = getList(sSource)
            # prompt if no files detected
            if listOfFiles == []:
                self.lvActions.Append([datetime.datetime.now(), "Parse Source for .MP3 files", "No .MP3 files in source directory"])
            # Populate list after both Source and Dest are chosen
            if len(sDest) > 1 and len(sDest) > 1:
                print "-iterate listOfFiles"
                for file in listOfFiles:
                    sFilename = os.path.basename(file)
                    sTitle = getTitle(file)
                    sArtist = getArtist(file)
                    sAlbum = getAblum(file)
                    # Make path = sDest + Artist + Album
                    sDestDir = os.path.join(sDest, sArtist)
                    sDestDir = os.path.join(sDestDir, sAlbum)
                    # If file exists change destination to *.copyX.mp3
                    sDestDir = self.defineDestFilename(os.path.join(sDestDir, sFilename))
                    # Populate listview with drive contents
                    self.listView1.Append([sFilename, sTitle, sArtist, sAlbum, sDestDir])
                    # populate list to later use in move command
                    self.validatedMove.append([file, sDestDir])
                    print "-item added to SourceDest list"
            else:
                print "-list not iterated"
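    The usual cause is that the whole loop runs on the GUI thread, so no paint events are processed until it returns. Assuming wxPython (the listView1.Append calls suggest it), one common fix is to move the scan to a worker thread and marshal each row back to the GUI thread with wx.CallAfter; a rough sketch, where buildRow is a hypothetical helper doing the per-file work:

        import threading
        import wx

        def populateListAsync(self):
            """Run the slow directory scan off the GUI thread; update the list one row at a time."""
            def worker():
                for file in getList(self.txSource.Value):        # getList as in the question
                    row = buildRow(file)                         # hypothetical: filename, title, artist, album, dest
                    wx.CallAfter(self.listView1.Append, row)     # safe: executes on the GUI thread
            threading.Thread(target=worker, daemon=True).start()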

  • Match HTML tags in two strings using regex in Python

    - by jack
    I want to verify that the HTML tags present in a source string are also present in a target string. For example:

        >>> source = "<em>Hello</em><label>What's your name</label>"
        >>> verify_target('<em>Hi</em><label>My name is Jim</label>')
        True
        >>> verify_target('<label>My name is Jim</label><em>Hi</em>')
        True
        >>> verify_target('<em>Hi<label>My name is Jim</label></em>')
        False
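    A regex can extract the tag names, but it cannot see nesting, which is what makes the third example False; one sketch instead builds a tag tree with the standard-library html.parser and compares trees ignoring sibling order (make_verifier is a hypothetical wrapper, since the question doesn't show how source reaches verify_target):

        from html.parser import HTMLParser

        class TagTreeBuilder(HTMLParser):
            """Collects a nested (tag, children) structure and ignores the text."""
            def __init__(self):
                super().__init__()
                self.root = []
                self.stack = [self.root]
            def handle_starttag(self, tag, attrs):
                node = (tag, [])
                self.stack[-1].append(node)
                self.stack.append(node[1])
            def handle_endtag(self, tag):
                if len(self.stack) > 1:
                    self.stack.pop()

        def tag_tree(html):
            builder = TagTreeBuilder()
            builder.feed(html)
            return builder.root

        def canonical(node):
            tag, children = node
            return (tag, sorted(canonical(child) for child in children))

        def make_verifier(source):
            source_tree = sorted(canonical(n) for n in tag_tree(source))
            return lambda target: sorted(canonical(n) for n in tag_tree(target)) == source_tree

        verify_target = make_verifier("<em>Hello</em><label>What's your name</label>")
        print(verify_target('<em>Hi</em><label>My name is Jim</label>'))       # True
        print(verify_target('<label>My name is Jim</label><em>Hi</em>'))       # True
        print(verify_target('<em>Hi<label>My name is Jim</label></em>'))       # False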

  • Small Python optional arguments question

    - by ooboo
    I have two functions:

        def f(a, b, c=g(b)):
            blabla

        def g(n):
            blabla

    c is an optional argument in function f. If the user does not specify its value, the program should compute g(b) and that would be the value of c. But the code does not compile -- it says name 'b' is not defined. How do I fix that? Someone suggested:

        def g(b):
            blabla

        def f(a, b, c=None):
            if c is None:
                c = g(b)
            blabla

    But this doesn't work, because maybe the user intended c to be None, and then c will end up with another value.
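    The usual workaround is a private sentinel object, so None remains a legitimate value for c; a minimal sketch:

        _MISSING = object()              # unique sentinel no caller can pass by accident

        def g(n):
            ...                          # body of g

        def f(a, b, c=_MISSING):
            if c is _MISSING:            # only true when the caller left c out entirely
                c = g(b)
            ...                          # body of f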

  • How do I calculate percentiles with python/numpy?

    - by Uri
    Is there a convenient way to calculate percentiles for a sequence or single-dimensional numpy array? I am looking for something similar to Excel's percentile function. I looked in NumPy's statistics reference, and couldn't find this. All I could find is the median (50th percentile), but not something more specific.
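    For reference, NumPy ships this as numpy.percentile, and newer Pythons also have statistics.quantiles in the standard library; a minimal sketch:

        import numpy as np

        data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
        print(np.percentile(data, 25))    # 3.25
        print(np.percentile(data, 50))    # 5.5, the median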

  • Python File Search Line And Return Specific Number of Lines after Match

    - by Simos Anderson
    I have a text file that has lines representing some data sets. The file itself is fairly long, but it contains sections of the following format:

        Series_Name INFO Number of teams : n1
        | Team | # | wins |
        | TeamName1 | x | y |
        .
        .
        .
        | TeamNamen1 | numn | numn |

        Some irrelevant lines

        Series_Name2 INFO Number of teams : n1
        | Team | # | wins |
        | TeamName1 | num1 | num2 |
        .

    where each section has a header that begins with the Series_Name. Each Series_Name is different. The header line also includes the number of teams in that series, n1. Following the header line is a set of lines that represents a table of data. For each series there are n1+1 rows in the table, where each row shows an individual team name and associated stats.

    I have been trying to implement a function that will allow the user to search for a team name and then print out the line in the table associated with that team. However, certain team names show up under multiple series. To resolve this, I am currently trying to write my code so that the user can search for the header line with the series name first and then print out just the following n1+1 lines that represent the data associated with that series. Here's what I have come up with so far:

        import re

        print
        fname = raw_input("Enter filename: ")
        seriesname = raw_input("Enter series: ")

        def findcounter(fname, seriesname):
            logfile = open(fname, "r")
            pat = 'INFO Number of teams :'
            for line in logfile:
                if seriesname in line:
                    if pat in line:
                        s = line
                        pattern = re.compile(r"""(?P<name>.*?)          #starting name
                            \s*INFO                                     #whitespace and success
                            \s*Number\s*of\s*teams                      #whitespace and strings
                            \s*\:\s*(?P<n1>.*)""", re.VERBOSE)
                        match = pattern.match(s)
                        name = match.group("name")
                        n1 = int(match.group("n1"))
                        print name + " has " + str(n1) + " teams"
                        lcount = 0
                        for line in logfile:
                            if line.startswith(name):
                                if pat in line:
                                    while lcount <= n1:
                                        s.append(line)
                                        lcount += 1
            return result

    The first part of my code works; it matches the header line that the person searches for, parses the line, and then prints out how many teams are in that series. Since the header line basically tells me how many lines are in the table, I thought that I could use that information to construct a loop that would continue printing each line until a counter reached n1. But I've tried running it, and I realize that the way I've set it up so far isn't correct. So here's my question: how do you return a number of lines after a matched line, given the number of desired lines that follow the match? I'm new to programming, and I apologize if this question seems silly. I have been working on this quite diligently with no luck and would appreciate any help.
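    One way to answer the core question -- print N lines after a matched line -- is itertools.islice on the same file iterator, since the file object remembers where the match left off; a sketch under the assumption that the header format is exactly "Series_Name INFO Number of teams : n1":

        import itertools
        import re

        def print_series(fname, seriesname):
            header_re = re.compile(r"(?P<name>.+?)\s+INFO Number of teams\s*:\s*(?P<n1>\d+)")
            with open(fname) as logfile:
                for line in logfile:
                    match = header_re.match(line)
                    if match and match.group("name") == seriesname:
                        n1 = int(match.group("n1"))
                        print(line.rstrip())
                        # the next n1 + 1 lines are the table header plus the n1 team rows
                        for table_line in itertools.islice(logfile, n1 + 1):
                            print(table_line.rstrip())
                        return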

  • Best practice for Python Assert

    - by meade
    Is there a performance or code maintenance issue with using assert as part of the standard code instead of using it just for debugging purposes? Is

        assert x >= 0, 'x is less than zero'

    better or worse than

        if x < 0:
            raise Exception('x is less than zero')

    Also, is there any way to set a business rule like "if x < 0, raise an error" that is always checked without try/except/finally -- so that if at any time throughout the code x becomes less than zero, an exception is raised, as if setting assert x >= 0 at the start of a function applied anywhere within the function where x becomes less than zero?
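    One caveat worth adding: assert statements are stripped entirely when Python runs with -O, so they are not a reliable way to enforce business rules. For an always-on check tied to the value itself, one sketch is to guard the attribute behind a property (the class and attribute names here are illustrative):

        class Account:
            def __init__(self, x=0):
                self.x = x                  # routed through the property setter below

            @property
            def x(self):
                return self._x

            @x.setter
            def x(self, value):
                if value < 0:
                    raise ValueError('x is less than zero')
                self._x = value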

  • Redirect print in Python: val = print(arg) to output mixed iterable to file

    - by emcee
    So let's say I have an incredibly nested iterable of lists/dictionaries. I would like to print them to a file as easily as possible. Why can't I just redirect print to a file? val = print(arg) gets a SyntaxError. Is there a way to access stdout? And why does print take forever with massive strings? Bad programming on my side for outputting massive strings, but quick debugging -- and isn't that leveraging the strength of an interactive prompt? There's probably also an easier way than my gripe. Has the hive-mind an answer?
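    For context: in Python 2, print is a statement, which is why val = print(arg) is a SyntaxError, and redirection is written print >> fileobj, value. In Python 3 (or with from __future__ import print_function), print is a function that takes a file argument, and pprint handles deeply nested structures; a sketch:

        from pprint import pprint

        data = {'a': [1, 2, {'b': ['deeply', 'nested', 'values']}]}

        with open('dump.txt', 'w') as out:
            print(data, file=out)         # plain one-line repr
            pprint(data, stream=out)      # indented, readable layout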

  • python iterators and thread-safety

    - by Igor
    I have a class which is being operated on by two functions. One function creates a list of widgets and writes it into the class:

        def updateWidgets(self):
            widgets = self.generateWidgetList()
            self.widgets = widgets

    The other function deals with the widgets in some way:

        def workOnWidgets(self):
            for widget in self.widgets:
                self.workOnWidget(widget)

    Each of these functions runs in its own thread. The question is: what happens if the updateWidgets() thread executes while the workOnWidgets() thread is running? I am assuming that the iterator created as part of the for...in loop will keep some kind of reference to the old self.widgets object, so I will finish iterating over the old list -- but I'd love to know for sure.
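    That assumption is correct: the for loop calls iter() on the list object once, and the iterator keeps its own reference, so rebinding self.widgets mid-loop does not change what is being iterated (mutating the same list in place would). A small demonstration:

        widgets = ['a', 'b', 'c']
        it = iter(widgets)            # the iterator holds a reference to this list object
        widgets = ['x', 'y']          # rebinding the name does not affect the iterator
        print(list(it))               # ['a', 'b', 'c']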

  • Plotting 3D Polygons in python-matplotlib

    - by Developer
    I was unsuccessful browsing the web for a solution to the following simple question: how do you draw a 3D polygon (say a filled rectangle or triangle) using its vertex values? I have tried many ideas but all failed, for example:

        from mpl_toolkits.mplot3d import Axes3D
        from matplotlib.collections import PolyCollection
        import matplotlib.pyplot as plt

        fig = plt.figure()
        ax = Axes3D(fig)
        x = [0, 1, 1, 0]
        y = [0, 0, 1, 1]
        z = [0, 1, 0, 1]
        verts = [zip(x, y, z)]
        ax.add_collection3d(PolyCollection(verts), zs=z)
        plt.show()

    I appreciate in advance any idea/comment. Update based on the accepted answer:

        import mpl_toolkits.mplot3d as a3
        import matplotlib.colors as colors
        import pylab as pl
        import scipy as sp

        ax = a3.Axes3D(pl.figure())
        for i in range(10000):
            vtx = sp.rand(3, 3)
            tri = a3.art3d.Poly3DCollection([vtx])
            tri.set_color(colors.rgb2hex(sp.rand(3)))
            tri.set_edgecolor('k')
            ax.add_collection3d(tri)
        pl.show()

    Here is the result (figure not reproduced here).

  • Python modify an xml file

    - by michele
    I have this XML model: link text. I have to add some nodes (see the commented text) to this file. How can I do it? I have written this partial code but it doesn't work:

        xmldoc = minidom.parse(directory)
        child = xmldoc.createElement("map")
        for node in xmldoc.getElementsByTagName("Environment"):
            node.appendChild(child)

    Thanks in advance.
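    A sketch of the usual minidom pattern, assuming the goal is one new <map> child under every <Environment> element: a DOM node can only have a single parent, so the element is created inside the loop, and the tree is written back to disk afterwards (the file name here is hypothetical):

        from xml.dom import minidom

        xmldoc = minidom.parse("environment.xml")
        for node in xmldoc.getElementsByTagName("Environment"):
            child = xmldoc.createElement("map")        # a fresh node for each parent
            node.appendChild(child)

        with open("environment.xml", "w") as out:
            xmldoc.writexml(out)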

  • How to loop over nodes with xmlfeed using scrapy python

    - by Kour ipm
    Hi, I'm working with Scrapy and trying XML feeds for the first time. Below is my code:

        class TestxmlItemSpider(XMLFeedSpider):
            name = "TestxmlItem"
            allowed_domains = {"http://www.nasinteractive.com"}
            start_urls = [
                "http://www.nasinteractive.com/jobexport/advance/hcantexasexport.xml"
            ]
            iterator = 'iternodes'
            itertag = 'job'

            def parse_node(self, response, node):
                title = node.select('title/text()').extract()
                job_code = node.select('job-code/text()').extract()
                detail_url = node.select('detail-url/text()').extract()
                category = node.select('job-category/text()').extract()
                print title, ";;;;;;;;;;;;;;;;;;;;;"
                print job_code, ";;;;;;;;;;;;;;;;;;;;;"
                item = TestxmlItem()
                item['title'] = node.select('title/text()').extract()
                .......
                return item

    Result:

        File "/usr/lib/python2.7/site-packages/Scrapy-0.14.3-py2.7.egg/scrapy/item.py", line 56, in __setitem__
            (self.__class__.__name__, key))
        exceptions.KeyError: 'TestxmlItem does not support field: title'

    There are 200+ items in total, so I need to loop over the nodes and assign each node's text to an item, but all the results are printed at once. How can I loop over nodes when scraping XML files with XMLFeedSpider?
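    The KeyError itself just means the TestxmlItem class has no 'title' field declared; XMLFeedSpider already calls parse_node once per <job> node, so declaring the fields and returning one item per call is usually enough. A sketch of the item definition:

        from scrapy.item import Item, Field

        class TestxmlItem(Item):
            title = Field()
            job_code = Field()
            detail_url = Field()
            category = Field()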

  • Get Path of Uploaded File using Python

    - by Ali
    Is it possible to get the full path of the file on the user's computer that is being uploaded to my site? Using os.path.abspath(fileitem.filename) simply gives me the location my script is executing from on my shared hosting server. FYI: fileitem = form['file'] and form = cgi.FieldStorage().
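    In general no: browsers send only the file's name, not its client-side directory (old Internet Explorer versions were the notable exception), so the server can never recover the original path. A sketch of reading what is actually available from cgi.FieldStorage:

        import cgi
        import os

        form = cgi.FieldStorage()
        fileitem = form['file']
        # Only the browser-supplied name is available; strip any path pieces defensively.
        filename = os.path.basename(fileitem.filename.replace('\\', '/'))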

  • Parsing text file in python

    - by Ockonal
    Hello, I have an HTML file, and I have to replace all text between markers like this: [%anytext%]. As I understand it, parsing HTML is very easy with BeautifulSoup. But what regular expression do I need, and how do I remove the old text and write the new data back?
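    If the targets really are [% ... %] placeholders rather than HTML structure, a plain regular expression with re.sub is enough; a sketch under that assumption (the file name is hypothetical):

        import re

        with open('page.html') as f:
            html = f.read()

        # Replace every [% ... %] placeholder (non-greedy, may span lines) with new text.
        html = re.sub(r'\[%.*?%\]', 'REPLACEMENT', html, flags=re.DOTALL)

        with open('page.html', 'w') as f:
            f.write(html)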

  • Using a regex to match IP addresses in Python

    - by MHibbin
    I'm trying to make a test for checking whether a sys.argv input matches the regex for an IP address... As a simple test, I have the following:

        import re

        pat = re.compile("\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}")
        test = pat.match(hostIP)
        if test:
            print "Acceptable ip address"
        else:
            print "Unacceptable ip address"

    However, when I pass random values into it, it returns "Acceptable ip address" in most cases, except when I have an "address" that is basically equivalent to \d+. Any thoughts welcome. Cheers, Matt
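    Two things bite here: the unescaped dots match any character, and match() is not anchored at the end, so trailing junk still passes. A tighter sketch (it still doesn't check the 0-255 range; the ipaddress module is the robust route on Python 3.3+):

        import re

        ip_pat = re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")

        def looks_like_ipv4(host_ip):
            return bool(ip_pat.match(host_ip))

        # Stricter alternative:
        # import ipaddress
        # ipaddress.ip_address(host_ip)    # raises ValueError for anything invalid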

  • How to read a file with variable multi-row data in Python

    - by dr.bunsen
    I have a file that is about 100 MB that looks like this:

        #meta data 1
        skadjflaskdjfasljdfalskdjfl
        sdkfjhasdlkgjhsdlkjghlaskdj
        asdhfk
        #meta data 2
        jflaksdjflaksjdflkjasdlfjas
        ldaksjflkdsajlkdfj
        #meta data 3
        alsdkjflasdjkfglalaskdjf

    The file contains rows of metadata, each followed by a variable number of data lines containing only alphanumeric characters. What is the best way to read this data into a simple list like this:

        data = [[#meta data 1, skadjflaskdjfasljdfalskdjflsdkfjhasdlkgjhsdlkjghlaskdjasdhfk],
                [#meta data 2, jflaksdjflaksjdflkjasdlfjasldaksjflkdsajlkdfj],
                [#meta data 3, alsdkjflasdjkfglalaskdjf]]

    My initial idea was to use the read() method to read the whole file into memory and then use regular expressions to parse the data into the desired format. Is there a better, more Pythonic way? All metadata lines start with an octothorpe and all data lines are alphanumeric. Thanks!
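    One Pythonic sketch uses itertools.groupby keyed on whether a line starts with '#', pairing each metadata line with the concatenation of the data lines that follow it (it assumes the file begins with a metadata line, as in the sample):

        from itertools import groupby

        def read_blocks(path):
            data = []
            with open(path) as f:
                for is_meta, lines in groupby(f, key=lambda line: line.startswith('#')):
                    if is_meta:
                        for line in lines:
                            data.append([line.strip(), ''])
                    else:
                        data[-1][1] = ''.join(line.strip() for line in lines)
            return data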

  • Access class instance "name" dynamically in Python

    - by user328317
    In plain English: I am creating class instances dynamically in a for loop, and the class then defines a few attributes for each instance. I need to be able to look up those values later, in another for loop. Sample code:

        class A:
            def __init__(self, name, attr):
                self.name = name
                self.attr = attr

        names = ("a1", "a2", "a3")
        x = 10
        for name in names:
            name = A(name, x)
            x += 1
        ...
        for name in names:
            print name.attr

    How can I create an identifier for these instances so they can be accessed later on by "name"? I've figured out a way to get this by associating "name" with the memory location:

        class A:
            instances = []
            names = []
            def __init__(self, name, attr):
                self.name = name
                self.attr = attr
                A.instances.append(self)
                A.names.append(name)

        names = ("a1", "a2", "a3")
        x = 10
        for name in names:
            name = A(name, x)
            x += 1
        ...
        for name in names:
            index = A.names.index(name)
            print "name: " + name
            print "attr: " + str(A.instances[index].attr)

    This has had me scouring the web for two days now, and I have not been able to find an answer. Maybe I don't know how to ask the question properly, or maybe it can't be done (as many other posts seemed to be suggesting). Now, this second example works, and for now I will use it. I'm just thinking there has to be an easier way than creating your own makeshift dictionary of index numbers, and I'm hoping I didn't waste two days looking for an answer that doesn't exist. Anyone have anything? Thanks in advance, Andy

    Update: A coworker just showed me what he thinks is the simplest way, which is to make an actual dictionary of class instances using the instance "name" as the key.
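    A sketch of the coworker's suggestion -- keep the instances in a dictionary keyed by name -- which drops the parallel-list bookkeeping entirely:

        class A:
            def __init__(self, name, attr):
                self.name = name
                self.attr = attr

        instances = {}
        x = 10
        for name in ("a1", "a2", "a3"):
            instances[name] = A(name, x)
            x += 1

        print(instances["a2"].attr)      # 11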

  • Python: list and string matching

    - by pete
    Hi all, I have the following:

        temp = "aaaab123xyz@+"
        lists = ["abc", "123.35", "xyz", "AND+"]
        for list in lists:
            if re.match(list, temp, re.I):
                print "The %s is within %s." % (list, temp)

    re.match only matches at the beginning of the string. How do I match a substring anywhere inside it too?
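    re.search is the variant that scans the whole string; a sketch that also avoids shadowing the built-in name list and escapes the patterns in case they are meant literally:

        import re

        temp = "aaaab123xyz@+"
        patterns = ["abc", "123.35", "xyz", "AND+"]
        for pattern in patterns:
            if re.search(re.escape(pattern), temp, re.I):
                print("%s is within %s" % (pattern, temp))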

  • How to control a subthread process in python?

    - by SpawnCxy
    Code first:

        '''this is the main structure of my program'''
        from twisted.web import http
        from twisted.protocols import basic
        import threading

        threadstop = False    # thread trigger, to be done

        class MyThread(threading.Thread):
            def __init__(self):
                threading.Thread.__init__(self)
                self.start()
            def run(self):
                while True:
                    if threadstop:
                        return
                    dosomething()

        '''def some function'''

        if __name__ == '__main__':
            from twisted.internet import reactor
            t = MyThread()
            reactor.listenTCP(serverport, myHttpFactory())
            reactor.run()

    As my first multithreaded program, I'm happy that it works as expected. But now I find I cannot control it. If I run it in the foreground, Ctrl+C only stops the main process and I can still find it in the process list; if I run it in the background, I have to use kill -9 pid to stop it. I wonder if there's a way to control the subthread by a trigger variable, or a better way to stop the whole process other than kill -9. Thanks.
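    Two common fixes, sketched together below: mark the worker as a daemon thread so it cannot keep the process alive, and use a threading.Event instead of a module-level flag so the loop can be told to stop cleanly:

        import threading

        stop_event = threading.Event()

        class MyThread(threading.Thread):
            def __init__(self):
                threading.Thread.__init__(self)
                self.daemon = True                  # set before start(); won't block interpreter exit
                self.start()
            def run(self):
                while not stop_event.is_set():
                    dosomething()

        # elsewhere, e.g. in a shutdown handler:
        # stop_event.set()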
