Search Results

Search found 14486 results on 580 pages for 'python idle'.


  • Exporting dates properly formatted on Google App Engine in Python

    - by Chris M
    I think this is right, but Google App Engine seems to get to a certain point and cop out. Firstly, is this code actually right? And secondly, is there a way to skip a record if it can't be output (like an "ignore errors and continue")?

        class TrackerExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'SearchRec',
                    [('__key__', lambda key: key.name(), None),
                     ('WebSite', str, None),
                     ('DateStamp', lambda x: datetime.datetime.strptime(x, '%d-%m-%Y').date(), None),
                     ('IP', str, None),
                     ('UserAgent', str, None)])

    Thanks
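
    A minimal sketch of one way to get skip-on-error behaviour: a lambda cannot contain try/except, so move the date parsing into a named helper that swallows parse failures. (Returning None for a bad row is an assumption; depending on requirements, bad rows may still need filtering downstream.)

        import datetime

        def safe_date(value):
            # Return a date, or None when the value does not parse.
            try:
                return datetime.datetime.strptime(value, '%d-%m-%Y').date()
            except (ValueError, TypeError):
                return None

        # ... then in the exporter's column list:
        # ('DateStamp', safe_date, None),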

    Read the article

  • filtering elements from list of lists in Python?

    - by user248237
    I want to filter elements from a list of lists, and iterate over the elements of each sublist using a lambda. For example, given the list:

        a = [[1, 2, 3], [4, 5, 6]]

    suppose that I want to keep only the sublists whose sum is greater than N. I tried writing:

        filter(lambda x, y, z: x + y + z >= N, a)

    but I get the error:

        <lambda>() takes exactly 3 arguments (1 given)

    How can I iterate while assigning the values of each sublist to x, y, and z? Something like zip, but for arbitrarily long lists. P.S. I know I can write this using filter(lambda x: sum(x)..., a), but that's not the point; imagine these were not numbers but arbitrary elements, and I wanted to assign their values to variable names.
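
    A sketch of one workaround: filter always passes a single argument, so keep the outer lambda unary and unpack the sublist inside it with *, which still binds each element to its own name:

        a = [[1, 2, 3], [4, 5, 6]]
        N = 10

        def keep(x, y, z):
            # Each element of the sublist gets its own name here.
            return x + y + z >= N

        result = filter(lambda triple: keep(*triple), a)
        print(list(result))  # [[4, 5, 6]]

    In Python 2 only, tuple parameter unpacking also works directly in the lambda -- filter(lambda (x, y, z): x + y + z >= N, a) -- but that syntax was removed in Python 3.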

    Read the article

  • How to print a dictionary in a Python C API function

    - by dizgam
        PyObject* dict = PyDict_New();
        PyDict_SetItem(dict, key, value);
        PyDict_GetItem(dict, key);

    I get a bus error if I use the GetItem call, but not otherwise. So I want to confirm that the dictionary holds the same values I set. Other than using the PyDict_GetItem function, is there any other way to print the values of the dictionary?

    Read the article

  • Python: find <title>

    - by Peter
    I have this:

        response = urllib2.urlopen(url)
        html = response.read()
        begin = html.find('<title>')
        end = html.find('</title>', begin)
        title = html[begin + len('<title>'):end].strip()

    If url = "http://www.google.com" then the title comes out fine, as "Google", but if url = "http://www.britishcouncil.org/learning-english-gateway" then the title becomes:

        <!doctype html public "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <base href="http://www.britishcouncil.org/" /> <META http-equiv="Content-Type" Content="text/html;charset=utf-8"> <meta name="WT.sp" content="Learning;Home Page Smart View" /> <meta name="WT.cg_n" content="Learn English Gateway" /> <META NAME="DCS.dcsuri" CONTENT="/learning-english-gateway.htm">...

    What is actually happening? Why couldn't I return the title?
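
    What is likely happening: str.find is case-sensitive, and that page spells its tags in upper case (<TITLE>), so html.find('<title>') returns -1 and the slice degenerates into dumping the start of the document. A minimal sketch of a case-insensitive extraction (a real HTML parser such as BeautifulSoup would be more robust):

        import re
        import urllib2  # Python 2, as in the question

        url = 'http://www.britishcouncil.org/learning-english-gateway'
        html = urllib2.urlopen(url).read()
        match = re.search(r'<title[^>]*>(.*?)</title>', html,
                          re.IGNORECASE | re.DOTALL)
        title = match.group(1).strip() if match else None
        print title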

    Read the article

  • Rectangle Rotation in Python/Pygame

    - by mramazingguy
    Hey, I'm trying to rotate a rectangle around its center, and when I try to rotate the rectangle it also moves up and to the left. Does anyone have any ideas on how to fix this?

        def rotatePoint(self, angle, point, origin):
            sinT = sin(radians(angle))
            cosT = cos(radians(angle))
            return (origin[0] + (cosT * (point[0] - origin[0]) - sinT * (point[1] - origin[1])),
                    origin[1] + (sinT * (point[0] - origin[0]) + cosT * (point[1] - origin[1])))

        def rotateRect(self, degrees):
            center = (self.collideRect.centerx, self.collideRect.centery)
            self.collideRect.topleft = self.rotatePoint(degrees, self.collideRect.topleft, center)
            self.collideRect.topright = self.rotatePoint(degrees, self.collideRect.topright, center)
            self.collideRect.bottomleft = self.rotatePoint(degrees, self.collideRect.bottomleft, center)
            self.collideRect.bottomright = self.rotatePoint(degrees, self.collideRect.bottomright, center)
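
    A likely cause, with a sketch of a fix: pygame.Rect is always axis-aligned with integer coordinates, and assigning to topleft moves the whole rect, so each later assignment reads the corners of an already-moved rect and the rounding accumulates as drift. One common approach is to keep the true corners as floats outside the Rect and rotate those, using the Rect only as an axis-aligned bound (the Box class here is illustrative):

        from math import sin, cos, radians

        def rotate_point(angle, point, origin):
            # Rotate one (x, y) point around origin by angle degrees.
            s, c = sin(radians(angle)), cos(radians(angle))
            dx, dy = point[0] - origin[0], point[1] - origin[1]
            return (origin[0] + c * dx - s * dy,
                    origin[1] + s * dx + c * dy)

        class Box(object):
            def __init__(self, rect):
                self.collideRect = rect
                # Float corners live outside the Rect, which stays axis-aligned.
                self.corners = [rect.topleft, rect.topright,
                                rect.bottomright, rect.bottomleft]

            def rotate(self, degrees):
                center = self.collideRect.center
                self.corners = [rotate_point(degrees, p, center)
                                for p in self.corners]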

    Read the article

  • Dynamically adding @property in python

    - by rz
    I know that I can dynamically add an instance method to an object by doing something like:

        import types

        def my_method(self):
            # logic of method
            # ...

        # instance is some instance of some class
        instance.my_method = types.MethodType(my_method, instance)

    Later on I can call instance.my_method(), self is bound correctly, and everything works. Now, my question: how do I do the exact same thing to obtain the behavior that decorating the new method with @property would give? I would guess something like:

        instance.my_method = types.MethodType(my_method, instance)
        instance.my_method = property(instance.my_method)

    But, doing that, instance.my_method returns a property object.
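
    For reference, a sketch of why the guess fails and what works instead: property is a descriptor, and Python only honours descriptors found on the class, never on the instance. So attach the property to the class, or to a one-off subclass if only a single object should gain it (Thing is an illustrative stand-in for "some class"):

        class Thing(object):
            pass

        instance = Thing()

        def my_method(self):
            return 42

        # Attach to the class (affects every instance) ...
        type(instance).my_method = property(my_method)
        print(instance.my_method)  # 42 -- no call parentheses needed

        # ... or swap in a per-instance subclass to affect only this object:
        cls = instance.__class__
        instance.__class__ = type(cls.__name__, (cls,),
                                  {'my_method': property(my_method)})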

    Read the article

  • Efficient way in Python to remove an element from a comma-separated string

    - by ensnare
    I'm looking for the most efficient way to remove an element from a comma-separated string while maintaining alphabetical order of the words. For example:

        string = 'Apples, Bananas, Grapes, Oranges'
        subtraction = 'Bananas'
        result = 'Apples, Grapes, Oranges'

    Also, a way to do this while maintaining IDs:

        string = '1:Apples, 4:Bananas, 6:Grapes, 23:Oranges'
        subtraction = '4:Bananas'
        result = '1:Apples, 6:Grapes, 23:Oranges'

    Sample code is greatly appreciated. Thank you so much.
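
    A minimal sketch of one approach: split on the commas, drop the matching token, and rejoin. Because the comparison is against the whole token, the same function covers the "ID:Name" variant, and the original ordering is preserved:

        def remove_item(s, item):
            # Split into tokens, drop the one that matches, rejoin.
            tokens = [t.strip() for t in s.split(',')]
            return ', '.join(t for t in tokens if t != item)

        print(remove_item('Apples, Bananas, Grapes, Oranges', 'Bananas'))
        # Apples, Grapes, Oranges
        print(remove_item('1:Apples, 4:Bananas, 6:Grapes, 23:Oranges', '4:Bananas'))
        # 1:Apples, 6:Grapes, 23:Oranges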

    Read the article

  • python feedparser with yahoo weather rss

    - by mudder
    I'm trying to use feedparser to get some data from Yahoo's weather RSS feed, http://weather.yahooapis.com/forecastrss?w=24260013&u=c. It looks like feedparser strips out the yweather namespace data:

        <yweather:condition text="Fair" code="34" temp="23" date="Wed, 19 May 2010 5:55 pm EDT" />

    feedparser seems to be completely ignoring that element. Is there a way to get it?
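
    If feedparser won't surface the element, one fallback sketch is to fetch the feed and read the namespaced element directly with ElementTree. (The namespace URI below is the one the feed declares for xmlns:yweather; verify it against the live feed.)

        import urllib2
        import xml.etree.ElementTree as ET

        # Assumed from the feed's xmlns:yweather declaration.
        YWEATHER = 'http://xml.weather.yahoo.com/ns/rss/1.0'

        url = 'http://weather.yahooapis.com/forecastrss?w=24260013&u=c'
        tree = ET.parse(urllib2.urlopen(url))
        condition = tree.find('.//{%s}condition' % YWEATHER)
        if condition is not None:
            print condition.get('text'), condition.get('temp')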

    Read the article

  • How to use @ in Python, and how to use @property

    - by zjm1126
    This is my code:

        def a():
            print 'sss'

        @a()
        def b():
            print 'aaa'

        b()

    and the traceback is:

        sss
        Traceback (most recent call last):
          File "D:\zjm_code\a.py", line 8, in <module>
            @a()
        TypeError: 'NoneType' object is not callable

    So how do I use '@'? Thanks. Updated:

        class a:
            @property
            def b(x):
                print 'sss'

        aa = a()
        print aa.b

    It prints:

        sss
        None

    How do I use @property? Thanks.
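
    What the first traceback means, with a sketch of the usual pattern: @a() calls a immediately (printing 'sss'), and since a returns None, Python then tries to use None as the decorator. A decorator must be a callable that takes the function and returns its replacement:

        def a(func):
            # The decorator receives b and returns a wrapper for it.
            def wrapper():
                print 'sss'
                return func()
            return wrapper

        @a          # no parentheses, unless a is a decorator factory
        def b():
            print 'aaa'

        b()         # prints 'sss', then 'aaa'

    As for the @property part: the getter printed 'sss' and implicitly returned None, which is why None appeared in the output. A property getter should return its value:

        class A(object):
            @property
            def b(self):
                return 'sss'

        print A().b  # sss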

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go Slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do.

    My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        >>> import cStringIO
        >>> example_file = cStringIO.StringIO("""\
        ... header
        ... CAGTcag
        ... TFgcACF
        ... """)
        >>> for read in parse(example_file):
        ...     print read
        ...
        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance of the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing-strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks, everyone!
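
    For concreteness, a minimal sketch of the read-everything approach the conclusion settles on, under the assumptions that each file has a single header line and fits comfortably in memory (true for chromosome files of up to ~20 million characters):

        def parse(fileobj, size=8):
            # Read the whole file once, drop the header line and the
            # newlines, then slice out every overlapping window.
            data = fileobj.read()
            body = data[data.find('\n') + 1:]      # skip the FASTA header
            seq = body.replace('\n', '').upper()
            for i in xrange(len(seq) - size + 1):  # Python 2, as in the question
                yield seq[i:i + size]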

    Read the article

  • Python 3.1 - Memory Error during sampling of a large list

    - by jimy
    The input list can be more than 1 million numbers. When I run the following code with a smaller 'repeats' value, it's fine:

        def sample(x):
            length = 1000000
            new_array = random.sample((list(x)), length)
            return (new_array)

        def repeat_sample(x):
            i = 0
            repeats = 100
            list_of_samples = []
            for i in range(repeats):
                list_of_samples.append(sample(x))
            return (list_of_samples)

        repeat_sample(large_array)

    However, using a high number of repeats, such as the 100 above, results in a MemoryError. The traceback is as follows:

        Traceback (most recent call last):
          File "C:\Python31\rnd.py", line 221, in <module>
            STORED_REPEAT_SAMPLE = repeat_sample(STORED_ARRAY)
          File "C:\Python31\rnd.py", line 129, in repeat_sample
            list_of_samples.append(sample(x))
          File "C:\Python31\rnd.py", line 121, in sample
            new_array = random.sample((list(x)), length)
          File "C:\Python31\lib\random.py", line 309, in sample
            result = [None] * k
        MemoryError

    I am assuming I'm running out of memory. I do not know how to get around this problem. Thank you for your time!
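
    One sketch of a way around the MemoryError, assuming each sample can be processed independently: make the repeat function a generator, so only one 1,000,000-element sample is alive at a time, and convert x to a list exactly once instead of on every call (process() below is a hypothetical consumer):

        import random

        def iter_samples(x, length=1000000, repeats=100):
            pool = list(x)           # copy once, not once per repeat
            for _ in range(repeats):
                yield random.sample(pool, length)

        for s in iter_samples(large_array):
            process(s)               # use the sample, then let it be freed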

    Read the article

  • Python module being reloaded for each request with django and mod_wsgi

    - by Vishal
    I have a variable in the init of a module which gets loaded from the database and takes about 15 seconds. With the Django development server everything works fine, but it looks like with Apache2 and mod_wsgi the module is loaded with every request (taking 15 seconds each time). Any idea about this behavior? Update: I have enabled daemon mode in mod_wsgi, and it looks like it's not reloading the modules now! Needs more testing, and I will update.
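
    For reference, a sketch of the relevant Apache configuration: embedded mode can run your code in many short-lived Apache children, while daemon mode keeps a persistent Python process in which module-level initialization runs once per process. (The process group name and paths below are illustrative, not from the question.)

        WSGIDaemonProcess mysite processes=2 threads=15
        WSGIProcessGroup mysite
        WSGIScriptAlias / /path/to/mysite/wsgi.py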

    Read the article

  • Is there a simple way to delete a value from a list in Python?

    - by zjm1126
        a = [1, 2, 3, 4]
        b = a.index(6)
        del a[b]
        print a

    It shows an error:

        Traceback (most recent call last):
          File "D:\zjm_code\a.py", line 6, in <module>
            b = a.index(6)
        ValueError: list.index(x): x not in list

    So I have to do this:

        a = [1, 2, 3, 4]
        try:
            b = a.index(6)
            del a[b]
        except:
            pass
        print a

    But this is not simple. Is there any simpler way? Thanks.
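
    Two simpler sketches, both avoiding the bare try/except:

        a = [1, 2, 3, 4]

        # Option 1: test membership first, then remove by value.
        if 6 in a:
            a.remove(6)

        # Option 2: rebuild the list without the unwanted value
        # (this one also handles duplicates).
        a = [x for x in a if x != 6]

        print a  # [1, 2, 3, 4]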

    Read the article

  • Unique elements of list within list in python

    - by user2901061
    We are given a list of animals in different zoos and need to find which zoos have animals that are not in any other zoo. The animals of each zoo are separated by spaces, and the zoos themselves are originally separated by commas. I am currently enumerating over all of the zoos to split out each animal and create lists within lists for the different zoos, as such:

        for i, zoo in enumerate(zoos):
            zoos[i] = zoo.split()

    However, I then do not know how to tell, and count, how many of the zoos have unique animals. I figure it involves something else with enumerate and possibly sets, but I cannot get it down exactly. Any help is greatly appreciated. Thanks
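
    A sketch of the set-based step that follows the splitting, with illustrative data: for each zoo, take the union of every other zoo's animals and check whether anything is left over:

        zoos = ['lion tiger', 'tiger bear', 'bear']
        zoo_sets = [set(z.split()) for z in zoos]

        unique_count = 0
        for i, animals in enumerate(zoo_sets):
            # Union of every *other* zoo's animals.
            others = set().union(*(zoo_sets[:i] + zoo_sets[i + 1:]))
            if animals - others:   # at least one animal found nowhere else
                unique_count += 1

        print(unique_count)  # 1: only the first zoo has a unique animal (lion)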

    Read the article

  • Python continue from the point where exception was thrown

    - by James Lin
    Hi, is there a way to continue from the point where an exception was thrown? E.g., I have the following pseudo-code:

        unique code 1
        unique code 2
        unique code 3

    If I want to ignore exceptions from any of the unique code statements, I have to do it like this:

        try:
            # unique code 1
        except:
            pass
        try:
            # unique code 2
        except:
            pass
        try:
            # unique code 3
        except:
            pass

    But this isn't elegant to me, and for the life of me I can't remember how I resolved this kind of problem last time. What I want to have is something like:

        try:
            unique code 1
            unique code 2
            unique code 3
        except:
            continue from last exception raised
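
    Python has no way to resume execution after the statement that raised, so the try/except genuinely has to wrap each statement. A common sketch that keeps it tidy is to factor each statement into a callable and write the try/except once (the step functions are placeholders):

        def step_one():
            print 'one'               # stands in for unique code 1

        def step_two():
            raise ValueError('boom')  # a step that fails

        def step_three():
            print 'three'

        for step in (step_one, step_two, step_three):
            try:
                step()
            except Exception:
                pass                  # skip this step; continue with the next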

    Read the article

  • Implement loops for python 3

    - by Alex
    Implement these loops:

        1. Total up the product of the numbers from 1 to x.
        2. Total up the product of the numbers from a to b.
        3. Total up the sum of the numbers from a to b.
        4. Total up the sum of the numbers from 1 to x.
        5. Count the number of characters in a string s.

    I'm very lost on implementing loops; these are just some examples that I am having trouble with. If someone could help me understand how to do them, that would be awesome.
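
    All five follow the same accumulator pattern: start from the identity value (1 for a product, 0 for a sum or a count) and fold each item in. A sketch covering the a-to-b cases (the 1-to-x versions are just a = 1):

        def product_from(a, b):
            total = 1                    # identity for products
            for n in range(a, b + 1):    # range's end is exclusive, hence b + 1
                total *= n
            return total

        def sum_from(a, b):
            total = 0                    # identity for sums
            for n in range(a, b + 1):
                total += n
            return total

        def count_chars(s):
            count = 0
            for _ in s:                  # one step per character
                count += 1
            return count

        print(product_from(1, 5))    # 120
        print(sum_from(3, 6))        # 18
        print(count_chars('hello'))  # 5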

    Read the article

  • How to send some data to a thread in Python on Google App Engine

    - by zjm1126
        from google.appengine.ext import db

        class Log(db.Model):
            content = db.StringProperty(multiline=True)

        class MyThread(threading.Thread):
            def run(self, request):
                #logs_query = Log.all().order('-date')
                #logs = logs_query.fetch(3)
                log = Log()
                log.content = request.POST.get('content', None)
                log.put()

        def Log(request):
            thr = MyThread()
            thr.start(request)
            return HttpResponse('')

    The error is:

        Exception in thread Thread-1:
        Traceback (most recent call last):
          File "D:\Python25\lib\threading.py", line 486, in __bootstrap_inner
            self.run()
          File "D:\zjm_code\helloworld\views.py", line 33, in run
            log.content = request.POST.get('content', None)
        NameError: global name 'request' is not defined
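
    The cause, with a sketch of the usual fix: Thread.start() takes no arguments and run() is invoked with only self, so request never reaches the thread. Pass the data through the constructor (or through threading.Thread's args= parameter) instead; the view is renamed here to avoid shadowing the Log model:

        import threading
        from django.http import HttpResponse

        class MyThread(threading.Thread):
            def __init__(self, content):
                threading.Thread.__init__(self)
                self.content = content   # hand the data over at construction

            def run(self):               # run() receives no extra arguments
                log = Log()
                log.content = self.content
                log.put()

        def log_view(request):
            thr = MyThread(request.POST.get('content', None))
            thr.start()                  # start() never takes arguments
            return HttpResponse('')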

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped CSV into a Mongo collection, but there is a catch: every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows: every record read from the CSV is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the CSV header), and some values converted (to datetime, to integers, to floats, etc.). For each record read from the CSV, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, but it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime percall  cumtime  percall filename:lineno(function)
             1    0.012   0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
             1    0.001   0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
             1  853.558 853.558  853.558  853.558 {raw_input}
             1    0.598   0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965   0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927   0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842   0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542   0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512   0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971   0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime percall  cumtime  percall filename:lineno(function)
             1    0.028   0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
             1    0.017   0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526 {raw_input}
             1    0.766   0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147   0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571   0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957   0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616   0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134   0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396   0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
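
    Given the profiles, the Mongo round-trip per record dominates. One sketch of a cheap improvement, before parallelizing: memoize the timezone lookup on coordinates rounded to roughly the lookup radius, so records from the same area reuse an earlier answer instead of issuing a new query. (The value of tz_lookup_radius and the rounding granularity are assumptions to tune; the cache trades a little positional exactness for speed.)

        from bson.son import SON

        tz_cache = {}
        tz_lookup_radius = 0.05   # assumed; use the import code's constant

        def lookup_tz(timezones, lng, lat):
            key = (round(lng, 1), round(lat, 1))
            if key not in tz_cache:
                tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return tz_cache[key]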

    Read the article

  • PYTHON: Look for match in a nested list

    - by elfuego1
    Hello everybody, I have two nested lists of different sizes:

        A = [[1, 7, 3, 5], [5, 5, 14, 10]]
        B = [[1, 17, 3, 5], [1487, 34, 14, 74], [1487, 34, 3, 87], [141, 25, 14, 10]]

    I'd like to gather into a list L every nested list b from B whose slice b[2:4] equals a[2:4] for some nested list a in A:

        L = [[1, 17, 3, 5], [141, 25, 14, 10]]

    Would you help me with this?
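
    A sketch of one way to do it: collect the a[2:4] slices once as tuples (so they can live in a set), then keep each sublist of B whose own [2:4] slice appears among them:

        A = [[1, 7, 3, 5], [5, 5, 14, 10]]
        B = [[1, 17, 3, 5], [1487, 34, 14, 74], [1487, 34, 3, 87], [141, 25, 14, 10]]

        keys = set(tuple(a[2:4]) for a in A)
        L = [b for b in B if tuple(b[2:4]) in keys]

        print(L)  # [[1, 17, 3, 5], [141, 25, 14, 10]]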

    Read the article
