Search Results

Search found 18238 results on 730 pages for 'python gui'.

Page 469/730

  • Project Euler: problem 8

    - by Marijus
        n = # some ridiculously large number, omitted
        N = [int(i) for i in str(n)]
        maxProduct = 0
        for i in range(0, len(N)-4):
            newProduct = 1
            is_cons = 0
            for j in range(i, i+4):
                if N[j] == N[j+1] - 1:
                    is_cons += 1
            if is_cons == 5:
                for j in range(i, i+5):
                    newProduct *= N[j]
                if newProduct > maxProduct:
                    maxProduct = newProduct
        print maxProduct

    I've been working on this problem for hours now and I can't get this to work. I've tried doing this algorithm on paper and it works just fine. Could you give me some hints about what's wrong?
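    A hedged sketch of what is presumably intended (Project Euler 8 asks for the greatest product of a fixed number of adjacent digits, originally five): there is no need to compare digit values at all, only to multiply each window of adjacent digits. Assuming n is the large number from the problem statement:

        # Sketch: greatest product of WINDOW adjacent digits of n.
        WINDOW = 5
        digits = [int(ch) for ch in str(n)]

        max_product = 0
        for i in range(len(digits) - WINDOW + 1):
            product = 1
            for d in digits[i:i + WINDOW]:   # the WINDOW adjacent digits starting at i
                product *= d
            if product > max_product:
                max_product = product

        print(max_product)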

    Read the article

  • Related to list and file handling?

    - by kaushik
    I have a file whose lines are written as Python lists, for example:

        [1,'ab','fgf','ssd']
        [2,'eb','ghf','hhsd']
        [3,'ag','rtf','ssfdd']

    I want to read the file line by line with f.readline() and assign each line to a list, so that I can use list operations on it in the program. I tried:

        k = []
        k = f.readline()
        print k[1]

    I expected this to show the second element of the list on the first line, but it showed just part of that line and the output was '1'. How do I get the expected output? Please suggest.
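    One possible approach, sketched under the assumption that every line of the file really is a valid Python list literal: parse each line with ast.literal_eval instead of indexing into the raw string.

        import ast

        # Sketch: turn each line such as [1,'ab','fgf','ssd'] into a real Python list.
        # Assumes the file is named 'data.txt'; adjust the path as needed.
        with open('data.txt') as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                row = ast.literal_eval(line)   # now a list, not a string
                print(row[1])                  # second element, e.g. 'ab'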

    Read the article

  • ValueError: setting an array element with a sequence.

    - by MedicalMath
    This code:

        import numpy as p

        def firstfunction():
            UnFilteredDuringExSummaryOfMeansArray = []
            MeanOutputHeader = ['TestID','ConditionName','FilterType','RRMean','HRMean',
                                'dZdtMaxVoltageMean','BZMean','ZXMean','LVETMean','Z0Mean',
                                'StrokeVolumeMean','CardiacOutputMean','VelocityIndexMean']
            dataMatrix = BeatByBeatMatrixOfMatrices[column]
            roughTrimmedMatrix = p.array(dataMatrix[1:,1:17])
            trimmedMatrix = p.array(roughTrimmedMatrix, dtype=p.float64)
            myMeans = p.mean(trimmedMatrix, axis=0, dtype=p.float64)
            conditionMeansArray = [TestID, testCondition, 'UnfilteredBefore', myMeans[3],
                                   myMeans[4], myMeans[6], myMeans[9], myMeans[10], myMeans[11],
                                   myMeans[12], myMeans[13], myMeans[14], myMeans[15]]
            UnFilteredDuringExSummaryOfMeansArray.append(conditionMeansArray)
            secondfunction(UnFilteredDuringExSummaryOfMeansArray)
            return

        def secondfunction(UnFilteredDuringExSummaryOfMeansArray):
            RRDuringArray = p.array(UnFilteredDuringExSummaryOfMeansArray, dtype=p.float64)[1:,3]
            return

        firstfunction()

    Throws this error message:

        File "mypath\mypythonscript.py", line 3484, in secondfunction
          RRDuringArray = p.array(UnFilteredDuringExSummaryOfMeansArray,dtype=p.float64)[1:,3]
        ValueError: setting an array element with a sequence.

    However, this code works:

        import numpy as p
        a = range(24)
        b = p.reshape(a, (6,4))
        c = p.array(b, dtype=p.float64)[:,2]

    I re-arranged the code a bit to put it into a cogent posting, but it should more or less have the same result. Can anyone show me what to do to fix the problem in the broken code above so that it stops throwing an error message?
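    A hedged note on the usual cause, plus a small diagnostic sketch (the helper name is illustrative, not from the original code): this ValueError generally appears when the rows handed to p.array() are not all the same shape, for example when one of the myMeans[...] entries is itself an array rather than a scalar.

        import numpy as np

        # Reproducing the error: a row that contains a sequence where a scalar is
        # expected cannot be cast to float64 (exact message varies by NumPy version).
        good = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
        bad = [[1.0, 2.0, 3.0], [4.0, [5.0, 6.0], 7.0]]

        print(np.array(good, dtype=np.float64))    # works
        try:
            np.array(bad, dtype=np.float64)
        except ValueError as exc:
            print(exc)                             # "setting an array element with a sequence."

        def diagnose_rows(rows):
            """Print each row's length and element types so ragged or nested
            entries stand out; run it on the list just before the failing call."""
            for i, row in enumerate(rows):
                print(i, len(row), [type(x).__name__ for x in row])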

    Read the article

  • Return more than one field from the database with SQLAlchemy

    - by David Neudorfer
    This line: used_emails = [row.email for row in db.execute(select([halo4.c.email], halo4.c.email!=''))] Returns: ['[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]'] I use this to find a match: if recipient in used_emails: If it finds a match I need to pull another field (halo4.c.code) from the database in the same row. Any suggestions on how to do this?
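    A possible direction, sketched with the same list-style select() used above (older SQLAlchemy API) and assuming halo4 also has a code column: fetch both columns in one query and build a dict mapping email to code, so the membership test and the second field come from the same row.

        from sqlalchemy import select

        # Sketch: assumes `db`, `halo4` and `recipient` from the question above.
        rows = db.execute(select([halo4.c.email, halo4.c.code], halo4.c.email != ''))
        email_to_code = {row.email: row.code for row in rows}

        if recipient in email_to_code:
            code = email_to_code[recipient]   # the matching row's code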

    Read the article

  • Generating a .CSV with Several Columns - Use a Dictionary?

    - by Qanthelas
    I am writing a script that looks through my inventory, compares it with a master list of all possible inventory items, and tells me what items I am missing. My goal is a .csv file where the first column contains a unique key integer and then the remaining several columns have data related to that key. For example, a three-row snippet of my end-goal .csv file might look like this:

        100001,apple,fruit,medium,12,red
        100002,carrot,vegetable,medium,10,orange
        100005,radish,vegetable,small,10,red

    The data for this is being drawn from a couple of sources. First, a query to an API server gives me a list of keys for items that are in inventory. Second, I read a .csv file into a dict that matches keys with item names for all possible keys. A snippet of the first 5 rows of this .csv file might look like this:

        100001,apple
        100002,carrot
        100003,pear
        100004,banana
        100005,radish

    Note how any key in my list of inventory will be found in this two-column .csv file that gives all keys and their corresponding item names, and this list minus my inventory on hand yields what I'm looking for (which is the inventory I need to get). So far I can get a .csv file that contains just the keys and item names for the items that I don't have in inventory. Given a list of inventory on hand like this:

        100003,100004

    a snippet of my resulting .csv file looks like this:

        100001,apple
        100002,carrot
        100005,radish

    This means that I have pear and banana in inventory (so they are not in this .csv file). To get this I have a function to get an item name when given an item id that looks like this:

        def getNames(id_to_name, ids):
            return [id_to_name[id] for id in ids]

    Then there is a function which gives a list of keys as integers from my inventory server API call, and I've run it like this:

        invlist = ServerApiCallFunction(AppropriateInfo)

    A third function takes this invlist as its input and returns a dict of keys (the item ids) and names for the items I don't have. It also writes the information of this dict to a .csv file. I am using the set1 - set2 method to do this. It looks like this:

        def InventoryNumbers(inventory):
            with open(csvfile, 'w') as c:
                c.write('InvName' + ',InvID' + '\n')
            missinginvnames = []
            with open("KeyAndItemNameTwoColumns.csv", "rb") as fp:
                reader = csv.reader(fp, skipinitialspace=True)
                fp.readline()  # skip header
                invidsandnames = {int(id): str.upper(name) for id, name in reader}
            invids = set(invidsandnames.keys())
            invnames = set(invidsandnames.values())
            invonhandset = set(inventory)
            missinginvidsset = invids - invonhandset
            missinginvids = list(missinginvidsset)
            missinginvnames = getNames(invidsandnames, missinginvids)
            missinginvnameswithids = dict(zip(missinginvnames, missinginvids))
            print missinginvnameswithids
            with open(csvfile, 'a') as c:
                for invname, invid in missinginvnameswithids.iteritems():
                    c.write(invname + ',' + str(invid) + '\n')
            return missinginvnameswithids

    Which I then call like this:

        InventoryNumbers(invlist)

    With that explanation, now on to my question. I want to expand the data in this output .csv file by adding in additional columns. The data for this would be drawn from another .csv file, a snippet of which would look like this:

        100001,fruit,medium,12,red
        100002,vegetable,medium,10,orange
        100003,fruit,medium,14,green
        100004,fruit,medium,12,yellow
        100005,vegetable,small,10,red

    Note how this does not contain the item name (so I have to pull that from a different .csv file that just has the two columns of key and item name), but it does use the same keys.
    I am looking for a way to bring in this extra information so that my final .csv file will not just tell me the keys (which are item ids) and item names for the items I don't have in stock, but will also have columns for type, size, number, and color. One option I've looked at is defaultdict from collections, but I'm not sure if this is the best way to go about what I want to do. If I did use this method I'm not sure exactly how I'd call it to achieve my desired result. If some other method would be easier I'm certainly willing to try that, too. How can I take my dict of keys and corresponding item names for items that I don't have in inventory, and add to it this extra information in such a way that I could output it all to a .csv file?

    EDIT: As I typed this up it occurred to me that I might make things easier on myself by creating a new single .csv file that would have data in the form key,item name,type,size,number,color (basically just copying the column for item name into the .csv that already has the other information for each key). This way I would only need to draw from one .csv file rather than from two. Even if I did this, though, how would I go about making my desired .csv file based on only those keys for items not in inventory?
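    One way to bring the extra columns in, sketched under the assumption that the attribute file is called ItemAttributes.csv, that every missing key appears in it, and keeping the Python 2 style of the code above: load that file into a second dict keyed by item id, then write one combined row per missing item.

        import csv

        # Sketch: merge item names with the extra attribute columns for missing items.
        # Assumes ItemAttributes.csv has rows like: 100001,fruit,medium,12,red
        def write_missing_with_attributes(missinginvnameswithids, outpath):
            attributes = {}
            with open("ItemAttributes.csv", "rb") as fp:
                for row in csv.reader(fp, skipinitialspace=True):
                    attributes[int(row[0])] = row[1:]          # type, size, number, color

            with open(outpath, "wb") as out:
                writer = csv.writer(out)
                writer.writerow(["InvID", "InvName", "Type", "Size", "Number", "Color"])
                for invname, invid in missinginvnameswithids.items():
                    writer.writerow([invid, invname] + attributes.get(invid, ["", "", "", ""]))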

    Read the article

  • Prevent web2py from caching?

    - by Joe
    Hi! I'm working with web2py, and for some reason web2py seems to fail to notice when code has changed in certain cases. I can't really narrow it down, but from time to time changes in the code are not reflected; web2py obviously has the old version cached somewhere. The only thing that helps is quitting web2py and restarting it (I'm using the internal server). Any hints? Thank you!

    Read the article

  • Increment number in string

    - by iform
    Hi, I am stumped... I am trying to get the following output until a certain condition is met:

        test_1.jpg
        test_2.jpg
        ...
        test_50.jpg

    The solution (if you could remotely call it that) that I have is:

        fileCount = 0
        while os.path.exists(dstPath):
            fileCount += 1
            parts = os.path.splitext(dstPath)
            dstPath = "%s_%d%s" % (parts[0], fileCount, parts[1])

    However, this produces the following output:

        test_1.jpg
        test_1_2.jpg
        test_1_2_3.jpg
        ...

    The question: how do I change the number in its current place (without appending numbers to the end)? P.S. I'm using this for a file renaming tool.
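    A sketch of one fix: split the original path once, keep the base name and extension fixed, and rebuild the candidate from them on every iteration so the suffixes never stack.

        import os

        def unique_path(dst_path):
            """Return dst_path if it is free, otherwise base_1.ext, base_2.ext, ...
            The counter is always applied to the original base name."""
            base, ext = os.path.splitext(dst_path)
            candidate = dst_path
            count = 0
            while os.path.exists(candidate):
                count += 1
                candidate = "%s_%d%s" % (base, count, ext)
            return candidate

        print(unique_path("test.jpg"))   # e.g. test_1.jpg if test.jpg already exists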

    Read the article

  • How can I measure distance with tastypie and geodjango?

    - by Twitch
    Using Tastypie and GeoDjango, I'm trying to return results of buildings located within 1 mile of a point. The Tastypie documentation states that distance lookups are not yet supported, but I am finding examples of people getting it to work, such as this discussion and this discussion on StackOverflow, but no working code examples that can be applied. The idea that I am trying to work with is that if I append a GET command to the end of a URL, then nearby locations are returned, for example: http://website.com/api/?format=json&building_point__distance_lte=[{"type": "Point", "coordinates": [153.09537, -27.52618]},{"type": "D", "m" : 1}] But when I try that, all I get back is: {"error": "Invalid resource lookup data provided (mismatched type)."} I've been poring over the Tastypie documentation for days now and just can't figure out how to implement this. I'd provide more examples, but I know they'd all be terrible. All advice is appreciated, thank you!
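    A rough, hedged direction rather than a verified answer for that exact URL format: sidestep Tastypie's filter validation for this one lookup by handling a custom GET parameter in apply_filters on the resource and building the GeoDjango distance tuple yourself. The resource, model and field names below (BuildingResource, Building, point, near, radius_miles) are placeholders.

        from django.contrib.gis.geos import Point
        from django.contrib.gis.measure import D
        from tastypie.resources import ModelResource

        class BuildingResource(ModelResource):
            class Meta:
                queryset = Building.objects.all()   # placeholder GeoDjango model
                resource_name = 'building'

            def apply_filters(self, request, applicable_filters):
                qs = super(BuildingResource, self).apply_filters(request, applicable_filters)
                near = request.GET.get('near')      # e.g. ?near=153.09537,-27.52618
                if near:
                    lon, lat = (float(v) for v in near.split(','))
                    radius = float(request.GET.get('radius_miles', 1))
                    qs = qs.filter(point__distance_lte=(Point(lon, lat), D(mi=radius)))
                return qs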

    Read the article

  • Search one element of a list in another list recursively

    - by androidnoob
    I have 2 lists:

        old_name_list = [a-1234, a-1235, a-1236]
        new_name_list = [(a-1235, a-5321), (a-1236, a-6321), (a-1234, a-4321), ... ]

    I want to search recursively whether the elements in old_name_list exist in new_name_list and return the associated value for each, e.g. the first element in old_name_list returns a-4321, the second element returns a-5321, and so on until old_name_list is exhausted. I have tried the following and it doesn't work:

        for old_name, new_name in zip(old_name_list, new_name_list):
            if old_name in new_name[0]:
                print new_name[1]

    Is the method I am using wrong, or do I just have to make some minor changes to it? Thank you in advance.
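    A sketch that avoids zip entirely (zip pairs the two lists by position, which is why the positional match fails here): build a dict from the pairs in new_name_list and look each old name up in it. The element values below assume the entries are strings.

        # Sketch: map old name -> new name via a dict built from the pair list.
        old_name_list = ['a-1234', 'a-1235', 'a-1236']
        new_name_list = [('a-1235', 'a-5321'), ('a-1236', 'a-6321'), ('a-1234', 'a-4321')]

        mapping = dict(new_name_list)          # {'a-1235': 'a-5321', ...}
        for old_name in old_name_list:
            if old_name in mapping:
                print(old_name, '->', mapping[old_name])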

    Read the article

  • Debugging (displaying) SQL command sent to the db by SQLAlchemy

    - by morpheous
    I have an ORM class called Person, which wraps around a person table. After setting up the connection to the db etc., I run the following statement:

        people = session.query(Person).all()

    The person table does not contain any data (as yet), so when I print the variable people, I get an empty list. I then renamed the table referred to in my ORM class to people_foo (which does not exist) and ran the script again. I was surprised that no exception was thrown when attempting to access a table that does not exist. I therefore have the following 2 questions:

    1. How may I set up SQLAlchemy so that it propagates db errors back to the script?
    2. How may I view (i.e. print) the SQL that is being sent to the db engine?

    If it helps, I am using PostgreSQL as the db.
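    For the second question, SQLAlchemy will log every statement it sends if the engine is created with echo=True, or via the standard logging module; a minimal sketch with a placeholder connection URL:

        import logging
        from sqlalchemy import create_engine

        # Sketch: either of these makes SQLAlchemy emit the SQL it executes.
        engine = create_engine('postgresql://user:password@localhost/mydb', echo=True)

        # Equivalent through logging, handy if you want handlers/files instead of stdout.
        logging.basicConfig()
        logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

    As for the first question, .all() does execute the query immediately, so a genuinely missing table would normally raise a ProgrammingError; the echoed SQL should show which database and table name are actually being queried.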

    Read the article

  • if/elif chain making code look ugly: any cleaner solution?

    - by Vishal
    I have around 20 functions (is_func1, is_func2, is_func3, ...) returning booleans. I assume there is only one function which returns True, and I want that one! I am doing:

        if is_func1(param1, param2):
            abc(1)              # I pass 1 to abc
            some_list.append(1)
        elif is_func2(param1, param2):
            abc(2)              # I pass 2 to abc
            some_list.append(2)
        ...
        elif is_func20(param1, param2):
            ...

    Please note: param1 and param2 are different for each function, and abc and some_list take parameters depending on the function. The code looks big and there is repetition in calling abc and some_list; I could pull this logic into a function, but is there any other cleaner solution? I can think of putting the functions in a data structure and looping to call them.
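    One common cleanup, sketched with the placeholder names from the question: put the predicates in a list, pair each with the value to record, and loop, breaking at the first one that returns True.

        # Sketch: table-driven dispatch instead of a 20-branch if/elif chain.
        checks = [
            (is_func1, 1),
            (is_func2, 2),
            # ... up to (is_func20, 20)
        ]

        for predicate, value in checks:
            if predicate(param1, param2):
                abc(value)
                some_list.append(value)
                break            # only one predicate is expected to match

    If each predicate needs its own arguments, store those in the tuple as well, e.g. (is_func1, (p1a, p1b), 1), and unpack them in the loop.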

    Read the article

  • How can I receive percent encoded slashes with Django on App Engine?

    - by J. Frankenstein
    I'm using Django with Google's App Engine. I want to send information to the server with percent-encoded slashes. A request like http://localhost/turtle/waxy%2Fsmooth should match against a URL pattern like r'^/turtle/(?P<type>([A-Za-z]|%2F)+)$'. The request gets to the server intact, but sometime before it is compared against the regex the %2F is converted into a forward slash. What can I do to stop the %2Fs from being converted into forward slashes? Thanks!
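    One workaround sometimes used for stacks that decode %2F before routing (offered here as a hedged sketch, not a confirmed App Engine answer): percent-encode the percent sign itself on the client, so the path seen by the URL resolver still contains the literal %2F the pattern expects. Python 2 style, matching the era of the question:

        import urllib

        segment = 'waxy/smooth'
        once = urllib.quote(segment, safe='')    # 'waxy%2Fsmooth'
        twice = urllib.quote(once, safe='')      # 'waxy%252Fsmooth'
        url = 'http://localhost/turtle/' + twice
        print(url)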

    Read the article

  • Detecting and interacting with long running process

    - by jacquesb
    I want a script to start and interact with a long-running process. The process is started the first time the script is executed; after that the script can be executed repeatedly, but will detect that the process is already running. The script should be able to interact with the process. I would like this to work on Unix and Windows. I am unsure how to do this. Specifically, how do I detect whether the process is already running and open a pipe to it? Should I use sockets (e.g. registering the server process on a known port and then checking whether it responds), or should I use named pipes? Or is there some easier way?
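    A cross-platform Python 3 sketch of the socket approach: the long-running process listens on a fixed localhost port; each script run tries to connect, and spawns the process only if nothing answers. The port number and the worker script name are placeholders.

        import socket
        import subprocess
        import sys
        import time

        PORT = 48765   # arbitrary fixed localhost port (placeholder)

        def connect_or_start():
            """Return a socket connected to the worker process, starting it first
            if nothing is listening on PORT yet."""
            for attempt in range(50):
                try:
                    return socket.create_connection(('127.0.0.1', PORT), timeout=1)
                except OSError:
                    if attempt == 0:
                        # Nothing listening: spawn the long-running process (placeholder script).
                        subprocess.Popen([sys.executable, 'worker.py'])
                    time.sleep(0.2)
            raise RuntimeError('could not reach the worker process')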

    Read the article

  • sip.conf configuration file - add new line to each record

    - by Flukey
    I have a sip configuration file which looks like this:

        [1664]
        username=1664
        mailbox=1664@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1679]
        username=1679
        mailbox=1679@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1700]
        username=1700
        mailbox=1700@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

        [1701]
        username=1701
        mailbox=1701@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no

    For each record I need to add another line (vmexten for each record), so the above becomes:

        [1664]
        username=1664
        mailbox=1664@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1664

        [1679]
        username=1679
        mailbox=1679@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1679

        [1700]
        username=1700
        mailbox=1700@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1700

        [1701]
        username=1701
        mailbox=1701@8360
        host=192.168.254.3
        type=friend
        subscribemwi=no
        vmexten=1701

    What would you say is the quickest way to do this? There are hundreds of records in the file, so modifying all of the records by hand would take a long time. Would you use regex? Would you use sed? I'm interested to know how you would approach the problem. Thanks.
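    A sed one-liner is awkward when the new line has to go at the end of each block, so here is a short Python sketch instead: remember the number from each [NNNN] header and emit vmexten=NNNN when the block ends (at a blank line, the next header, or end of file). The input and output filenames are placeholders.

        # Sketch: append "vmexten=<number>" to every [NNNN] block in sip.conf.
        import re

        section = None
        out = []

        def close_section():
            global section
            if section is not None:
                out.append('vmexten=%s\n' % section)
                section = None

        with open('sip.conf') as src:
            for line in src:
                m = re.match(r'\[(\d+)\]\s*$', line)
                if m:
                    close_section()          # finish the previous block first
                    section = m.group(1)
                    out.append(line)
                elif line.strip() == '':
                    close_section()          # a blank line ends a block
                    out.append(line)
                else:
                    out.append(line)
        close_section()                      # handle the final block

        with open('sip.conf.new', 'w') as dst:
            dst.writelines(out)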

    Read the article

  • Removing Item From List - during iteration - what's wrong with this idiom?

    - by monojohnny
    As an experiment, I did this: letters=['a','b','c','d','e','f','g','h','i','j','k','l'] for i in letters: letters.remove(i) print letters The last print shows that not all items were removed ? (every other was). IDLE 2.6.2 >>> ================================ RESTART ================================ >>> ['b', 'd', 'f', 'h', 'j', 'l'] >>> What's the explanation for this ? How it could this be re-written to remove every item ?

    Read the article

  • Matching strings

    - by kiran
    Write two functions, called countSubStringMatch and countSubStringMatchRecursive that take two arguments, a key string and a target string. These functions iteratively and recursively count the number of instances of the key in the target string. You should complete definitions for def countSubStringMatch(target,key): and def countSubStringMatchRecursive (target, key): For the remaining problems, we are going to explore other substring matching ideas. These problems can be solved with either an iterative function or a recursive one. You are welcome to use either approach, though you may find iterative approaches more intuitive in these cases of matching linear structures.
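    A sketch of both counting styles, with overlapping matches counted (e.g. 'aaa' contains 'aa' twice):

        def countSubStringMatch(target, key):
            """Iterative count of (possibly overlapping) occurrences of key in target."""
            count = 0
            for i in range(len(target) - len(key) + 1):
                if target[i:i + len(key)] == key:
                    count += 1
            return count

        def countSubStringMatchRecursive(target, key):
            """Recursive version: find the first occurrence, then recurse past it."""
            pos = target.find(key)
            if pos == -1:
                return 0
            return 1 + countSubStringMatchRecursive(target[pos + 1:], key)

        print(countSubStringMatch('atgacatgcacaagtatgcat', 'atgc'))            # 2
        print(countSubStringMatchRecursive('atgacatgcacaagtatgcat', 'atgc'))   # 2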

    Read the article

  • Does dict.update affect a function's argspec?

    - by sbox32
        import inspect

        class Test:
            def test(self, p, d={}):
                d.update(p)
                return d

        print inspect.getargspec(getattr(Test, 'test'))[3]
        print Test().test({'1':True})
        print inspect.getargspec(getattr(Test, 'test'))[3]

    I would expect the argspec for Test.test not to change but because of dict.update it does. Why?
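    This is the mutable-default-argument behaviour: the default dict is created once, when the function is defined, so d.update(p) mutates the very object that the argspec reports. A sketch of the usual fix (getargspec is kept to mirror the question; newer Python would use inspect.signature, and getargspec was removed in 3.11):

        import inspect

        class Test(object):
            def test(self, p, d=None):
                # Fresh dict per call instead of one shared default object.
                if d is None:
                    d = {}
                d.update(p)
                return d

        print(inspect.getargspec(Test.test).defaults)   # (None,)
        Test().test({'1': True})
        print(inspect.getargspec(Test.test).defaults)   # still (None,)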

    Read the article

  • Parsing a Multi-Index Excel File in Pandas

    - by rhaskett
    I have a time series Excel file with a tri-level column MultiIndex that I would like to parse successfully if possible. There are some results on Stack Overflow on how to do this for an index, but not for the columns, and the parse function's header argument does not seem to take a list of rows. The ExcelFile looks like the following:

        Column A is all the time series dates starting on A4
        Column B has top_level1 (B1)  mid_level1 (B2)  low_level1 (B3)  data (B4-B100+)
        Column C has null       (C1)  null       (C2)  low_level2 (C3)  data (C4-C100+)
        Column D has null       (D1)  mid_level2 (D2)  low_level1 (D3)  data (D4-D100+)
        Column E has null       (E1)  null       (E2)  low_level2 (E3)  data (E4-E100+)
        ...

    So there are two low_level values, many mid_level values and a few top_level values, but the trick is that the top and mid level values are null and are assumed to be the values to the left. So, for instance, all the columns above would have top_level1 as the top multi-index value. My best idea so far is to use transpose, but then it fills in Unnamed: # everywhere and doesn't seem to work. In Pandas 0.13 read_csv seems to have a header parameter that can take a list, but this doesn't seem to work with parse.
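    In later pandas versions read_excel accepts a list for header, which builds the column MultiIndex directly and forward-fills the blank top and mid levels; a hedged sketch (the filename is a placeholder, and very old pandas such as 0.13 may not support this):

        import pandas as pd

        # Sketch: rows 1-3 of the sheet become a 3-level column MultiIndex,
        # column A (the dates) becomes the row index.
        df = pd.read_excel(
            'timeseries.xlsx',          # placeholder filename
            sheet_name=0,
            header=[0, 1, 2],           # top_level / mid_level / low_level rows
            index_col=0,
        )

        print(df.columns[:4])           # e.g. (top_level1, mid_level1, low_level1), ...
        print(df['top_level1'].head())  # slice one top-level group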

    Read the article

  • How can I refresh wx.Frame?

    - by aF
    Hello, I have 3 panels and I want to make drags on them. The problem is that when I do a drag on one of them, this happens (screenshot omitted). How can I refresh the frame so that its color appears again where the panel is no longer there?

    Read the article

  • Can pydoc/help() hide the documentation for inherited class methods and attributes?

    - by EOL
    When declaring a class that inherits from a specific class: class C(dict): added_attribute = 0 the documentation for class C lists all the methods of dict (either through help(C) or pydoc). Is there a way to hide the inherited methods from the automatically generated documentation (the documentation string can refer to the base class, for non-overwritten methods)? or is it impossible? This would be useful: pydoc lists the functions defined in a module after its classes. Thus, when the classes have a very long documentation, a lot of less than useful information is printed before the new functions provided by the module are presented, which makes the documentation harder to exploit (you have to skip all the documentation for the inherited methods until you reach something specific to the module being documented).

    Read the article

  • Pickled my dictionary from ZODB but I got a smaller one?

    - by Someone Someoneelse
    I use ZODB and I want to copy my 'database_1.fs' file to another 'database_2.fs', so I opened the root dictionary of 'database_1.fs' and pickle.dump it into a text file. Then I pickle.load it into a dictionary variable, and in the end I update the root dictionary of the other 'database_2.fs' with that dictionary variable. It works, but I wonder why the size of 'database_1.fs' is not equal to the size of the other 'database_2.fs'. They are still copies of each other.

        def openstorage(store):  # opens the database
            data = {}
            data['file'] = filestorage
            data['db'] = DB(data['file'])
            data['conn'] = data['db'].open()
            data['root'] = data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):  # close the database after saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then this is what I do:

        import pickle

        loc1 = 'G:\\database_1.fs'
        op1 = openstorage(loc1)
        root1 = getroot(op1)
        loc2 = 'G:database_2.fs'
        op2 = openstorage(loc2)
        root2 = getroot(op2)

        >>> len(root1)
        215
        >>> len(root2)
        0

        pickle.dump(root1, open("save.txt", "wb"))
        item = pickle.load(open("save.txt", "rb"))   # now item is a dictionary
        root2.update(item)
        closestorage(op1)
        closestorage(op2)

        # after I open both of the databases
        # I get the same keys in both databases
        # but database_2.fs is smaller than database_1.fs in size, I mean

        >>> len(root2) == len(root1) == 215   # they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs; (2) both of them have the same length and the same indexes.
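    A hedged note: the size difference by itself is expected, because a FileStorage .fs file is append-only and keeps old object revisions, so a freshly written copy of the same data is smaller than a file that has accumulated history. Packing the original usually shrinks it; a minimal sketch, reusing op1 from the snippet above:

        import time

        # Sketch: pack database_1.fs so old object revisions are removed; after
        # this its size should be much closer to that of the freshly written copy.
        op1['db'].pack(time.time())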

    Read the article

  • How to attach a line to a moving object?

    - by snow-spur
    Hello, I have designed a maze and I want to draw a path between the cells as the 'person' moves from one cell to the next, so each time I move the cell a line is drawn. I have done this so far, but I do not want to show my full code. However, I get an error saying Circle has no attribute center.

    My circle, which is my cell:

        center = Point(15, 15)
        c = Circle(center, 12)
        c.setFill('blue')
        c.setOutline('yellow')
        c.draw(win)
        p1 = Point(c.center().getx(), c.center().gety())

    This bit is in my loop:

        p2 = Point(getx(), gety())
        line = graphics.Line(p1, p2)
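    In Zelle's graphics.py the accessors are methods named getCenter(), getX() and getY() (note the capitalisation); Circle has no center attribute. A hedged sketch of drawing the segment from the circle's current centre to the next cell's centre (the next-cell coordinates are placeholders):

        from graphics import GraphWin, Point, Circle, Line

        # Sketch: draw the cell, then a line from its centre to the next position.
        win = GraphWin('maze', 300, 300)

        c = Circle(Point(15, 15), 12)
        c.setFill('blue')
        c.setOutline('yellow')
        c.draw(win)

        p1 = c.getCenter()                      # Point at the circle's current centre
        p2 = Point(45, 15)                      # placeholder: centre of the next cell
        Line(p1, p2).draw(win)

        c.move(p2.getX() - p1.getX(), p2.getY() - p1.getY())   # move the "person"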

    Read the article
