Search Results

Search found 15224 results on 609 pages for 'parallel python'.


  • How to detect a sign change for elements in a numpy array

    - by cb160
    I have a numpy array with positive and negative values in it: a = array([1,1,-1,-2,-3,4,5]). I want to create another array which contains a value at each index where a sign change occurs (for example, if the current element is positive and the previous element is negative, or vice versa). For the array above, I would expect to get the following result: array([0,0,1,0,0,1,0]). Alternatively, a list of the positions in the array where the sign changes occur, or a list of booleans instead of 0's and 1's, is fine.
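
    A minimal sketch of one way to do this with NumPy (assuming no zeros in the array, since numpy.sign treats 0 as its own sign):

        import numpy as np

        a = np.array([1, 1, -1, -2, -3, 4, 5])

        # 1 wherever the sign differs from the previous element, else 0
        signs = np.sign(a)
        changes = np.zeros(a.shape, dtype=int)
        changes[1:] = (np.diff(signs) != 0).astype(int)
        # changes -> array([0, 0, 1, 0, 0, 1, 0])

        # or just the positions where the sign changes
        idx = np.where(changes)[0]   # array([2, 5])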

    Read the article

  • Conditional CellRendererCombo in pyGTK TreeView

    - by Präriewolf
    I have a two-column TreeView attached to a ListStore. Both columns are CellRendererCombo combo boxes. When the user selects an entry in the first box, I need to dynamically load a set of options in the second. For example, the behavior I want is: on row 0, the user selects "Alphabet" in the first column box and the second column box is populated with the letters A-Z. On row 1, the user selects "Numbers" in the first column box and the second column box is populated with the numbers 0-9. On row 2, the user selects "Alphabet" in the first column box and the second column box is populated with the letters A-Z, and so on. Does anyone know how to do this, or has anyone seen any open source PyGTK or GTK projects with similar behavior that I could analyze?
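
    One way to get this behaviour (a sketch assuming PyGTK 2; the sub-model contents are hypothetical) is to give the second column a cell data function that swaps the CellRendererCombo's model depending on what the first column holds:

        import gtk

        letters = gtk.ListStore(str)
        for ch in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
            letters.append([ch])
        numbers = gtk.ListStore(str)
        for n in '0123456789':
            numbers.append([n])

        submodels = {'Alphabet': letters, 'Numbers': numbers}

        def second_col_data_func(column, cell, model, treeiter):
            # Pick the combo's option list from the first column's value.
            choice = model.get_value(treeiter, 0)
            cell.set_property('model', submodels.get(choice, letters))
            cell.set_property('text-column', 0)

        # 'column2' / 'renderer2' stand for the second TreeViewColumn and its
        # CellRendererCombo from your existing code:
        # column2.set_cell_data_func(renderer2, second_col_data_func)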

    Read the article

  • Do not match if word appears in regex

    - by David542
    I have a URL, and I want it to NOT match if the word 'season' is contained in the URL. Here are two examples:

        Contains 'season', do not match: 'http://imdb.com/title/tt0285331/episodes?this=1&season=7&ref_=tt_eps_sn_7'
        Does not contain 'season', match: 'http://imdb.com/title/tt0285331/'

    Here is what I have so far, but I'm afraid the .+ will match everything until the end. What would be the correct regex to use here?

        r'http://imdb.com/title/tt(\d)+/.+^[season].+'
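
    One option is a negative lookahead anchored at the start, so the whole URL is rejected if 'season' appears anywhere in it (a sketch; tighten the tail as needed):

        import re

        pattern = re.compile(r'^(?!.*season)http://imdb\.com/title/tt\d+/.*$')

        ok = 'http://imdb.com/title/tt0285331/'
        bad = 'http://imdb.com/title/tt0285331/episodes?this=1&season=7&ref_=tt_eps_sn_7'
        print(bool(pattern.match(ok)))    # True
        print(bool(pattern.match(bad)))   # False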

    Read the article

  • List filtering: list comprehension vs. lambda + filter

    - by Agos
    I happened to find myself having a basic filtering need: I have a list and I have to filter it by an attribute of the items. My code looked like this:

        list = [i for i in list if i.attribute == value]

    But then I thought, wouldn't it be better to write it like this?

        filter(lambda x: x.attribute == value, list)

    It's more readable, and if needed for performance the lambda could be taken out to gain something. Question is: are there any caveats in using the second way? Any performance difference? Am I missing the Pythonic Way™ entirely and should I do it in yet another way (such as using itemgetter instead of the lambda)? Thanks in advance.
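
    Both forms do the same work; any difference is constant overhead (the extra lambda call per element usually makes filter a touch slower, and on Python 3 filter returns an iterator you would wrap in list()). A quick way to check on your own data, with a hypothetical stand-in for the items:

        import timeit
        from collections import namedtuple

        Item = namedtuple('Item', 'attribute')          # stand-in for the real class
        items = [Item(i % 5) for i in range(10000)]
        value = 3

        comp = lambda: [i for i in items if i.attribute == value]
        filt = lambda: list(filter(lambda x: x.attribute == value, items))

        print(timeit.timeit(comp, number=200))
        print(timeit.timeit(filt, number=200))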

    Read the article

  • Pygame single push event

    - by Miller92Time
    In Pygame I am trying to translate an image by 10% in each direction using the arrow keys. Right now the code I am using moves the image for as long as the key is held down; what I want is for it to move only once per press, regardless of whether the key is still held down.

        if event.type == KEYDOWN:
            if (event.key == K_RIGHT):
                DISPLAYSURF.fill((255,255,255))  # Clears the screen
                translation_x(100)
                draw(1)
            if (event.key == K_LEFT):
                DISPLAYSURF.fill((255,255,255))  # Clears the screen
                translation_x(-100)
                draw(2)
            if (event.key == K_UP):
                DISPLAYSURF.fill((255,255,255))  # Clears the screen
                translation_y(100)
                draw(3)
            if (event.key == K_DOWN):
                DISPLAYSURF.fill((255,255,255))  # Clears the screen
                translation_y(-100)
                draw(4)

    Is there a simpler way of implementing this besides using time.sleep?
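
    A KEYDOWN event is normally delivered once per physical key press, so repeated movement usually means key repeat was enabled with pygame.key.set_repeat(...) or the keys are being polled every frame with pygame.key.get_pressed(). A minimal event-driven sketch (the translation/draw calls are stand-ins):

        import pygame
        from pygame.locals import KEYDOWN, QUIT, K_LEFT, K_RIGHT

        pygame.init()
        screen = pygame.display.set_mode((400, 400))
        pygame.key.set_repeat()            # no arguments: key repeat disabled (the default)

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == QUIT:
                    running = False
                elif event.type == KEYDOWN:          # fires once per press
                    if event.key == K_RIGHT:
                        print('translate right once')
                    elif event.key == K_LEFT:
                        print('translate left once')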

    Read the article

  • Django Find Out if User is Authenticated in Custom Tag

    - by greggory.hz
    I'm trying to create a custom tag. Inside this custom tag, I want to be able to have some logic that checks if the user is logged in, and then have the tag rendered accordingly. This is what I have:

        def user_actions(context):
            request = template.Variable('request').resolve(context)
            return {
                'auth': request['user'].is_athenticated()
            }
        register.inclusion_tag('layout_elements/user_actions.html', takes_context=True)(user_actions)

    When I run this, I get this error:

        Caught VariableDoesNotExist while rendering: Failed lookup for key [request] in u'[{}]'

    The view that renders this ends like this:

        return render_to_response('start/home.html', {}, context_instance=RequestContext(request))

    Why doesn't the tag get a RequestContext object instead of the Context object? How can I get the tag to receive the RequestContext instead of the Context?

    EDIT: Whether or not it's possible to get a RequestContext inside a custom tag, I'd still be interested to know the "correct" or best way to determine a user's authentication state from within the custom tag. If that's not possible, then perhaps that kind of logic belongs elsewhere? Where?
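
    With takes_context=True the tag receives the template context itself, so (assuming the view renders with RequestContext and the auth/request context processors are enabled) the user can be read straight from the context rather than through template.Variable. A hedged sketch:

        from django import template

        register = template.Library()

        @register.inclusion_tag('layout_elements/user_actions.html', takes_context=True)
        def user_actions(context):
            # 'user' is added by the auth context processor when the view
            # uses RequestContext; is_authenticated is a method in older
            # Django versions and a property in newer ones.
            user = context.get('user')
            return {'auth': bool(user and user.is_authenticated())}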

    Read the article

  • Object for storing strings captured from print output

    - by evg
        class MyWriter:
            def __init__(self, stdout):
                self.stdout = stdout
                self.dumps = []

            def write(self, text):
                self.stdout.write(smart_unicode(text).encode('cp1251'))
                self.dumps.append(text)

            def close(self):
                self.stdout.close()

        writer = MyWriter(sys.stdout)
        save = sys.stdout
        sys.stdout = writer

    I use the self.dumps list to store the data captured from prints. Is there a more convenient object for storing string lines in memory? Ideally I want to dump it all to one big string; I can get that with "\n".join(self.dumps) from the code above. Or maybe it's better to just concatenate strings, as in self.dumps += text?
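
    The usual in-memory buffer for this is StringIO (io.StringIO on Python 3, StringIO/cStringIO on Python 2); it collects the writes and hands back one big string, which avoids the join step. A sketch (the cp1251 re-encoding from the original is left out here):

        import sys
        from io import StringIO    # Python 3; on Python 2: from StringIO import StringIO

        class TeeWriter(object):
            """Write to the real stdout and keep a copy in memory."""
            def __init__(self, stdout):
                self.stdout = stdout
                self.buffer = StringIO()

            def write(self, text):
                self.stdout.write(text)
                self.buffer.write(text)

            def flush(self):
                self.stdout.flush()

        writer = TeeWriter(sys.stdout)
        saved, sys.stdout = sys.stdout, writer
        print('captured line')
        sys.stdout = saved
        dump = writer.buffer.getvalue()    # everything printed, as one string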

    Read the article

  • Using adaptive step sizes with scipy.integrate.ode

    - by Mike
    The (brief) documentation for scipy.integrate.ode says that two methods (dopri5 and dop853) have stepsize control and dense output. Looking at the examples and the code itself, I can only see a very simple way to get output from an integrator. Namely, it looks like you just step the integrator forward by some fixed dt, get the function value(s) at that time, and repeat. My problem has pretty variable timescales, so I'd like to just get the values at whatever time steps it needs to evaluate to achieve the required tolerances. That is, early on, things are changing slowly, so the output time steps can be big. But as things get interesting, the output time steps have to be smaller. I don't actually want dense output at equal intervals, I just want the time steps the adaptive function uses.
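
    Recent SciPy versions (around 0.17, if memory serves) add set_solout for the dopri5/dop853 integrators, which calls a callback at every internally accepted step; older versions can pass step=True to integrate() to advance one adaptive step at a time. A sketch with a toy right-hand side:

        from scipy.integrate import ode

        def rhs(t, y):
            return -0.5 * y                 # toy ODE, just for illustration

        ts, ys = [], []
        def solout(t, y):
            # called at every step the integrator actually takes
            ts.append(t)
            ys.append(y.copy())

        solver = ode(rhs).set_integrator('dopri5', rtol=1e-8, atol=1e-10)
        solver.set_solout(solout)           # set before set_initial_value
        solver.set_initial_value([1.0], 0.0)
        solver.integrate(10.0)

        # ts now holds the adaptive time points dopri5 chose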

    Read the article

  • Indexing over the results returned by selenium

    - by Guy
    Hi, I am trying to index over the results returned by an XPath. For example, the XPath '//a[@id="someID"]' can return several results, and I want to get a list of them. I thought that doing:

        numOfResults = sel.get_xpath_count(xpath)
        l = []
        for i in range(1, numOfResults + 1):
            l.append(sel.get_text('(%s)[%d]' % (xpath, i)))

    would work, because doing something similar with Firefox's XPath Checker works: (//a[@id='someID'])[2] returns the 2nd result. Any ideas why the behavior would be different, and how to do such a thing with Selenium? Thanks.
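
    Selenium RC only auto-detects a locator as XPath when it starts with //, so the parenthesised form needs an explicit xpath= prefix. A sketch, assuming sel is the selenium object from the code above:

        xpath = "//a[@id='someID']"
        count = int(sel.get_xpath_count(xpath))

        texts = []
        for i in range(1, count + 1):
            # note the explicit 'xpath=' prefix around the indexed expression
            texts.append(sel.get_text("xpath=(%s)[%d]" % (xpath, i)))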

    Read the article

  • Mechanize Submit Form Error: Insufficient items with name '10427'

    - by maneh
    I'm trying to submit a form with Mechanize. I have tried different ways, but the problem persists. Can anyone help me with this? Thank you in advance! This is the form I want to submit: http://www.stpairways.st/ This is the code that I'm using:

        def stp_airways(url):
            import re
            import mechanize
            br = mechanize.Browser()
            br.open(url)
            print br.title()
            br.select_form(name="frmbook")
            br.form['TypeTrajet'] = ["1"]
            br.form['id_depart'] = ["11967"]
            br.form['id_arrivee'] = ["10427"]
            br.form['txtDateAller'] = "5/7/2014"
            br.form['txtDateRetour'] = "12/7/2014"
            br.form['TypePassager1u1000r0b1'] = ["1"]
            br.form['TypePassager2u1000r0b1'] = ["0"]
            br.form['TypePassager3u1000r0b1'] = ["0"]
            br.form['CodeIsoDeviseClient'] = ["17,20,23,24,25,26,27,28,29,30,31,33,34,36,37,64,65,67,68,70,73,80,81,95,96,103,147,151,152,159,160,162,169,170TP1TPF"]
            br.form['CodeIsoDeviseClient'] = ["EUR"]
            # submit
            response1 = br.submit()
            print response1.read()
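
    Mechanize raises "insufficient items with name ..." when the value assigned to a select control is not one of the options the server actually sent; with a booking form like this, the arrival list may well be filled in by JavaScript after a departure is chosen, which mechanize never executes. A sketch for inspecting what the control really accepts (field names taken from the code above):

        import mechanize

        br = mechanize.Browser()
        br.set_handle_robots(False)
        br.open('http://www.stpairways.st/')
        br.select_form(name='frmbook')

        # List the option values present in the server-side HTML.
        ctrl = br.form.find_control('id_arrivee')
        print([item.name for item in ctrl.items])

        # Then assign one of the printed values, e.g.:
        # br.form['id_arrivee'] = [ctrl.items[0].name]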

    Read the article

  • Parsing a Multi-Index Excel File in Pandas

    - by rhaskett
    I have a time series Excel file with a tri-level column MultiIndex that I would like to parse. There are some results on Stack Overflow on how to do this for an index, but not for the columns, and the parse function's header parameter does not seem to take a list of rows. The ExcelFile looks like the following:

        Column A is all the time series dates, starting at A4
        Column B has top_level1 (B1), mid_level1 (B2), low_level1 (B3), data (B4-B100+)
        Column C has null (C1), null (C2), low_level2 (C3), data (C4-C100+)
        Column D has null (D1), mid_level2 (D2), low_level1 (D3), data (D4-D100+)
        Column E has null (E1), null (E2), low_level2 (E3), data (E4-E100+)
        ...

    So there are two low_level values, many mid_level values, and a few top_level values, but the trick is that the top and mid level values are null and are assumed to be the values to their left. So, for instance, all the columns above would have top_level1 as the top multi-index value. My best idea so far is to use transpose, but it fills in "Unnamed: #" everywhere and doesn't seem to work. In Pandas 0.13, read_csv seems to have a header parameter that can take a list, but this doesn't seem to work with parse.
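
    Later pandas versions let read_excel take a list of header rows, which builds the three-level column MultiIndex directly (and, in my experience, forward-fills the blank upper cells left by merging); the filename here is a placeholder:

        import pandas as pd

        df = pd.read_excel('data.xlsx', header=[0, 1, 2], index_col=0)

        print(df.columns)          # three-level MultiIndex over the columns
        print(df['top_level1'])    # every column under top_level1

    If the upper levels still come back blank on an older version, the column tuples can be forward-filled by hand before rebuilding the MultiIndex.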

    Read the article

  • How to merge or copy anonymous session data into user data when user logs in?

    - by benhoyt
    This is a general question, or perhaps a request for pointers to other open source projects to look at: I'm wondering how people merge an anonymous user's session data into the authenticated user's data when that user logs in. For example, someone is browsing around your website saving various items as favourites. They're not logged in, so the favourites are saved to an anonymous user's data. Then they log in, and we need to merge all that data into their (possibly existing) user data. Is this done in different, ad-hoc ways for different applications? Or are there some best practices or other projects people can direct me to?
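
    One common pattern (sketched here for Django, which is an assumption about the stack, with a hypothetical Favourite model and session key) is to keep the anonymous data in the session and merge it in a user_logged_in signal handler; the session survives login, so the data is still there:

        from django.contrib.auth.signals import user_logged_in
        from django.dispatch import receiver

        from myapp.models import Favourite          # hypothetical model

        @receiver(user_logged_in)
        def merge_anonymous_favourites(sender, request, user, **kwargs):
            anon_ids = request.session.pop('anon_favourites', [])
            for item_id in anon_ids:
                # get_or_create avoids duplicating favourites the user already has
                Favourite.objects.get_or_create(user=user, item_id=item_id)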

    Read the article

  • Website/App on Dotcloud is down

    - by user1576866
    The website is nhslhs.tk. The last time I edited something was four days ago. I tried to get a calendar onto the Django datable, but deleted it all and never actually pushed it to the DotCloud server. Also, a few hours before that I was able to update HTML files, push them, and see the edits on the website. The link should take you to a log-in page (this is what you get when you Google "nhslhs.tk" and click the cached view), but instead it takes you to a search-magnified, advertisement-esque page. On a few sites, people claimed the error was due to a Trojan horse virus or the server being down. Do you know how to fix this? Thanks!

    Read the article

  • How do I serve a large file using Pylons?

    - by Chris R
    I am writing a Pylons-based download gateway. The gateway's client will address files by ID:

        /file_gw/download/1

    Internally, the file itself is accessed via HTTP from an internal file server:

        http://internal-srv/path/to/file_1.content

    The files may be quite large, so I want to stream the content. I store metadata about the file in a StoredFile model object:

        class StoredFile(Base):
            id = Column(Integer, primary_key=True)
            name = Column(String)
            size = Column(Integer)
            content_type = Column(String)
            url = Column(String)

    Given this, what's the best (i.e., most architecturally sound, performant, etc.) way to write my file_gw controller?
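
    One approach (a sketch; it assumes urllib2, the StoredFile model above, and, if I recall Pylons correctly, that an action may return an iterable of byte strings) is to open the internal URL and yield it in chunks, copying the metadata onto the response:

        import urllib2

        from pylons import response
        from myapp.model import StoredFile, Session     # assumed project names

        class FileGwController(BaseController):          # BaseController from the app's lib.base

            def download(self, id):
                stored = Session.query(StoredFile).get(int(id))
                upstream = urllib2.urlopen(stored.url)

                response.content_type = str(stored.content_type)
                response.headers['Content-Length'] = str(stored.size)
                response.headers['Content-Disposition'] = 'attachment; filename="%s"' % stored.name

                def stream(chunk_size=64 * 1024):
                    try:
                        while True:
                            chunk = upstream.read(chunk_size)
                            if not chunk:
                                break
                            yield chunk
                    finally:
                        upstream.close()

                return stream()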

    Read the article

  • List comprehension, map, and numpy.vectorize performance

    - by mcstrother
    I have a function foo(i) that takes an integer and takes a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of initializing a:

        a = [foo(i) for i in xrange(100)]

        a = map(foo, range(100))

        vfoo = numpy.vectorize(foo)
        a = vfoo(range(100))

    (I don't care whether the output is a list or a numpy array.) Is there a better way?
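
    For a genuinely slow foo the three differ mostly by per-call overhead, since numpy.vectorize is essentially a Python-level loop as well. A quick timing sketch (foo here is a cheap stand-in, so only the overhead is being compared):

        import timeit
        import numpy as np

        def foo(i):
            return i * i

        setup = 'from __main__ import foo, np'
        print(timeit.timeit('[foo(i) for i in range(100)]', setup, number=10000))
        print(timeit.timeit('list(map(foo, range(100)))', setup, number=10000))
        print(timeit.timeit('np.vectorize(foo)(range(100))', setup, number=10000))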

    Read the article

  • gevent, sockets and synchronisation

    - by schlamar
    I have multiple greenlets sending on a common socket. Is it guaranteed that each payload sent via socket.sendall arrives contiguously, or do I have to acquire a lock before each call to sendall? I want to prevent the following scenario:

        g1 sends ABCD
        g2 sends 1234
        received data is mixed up, for example AB1234CD
        expected is either ABCD1234 or 1234ABCD

    Update: after a look at the source code I think this scenario cannot happen. But I have to use a lock anyway because g1 or g2 can crash on the sendall. Can someone confirm this?
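
    Since a greenlet can yield inside sendall (it blocks when the socket buffer fills), I would not rely on the current implementation and would serialise the calls with a semaphore; a sketch:

        import gevent
        from gevent.lock import BoundedSemaphore    # gevent >= 1.0; older versions: gevent.coros

        send_lock = BoundedSemaphore(1)

        def send_packet(sock, payload):
            # Only one greenlet at a time may be inside sendall, so the
            # payloads of concurrent senders cannot interleave.
            with send_lock:
                sock.sendall(payload)

        # usage sketch:
        # g1 = gevent.spawn(send_packet, sock, b'ABCD')
        # g2 = gevent.spawn(send_packet, sock, b'1234')
        # gevent.joinall([g1, g2])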

    Read the article

  • How to compare two lists with duplicated items in one list?

    - by eladc
    I need to compare list_a against many others. My problem starts when there is a duplicated item in one of the other lists (two 'k's in other_b). My goal is to filter out of each list the items that also appear in list_a (up to three matching items), respecting duplicates.

        list_a = ['j','k','a','7']
        other_b = ['k', 'j', 'k', 'q']
        other_c = ['k','k','9','k']

        >>> filter(lambda x: not x in list_a, other_b)
        ['q']

    I need a way that would return ['k', 'q'], because 'k' appears only once in list_a. Comparing list_a and other_c with set() isn't good for my purpose either, since it will return only one element, 'k', while I need ['k','9','k']. I hope I was clear enough. Thank you.
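
    collections.Counter treats the lists as multisets, so subtracting list_a's counts leaves exactly the unmatched items, duplicates included (the order within the result may differ from the original list):

        from collections import Counter

        list_a  = ['j', 'k', 'a', '7']
        other_b = ['k', 'j', 'k', 'q']
        other_c = ['k', 'k', '9', 'k']

        def leftover(other, reference=list_a):
            """Items of `other` not matched by `reference`, respecting counts."""
            return list((Counter(other) - Counter(reference)).elements())

        print(leftover(other_b))   # ['k', 'q']
        print(leftover(other_c))   # ['k', 'k', '9']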

    Read the article
