Search Results

Search found 17845 results on 714 pages for 'python social auth'.

Page 416/714 | < Previous Page | 412 413 414 415 416 417 418 419 420 421 422 423  | Next Page >

  • Simplifying for-if messes with better structure?

    - by HH
    # Description: you are given a bitwise pattern and a string
    # you need to find the number of times the pattern matches in the string
    # any one-liner or simple pythonic solution?

        import random

        def matchIt(yourString, yourPattern):
            """find the number of times yourPattern occurs in yourString"""
            count = 0
            matchTimes = 0
            # How can you simplify the for-if structures?
            for coin in yourString:
                # return to base
                if count == len(pattern):
                    matchTimes = matchTimes + 1
                    count = 0
                # special case to return to 2; there could be more conditions of this type,
                # so this kind of if-conditional is screaming for havoc
                if count == 2 and pattern[count] == 1:
                    count = count - 1
                # the work horse
                # it could be simpler by breaking the initial string of length 'l'
                # into blocks of pattern length; the number of them is 'l - len(pattern) - 1'
                if coin == pattern[count]:
                    count = count + 1
            average = len(yourString) / matchTimes
            return [average, matchTimes]

        # Generate the list
        myString = []
        for x in range(10000):
            myString = myString + [int(random.random() * 2)]
        pattern = [1, 0, 0]

        result = matchIt(myString, pattern)
        print("The sample had " + str(result[1]) + " matches and its size was " + str(len(myString)) + ".\n"
              + "So it took " + str(result[0]) + " steps in average.\n"
              + "RESULT: " + str([a for a in "FAILURE" if result[0] != 8]))

        # Sample Output
        #
        # The sample had 1656 matches and its size was 10000.
        # So it took 6 steps in average.
        # RESULT: ['F', 'A', 'I', 'L', 'U', 'R', 'E']
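
    A minimal sketch of a flatter structure (not the poster's code; names are illustrative): slide a window over the sequence and compare slices, which removes the count bookkeeping and the special-case ifs entirely. Note that, unlike the original, this counts overlapping occurrences.

        import random

        def count_matches(bits, pattern):
            """Count (overlapping) occurrences of pattern in bits with a sliding window."""
            m = len(pattern)
            return sum(1 for i in range(len(bits) - m + 1) if bits[i:i + m] == pattern)

        bits = [int(random.random() * 2) for _ in range(10000)]
        matches = count_matches(bits, [1, 0, 0])
        print("matches:", matches, "average spacing:", len(bits) / matches)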

    Read the article

  • Filtering with joined tables

    - by viraptor
    I'm trying to get some query performance improved, but the generated query does not look the way I expect it to. The results are retrieved using:

        query = session.query(SomeModel).\
            options(joinedload_all('foo.bar')).\
            options(joinedload_all('foo.baz')).\
            options(joinedload('quux.other'))

    What I want to do is filter on the table joined via 'first', but this way doesn't work:

        query = query.filter(FooModel.address == '1.2.3.4')

    It results in a clause like this attached to the query:

        WHERE foos.address = '1.2.3.4'

    which doesn't do the filtering properly, since the generated joins attach the tables foos_1 and foos_2. If I try that query manually but change the filtering clause to:

        WHERE foos_1.address = '1.2.3.4' AND foos_2.address = '1.2.3.4'

    it works fine. The question is, of course: how can I achieve this with SQLAlchemy itself?
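
    One common pattern (a sketch, not the poster's code; it assumes FooModel is the class behind SomeModel.foo) is to join explicitly to an alias, filter on that alias, and tell the eager loader to reuse the same join via contains_eager instead of generating its own foos_1/foos_2 aliases:

        from sqlalchemy.orm import aliased, contains_eager

        foo_alias = aliased(FooModel)
        query = (session.query(SomeModel)
                 .join(foo_alias, SomeModel.foo)
                 .options(contains_eager(SomeModel.foo, alias=foo_alias))
                 .filter(foo_alias.address == '1.2.3.4'))

    The alias= form matches the older SQLAlchemy releases implied by joinedload_all; newer releases spell the same thing with of_type().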

    Read the article

  • pyInotify performance

    - by tranimatronic
    I have a very large directory tree that I want pyInotify to watch. Is it better to have pyInotify watch the entire tree, or to set up a number of watches reporting changes to specific files? Thanks
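
    For the whole-tree option, a minimal sketch (the path and event mask are illustrative) is a single recursive, auto-adding watch; note that pyinotify still registers one kernel watch per directory underneath, so a very large tree is bounded by fs.inotify.max_user_watches:

        import pyinotify

        wm = pyinotify.WatchManager()
        mask = pyinotify.IN_CREATE | pyinotify.IN_MODIFY | pyinotify.IN_DELETE

        class Handler(pyinotify.ProcessEvent):
            def process_default(self, event):
                # called for every event type in the mask
                print(event.maskname, event.pathname)

        # rec=True walks the tree once; auto_add=True also watches directories created later
        wm.add_watch('/path/to/tree', mask, rec=True, auto_add=True)
        pyinotify.Notifier(wm, Handler()).loop()

    Watching only specific files keeps the watch count down, but changes elsewhere in the tree simply will not be reported.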

    Read the article

  • JQuery cookie access has stopped working for GAE app

    - by Greg
    I have a Google App Engine app that has been running for some time, and some JavaScript code that checks for a login cookie has suddenly stopped working. As far as I can tell, NO code has changed. The relevant code uses the jQuery cookies plugin (jquery.cookies.2.2.0.min.js):

        // control the default screen depending
        // if someone is logged in
        if( $.cookies.get('dev_appserver_login') != null || $.cookies.get('ACSID') != null ) {
            alert("valid cookie!")
            $("#inventory-container").show();
        } else {
            alert("INvalid cookie!")
            $("#welcome-container").show();
        }

    The reason for the two checks is that in the GAE SDK the cookies are named differently; the production system uses 'ACSID'. This if statement works in the SDK and now fails 100% of the time in production. I have verified that the cookie is, in fact, present when I inspect the page. Thoughts?

    Read the article

  • Improving the join of two wave files?

    - by kaki
    I have written code for joining two wave files. It works fine when I am joining larger segments, but as I need to join very small segments the clarity is not good. I have learned that a signal-processing technique such as a windowed join can be used to improve the joining of the files:

        y[n] = w[n] * s[n]

    i.e. multiply the value of the signal at sample number n by the value of a windowing function, e.g. a Hamming window:

        w[n] = 0.54 - 0.46 * cos(2*pi*n / L),    0 <= n <= L

    I do not understand how to get the value of the signal at sample n, or how to implement this. The code I am using for joining is:

        import wave

        m = ['C:/begpython/S0001_0002.wav', 'C:/begpython/S0001_0001.wav']
        i = 1
        a = m[i]
        infiles = [a, "C:/begpython/S0001_0002.wav", a]
        outfile = "C:/begpython/S0001_00367.wav"

        data = []
        data1 = []
        for infile in infiles:
            w = wave.open(infile, 'rb')
            data1 = [w.getnframes()]
            data.append([w.getparams(), w.readframes(w.getnframes())])
            #data1 = [ord(character) for character in data1]
            #print data1
            #data1 = ''.join(chr(character) for character in data1)
            w.close()

        output = wave.open(outfile, 'wb')
        output.setparams(data[0][0])
        output.writeframes(data[0][1])
        output.writeframes(data[1][1])
        output.writeframes(data[2][1])
        output.close()
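
    A minimal sketch of one way to smooth the joins (not the poster's code): read the samples, then crossfade the tail of one file into the head of the next. It assumes NumPy is available and that the inputs are 16-bit mono PCM at the same sample rate; a complementary raised-cosine ramp is used rather than a literal Hamming window because the two fades at a join should sum to one. The output file name and the 10 ms fade length are illustrative.

        import wave
        import numpy as np

        def windowed_join(path_a, path_b, out_path, fade_ms=10):
            """Crossfade the tail of path_a into the head of path_b."""
            wa, wb = wave.open(path_a, 'rb'), wave.open(path_b, 'rb')
            params = wa.getparams()
            a = np.frombuffer(wa.readframes(wa.getnframes()), dtype=np.int16).astype(np.float64)
            b = np.frombuffer(wb.readframes(wb.getnframes()), dtype=np.int16).astype(np.float64)
            wa.close()
            wb.close()

            L = int(params.framerate * fade_ms / 1000)    # overlap length in samples
            n = np.arange(L)
            fade_out = 0.5 * (1 + np.cos(np.pi * n / L))  # ramps 1 -> 0
            fade_in = 1.0 - fade_out                      # ramps 0 -> 1

            # s[n] * w[n] on both sides of the join, then sum the overlap region
            joined = np.concatenate([a[:-L], a[-L:] * fade_out + b[:L] * fade_in, b[L:]])

            out = wave.open(out_path, 'wb')
            out.setparams(params)
            out.writeframes(joined.astype(np.int16).tobytes())
            out.close()

        windowed_join('C:/begpython/S0001_0001.wav', 'C:/begpython/S0001_0002.wav',
                      'C:/begpython/joined.wav')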

    Read the article

  • Problems using User model in django unit tests

    - by theycallmemorty
    I have the following Django test case that is giving me errors:

        class MyTesting(unittest.TestCase):
            def setUp(self):
                self.u1 = User.objects.create(username='user1')
                self.up1 = UserProfile.objects.create(user=self.u1)
            def testA(self):
                ...
            def testB(self):
                ...

    When I run my tests, testA passes successfully, but before testB starts I get the following error:

        IntegrityError: column username is not unique

    It's clear that it is trying to create self.u1 before each test case and finding that it already exists in the database. How do I get it to properly clean up after each test case so that subsequent cases run correctly?
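
    A sketch of the usual fix (the UserProfile import path is an assumption about the poster's app): inherit from django.test.TestCase instead of unittest.TestCase. It wraps every test in a transaction and rolls it back afterwards, so the user created in setUp never survives into the next test.

        from django.contrib.auth.models import User
        from django.test import TestCase          # not unittest.TestCase

        from myapp.models import UserProfile      # assumption: the poster's profile model

        class MyTesting(TestCase):
            def setUp(self):
                # recreated for each test and rolled back automatically afterwards
                self.u1 = User.objects.create(username='user1')
                self.up1 = UserProfile.objects.create(user=self.u1)

            def testA(self):
                self.assertEqual(User.objects.count(), 1)

            def testB(self):
                self.assertEqual(User.objects.count(), 1)   # still exactly one user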

    Read the article

  • Pickling a class definition

    - by Giorgio
    Is there a way to pickle a class definition? What I'd like to do is pickle the definition (which may be created dynamically) and then send it over a TCP connection so that an instance can be created on the other end. I understand that there may be dependencies, like modules and global variables that the class relies on. I'd like to bundle these in the pickling process as well, but I'm not concerned about automatically detecting the dependencies, because it's okay if the onus is on the user to specify them.
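
    A minimal sketch of one approach, not a complete answer: ship the class's source text plus a user-supplied dictionary of dependencies, and exec it into a namespace on the receiving side. inspect.getsource only works for classes backed by a source file, so for classes built at runtime a library such as dill (which can pickle class objects directly) may be a better fit. All names here are illustrative.

        import inspect
        import pickle

        def pack_class(cls, extra_globals=None):
            """Bundle a class definition (source + named dependencies) for the wire."""
            return pickle.dumps({
                'source': inspect.getsource(cls),
                'name': cls.__name__,
                'globals': extra_globals or {},   # caller-specified, must itself be picklable
            })

        def unpack_class(payload):
            """Rebuild the class from the bundle on the receiving end."""
            bundle = pickle.loads(payload)
            namespace = dict(bundle['globals'])
            exec(bundle['source'], namespace)
            return namespace[bundle['name']]

        class Greeter(object):
            def hello(self):
                return 'hi'

        Remote = unpack_class(pack_class(Greeter))
        print(Remote().hello())   # 'hi'

    Executing received source is arbitrary code execution, so this only makes sense between trusted endpoints.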

    Read the article

  • How to clear wxpython frame content when dragging a panel?

    - by aF
    Hello, I have 3 panels and I want to make drags on them. The problem is that when I do a drag on one, this happens: how can I refresh the frame so that its colour shows again where the panel is no longer there? This is the code I have to make the drag:

        def onMouseMove(self, event):
            (self.pointWidth, self.pointHeight) = event.GetPosition()
            (self.width, self.height) = self.GetSizeTuple()
            if (self.pointWidth > 100 and self.pointWidth < (self.width - 100) and self.pointHeight < 15) or self.parent.dragging:
                self.SetCursor(wx.StockCursor(wx.CURSOR_SIZING))
                """implement dragging"""
                if not event.Dragging():
                    self.w = 0
                    self.h = 0
                    return
                self.CaptureMouse()
                if self.w == 0 and self.h == 0:
                    (self.w, self.h) = event.GetPosition()
                else:
                    (posw, posh) = event.GetPosition()
                    displacement = self.h - posh
                    self.SetPosition(self.GetPosition() - (0, displacement))
            else:
                self.SetCursor(wx.StockCursor(wx.CURSOR_ARROW))

        def onDraggingDown(self, event):
            if self.pointWidth > 100 and self.pointWidth < (self.width - 100) and self.pointHeight < 15:
                self.parent.dragging = 1
                self.SetCursor(wx.StockCursor(wx.CURSOR_ARROW))
                self.SetBackgroundColour('BLUE')
                self.parent.SetTransparent(220)
                self.Refresh()

        def onDraggingUp(self, event):
            self.parent.dragging = 0
            self.parent.SetTransparent(255)
            self.SetCursor(wx.StockCursor(wx.CURSOR_ARROW))

    and these are the binds for these events:

        self.Bind(wx.EVT_MOTION, self.onMouseMove)
        self.Bind(wx.EVT_LEFT_DOWN, self.onDraggingDown)
        self.Bind(wx.EVT_LEFT_UP, self.onDraggingUp)

    With this, if I click on the top of the panel and move down or up, the panel position changes (I drag the panel) to the position of the mouse.
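
    The usual cure for the leftover image is to invalidate and repaint the parent window after every move. A self-contained sketch (not the poster's handlers; widget names and sizes are illustrative):

        import wx

        class DraggablePanel(wx.Panel):
            """A panel that repaints its parent while being dragged."""
            def __init__(self, parent):
                super(DraggablePanel, self).__init__(parent, size=(120, 80))
                self.SetBackgroundColour('BLUE')
                self.delta = None
                self.Bind(wx.EVT_LEFT_DOWN, self.on_down)
                self.Bind(wx.EVT_LEFT_UP, self.on_up)
                self.Bind(wx.EVT_MOTION, self.on_move)

            def on_down(self, event):
                self.delta = event.GetPosition()
                self.CaptureMouse()

            def on_up(self, event):
                if self.HasCapture():
                    self.ReleaseMouse()
                self.delta = None

            def on_move(self, event):
                if self.delta is not None and event.Dragging():
                    self.SetPosition(self.GetPosition() + event.GetPosition() - self.delta)
                    self.GetParent().Refresh()   # repaint the area the panel vacated
                    self.GetParent().Update()    # do it now rather than on the next idle

        app = wx.App(False)
        frame = wx.Frame(None, title='drag demo')
        DraggablePanel(frame)
        frame.Show()
        app.MainLoop()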

    Read the article

  • Import Error: No module named testrunner

    - by JiL
    I followed this to add zc.recipe.testrunner to my buildout. I can run buildout successfully, but when I run bin/test I get:

        ImportError: No module named testrunner

    I have zope.testrunner-4.0.4-py2.4.egg in /usr/local/lib/python2.4/site-packages. I also pinned:

        zope.testrunner = 4.0.4
        zc.recipe.testruner = 1.4.0
        zc.recipe.egg = 1.3.2

    When I ran buildout with -vvv, I got:

        ...
        Installing 'zc.recipe.testrunner'.
        We have the distribution that satisfies 'zc.recipe.testrunner==1.4.0'.
        Egg from site-packages: z3c.recipe.scripts 1.0.1
        Egg from site-packages: zope.testrunner 4.0.4
        Egg from site-packages: zope.interface 3.8.0
        Egg from site-packages: zope.exceptions 3.7.1
        ...
        We have the distribution that satisfies 'zope.testrunner==4.0.4'.
        Egg from site-packages: zope.testrunner 4.0.4
        Adding required 'zope.interface' required by zope.testrunner 4.0.4.
        We have a develop egg: zope.interface 0.0
        Adding required 'zope.exceptions' required by zope.testrunner 4.0.4.
        We have a develop egg: zope.exceptions 0.0
        ...

    Why do I get an ImportError? Is zope.testrunner not installed correctly?

    Read the article

  • Why is this dictionary line number count not working?

    - by jad
    I have this piece of code; the last bit, starting from d = {}, is where I am trying to print each word together with the line number where it is located in the text, but it is not working: it only prints the words. Does anyone know why? Need help ASAP.

        import sys
        import string

        text = []
        infile = open(sys.argv[1], 'r').read()
        for punct in string.punctuation:
            infile = infile.replace(punct, "")
        text = infile.split()

        dict = open(sys.argv[2], 'r').read()
        dictset = []
        dictset = dict.split()

        words = []
        words = list(set(text) - set(dictset))
        words = [text.lower() for text in words]
        words.sort()

        d = {}
        counter = 0
        for lines in text:
            counter += 1
            if word not in d:
                d[words] = [counter]
            else:
                d[words.append[counter]
        print(word, d)
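
    A minimal working sketch of what the last block appears to be aiming at (shown in Python 3; function and variable names are illustrative): iterate over the file line by line rather than word by word, and append each line number to a per-word list.

        import string
        import sys

        def word_line_numbers(path, stopwords=()):
            """Map every word in the file to the list of line numbers it appears on."""
            table = str.maketrans('', '', string.punctuation)
            d = {}
            with open(path) as f:
                for lineno, line in enumerate(f, start=1):
                    for word in line.translate(table).lower().split():
                        if word not in stopwords:
                            d.setdefault(word, []).append(lineno)
            return d

        if __name__ == '__main__':
            for word, lines in sorted(word_line_numbers(sys.argv[1]).items()):
                print(word, lines)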

    Read the article

  • SQLAlchemy sessions - DetachedInstanceError?

    - by benjaminhkaiser
    I have a function that attempts to take a list of usernames, look each one up in a user table, and then add them to a membership table. If even one username is invalid, I want the entire list to be rolled back, including any users that have already been processed. I thought that using sessions was the best way to do this, but I'm running into a DetachedInstanceError:

        DetachedInstanceError: Instance <Organization at 0x7fc35cb5df90> is not bound to a Session;
        attribute refresh operation cannot proceed

    Full stack trace is here. The error seems to trigger when I attempt to access the user (model) object that is returned by the query. From my reading I understand that it has something to do with there being multiple sessions, but none of the suggestions I saw on other threads worked for me. Code is below:

        def add_members_in_bulk(organization_eid, users):
            """Add users to an organization in bulk - helper function for add_member()"""
            """Returns "success" on success and id of first failed student on failure"""
            session = query_session.get_session()
            session.begin_nested()
            users = users.split('\n')
            for u in users:
                try:
                    user = user_lookup.by_student_id(u)
                except ObjectNotFoundError:
                    session.rollback()
                    return u
                if user:
                    membership.add_user_to_organization(
                        user.entity_id,
                        organization_eid,
                        '',
                        []
                    )
                    session.flush()
            session.commit()
            return 'success'

    Here's membership.add_user_to_organization:

        def add_user_to_organization(user_eid, organization_eid, title, tag_ids):
            """Add a User to an Organization with the given title"""
            user = user_lookup.by_eid(user_eid)
            organization = organization_lookup.by_eid(organization_eid)
            new_membership = OrganizationMembership(
                organization_eid=organization.entity_id,
                user_eid=user.entity_id,
                title=title)
            new_membership.tags = [get_tag_by_id(tag_id) for tag_id in tag_ids]
            crud.add(new_membership)

    and here is the lookup-by-ID query:

        def by_student_id(student_id, include_disabled=False):
            """Get User by RIN"""
            try:
                return get_query_set(include_disabled).filter(User.student_id == student_id).one()
            except NoResultFound:
                raise ObjectNotFoundError("User with RIN %s does not exist." % student_id)
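
    A sketch of the single-session approach (it reuses the model and field names from the question and omits tags): do every lookup and insert through the same session, and roll the whole batch back on the first unknown username, so no object is ever accessed after its session has gone away.

        from sqlalchemy.orm.exc import NoResultFound

        def add_members_in_bulk(session, organization_eid, usernames):
            """Add all usernames to the organization, or none of them."""
            try:
                for name in usernames:
                    user = (session.query(User)
                            .filter(User.student_id == name)
                            .one())               # raises NoResultFound for a bad username
                    session.add(OrganizationMembership(
                        organization_eid=organization_eid,
                        user_eid=user.entity_id,
                        title=''))
                session.commit()
                return 'success'
            except NoResultFound:
                session.rollback()
                return name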

    Read the article

  • Shuttle control in wxPython

    - by Mridang Agarwalla
    Hi, I'm trying to implement a shuttle control in wxPython but there doesn't seem to be one, so I've decided to use two listbox controls. The shuttle control looks like this: I've got two listboxes; one's populated, one's not. Could someone show me how to add a selected item to the second list box when it is double-clicked, removing it from the first? When an item is double-clicked in the second, it should be added to the first and removed from the second. A shuttle control implements this behaviour by default; it's a pity there isn't one. Thank you.
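
    A self-contained sketch of the two-listbox shuttle (class name and sample choices are illustrative): bind EVT_LISTBOX_DCLICK on both boxes and move the double-clicked item to the other box.

        import wx

        class ShuttlePanel(wx.Panel):
            """Two listboxes; double-clicking an item shuttles it across."""
            def __init__(self, parent, choices):
                super(ShuttlePanel, self).__init__(parent)
                self.left = wx.ListBox(self, choices=choices)
                self.right = wx.ListBox(self, choices=[])
                sizer = wx.BoxSizer(wx.HORIZONTAL)
                sizer.Add(self.left, 1, wx.EXPAND | wx.ALL, 5)
                sizer.Add(self.right, 1, wx.EXPAND | wx.ALL, 5)
                self.SetSizer(sizer)
                self.left.Bind(wx.EVT_LISTBOX_DCLICK, self.on_dclick)
                self.right.Bind(wx.EVT_LISTBOX_DCLICK, self.on_dclick)

            def on_dclick(self, event):
                source = event.GetEventObject()
                target = self.right if source is self.left else self.left
                index = event.GetSelection()
                if index != wx.NOT_FOUND:
                    target.Append(source.GetString(index))
                    source.Delete(index)

        app = wx.App(False)
        frame = wx.Frame(None, title='shuttle demo')
        ShuttlePanel(frame, choices=['alpha', 'beta', 'gamma'])
        frame.Show()
        app.MainLoop()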

    Read the article

  • Can I get the amount of time for which a key is pressed on a keyboard

    - by Adi
    Dear all, I am working on a project in which I have to develop bio-passwords based on a user's keystroke style. Suppose a user types a password 20 times; his keystrokes are recorded, such as:

        hold time: time for which a particular key is pressed
        digraph time: time it takes to move from one key to the next

    Suppose a user types the password "COMPUTER". I need to know the time for which every key is pressed, something like: the hold times for the above password are

        C -- 200ms
        O -- 130ms
        M -- 150ms
        P -- 175ms
        U -- 320ms
        T -- 230ms
        E -- 120ms
        R -- 300ms

    The rationale behind this is that every user will have different hold times. Say an old person is typing the password: he will take more time than a student, and the timing will be unique to a particular person. To do this project, I need to record the time for which each key is pressed. I would greatly appreciate it if anyone could guide me on how to get these times. Edit: language is not important, but I would prefer it in C. I am more interested in getting the dataset.
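
    A sketch of capturing hold times in Python with the third-party pynput package (an assumption here, since the poster prefers C; the same idea applies to any keyboard-hook API): record a timestamp on key press and subtract it on release.

        import time

        from pynput import keyboard

        press_times = {}   # key object -> time it went down

        def on_press(key):
            # keep only the first press; keyboard auto-repeat fires on_press again
            press_times.setdefault(key, time.monotonic())

        def on_release(key):
            down = press_times.pop(key, None)
            if down is not None:
                print('%s held for %.0f ms' % (key, (time.monotonic() - down) * 1000))
            if key == keyboard.Key.esc:
                return False   # returning False stops the listener

        with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
            listener.join()

    Digraph times fall out of the same data: store the event time of each key and subtract it from the press time of the next one.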

    Read the article

  • differences between "d.clear()" and "d={}"

    - by Tshepang
    On my machine, the difference in execution speed between d.clear() and d = {} is over 100 ns, so I am curious why one would use one over the other.

        import timeit

        def timing():
            d = dict()

        if __name__ == '__main__':
            t = timeit.Timer('timing()', 'from __main__ import timing')
            print t.repeat()
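
    A sketch of a more direct comparison, plus the semantic difference that usually decides the question: d.clear() empties the dictionary in place, so every other reference to it sees the change, while d = {} rebinds the name to a brand-new dict and leaves old references untouched.

        import timeit

        # timing: clearing in place vs rebinding to a fresh dict
        print(timeit.timeit('d.clear()', setup='d = {"a": 1}', number=10**6))
        print(timeit.timeit('d = {}', number=10**6))

        # semantics: clear() is visible through aliases, rebinding is not
        d = {'a': 1}
        alias = d
        d.clear()
        print(alias)    # {} -- the shared object was emptied

        d = {'a': 1}
        alias = d
        d = {}
        print(alias)    # {'a': 1} -- alias still points at the old dict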

    Read the article

  • Why do all modules run together?

    - by gunbuster363
    I just made a fresh copy of Eclipse and installed PyDev. In my first trial of PyDev with Eclipse, I created two modules under the src package (the default one).

    FirstModule.py:

        '''
        Created on 18.06.2009

        @author: Lars Vogel
        '''
        def add(a,b):
            return a+b

        def addFixedValue(a):
            y = 5
            return y + a

        print "123"

    run.py:

        '''
        Created on Jun 20, 2011

        @author: Raymond.Yeung
        '''
        from FirstModule import add

        print add(1,2)
        print "Helloword"

    When I pull out the pull-down menu of the run button and click "ProjectName run.py", here is the result:

        123
        3
        Helloword

    Apparently both modules ran. Why? Is this the default setting?
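
    Importing a module executes its top-level code, so the print "123" in FirstModule.py runs as soon as run.py does the import; that is standard Python behaviour, not a PyDev setting. A sketch of the usual guard (shown with Python 3 print syntax):

        # FirstModule.py
        def add(a, b):
            return a + b

        if __name__ == '__main__':
            # runs only when FirstModule.py is executed directly,
            # not when another module does "from FirstModule import add"
            print("123")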

    Read the article

  • Django: Serving a Download in a Generic View

    - by TheLizardKing
    So I want to serve a couple of mp3s from a folder in /home/username/music. I didn't think this would be such a big deal, but I am a bit confused about how to do it using generic views and my own URL.

    urls.py:

        url(r'^song/(?P<song_id>\d+)/download/$', song_download, name='song_download'),

    The example I am following is found in the generic-views section of the Django documentation: http://docs.djangoproject.com/en/dev/topics/generic-views/ (all the way at the bottom). I am not 100% sure how to tailor this to my needs. Here is my views.py:

        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)
            response = object_detail(
                request,
                object_id = song_id,
                mimetype = "audio/mpeg",
            )
            response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
            return response

    I am actually at a loss of how to convey that I want it to spit out my mp3, instead of what it does now, which is to output a .mp3 containing the current page's HTML. Should my template be my mp3? Do I need to set up Apache to serve the files, or is Django able to retrieve the mp3 from the filesystem (proper permissions, of course) and serve that? If I do need to configure Apache, how do I tell Django that? Thanks in advance. These files are all on the HD, so I don't need to "generate" anything on the spot, and I'd like to avoid revealing the location of these files if at all possible. A simple /song/1234/download would be fantastic.
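
    A sketch of a plain view that streams the file itself (it assumes a hypothetical file_path field on Song holding the absolute path under /home/username/music, and uses FileResponse from later Django releases): the real filesystem location never appears in the URL.

        import os

        from django.http import FileResponse, Http404

        from myapp.models import Song             # assumption: the poster's app and model

        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)
            path = song.file_path                  # hypothetical field on the model
            if not os.path.isfile(path):
                raise Http404
            response = FileResponse(open(path, 'rb'), content_type='audio/mpeg')
            response['Content-Disposition'] = 'attachment; filename="%s - %s.mp3"' % (
                song.artist, song.title)
            return response

    For a large library it is common to hand the actual transfer back to Apache with X-Sendfile so Django only does the lookup and permission check, but the simple view above is enough to stop the HTML-in-an-mp3 problem.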

    Read the article

  • supervisord environment variables setting up application

    - by user1434844
    I'm running an application from supervisord and I have to set up an environment for it. There are about 30 environment variables that need to be set. I've tried putting them all on one big environment= line, and that doesn't seem to work. I've also tried multiple environment= lines, and that doesn't seem to work either. I've also tried both with and without quotes around the env values. What's the best way to set up my environment so that it remains intact under supervisord control? Should I be calling my actual program (Tornado, fwiw) from a shell script with the environment preloaded there? Ideally, I'd like to put all of the environment variables into an include file and load them with supervisor, but I'm open to doing it another way.
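
    For reference, a sketch of the single-line form supervisor expects (the program name, command, and variable names are made up): environment= takes comma-separated KEY="value" pairs inside the [program:x] section, with the values quoted.

        [program:myapp]
        command=/srv/myapp/bin/run-tornado
        environment=DB_HOST="db.internal",DB_PORT="5432",LOG_LEVEL="info",SECRET_KEY="changeme"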

    Read the article

  • Simple App Engine Sessions Implementation

    - by raz0r
    Here is a very basic class for handling sessions on App Engine:

        """Lightweight implementation of cookie-based sessions for Google App Engine.

        Classes:
            Session
        """

        import os
        import random
        import Cookie

        from google.appengine.api import memcache

        _COOKIE_NAME = 'app-sid'
        _COOKIE_PATH = '/'
        _SESSION_EXPIRE_TIME = 180 * 60


        class Session(object):
            """Cookie-based session implementation using Memcached."""

            def __init__(self):
                self.sid = None
                self.key = None
                self.session = None
                cookie_str = os.environ.get('HTTP_COOKIE', '')
                self.cookie = Cookie.SimpleCookie()
                self.cookie.load(cookie_str)
                if self.cookie.get(_COOKIE_NAME):
                    self.sid = self.cookie[_COOKIE_NAME].value
                    self.key = 'session-' + self.sid
                    self.session = memcache.get(self.key)
                if self.session:
                    self._update_memcache()
                else:
                    self.sid = str(random.random())[5:] + str(random.random())[5:]
                    self.key = 'session-' + self.sid
                    self.session = dict()
                    memcache.add(self.key, self.session, _SESSION_EXPIRE_TIME)
                    self.cookie[_COOKIE_NAME] = self.sid
                    self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH
                    print self.cookie

            def __len__(self):
                return len(self.session)

            def __getitem__(self, key):
                if key in self.session:
                    return self.session[key]
                raise KeyError(str(key))

            def __setitem__(self, key, value):
                self.session[key] = value
                self._update_memcache()

            def __delitem__(self, key):
                if key in self.session:
                    del self.session[key]
                    self._update_memcache()
                    return None
                raise KeyError(str(key))

            def __contains__(self, item):
                try:
                    i = self.__getitem__(item)
                except KeyError:
                    return False
                return True

            def _update_memcache(self):
                memcache.replace(self.key, self.session, _SESSION_EXPIRE_TIME)

    I would like some advice on how to improve the code for better security. Note: in the production version it will also save a copy of the session in the datastore. Note': I know there are much more complete implementations available online, but I would like to learn more about this subject, so please don't answer the question with "use that" or "use the other" library.
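
    One concrete security improvement (a sketch in modern Python, separate from the class above): random.random() is not a cryptographic source, so the session id is guessable, and the cookie value is not authenticated. Deriving the sid from os.urandom and signing the cookie with an HMAC addresses both; the secret shown is a placeholder.

        import hashlib
        import hmac
        import os

        _SECRET = b'load-this-from-server-side-config'   # placeholder, never sent to the client

        def new_sid():
            """An unguessable session id drawn from the OS entropy pool."""
            return hashlib.sha256(os.urandom(32)).hexdigest()

        def sign(sid):
            """Cookie value carrying the sid plus an HMAC tag."""
            tag = hmac.new(_SECRET, sid.encode('utf-8'), hashlib.sha256).hexdigest()
            return '%s|%s' % (sid, tag)

        def verify(cookie_value):
            """Return the sid if the tag checks out, otherwise None."""
            sid, _, tag = cookie_value.partition('|')
            expected = hmac.new(_SECRET, sid.encode('utf-8'), hashlib.sha256).hexdigest()
            return sid if hmac.compare_digest(tag, expected) else None

    Setting the cookie's HttpOnly and Secure flags is a similarly cheap win.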

    Read the article

  • Google Contacts API service account OAuth 2.0 sub user

    - by user3709507
    I am trying to use the Google Contacts API to connect to a user's contact information on my Google Apps domain. Generating an access_token using the gdata API's ContactsService clientlogin function, while using the API key for my project, works fine, but I would prefer not to store the user's credentials, and from the information I have found that method uses OAuth 1.0. So, to use OAuth 2.0 I have:

    1. Generated a service account in the Developers Console for my project
    2. Granted access to the service account for the scope of https://www.google.com/m8/feeds/ in the Google Apps domain admin panel
    3. Attempted to generate credentials using SignedJwtAssertionCredentials:

        credentials = SignedJwtAssertionCredentials(
            service_account_name=service_account_email,
            private_key=key_from_p12_file,
            scope='https://www.google.com/m8/feeds/',
            sub=user_email)

    The problem I am running into is that attempting to generate an access token using this method fails. It succeeds in generating the token when I remove the sub parameter, but then that token fails when I try to fetch the user's contacts. Does anyone know why this might be happening?

    Read the article
