Search Results

Search found 14399 results on 576 pages for 'python noob'.


  • Authentication using cookie key with asynchronous callback

    - by greg
    I need to write an authentication function that uses an asynchronous callback to a remote auth API. Simple login-based authentication works well, but authorization via a cookie key does not. The handler should check whether the cookie "lp_login" is present, fetch the API URL asynchronously, and run the on_response callback. The code almost works, but I see two problems. First, in on_response I need to set a secure cookie for the authorized user on every page. user_id holds the correct ID, but the line self.set_secure_cookie("user", user_id) has no effect. Why could that be? Second, during the async API fetch the user's page finishes loading before on_response has set the "user" cookie, so the page shows the unauthorized section with links to log in or sign up, which is confusing. To solve that I could hold off rendering the page for a user loading their first page of the site. Is that possible, and how? Or is there a more correct way to solve this?

        class BaseHandler(tornado.web.RequestHandler):
            @tornado.web.asynchronous
            def get_current_user(self):
                user_id = self.get_secure_cookie("user")
                user_cookie = self.get_cookie("lp_login")
                if user_id:
                    self.set_secure_cookie("user", user_id)
                    return Author.objects.get(id=int(user_id))
                elif user_cookie:
                    url = urlparse("http://%s" % self.request.host)
                    domain = url.netloc.split(":")[0]
                    try:
                        username, hashed_password = urllib.unquote(user_cookie).rsplit(',', 1)
                    except ValueError:
                        # check against malicious clients
                        return None
                    else:
                        url = "http://%s%s%s/%s/" % (domain, "/api/user/username/", username, hashed_password)
                        http = tornado.httpclient.AsyncHTTPClient()
                        http.fetch(url, callback=self.async_callback(self.on_response))
                else:
                    return None

            def on_response(self, response):
                answer = tornado.escape.json_decode(response.body)
                username = answer['username']
                if answer["has_valid_credentials"]:
                    author = Author.objects.get(email=answer["email"])
                    user_id = str(author.id)
                    print user_id                            # prints the expected id
                    self.set_secure_cookie("user", user_id)  # but the secure cookie is never set
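
    One possible direction (a sketch only; render_page() and build_api_url() are hypothetical helpers, while the Author model and API endpoint come from the question): Tornado calls get_current_user() synchronously, so the remote check can instead live in an asynchronous request handler that renders the page only after on_response has set the cookie, which also addresses the second problem:

        class LoginCheckHandler(tornado.web.RequestHandler):
            @tornado.web.asynchronous
            def get(self):
                if self.get_secure_cookie("user") or not self.get_cookie("lp_login"):
                    self.render_page()       # hypothetical helper; already authorized or nothing to check
                    return
                http = tornado.httpclient.AsyncHTTPClient()
                http.fetch(self.build_api_url(),  # hypothetical helper building the /api/user/... URL
                           callback=self.async_callback(self.on_response))

            def on_response(self, response):
                answer = tornado.escape.json_decode(response.body)
                if answer.get("has_valid_credentials"):
                    author = Author.objects.get(email=answer["email"])
                    self.set_secure_cookie("user", str(author.id))
                self.render_page()           # the page is sent only after the cookie is set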

    Read the article

  • How to update the filename of a Django FileField instance?

    - by pierre-guillaume-degans
    Hello, here is a simple Django model:

        class SomeModel(models.Model):
            title = models.CharField(max_length=100)
            video = models.FileField(upload_to='video')

    I would like to save each instance so that the video's file name is a valid file name derived from the title. For example, if in the admin interface I create a new instance with the title "Lorem ipsum" and a video called "video.avi", the copy of the file on the server should be "Lorem Ipsum.avi" (or "Lorem_Ipsum.avi"). Thank you :)
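
    A common approach (a sketch, untested here) is to pass a callable as upload_to and build the name from the instance's title; slugify keeps it filesystem-safe:

        import os
        from django.db import models
        from django.template.defaultfilters import slugify

        def video_upload_path(instance, filename):
            # keep the original extension, replace the base name with the slugified title
            ext = os.path.splitext(filename)[1]
            return os.path.join('video', slugify(instance.title) + ext)

        class SomeModel(models.Model):
            title = models.CharField(max_length=100)
            video = models.FileField(upload_to=video_upload_path)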

    Read the article

  • GAE error: "Error: Server Error", how to debug it?

    - by zjm1126
    When I upload my project to Google App Engine, it shows this: "Error: Server Error. The server encountered an error and could not complete your request. If the problem persists, please report your problem and mention this error message and the query that caused it." Why? How can I debug this error? Thanks.

    Read the article

  • Search a variable for an address

    - by chrissygormley
    Hello, I am trying to match information stored in a variable. I have a list of UUIDs with IP addresses beside them. The code I have is:

        r = re.compile(r'urn:uuid:5EEF382F-JSQ9-3c45-D5E0-K15X8M8K76')
        m = r.match(str(serv))
        if m:
            print 'Found'

    The variable serv contains:

        urn:uuid:7FDS890A-KD9E-3h53-G7E8-BHJSD6789D:[u'http://10.10.10.20:12365/7FDS890A-KD9E-3h53-G7E8-BHJSD6789D/']
        ---------------------------------------------
        urn:uuid:5EEF382F-JSQ9-3c45-D5E0-K15X8M8K76:[u'http://10.10.10.10:42365']
        ---------------------------------------------
        urn:uuid:8DSGF89S-FS90-5c87-K3DF-SDFU890US9:[u'http://10.10.10.40:5234']
        ---------------------------------------------

    So basically I want to find the UUID string, work out its address, and store that as a variable. So far I have not been able to get it to match the string at all. Can anyone point out a solution? Thanks.
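
    One way (a sketch; the UUIDs are the placeholder ones from the question) is to use re.search, which scans the whole string instead of anchoring at the start like re.match does, and to capture the URL that follows the UUID:

        import re

        uuid = '5EEF382F-JSQ9-3c45-D5E0-K15X8M8K76'
        pattern = re.compile(r"urn:uuid:%s:\[u'(?P<address>[^']+)'\]" % re.escape(uuid))
        m = pattern.search(str(serv))
        if m:
            address = m.group('address')   # e.g. 'http://10.10.10.10:42365'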

    Read the article

  • Which webserver to use with bottle?

    - by luc
    Bottle can use several web servers: "Built-in HTTP development server and support for paste, fapws3, flup, cherrypy or any other WSGI capable server." I am using Bottle for a desktop app, and I guess the development server is enough in this case. I would like to know if some of you have experience with one of the alternative servers. Which server for which purpose? Thanks in advance.
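
    For reference, switching backends in Bottle is a one-argument change (a sketch; it assumes the chosen backend, e.g. CherryPy, is installed):

        import bottle

        @bottle.route('/')
        def index():
            return 'hello'

        # built-in development server: single-threaded, fine for a local desktop app
        # bottle.run(host='localhost', port=8080)

        # multi-threaded alternative via CherryPy's WSGI server
        bottle.run(server='cherrypy', host='localhost', port=8080)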

    Read the article

  • Tokenize a command string

    - by pocoa
    I have a string like this:

        command "http://www.mysite.com" some_param="string param" some_param2=50

    I want to tokenize it into:

        command
        "http://www.mysite.com"
        some_param="string param"
        some_param2=50

    I know it is possible to split on spaces, but the parameters can also be separated by commas, like:

        command "http://www.mysite.com", some_param="string param", some_param2=50

    I tried to do it like this: \w+\=?\"?.+\"? but it didn't work.
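
    One option (a sketch) is the standard-library shlex module, which understands quoting; adding commas to its whitespace set makes them act as separators too. Note that in POSIX mode the surrounding quotes are stripped from the tokens:

        import shlex

        s = 'command "http://www.mysite.com", some_param="string param", some_param2=50'
        lex = shlex.shlex(s, posix=True)
        lex.whitespace += ','         # treat commas like spaces
        lex.whitespace_split = True   # only split on whitespace, keep key=value together
        tokens = list(lex)
        # ['command', 'http://www.mysite.com', 'some_param=string param', 'some_param2=50']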

    Read the article

  • Designing a Tag table that tells how many times it's used

    - by Satoru.Logic
    Hi, all. I am trying to design a tagging system with a model like this:

        Tag:
            content = CharField
            creator = ForeignKey
            used = IntegerField

    There is a many-to-many relationship between tags and the things being tagged. Every time I insert a record into the association table, Tag.used is incremented by one, and decremented by one in case of deletion. Tag.used is maintained because I want to answer the question "How many times is this tag used?" quickly. However, this obviously slows insertion down. Please tell me how to improve this design. Thanks in advance.
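
    One way to keep the counter cheap and race-free (a sketch; the Tag model and the tags relation are assumptions based on the description above) is to update it with an F() expression in the same operation that touches the association table:

        from django.db.models import F

        def add_tag(item, tag):
            item.tags.add(tag)    # the association-table insert (assumed M2M relation)
            Tag.objects.filter(pk=tag.pk).update(used=F('used') + 1)

        def remove_tag(item, tag):
            item.tags.remove(tag)
            Tag.objects.filter(pk=tag.pk).update(used=F('used') - 1)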

    Read the article

  • How to install Visual Python in Ubuntu 10.04? [closed]

    - by Glen
    I am trying to do a physics problem in Python. I need to install Visual Python because I get an error saying it can't find the visual library when I type: import visual from *. The documentation on the Visual Python site is totally useless. I have gone into the Synaptic package manager and installed python-visual, but I still get the same error. Can someone please help?

    Read the article

  • Django ModelFormSet with Google app engine

    - by Eric
    I'm using Django on Google App Engine, with the Google-furnished Django App Engine helper project. I'm attempting to create a Django model formset like this:

        # MyModel inherits from BaseModel
        MyFormSet = modelformset_factory(models.MyModel)

    However, it fails with this error:

        'ModelOptions' object has no attribute 'fields'

    Apparently modelformset_factory() expects MyModel to implement a 'fields' accessor. Has anybody successfully used a model formset with the GAE datastore, or is this a fundamental incompatibility between Django and GAE?

    Read the article

  • Reducing size of a character array in Numpy

    - by Morgoth
    Given a character array:

        In [21]: x = np.array(['a ','bb ','cccc '])

    one can remove the whitespace using:

        In [22]: np.char.strip(x)
        Out[22]: array(['a', 'bb', 'cccc'], dtype='|S8')

    but is there a way to also shrink the width of the column to the minimum required size, in the above case |S4?
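
    A sketch (assuming the bytes/'|S' dtype shown above): compute the longest stripped string and cast the array to that width:

        import numpy as np

        x = np.array(['a ', 'bb ', 'cccc '])
        stripped = np.char.strip(x)
        width = int(np.char.str_len(stripped).max())   # longest remaining string, here 4
        shrunk = stripped.astype('|S%d' % width)       # dtype becomes |S4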

    Read the article

  • In Elixir or SQLAlchemy, is there a way to also store a comment for each field in my entities?

    - by kchau
    Our project is basically a web interface to several systems of record. We have many tables mapped, and the column names aren't as well named and intuitive as we'd like. The users would like to know what data fields are available (i.e. what's been mapped from the database), but it's pointless to just give them column names like USER_REF1, USER_REF2, etc. So I was wondering: is there a way to provide a comment in the declaration of a field? E.g.:

        class SegregationCode(Entity):
            using_options(tablename="SEGREGATION_CODES")
            segCode = Field(String(20), colname="CODE", ...
                            primary_key=True)  # have a comment attr too?

    If not, any suggestions?
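
    One possibility (a sketch, untested here): SQLAlchemy's Column takes a doc keyword meant for exactly this kind of human-readable note, and Elixir's Field passes unrecognized keyword arguments through to the underlying Column, so something along these lines may work:

        class SegregationCode(Entity):
            using_options(tablename="SEGREGATION_CODES")
            segCode = Field(String(20), colname="CODE", primary_key=True,
                            doc="Segregation code; maps the legacy CODE column")

    The notes can then be read back from each mapped column's .doc attribute to build the field list shown to users.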

    Read the article

  • In Django, how to create tables from an SQL file when syncdb is run

    - by Sidney
    Hi, how do I make syncdb execute SQL queries (for table creation) defined by me, rather than generating the tables automatically? I'm looking for this solution because some models in my app represent SQL views of a legacy database table. I've created their SQL views in my Django DB like this:

        CREATE VIEW legacy_series AS SELECT * FROM legacy.series;

    I have a reverse-engineered model that represents the above view/legacy table. But whenever I run syncdb, I have to create all the views first by running SQL scripts; otherwise syncdb simply creates tables for them (if a view is not found). How do I make syncdb run the above SQL?
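
    One hook that may fit (a sketch; the app, handler and file names are illustrative): connect to the post_syncdb signal from the app's management.py, which Django imports when management commands run, and create the views there:

        # myapp/management.py
        from django.db import connection
        from django.db.models.signals import post_syncdb
        import myapp.models

        def create_legacy_views(sender, **kwargs):
            cursor = connection.cursor()
            # adjust the CREATE VIEW syntax to your database
            cursor.execute("CREATE OR REPLACE VIEW legacy_series AS SELECT * FROM legacy.series")

        post_syncdb.connect(create_legacy_views, sender=myapp.models)

    Marking the view-backed models as unmanaged (Meta.managed = False, available in Django 1.1+) also stops syncdb from creating ordinary tables for them.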

    Read the article

  • How do you efficiently do bulk index lookups?

    - by Liron Shapira
    I have these entity kinds:

        Molecule
        Atom
        MoleculeAtom

    Given a list(molecule_ids) whose length is in the hundreds, I need to get a dict of the form {molecule_id: list(atom_ids)}. Likewise, given a list(atom_ids) whose length is in the hundreds, I need to get a dict of the form {atom_id: list(molecule_ids)}. Both of these bulk lookups need to happen really fast. Right now I'm doing something like:

        atom_ids_by_molecule_id = {}
        for molecule_id in molecule_ids:
            moleculeatoms = MoleculeAtom.all().filter(
                'molecule =', db.Key.from_path('molecule', molecule_id)).fetch(1000)
            atom_ids_by_molecule_id[molecule_id] = [
                MoleculeAtom.atom.get_value_for_datastore(ma).id() for ma in moleculeatoms
            ]

    Like I said, len(molecule_ids) is in the hundreds. I need to do this kind of bulk index lookup on almost every single request, and I need it to be FAST; right now it's too slow. Ideas: Would using a Molecule.atoms ListProperty do what I need? Consider that I am storing additional data on the MoleculeAtom node, and remember it's equally important for me to do the lookup in both the molecule-to-atom and atom-to-molecule directions. Caching? I tried memcaching lists of atom IDs keyed by molecule ID, but I have tons of atoms and molecules, and the cache can't fit it. How about denormalizing the data by creating a new entity kind whose key name is a molecule ID and whose value is a list of atom IDs? The idea is, calling db.get on 500 keys is probably faster than looping through 500 fetches with filters, right?
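
    A sketch of that last denormalization idea (the index kind and property names are assumptions): keep one index entity per molecule, keyed by the molecule id, so the whole batch comes back from a single db.get():

        from google.appengine.ext import db

        class MoleculeAtomIndex(db.Model):   # hypothetical index kind; key_name is str(molecule_id)
            atom_ids = db.ListProperty(int)

        def atom_ids_by_molecule(molecule_ids):
            keys = [db.Key.from_path('MoleculeAtomIndex', str(mid)) for mid in molecule_ids]
            entities = db.get(keys)   # one batch RPC instead of one query per molecule
            return dict((mid, e.atom_ids if e else [])
                        for mid, e in zip(molecule_ids, entities))

    A mirror AtomMoleculeIndex kind would cover the atom-to-molecule direction; both have to be updated whenever a MoleculeAtom is written or deleted.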

    Read the article

  • Convert args to flat list?

    - by Mark
    I know this is very similar to a few other questions, but I can't quite get this function to work correctly:

        def flatten(*args):
            return list(item for iterable in args for item in iterable)

    The output I'm looking for is:

        flatten(1)        -> [1]
        flatten(1, [2])   -> [1, 2]
        flatten([1, [2]]) -> [1, 2]

    The current function, which I took from another SO answer, doesn't seem to produce correct results at all:

        >>> flatten([1, [2]])
        [1, [2]]

    I wrote the following function, which seems to work for 0 or 1 levels of nesting, but not deeper:

        def flatten(*args):
            output = []
            for arg in args:
                if hasattr(arg, '__iter__'):
                    output += arg
                else:
                    output += [arg]
            return output
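
    A recursive variant (a sketch) handles arbitrary nesting by re-flattening anything that looks iterable; the hasattr check leaves Python 2 strings alone, since they have no __iter__:

        def flatten(*args):
            output = []
            for arg in args:
                if hasattr(arg, '__iter__'):
                    output.extend(flatten(*arg))   # recurse into nested iterables
                else:
                    output.append(arg)
            return output

        flatten(1)          # [1]
        flatten(1, [2])     # [1, 2]
        flatten([1, [2]])   # [1, 2]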

    Read the article

  • Sending data from one Protocol to another Protocol in Twisted?

    - by veb
    Hi! One of my protocols is connected to a server, and I'd like to send its output to the other protocol. I need to access the 'msg' method of ClassA from ClassB, but I keep getting: exceptions.AttributeError: 'NoneType' object has no attribute 'write'. Actual code: http://pastebin.com/MQPhduSY Any ideas please? :-)
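
    The usual Twisted pattern (a sketch; ClassA, ClassB and msg() come from the question, the factory names are assumptions) is to let each factory remember its live protocol instance and route data through it. The 'NoneType' object has no attribute 'write' error typically means msg() ran before ClassA had connected, while its self.transport was still None:

        from twisted.internet import protocol

        class ClassA(protocol.Protocol):
            def connectionMade(self):
                self.factory.instance = self          # remember the live protocol

            def msg(self, data):
                self.transport.write(data)

        class ClassB(protocol.Protocol):
            def dataReceived(self, data):
                peer = self.factory.peer_factory.instance
                if peer is not None:                  # only forward once ClassA is connected
                    peer.msg(data)

        class FactoryA(protocol.ClientFactory):       # assumed factory
            protocol = ClassA
            instance = None

        class FactoryB(protocol.ClientFactory):       # assumed factory holding a reference to FactoryA
            protocol = ClassB
            def __init__(self, peer_factory):
                self.peer_factory = peer_factory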

    Read the article

  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and used to restore the app should something bad happen. I can serialize my table data just fine using the SQLAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. To serialize my data I do this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    To test things out, I truncate MyTable and then attempt to restore using serialized_data:

        from myproject.model import meta
        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling Session.commit after the fact and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and touch the database directly, since SQLAlchemy supports different database engines.
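
    A sketch of one possible restore path (assuming the same Session and metadata setup as above): loads() only rebuilds detached objects, so each one still has to be merged into a session and the session committed:

        from sqlalchemy.ext.serializer import loads
        from myproject.model import meta
        from myproject.model.meta import Session

        restored = loads(serialized_data, meta.metadata, Session)
        for obj in restored:
            Session.merge(obj)    # attach the detached instance to the session
        Session.commit()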

    Read the article

  • Storing a list of objects in GAE

    - by Dominic Bou-Samra
    I need to store some data that looks a little like this:

        xyz 123
        abc 456
        hij 678
        rer 838

    Normally I would just store it with a traditional string-and-integer model and put it in the datastore. But the data changes regularly and is ONLY relevant when looked at as a COLLECTION. So it needs to be stored as either a list of lists or a list of objects, neither of which can really be done without pickling as far as I know. Can anyone help? Even storing it as a text file might work :S
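
    One low-friction option (a sketch; the model and function names are assumptions) is to JSON-encode the whole collection into a single TextProperty, since the data only matters as a unit:

        from google.appengine.ext import db
        from django.utils import simplejson as json   # or the standard json module on newer runtimes

        class Collection(db.Model):
            data = db.TextProperty()

        def save_collection(key_name, pairs):
            # pairs is e.g. [('xyz', 123), ('abc', 456), ('hij', 678), ('rer', 838)]
            Collection(key_name=key_name, data=json.dumps(pairs)).put()

        def load_collection(key_name):
            entity = Collection.get_by_key_name(key_name)
            return json.loads(entity.data) if entity else []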

    Read the article

  • Django: Admin with multiple sites & languages

    - by lazerscience
    Hi everybody! I'm supposed to build some Django apps that allow you to administer multiple sites through one backend. The contrib.sites framework is quite perfect for my purposes: I can run multiple instances of manage.py with different settings for each site. But how should Django's admin deal with different settings for different sites, e.g. if they have different sets of languages or a different default language? So there are some problems to face if you have to work on objects coming from different sites in one admin... I think settings.ADMIN_FOR is supposed to help in cases like this, but there's hardly any documentation about it, and I don't think it's really used in current Django versions (?). Any ideas or solutions are welcome and much appreciated! Thanks a lot...

    Read the article

  • How to get the original variable name of variable passed to a function

    - by Acorn
    Is it possible to get the original variable name of a variable passed to a function? E.g.:

        foobar = "foo"

        def func(var):
            print var.origname

    So that:

        func(foobar)

    returns:

        >> foobar

    EDIT: All I was trying to do was make a function like:

        def log(soup):
            f = open(varname + '.html', 'w')
            print >>f, soup.prettify()
            f.close()

    ...and have the function generate the filename from the name of the variable passed to it. I suppose if it's not possible I'll just have to pass the variable and the variable's name as a string each time.
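
    There is no reliable way to recover the caller's variable name, so the usual workaround (a sketch of the idea already mentioned above) is to pass the name explicitly, or to let keyword arguments carry the names:

        def log(name, soup):
            f = open(name + '.html', 'w')
            print >>f, soup.prettify()
            f.close()

        log('foobar', foobar)

        # or: keyword arguments carry the names, so the call site stays readable
        def log_all(**named_soups):
            for name, soup in named_soups.items():
                log(name, soup)

        log_all(foobar=foobar)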

    Read the article

  • Best way to get back to using the power of lxml after having to use a regex to find something in an

    - by PyNEwbie
    I am trying to rip some text out of a large number of HTML documents (numbers in the hundreds of thousands). The documents are really forms, but they are prepared by a very large group of different organizations, so there is significant variation in how they create the documents. For example, the documents are divided into chapters. I might want to extract the contents of Chapter 5 from every document so I can analyze the content of that chapter. Initially I thought this would be easy, but it turns out that the authors might use a set of non-nested tables throughout the document to hold the content, so that Chapter n could be displayed using td tags inside a table. Or they might use other elements such as p tags, H tags, div tags or any other block-level element.

    After trying repeatedly to use lxml to identify the beginning and end of each chapter, I have determined that it is a lot cleaner to use a regular expression, because in every case, no matter what the enclosing HTML element is, the chapter label is always of the form >Chapter #. It is a little more complicated in that the whitespace might be represented in different ways (&nbsp;, &#160; or just spaces). Nonetheless it was trivial to write a regular expression to identify the beginning of each section. (The beginning of one section is the end of the previous section.)

    But now I want to use lxml to get the text out. My thought is that I have really no choice but to walk along my string to find the close tag of the element that encloses the text I am using to find the relevant section. Here is one example, where the element holding the chapter name is a div:

        <div style="DISPLAY: block; MARGIN-LEFT: 0pt; TEXT-INDENT: 0pt; MARGIN-RIGHT: 0pt" align="left"><font style="DISPLAY: inline; FONT-WEIGHT: bold; FONT-SIZE: 10pt; FONT-FAMILY: Times New Roman">Chapter 1.&#160;&#160;&#160;Our Beginnings.</font></div>

    So I am imagining that I would begin at the location where I found the match for Chapter 1 and set up a regular expression to find the next </div|</td|</p|</h1... At that point I have identified the type of element holding my chapter heading, and I can use the same logic to find all of the text within that element: set up a regular expression to help me mark out >Chapter 1.&#160;&#160;&#160;Our Beginnings.<

    Having identified where Chapter 1 begins, I can do the same for Chapter 2 (which is where Chapter 1 ends). Then I would snip the document, beginning at the opening of the element that marks where Chapter 1 begins and ending just before the opening of the element that marks where Chapter 2 begins. The resulting string would then be fed to lxml to use its power to get the content.

    I am going to all of this trouble because I have read over and over: never use a regular expression to extract content from HTML documents, and I have not hit on a way to be as accurate with lxml in identifying the starting and ending locations of the text I want to extract. For example, I can never be certain that the subtitle of Chapter 1 is "Our Beginnings"; it could be "Our Red Canary". Let me say that I spent two solid days trying with lxml to be confident that I had the beginning and ending elements, and I could only be accurate less than 60% of the time, whereas a very short regular expression has given me better than 95% success.

    I have a tendency to make things more complicated than necessary, so I am wondering if anyone has seen or solved a similar problem and has an approach (not the details, mind you) they would like to offer.
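
    A sketch of the hybrid approach described above (the function and pattern names are illustrative): use the regex only to locate the chapter markers, slice the raw HTML between consecutive matches, and let lxml parse and extract text from each slice:

        import re
        import lxml.html

        chapter_re = re.compile(r'>\s*Chapter\s+\d+', re.IGNORECASE)

        def chapter_texts(html):
            # start offsets of each ">Chapter N" marker; a chapter ends where the next begins
            starts = [m.start() for m in chapter_re.finditer(html)]
            bounds = zip(starts, starts[1:] + [len(html)])
            for begin, end in bounds:
                fragment = html[begin + 1:end]          # drop the leading '>' of the marker
                doc = lxml.html.fromstring(fragment)    # lxml repairs the unbalanced markup
                yield doc.text_content()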

    Read the article

  • Convert a sequence of sequences to a dictionary and vice-versa

    - by louis
    One way to manually persist a dictionary to a database is to flatten it into a sequence of sequences and pass the sequence as an argument to cursor.executemany(). The opposite is also useful, i.e. reading rows from a database and turning them into dictionaries for later use. What's the best way to go from myseq to mydict and from mydict to myseq?

        >>> myseq = ((0,1,2,3), (4,5,6,7), (8,9,10,11))
        >>> mydict = {0: (1, 2, 3), 8: (9, 10, 11), 4: (5, 6, 7)}
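
    Two one-liners (a sketch) cover both directions, taking the first element of each row as the key and the rest as the value tuple:

        myseq = ((0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 10, 11))

        mydict = dict((row[0], tuple(row[1:])) for row in myseq)
        # {0: (1, 2, 3), 4: (5, 6, 7), 8: (9, 10, 11)}

        back_to_seq = tuple((key,) + value for key, value in mydict.iteritems())
        # ((0, 1, 2, 3), (8, 9, 10, 11), (4, 5, 6, 7))   (dict order is arbitrary)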

    Read the article
