Search Results

Search found 13534 results on 542 pages for 'python 2 6'.

  • Creating a message box as a sheet on Mac OS in PyQt

    - by user971306
    I used a message box as a separate dialog instead of a sheet on Mac OS; now I am working on spawning the message box as a sheet instead of a separate window. I have tried making the message box window-modal:

        messagebox.setWindowModality(QtCore.Qt.WindowModal)

    and setting the Sheet window flag on both the parent dialog and the message box:

        parentDialog.setWindowFlags(QtCore.Qt.Sheet)
        messagebox.setWindowFlags(QtCore.Qt.Sheet)

    But these calls do not produce a sheet; the message box still appears as a separate dialog. Does anyone have an idea of how to solve this?
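
    A minimal sketch of the combination that usually yields a sheet on Mac OS X (assuming PyQt4 and an existing parent window; show_sheet_message is a hypothetical helper): construct the QMessageBox with the parent, make it window-modal, set the Qt.Sheet flag, and show it non-modally rather than through the static QMessageBox helpers.

        from PyQt4 import QtGui, QtCore

        def show_sheet_message(parent, text):
            # the parent is required: a sheet is always attached to a window
            box = QtGui.QMessageBox(parent)
            box.setText(text)
            box.setWindowModality(QtCore.Qt.WindowModal)
            box.setWindowFlags(QtCore.Qt.Sheet)
            box.show()   # non-blocking; connect to box.finished() to get the result
            return box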

  • Sending data from one Protocol to another Protocol in Twisted?

    - by veb
    Hi! One of my protocols is connected to a server, and I'd like to send its output to the other protocol. I need to access the 'msg' method of ClassA from ClassB, but I keep getting: exceptions.AttributeError: 'NoneType' object has no attribute 'write'. Actual code: http://pastebin.com/MQPhduSY Any ideas please? :-)
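
    A minimal sketch of one common pattern (hypothetical class names; the pastebin code is not reproduced here). The AttributeError usually means a transport or protocol reference is still None because the other connection has not been established yet, so one fix is to let each factory remember its live protocol instance and route data through the factories, guarding against the not-yet-connected case.

        from twisted.internet import reactor, protocol

        class SinkProtocol(protocol.Protocol):
            def connectionMade(self):
                self.factory.instance = self          # register the live instance

        class SinkFactory(protocol.ClientFactory):
            protocol = SinkProtocol
            instance = None
            def send(self, data):
                if self.instance is not None:         # guard: may not be connected yet
                    self.instance.transport.write(data)

        class SourceProtocol(protocol.Protocol):
            def dataReceived(self, data):
                # forward whatever this connection receives to the other protocol
                self.factory.peer_factory.send(data)

        class SourceFactory(protocol.ClientFactory):
            protocol = SourceProtocol
            def __init__(self, peer_factory):
                self.peer_factory = peer_factory

        # usage (hypothetical hosts/ports):
        #   sink = SinkFactory()
        #   reactor.connectTCP('sink.example.com', 9000, sink)
        #   reactor.connectTCP('source.example.com', 9001, SourceFactory(sink))
        #   reactor.run()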

  • How to install Visual Python in Ubuntu 10.04? [closed]

    - by Glen
    I am trying to do a physics problem in Python. I need to install Visual Python, because I get an error saying the visual library cannot be found when I type from visual import *. The documentation on the Visual Python site is totally useless. I have gone into the Synaptic package manager and installed python-visual, but I still get the same error. Can someone please help?

  • How do you do efficient bulk index lookups?

    - by Liron Shapira
    I have these entity kinds: Molecule, Atom, MoleculeAtom. Given a list(molecule_ids) whose length is in the hundreds, I need to get a dict of the form {molecule_id: list(atom_ids)}. Likewise, given a list(atom_ids) whose length is in the hundreds, I need to get a dict of the form {atom_id: list(molecule_ids)}. Both of these bulk lookups need to happen really fast. Right now I'm doing something like:

        atom_ids_by_molecule_id = {}
        for molecule_id in molecule_ids:
            moleculeatoms = MoleculeAtom.all().filter('molecule =', db.Key.from_path('molecule', molecule_id)).fetch(1000)
            atom_ids_by_molecule_id[molecule_id] = [
                MoleculeAtom.atom.get_value_for_datastore(ma).id() for ma in moleculeatoms
            ]

    Like I said, len(molecule_ids) is in the hundreds. I need to do this kind of bulk index lookup on almost every single request, and I need it to be FAST; right now it's too slow. Ideas:

    Will using a Molecule.atoms ListProperty do what I need? Consider that I am storing additional data on the MoleculeAtom node, and remember it's equally important for me to do the lookup in both the molecule-to-atom and the atom-to-molecule direction.

    Caching? I tried memcaching lists of atom IDs keyed by molecule ID, but I have tons of atoms and molecules, and the cache can't fit them all.

    How about denormalizing the data by creating a new entity kind whose key name is a molecule ID and whose value is a list of atom IDs? The idea is that calling db.get on 500 keys is probably faster than looping through 500 filtered fetches, right?
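
    A minimal sketch of the denormalization idea (MoleculeAtomIndex is a hypothetical kind name; this assumes the google.appengine.ext.db API used in the question): one entity per molecule, keyed by the molecule ID and holding its atom IDs, so a single batched db.get() over a few hundred keys replaces a few hundred filtered fetches. A mirror-image kind keyed by atom ID covers the other direction.

        from google.appengine.ext import db

        class MoleculeAtomIndex(db.Model):
            # key_name is str(molecule_id)
            atom_ids = db.ListProperty(int)

        def save_index(molecule_id, atom_ids):
            MoleculeAtomIndex(key_name=str(molecule_id), atom_ids=atom_ids).put()

        def atom_ids_by_molecule(molecule_ids):
            keys = [db.Key.from_path('MoleculeAtomIndex', str(mid)) for mid in molecule_ids]
            entities = db.get(keys)            # one batched round trip; None for missing keys
            result = {}
            for mid, entity in zip(molecule_ids, entities):
                result[mid] = entity.atom_ids if entity is not None else []
            return result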

  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and used to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    In order to test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data:

        from myproject.model import meta

        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling Session.commit after the fact, and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and directly touch the database, since SqlAlchemy supports different database engines.
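
    A minimal sketch of one way to push the deserialized rows back into the database (assuming the same Session and metadata as above): since dumps(q.all()) serialized a plain list of instances, loads() returns that list, and each detached instance can be merged into the session before a single commit.

        from sqlalchemy.ext.serializer import loads

        from myproject.model import meta
        from myproject.model.meta import Session

        restored = loads(serialized_data, meta.metadata, Session)   # a list, because dumps(q.all()) was used
        for obj in restored:
            Session.merge(obj)        # re-attach the detached instance to this session
        Session.commit()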

  • Regex for Matching First Alphanumeric Character skipping (The |An? )

    - by TheLizardKing
    I have a list of artists, albums and tracks that I want to sort using the first letter of their respective names. The issue arises when I want to ignore "The ", "A ", "An " and various other non-alphanumeric prefixes (talking to you, "Weird Al" Yankovic and [dialog]). Django has a nice start, '^(An?|The) +', but I want to ignore those and a few others of my choice. I am doing this in Django, using a MySQL db with utf8_bin collation.
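
    A Python-side sketch of one option (assumptions: the prefixes to ignore are The/A/An plus any leading non-alphanumeric characters, ASCII letters and digits are enough, and it is acceptable to compute the sort key in Python rather than in MySQL):

        import re

        # leading articles followed by whitespace, or any single non-alphanumeric character
        _SKIP = re.compile(r'^(?:(?:the|an?)\s+|[^0-9a-z])*', re.IGNORECASE)

        def sort_letter(title):
            stripped = _SKIP.sub('', title, count=1)
            return stripped[:1].upper()

        print sort_letter('The Beatles')           # B
        print sort_letter('"Weird Al" Yankovic')   # W
        print sort_letter('[dialog]')              # D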

  • Storing a list of objects in GAE

    - by Dominic Bou-Samra
    I need to store some data that looks a little like this:

        xyz 123
        abc 456
        hij 678
        rer 838

    Normally I would just store it with a traditional string-and-integer model and put it in the datastore. But the data changes regularly, and is ONLY relevant when looked at as a COLLECTION. So it needs to be stored as either a list of lists or a list of objects, neither of which can really be done without pickling as far as I know. Can anyone help? Even storing it as a text file may work :S
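
    A minimal sketch of one pickle-free option (assumptions: the pairs are small and are always read and written as a whole collection, PairCollection is a hypothetical kind name, and the simplejson module bundled with the SDK's Django copy is available; the json module on newer runtimes works the same way): store the collection as a single JSON blob in one entity.

        from django.utils import simplejson as json
        from google.appengine.ext import db

        class PairCollection(db.Model):
            data = db.TextProperty()

        def save_pairs(key_name, pairs):
            # pairs is e.g. [('xyz', 123), ('abc', 456), ('hij', 678), ('rer', 838)]
            PairCollection(key_name=key_name, data=json.dumps(pairs)).put()

        def load_pairs(key_name):
            entity = PairCollection.get_by_key_name(key_name)
            return [tuple(p) for p in json.loads(entity.data)] if entity else []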

  • Which webserver to use with bottle?

    - by luc
    Bottle can use several webservers: the built-in HTTP development server, with support for paste, fapws3, flup, cherrypy or any other WSGI-capable server. I am using bottle for a desktop app, and I guess that the development server is enough in this case. I would like to know if some of you have experience with one of the alternative servers. Which server for which purpose? Thanks in advance.

  • In Elixir or SQLAlchemy, is there a way to also store a comment for each field in my entities?

    - by kchau
    Our project is basically a web interface to several systems of record. We have many tables mapped, and the names of their columns aren't as well named and intuitive as we'd like... The users would like to know what data fields are available (i.e. what has been mapped from the database), but it's pointless to just give them column names like USER_REF1, USER_REF2, etc. So I was wondering: is there a way to provide a comment in the declaration of my field? E.g.:

        class SegregationCode(Entity):
            using_options(tablename="SEGREGATION_CODES")
            segCode = Field(String(20), colname="CODE", ...
                            primary_key=True)  # Have a comment attr too?

    If not, any suggestions?
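
    A minimal sketch of one possibility (assumptions: Elixir forwards unrecognised keyword arguments to the underlying SQLAlchemy Column, and Column's free-form doc string is an acceptable place for the comment):

        from elixir import Entity, Field, String, using_options

        class SegregationCode(Entity):
            using_options(tablename="SEGREGATION_CODES")
            segCode = Field(String(20), colname="CODE", primary_key=True,
                            doc="Segregation code assigned by the legacy system")

        # The comment can then be read back from the mapped column, e.g.:
        #     SegregationCode.table.c.CODE.doc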

  • Django ModelFormSet with Google app engine

    - by Eric
    I'm using Django with Google App Engine, via the Google-furnished Django App Engine Helper project. I'm attempting to create a Django model formset like this:

        # MyModel inherits from BaseModel
        MyFormSet = modelformset_factory(models.MyModel)

    However, it fails with this error:

        'ModelOptions' object has no attribute 'fields'

    Apparently modelformset_factory() is expecting MyModel to implement a 'fields' accessor. Has anybody successfully used a modelformset with the GAE datastore? Or is this a fundamental incompatibility between Django and GAE?

  • Tokenize a command string

    - by pocoa
    I have a string like this:

        command "http://www.mysite.com" some_param="string param" some_param2=50

    I want to tokenize this string into:

        command
        "http://www.mysite.com"
        some_param="string param"
        some_param2=50

    I know it's possible to split on spaces, but these parameters can also be separated by commas, like:

        command "http://www.mysite.com", some_param="string param", some_param2=50

    I tried to do it like this: \w+\=?\"?.+\"? but it didn't work.
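
    A minimal sketch of one regex-based approach (assumptions: tokens are bare words, double-quoted strings, or key=value pairs whose value may be quoted, and separators are spaces and/or commas outside the quotes):

        import re

        # key="quoted value" | "quoted string" | any run without whitespace or commas
        TOKEN = re.compile(r'\w+="[^"]*"|"[^"]*"|[^\s,]+')

        def tokenize(line):
            return TOKEN.findall(line)

        line = 'command "http://www.mysite.com", some_param="string param", some_param2=50'
        print tokenize(line)
        # ['command', '"http://www.mysite.com"', 'some_param="string param"', 'some_param2=50']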

  • Reducing size of a character array in Numpy

    - by Morgoth
    Given a character array:

        In [21]: x = np.array(['a       ', 'bb      ', 'cccc    '])

    one can remove the whitespace using:

        In [22]: np.char.strip(x)
        Out[22]: array(['a', 'bb', 'cccc'], dtype='|S8')

    but is there a way to also shrink the width of the column to the minimum required size, in the above case |S4?
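
    A minimal sketch (assumption: recomputing the required width from the stripped strings and re-casting with astype is acceptable):

        import numpy as np

        x = np.array(['a       ', 'bb      ', 'cccc    '])   # dtype |S8
        stripped = np.char.strip(x)
        width = max(len(s) for s in stripped)                # longest remaining string, here 4
        shrunk = stripped.astype('|S%d' % width)
        print shrunk.dtype                                   # |S4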

  • Trying to set up nested while loops using a boolean switch

    - by thorn100
    First time posting. I'm trying to set up a while loop that will ask the user for the employee name, hours worked and hourly wage until the user enters 'DONE'. Eventually I'll modify the code to calculate the weekly pay and write it to a list, but one thing at a time. The problem is that once the main while loop executes once, it just stops. It doesn't error out, it just stops, and I have to kill the program. I want it to ask the three questions again and again until the user is finished. Thoughts? Please note that this is just an exercise and not meant for any real-world application.

        def getName():
            """Asks for the employee's full name"""
            firstName=raw_input("\nWhat is your first name? ")
            lastName=raw_input("\nWhat is your last name? ")
            fullName=firstName.title() + " " + lastName.title()
            return fullName

        def getHours():
            """Asks for the number of hours the employee worked"""
            hoursWorked=0
            while int(hoursWorked)<1 or int(hoursWorked) > 60:
                hoursWorked=raw_input("\nHow many hours did the employee work: ")
                if int(hoursWorked)<1 or int(hoursWorked) > 60:
                    print "Please enter an integer between 1 and 60."
                else:
                    return hoursWorked

        def getWage():
            """Asks for the employee's hourly wage"""
            wage=0
            while float(wage)<6.00 or float(wage)>20.00:
                wage=raw_input("\nWhat is the employee's hourly wage: ")
                if float(wage)<6.00 or float(wage)>20.00:
                    print ("Please enter an hourly wage between $6.00 and $20.00")
                else:
                    return wage

        ##sentry variables
        employeeName=""
        employeeHours=0
        employeeWage=0
        booleanDone=False

        #Enter employee info
        print "Please enter payroll information for an employee or enter 'DONE' to quit."
        while booleanDone==False:
            while employeeName=="":
                employeeName=getName()
                if employeeName.lower()=="done":
                    booleanDone=True
                    break
            print "The employee's name is", employeeName
            while employeeHours==0:
                employeeHours=getHours()
                if employeeHours.lower()=="done":
                    booleanDone=True
                    break
            print employeeName, "worked", employeeHours, "this week."
            while employeeWage==0:
                employeeWage=getWage()
                if employeeWage.lower()=="done":
                    booleanDone=True
                    break
            print employeeName + "'s hourly wage is $" + employeeWage

  • Designing a Tag table that tells how many times it's used

    - by Satoru.Logic
    Hi, all. I am trying to design a tagging system with a model like this:

        Tag:
            content = CharField
            creator = ForeignKey
            used = IntegerField

    There is a many-to-many relationship between tags and the things that are tagged. Every time I insert a record into the association table, Tag.used is incremented by one, and it is decremented by one in case of deletion. Tag.used is maintained because I want to speed up answering the question 'How many times is this tag used?'. However, this obviously slows insertion down. Please tell me how to improve this design. Thanks in advance.
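
    A minimal sketch of one common compromise (assuming Django models; TaggedItem is a hypothetical, simplified stand-in for the association table): keep the denormalized counter, but maintain it with an atomic F() expression so each insert or delete costs only one extra UPDATE rather than a read-modify-write round trip.

        from django.db import models
        from django.db.models import F

        class Tag(models.Model):
            content = models.CharField(max_length=64, unique=True)
            creator = models.ForeignKey('auth.User')
            used = models.IntegerField(default=0, db_index=True)

        class TaggedItem(models.Model):
            tag = models.ForeignKey(Tag)
            object_id = models.PositiveIntegerField()

        def add_tag(tag, object_id):
            TaggedItem.objects.create(tag=tag, object_id=object_id)
            Tag.objects.filter(pk=tag.pk).update(used=F('used') + 1)   # atomic increment

        def remove_tag(tag, object_id):
            TaggedItem.objects.filter(tag=tag, object_id=object_id).delete()
            Tag.objects.filter(pk=tag.pk).update(used=F('used') - 1)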

  • Convert a sequence of sequences to a dictionary and vice-versa

    - by louis
    One way to manually persist a dictionary to a database is to flatten it into a sequence of sequences and pass the sequence as an argument to cursor.executemany(). The opposite is also useful, i.e. reading rows from a database and turning them into dictionaries for later use. What's the best way to go from myseq to mydict and from mydict to myseq?

        >>> myseq = ((0,1,2,3), (4,5,6,7), (8,9,10,11))
        >>> mydict = {0: (1, 2, 3), 8: (9, 10, 11), 4: (5, 6, 7)}
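
    A minimal sketch (assuming, as in the example data, that the first element of each row is the key and the remaining elements form the value tuple):

        myseq = ((0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 10, 11))

        # sequence of sequences -> dict
        mydict = dict((row[0], tuple(row[1:])) for row in myseq)
        # {0: (1, 2, 3), 8: (9, 10, 11), 4: (5, 6, 7)}

        # dict -> sequence of sequences (sorted for a deterministic row order)
        myseq2 = tuple((key,) + value for key, value in sorted(mydict.items()))
        # ((0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 10, 11))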

  • In Django, how to create tables from an SQL file when syncdb is run

    - by Sidney
    Hi, how do I make syncdb execute SQL queries (for table creation) defined by me, rather than generating the tables automatically? I'm looking for this solution because some particular models in my app represent SQL table views of a legacy database table. So I've created their SQL views in my Django DB like this:

        CREATE VIEW legacy_series AS SELECT * FROM legacy.series;

    I have a reverse-engineered model that represents the above view/legacy table. But whenever I run syncdb, I have to create all the views first by running SQL scripts; otherwise syncdb simply creates tables for them (if a view is not found). How do I make syncdb run the above-mentioned SQL?
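
    A minimal sketch of one approach (assumptions: Django 1.1+, and LegacySeries is a hypothetical name for the reverse-engineered model): mark the model as unmanaged so syncdb does not create a table for it, and run the view-creating SQL from a post_syncdb signal handler instead.

        from django.db import connection, models
        from django.db.models.signals import post_syncdb

        class LegacySeries(models.Model):
            # ... reverse-engineered fields go here ...

            class Meta:
                managed = False              # syncdb will not CREATE TABLE for this model
                db_table = 'legacy_series'

        def create_views(sender, **kwargs):
            # CREATE OR REPLACE keeps this idempotent; post_syncdb fires once per app
            cursor = connection.cursor()
            cursor.execute("CREATE OR REPLACE VIEW legacy_series AS SELECT * FROM legacy.series")

        post_syncdb.connect(create_views)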

  • downloading archives response corrupts files

    - by panchicore
        wrapper = FileWrapper(file("C:/pics.zip"))
        content_type = mimetypes.guess_type(result.files)[0]
        response = HttpResponse(wrapper, content_type=content_type)
        response['Content-Length'] = os.path.getsize("C:/pics.zip")
        response['Content-Disposition'] = "attachment; filename=pics.zip"
        return response

    pics.zip is a valid file with 3 pictures inside. The server serves the download, but when I go to open the zip, WinRAR says: This archive is either in unknown format or damaged! If I change the file path and file name to a valid image, C:/pic.jpg is downloaded damaged too. What am I missing in this download view?
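
    A sketch of one likely culprit (an assumption based on the Windows path in the question): file() opens the archive in text mode by default, which corrupts binary data on Windows, so opening it with 'rb' is the usual fix. The view below is a hypothetical reconstruction with guess_type() pointed at the same path.

        import mimetypes
        import os

        from django.core.servers.basehttp import FileWrapper
        from django.http import HttpResponse

        def download_zip(request):
            path = "C:/pics.zip"
            wrapper = FileWrapper(open(path, 'rb'))              # binary mode, not text mode
            content_type = mimetypes.guess_type(path)[0]
            response = HttpResponse(wrapper, content_type=content_type)
            response['Content-Length'] = os.path.getsize(path)
            response['Content-Disposition'] = "attachment; filename=pics.zip"
            return response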

  • How to set up Aptana Studio 3 themes in Pydev

    - by willy1234x1
    I've installed the Aptana Studio 3 preview and noticed it has support for themes (such as a Bespin style or Ruby Envy), and I'd love to use the Bespin one in Pydev, but so far I've had no luck getting it to work. Does anyone have a clue as to how to get it to work? Video showing the themes in action.

  • pyplot: really slow creating heatmaps

    - by cvondrick
    I have a loop that executes its body about 200 times. In each iteration it does a sophisticated calculation, and then, for debugging, I wish to produce a heatmap of an NxM matrix. But generating this heatmap is unbearably slow and significantly slows down an already slow algorithm. My code is along the lines of:

        import numpy
        import matplotlib.pyplot as plt

        for i in range(200):
            matrix = complex_calculation()
            plt.set_cmap("gray")
            plt.imshow(matrix)
            plt.savefig("frame{0}.png".format(i))

    The matrix, from numpy, is not huge --- 300 x 600 doubles. Even if I do not save the figure and instead update an on-screen plot, it's even slower. Surely I must be abusing pyplot. (Matlab can do this, no problem.) How do I speed this up?
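
    A minimal sketch of one way to avoid rebuilding the whole figure 200 times (assuming the non-interactive Agg backend is acceptable because only PNG frames are needed): create the figure and the image artist once, then swap in the new data on each iteration.

        import matplotlib
        matplotlib.use("Agg")              # no GUI work, just rasterise to PNG
        import matplotlib.pyplot as plt
        import numpy

        fig = plt.figure()
        ax = fig.add_subplot(111)
        image = ax.imshow(numpy.zeros((300, 600)), cmap="gray")

        for i in range(200):
            matrix = complex_calculation()     # the expensive step from the question
            image.set_data(matrix)
            image.autoscale()                  # rescale the gray levels to the new data
            fig.savefig("frame{0}.png".format(i))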

  • Best way to get back to using the power of lxml after having to use a regex to find something in an HTML document

    - by PyNEwbie
    I am trying to rip some text out of a large number of HTML documents (numbering in the hundreds of thousands). The documents are really forms, but they are prepared by a very large group of different organizations, so there is significant variation in how they create the documents. For example, the documents are divided into chapters. I might want to extract the contents of Chapter 5 from every document so I can analyze the content of that chapter. Initially I thought this would be easy, but it turns out that the authors might use a set of non-nested tables throughout the document to hold the content, so that Chapter n could be displayed using td tags inside a table. Or they might use other elements such as p tags, H tags, div tags or any other block-level element.

    After trying repeatedly to use lxml to help me identify the beginning and end of each chapter, I determined that it is a lot cleaner to use a regular expression, because in every case, no matter what the enclosing HTML element is, the chapter label is always of the form >Chapter #. It is a little more complicated in that there might be some whitespace or a non-breaking space represented in different ways (&#160; entities or just spaces). Nonetheless it was trivial to write a regular expression to identify the beginning of each section. (The beginning of one section is the end of the previous section.)

    But now I want to use lxml to get the text out. My thought is that I really have no choice but to walk along my string to find the close tag for the element that encloses the text I am using to find the relevant section. Here is one example, where the element holding the chapter name is a div:

        <div style="DISPLAY: block; MARGIN-LEFT: 0pt; TEXT-INDENT: 0pt; MARGIN-RIGHT: 0pt" align="left"><font style="DISPLAY: inline; FONT-WEIGHT: bold; FONT-SIZE: 10pt; FONT-FAMILY: Times New Roman">Chapter 1.&#160;&#160;&#160;Our Beginnings.</font></div>

    So I am imagining that I would begin at the location where I found the match for Chapter 1 and set up a regular expression to find the next </div, </td, </p, </h1, ... At this point I have identified the type of element holding my chapter heading, and I can use the same logic to find all of the text that is within that element: that is, set up a regular expression to help me mark from >Chapter 1.&#160;&#160;&#160;Our Beginnings.< So I have identified where my Chapter 1 begins, and I can do the same for Chapter 2 (which is where Chapter 1 ends).

    Now I am imagining that I would snip the document beginning at the opening of the element that indicates where Chapter 1 begins and ending just before the opening of the element that indicates where Chapter 2 begins. The string that I have identified would then be fed to lxml to use its power to get the content.

    I am going to all of this trouble because I have read over and over: never use a regular expression to extract content from HTML documents, and I have not hit on a way to be as accurate with lxml in identifying the starting and ending locations of the text I want to extract. For example, I can never be certain that the subtitle of Chapter 1 is "Our Beginnings"; it could be "Our Red Canary". Let me say that I spent two solid days trying with lxml to be confident that I had the beginning and ending elements, and I could only be accurate <60% of the time, whereas a very short regular expression has given me better than 95% success.
    I have a tendency to make things more complicated than necessary, so I am wondering if anyone has seen or solved a similar problem, and whether they have an approach (not the details, mind you) that they would like to offer.
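
    A minimal sketch of one lxml-only alternative (assumptions: every chapter heading contains the text "Chapter N", the interesting block-level containers are limited to a small set of tags, and 'form.html' stands in for one of the documents): parse the whole file, find the block-level ancestor of each heading, and treat consecutive anchors as chapter boundaries instead of slicing the raw string.

        import re
        import lxml.html

        CHAPTER = re.compile(r'Chapter\s+\d+')
        BLOCKS = ('div', 'td', 'p', 'h1', 'h2', 'h3')

        def chapter_anchors(tree):
            """Return the block-level elements that contain each chapter heading."""
            anchors = []
            for element in tree.iter():
                text = (element.text or u'').replace(u'\xa0', u' ')   # &#160; parses to \xa0
                if CHAPTER.search(text):
                    block = element
                    while block is not None and block.tag not in BLOCKS:
                        block = block.getparent()
                    if block is not None:
                        anchors.append(block)
            return anchors

        tree = lxml.html.parse('form.html').getroot()
        anchors = chapter_anchors(tree)
        # Everything between consecutive anchors is one chapter; when the anchors share
        # a parent, itersiblings() gives the chapter's elements and text_content()
        # extracts their text.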

  • Dynamic Spacer in ReportLab

    - by ptikobj
    I'm automatically generating a PDF file with Platypus that has dynamic content. This means that the length of the text content (which sits directly at the bottom of the PDF file) may vary. A page break can then occur when the content is too long, because I use a "static" spacer:

        s = Spacer(width=0, height=23.5*cm)

    As I always want to have only one page, I somehow need to set the height of the Spacer dynamically, so that it takes the "rest" of the space on the page as its height. Now, how do I get the "rest" of the height that is left on my page?
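
    A minimal sketch of one way to size the spacer (assumptions: A4 with a SimpleDocTemplate, build_bottom_content() is a hypothetical function returning the variable-length flowables, and the wrap() heights are close enough even though spaceBefore/spaceAfter are ignored): ask each flowable how tall it will render and give the Spacer whatever height is left on the page.

        from reportlab.lib.pagesizes import A4
        from reportlab.platypus import SimpleDocTemplate, Spacer

        def filler_spacer(doc, flowables):
            # doc.width/doc.height are the frame dimensions inside the margins
            used = sum(f.wrap(doc.width, doc.height)[1] for f in flowables)
            return Spacer(0, max(doc.height - used, 0))

        doc = SimpleDocTemplate("out.pdf", pagesize=A4)
        content = build_bottom_content()       # hypothetical: the dynamic flowables
        story = [filler_spacer(doc, content)] + content
        doc.build(story)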

  • Convert args to flat list?

    - by Mark
    I know this is very similar to a few other questions, but I can't quite get this function to work correctly:

        def flatten(*args):
            return list(item for iterable in args for item in iterable)

    The output I'm looking for is:

        flatten(1)        -> [1]
        flatten(1,[2])    -> [1, 2]
        flatten([1,[2]])  -> [1, 2]

    The current function, which I took from another SO answer, doesn't seem to produce correct results at all:

        >>> flatten([1,[2]])
        [1, [2]]

    I wrote the following function, which seems to work for 0 or 1 levels of nesting, but not deeper:

        def flatten(*args):
            output = []
            for arg in args:
                if hasattr(arg, '__iter__'):
                    output += arg
                else:
                    output += [arg]
            return output
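
    A minimal sketch of a recursive variant (assumption: strings, if they ever appear, should be treated as atoms rather than iterated character by character):

        def flatten(*args):
            output = []
            for arg in args:
                if hasattr(arg, '__iter__') and not isinstance(arg, basestring):
                    output.extend(flatten(*arg))      # recurse into nested iterables
                else:
                    output.append(arg)
            return output

        print flatten(1)            # [1]
        print flatten(1, [2])       # [1, 2]
        print flatten([1, [2]])     # [1, 2]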
