Search Results

Search found 13683 results on 548 pages for 'python sphinx'.


  • django join-like expansion of queryset

    - by jimbob
    I have a list of Persons, each of which has multiple fields that I usually filter on, using the object_list generic view. Each Person can have multiple Comments attached to it, each with a datetime and a text string. What I ultimately want is the option to filter comments based on dates.

        class Person(models.Model):
            name = models.CharField("Name", max_length=30)
            ## has ~30 other fields, usually filtered on as well

        class Comment(models.Model):
            date = models.DateTimeField()
            person = models.ForeignKey(Person)
            comment = models.TextField("Comment Text", max_length=1023)

    What I want to do is get a queryset like

        Person.objects.filter(comment__date__gt=date(2011,1,1)).order_by('comment__date')

    send that queryset to object_list, and see only the comments ordered by date, with only so many objects on a page. E.g., if "Person A" has comments on 12/3/11, 1/2/11 and 1/5/11, "Person B" has no comments, and "Person C" has a comment on 1/3, I would see:

        "Person A", 1/2 - comment
        "Person C", 1/3 - comment
        "Person A", 1/5 - comment

    I would strongly prefer not to switch to filtering based on Comment.objects.filter(), as that would make me repeat large sections of code in both the view and the template. Right now, if I execute the queryset above, I get a queryset returning (PersonA, PersonC, PersonA), but if I render that in a template each person's comment_set contains all of their comments, even the ones outside the date range. Ideally there would be some sort of functionality where I could expand a Person queryset's comment_set into a larger queryset that can be sorted and ordered based on the comment and passed to the object_list generic view. This is normally fairly simple to do in SQL with a JOIN, but I don't want to abandon the ORM, which I use everywhere else.
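    A minimal sketch of the idiomatic ORM equivalent of that JOIN, assuming the Comment-side query is acceptable after all: query Comment directly and pull the related Person in with select_related(), so the template still sees every Person field without extra queries. The cutoff date and template names are illustrative only.

        from datetime import date

        comments = (Comment.objects
                    .filter(date__gt=date(2011, 1, 1))
                    .select_related('person')       # joins Person into the same query
                    .order_by('date'))

        # Each item is one comment row, already ordered by date; in the template,
        # {{ c.person.name }}, {{ c.date }} and {{ c.comment }} reproduce the
        # "Person A", 1/2 - comment listing, and object_list can paginate it
        # like any other queryset.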

    Read the article

  • combine lines from 2 prints to single line and insert into mysql database

    - by bleomycin
    Hello everyone, I currently have this:

        import feedparser

        d = feedparser.parse('http://store.steampowered.com/feeds/news.xml')
        for i in range(10):
            print d.entries[i].title
            print d.entries[i].date

    How would I go about making it so that the title and date are on the same line? Also, it doesn't need to print; I just have that in there for testing. I would like to dump this output into a MySQL db with the title and date. Any help is greatly appreciated!
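    A sketch of one way this might look, assuming MySQLdb is the driver: passing both values to a single print statement puts them on one line, and executemany() handles the inserts. The table name steam_news, its columns, and the connection credentials are made up for the example.

        import feedparser
        import MySQLdb

        d = feedparser.parse('http://store.steampowered.com/feeds/news.xml')
        rows = [(entry.title, entry.date) for entry in d.entries[:10]]

        for title, date in rows:
            print title, '-', date          # one print statement -> one line

        # Hypothetical schema/credentials; adjust to your own database.
        conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='news')
        cur = conn.cursor()
        cur.executemany("INSERT INTO steam_news (title, date) VALUES (%s, %s)", rows)
        conn.commit()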

    Read the article

  • Django dictionary in templates: Grab key from another objects attribute

    - by Jordan Messina
    I have a dictionary called number_devices that I'm passing to a template; the dictionary keys are the ids of a list of objects I'm also passing to the template (called implementations). I'm iterating over the list of objects and then trying to use the object.id to get a value out of the dict, like so:

        {% for implementation in implementations %}
            {{ number_devices.implementation.id }}
        {% endfor %}

    Unfortunately number_devices.implementation is evaluated first, then the result's .id is evaluated, obviously returning and displaying nothing. I can't use parentheses like

        {{ number_devices.(implementation.id) }}

    because I get a parse error. How do I get around this annoyance in Django templates? Thanks for any help!
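    A sketch of the usual workaround: a tiny custom template filter that performs the dictionary lookup with a variable key. The module path myapp/templatetags/dict_extras.py and the filter name get_item are made up for the example.

        # myapp/templatetags/dict_extras.py  (hypothetical location)
        from django import template

        register = template.Library()

        @register.filter
        def get_item(dictionary, key):
            """Look up a dictionary value using a variable as the key."""
            return dictionary.get(key)

    which the template then uses like this:

        {% load dict_extras %}
        {% for implementation in implementations %}
            {{ number_devices|get_item:implementation.id }}
        {% endfor %}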

    Read the article

  • How do I compare two complex data structures?

    - by Phil H
    I have some nested data structures, each something like:

        [ ('foo', [ {'a':1, 'b':2}, {'a':3.3, 'b':7} ]),
          ('bar', [ {'a':4, 'd':'efg', 'e':False} ]) ]

    I need to compare these structures to see if there are any differences. Short of writing a function to explicitly walk the structure, is there an existing library or method for doing this kind of recursive comparison?
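    A sketch of the two usual approaches: the built-in == already compares lists, tuples, and dicts recursively, and pprint plus difflib from the standard library gives a readable report of where two structures diverge. The sample data is the one from the question, with one value changed to force a difference.

        import difflib
        import pprint

        a = [('foo', [{'a': 1, 'b': 2}, {'a': 3.3, 'b': 7}]),
             ('bar', [{'a': 4, 'd': 'efg', 'e': False}])]
        b = [('foo', [{'a': 1, 'b': 2}, {'a': 3.3, 'b': 7}]),
             ('bar', [{'a': 4, 'd': 'efg', 'e': True}])]

        print a == b          # False; == recurses through the whole structure

        # For a human-readable view of *where* they differ:
        diff = difflib.unified_diff(pprint.pformat(a).splitlines(),
                                    pprint.pformat(b).splitlines(),
                                    lineterm='')
        print '\n'.join(diff)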

    Read the article

  • Problem opening Solr *.jsp pages with urllib2.urlopen.

    - by nestling
    I'm trying to open a page at http://localhost:8983/solr/admin/stats.jsp but urllib2.urlopen returns a blank string. It works fine for solr/ and solr/admin, but for all the pages above /solr/admin/ I get nothing but a blank string:

        In [76]: t = urllib2.urlopen('http://localhost:8983/solr/admin/stats.jsp')
        In [77]: s = t.read()
        In [78]: s
        Out[78]:
        In [79]: type(s)
        Out[79]: <type 'str'>
        In [80]: urllib2.urlopen('http://localhost:8983/solr/admin/registry.jsp').read()
        Out[80]:
        In [84]: urllib2.urlopen('http://localhost:8983/solr/admin/schema.jsp').read()
        Out[84]:

    I know this isn't a problem with urllib2, but beyond that I am at a loss. I wish Solr (or Jetty) had an easy-to-get-to log file, so that perhaps it could tell me its side of the story.
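    A small debugging sketch, not a fix: printing the HTTP status and response headers for one of the blank pages shows whether the server is really returning an empty 200, which would confirm the problem is on the Solr/Jetty side rather than in the client.

        import urllib2

        response = urllib2.urlopen('http://localhost:8983/solr/admin/stats.jsp')
        print response.code      # urlopen raises for 4xx/5xx, so expect 200 here
        print response.info()    # headers; e.g. Content-Length: 0 points at the server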

    Read the article

  • Django extending user model and displaying form

    - by MichalKlich
    Hello, I am writing a website and I'd like to implement profile management. The basic thing would be to let users edit some of their own details, like first and last name, etc. Now, I had to extend the User model to add my own stuff and an email address. I am having trouble with displaying the form. An example will describe better what I would like to achieve. This is my extended user model:

        class UserExtended(models.Model):
            user = models.ForeignKey(User, unique=True)
            kod_pocztowy = models.CharField(max_length=6, blank=True)
            email = models.EmailField()

    This is what my form looks like:

        class UserCreationFormExtended(UserCreationForm):
            def __init__(self, *args, **kwargs):
                super(UserCreationFormExtended, self).__init__(*args, **kwargs)
                self.fields['email'].required = True
                self.fields['first_name'].required = False
                self.fields['last_name'].required = False

            class Meta:
                model = User
                fields = ('username', 'first_name', 'last_name', 'email')

    It works fine when registering, as I need to allow users to enter a username and email, but when it comes to editing the profile it displays too many fields. I would not like them to be able to edit the username and email. How could I disable fields in the form? Thanks for the help.
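    A sketch of one common approach, assuming a separate form for the edit view is acceptable: a ModelForm whose Meta.fields simply leaves out username and email, so those fields are never rendered and never saved from the profile page. The form name is illustrative.

        from django import forms
        from django.contrib.auth.models import User

        class UserEditForm(forms.ModelForm):
            """Profile-editing form: username and email are not listed,
            so they cannot be shown or changed here."""
            class Meta:
                model = User
                fields = ('first_name', 'last_name')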

    Read the article

  • failure on creating a Scikits.TimeSeries object

    - by user311906
    Hi all, I am trying to create a scikits.timeseries object starting from two datetime objects. If I understood correctly, it should be possible to create a scikits.timeseries series from datetime objects. I tried the following code, but it fails with "Insufficient parameters". The two datetimes differ by only a few microseconds. In this case, what should the value of the freq parameter be? Is what I am trying even allowed? In theory, since a time series can be based on datetime objects, it should be possible to handle resolution down to the microsecond; is this correct? I think this is not really clear to me. Regards, Eo

        import datetime
        import scikits.timeseries as ts

        tm1 = datetime.datetime(2010, 1, 1, 10, 10, 2, 123456)
        tm2 = datetime.datetime(2010, 1, 1, 10, 10, 2, 345678)

        d = [tm1, tm2]
        tseries = ts.time_series(dates=d)
        tseries = ts.time_series(d)

    Read the article

  • Keeping filters in Django Admin

    - by Tomo
    What I would like to achieve is: I go to the admin site and apply some filters to the list of objects; I click an object, edit it, and hit 'Save'; the site then takes me back to the list of objects... unfiltered. I'd like the filter I applied at the start to be remembered and re-applied. Is there an easy way to do it?
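    A sketch of one possible workaround, not a built-in feature: a ModelAdmin subclass that remembers the changelist querystring in the session and re-attaches it when the post-save redirect goes back to the changelist. changelist_view and response_change are the standard ModelAdmin hooks; the session key is made up for the example, and the redirect handling assumes the default 'Save' behaviour.

        from django.contrib import admin
        from django.http import HttpResponseRedirect

        class FilterPreservingAdmin(admin.ModelAdmin):
            def changelist_view(self, request, extra_context=None):
                # Stash the current filter querystring (e.g. "color__exact=red&p=2").
                request.session['_preserved_filters'] = request.GET.urlencode()
                return super(FilterPreservingAdmin, self).changelist_view(
                    request, extra_context)

            def response_change(self, request, obj):
                response = super(FilterPreservingAdmin, self).response_change(request, obj)
                filters = request.session.get('_preserved_filters')
                # The default 'Save' redirect points back at the changelist;
                # re-attach the remembered filters to it.
                if filters and isinstance(response, HttpResponseRedirect):
                    response['Location'] = '%s?%s' % (response['Location'], filters)
                return response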

    Read the article

  • Simple App Engine Sessions Implementation

    - by raz0r
    Here is a very basic class for handling sessions on App Engine:

        """Lightweight implementation of cookie-based sessions for Google App Engine.

        Classes: Session
        """

        import os
        import random
        import Cookie

        from google.appengine.api import memcache

        _COOKIE_NAME = 'app-sid'
        _COOKIE_PATH = '/'
        _SESSION_EXPIRE_TIME = 180 * 60


        class Session(object):
            """Cookie-based session implementation using Memcached."""

            def __init__(self):
                self.sid = None
                self.key = None
                self.session = None
                cookie_str = os.environ.get('HTTP_COOKIE', '')
                self.cookie = Cookie.SimpleCookie()
                self.cookie.load(cookie_str)
                if self.cookie.get(_COOKIE_NAME):
                    self.sid = self.cookie[_COOKIE_NAME].value
                    self.key = 'session-' + self.sid
                    self.session = memcache.get(self.key)
                    if self.session:
                        self._update_memcache()
                else:
                    self.sid = str(random.random())[5:] + str(random.random())[5:]
                    self.key = 'session-' + self.sid
                    self.session = dict()
                    memcache.add(self.key, self.session, _SESSION_EXPIRE_TIME)
                    self.cookie[_COOKIE_NAME] = self.sid
                    self.cookie[_COOKIE_NAME]['path'] = _COOKIE_PATH
                print self.cookie

            def __len__(self):
                return len(self.session)

            def __getitem__(self, key):
                if key in self.session:
                    return self.session[key]
                raise KeyError(str(key))

            def __setitem__(self, key, value):
                self.session[key] = value
                self._update_memcache()

            def __delitem__(self, key):
                if key in self.session:
                    del self.session[key]
                    self._update_memcache()
                    return None
                raise KeyError(str(key))

            def __contains__(self, item):
                try:
                    i = self.__getitem__(item)
                except KeyError:
                    return False
                return True

            def _update_memcache(self):
                memcache.replace(self.key, self.session, _SESSION_EXPIRE_TIME)

    I would like some advice on how to improve the code for better security. Note: in the production version it will also save a copy of the session in the datastore. Note': I know there are much more complete implementations available online, but I would like to learn more about this subject, so please don't answer the question with "use this" or "use that" library.
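    One concrete security improvement that is easy to sketch: derive the session id from the operating system's entropy pool instead of random.random(), whose output is predictable and makes session ids guessable. A minimal, self-contained version (the helper name is made up):

        import hashlib
        import os

        def make_session_id():
            # 32 bytes from the OS CSPRNG, hashed to a printable hex token.
            return hashlib.sha256(os.urandom(32)).hexdigest()

    In the Session constructor, self.sid = make_session_id() would then replace the two random.random() slices.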

    Read the article

  • Pyqt - QMenu dynamically populated and clicked

    - by mleep
    I need to be able to know what item I've clicked in a dynamically generated menu system. I only want to know what I've clicked on, even if it's simply a string representation.

        def populateShotInfoMenus(self):
            self.menuFilms = QMenu()
            films = self.getList()
            for film in films:
                menuItem_Film = self.menuFilms.addAction(film)
                self.connect(menuItem_Film, SIGNAL('triggered()'), self.onFilmSet)
                self.menuFilms.addAction(menuItem_Film)

        def onFilmRightClick(self, value):
            self.menuFilms.exec_(self.group1_inputFilm.mapToGlobal(value))

        def onFilmSet(self, value):
            print 'Menu Clicked ', value
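    A sketch of one common way to get the clicked item's text, assuming the same old-style PyQt4 signal syntax as the question: bind the film string into the slot with functools.partial when each action is created. Method and attribute names are the ones from the question.

        from functools import partial
        from PyQt4.QtCore import SIGNAL
        from PyQt4.QtGui import QMenu

        def populateShotInfoMenus(self):
            self.menuFilms = QMenu()
            for film in self.getList():
                action = self.menuFilms.addAction(film)   # addAction already attaches it
                # partial() freezes the film string as the slot's argument.
                self.connect(action, SIGNAL('triggered()'), partial(self.onFilmSet, film))

        def onFilmSet(self, film):
            print 'Menu clicked', film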

    Read the article

  • How to change the OSX menubar in wxPython without any opened window?

    - by gyim
    I am writing a wxPython application that remains open after closing all of its windows - so you can still drag & drop new files onto the OSX dock icon (I do this with myApp.SetExitOnFrameDelete(False)). Unfortunately if I close all the windows, the OSX menubar will only contain a "Help" menu. I would like to add at least a File/Open menu item, or just keep the menubar of the main window. Is this somehow possible in wxPython? In fact, I would be happy with a non-wxPython hack as well (for example, setting the menu in pyobjc). wxPython development in OSX is such a hack anyway ;)

    Read the article

  • Problem with SQLite executemany

    - by Strider1066
    I can't find my error in the following code. When it is run, a TypeError is raised on the line cur.executemany(sql % itr.next()): 'function takes exactly 2 arguments (1 given)'.

        import sqlite3

        con = sqlite3.connect('test.sqlite')
        cur = con.cursor()
        cur.execute("create table IF NOT EXISTS fred (dat)")

        def newSave(className, fields, objData):
            sets = []
            itr = iter(objData)
            if len(fields) == 1:
                sets.append(':' + fields[0])
            else:
                for name in fields:
                    sets.append(':' + name)
            if len(sets) == 1:
                colNames = sets[0]
            else:
                colNames = ', '.join(sets)
            sql = " '''insert into %s (%s) values(%%s)'''," % (className, colNames)
            print itr.next()
            cur.executemany(sql % itr.next())
            con.commit()

        if __name__ == '__main__':
            newSave('fred', ['dat'], [{'dat':1}, {'dat':2}, {'dat':3}, {'dat':4}])

    I would appreciate your thoughts.
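    For reference, a sketch of how executemany() expects to be called: the SQL statement and the sequence of parameter mappings are two separate arguments, and the :name placeholders are filled from each dict. The string formatting, the trailing comma, and the extra quoting in the original sql are what trigger the TypeError. The simplified function keeps the names from the question.

        import sqlite3

        con = sqlite3.connect('test.sqlite')
        cur = con.cursor()
        cur.execute("create table IF NOT EXISTS fred (dat)")

        def newSave(className, fields, objData):
            colNames = ', '.join(fields)
            placeholders = ', '.join(':' + name for name in fields)
            sql = "insert into %s (%s) values (%s)" % (className, colNames, placeholders)
            # executemany(statement, sequence_of_parameter_dicts)
            cur.executemany(sql, objData)
            con.commit()

        if __name__ == '__main__':
            newSave('fred', ['dat'], [{'dat': 1}, {'dat': 2}, {'dat': 3}, {'dat': 4}])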

    Read the article

  • Parse large XML file w/ script or use BioPython API ?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options:

    1) Use a SAX parser (XML) - I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction was: how would I map the XML schema to the SAX parser?

    2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost", db="bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number 1205, "Lock wait timeout exceeded; try restarting transaction". I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
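    A hedged sketch of one way to avoid holding a single giant InnoDB transaction open for the whole 2.1 GB load: feed db.load() the records in fixed-size batches and commit after each one. The batch size of 1000 is arbitrary, and this assumes db.load() accepts any iterable of SeqRecords (it consumes whatever iterator it is given).

        from itertools import islice

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost", db="bioseqdb")
        db = server["uniprot"]
        records = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")

        while True:
            batch = list(islice(records, 1000))   # 1000 records per transaction
            if not batch:
                break
            db.load(batch)
            server.commit()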

    Read the article

  • virtualenvwrapper .hook problem

    - by Wraith
    I've used virtualenvwrapper before, but I'm having problems running it on a new computer. My .bashrc file is updated per the instructions:

        export WORKON_HOME=$DEV_HOME/projects
        source /usr/local/bin/virtualenvwrapper.sh

    But when source is run, I get the following:

        bash: /25009.hook: Permission denied
        bash: /25009.hook: No such file or directory

    This previous post leads me to believe the filename is being recycled and locked because virtualenvwrapper.sh uses $$. Is there any way to fix this?

    Read the article

  • more efficient way to pickle a string

    - by gatoatigrado
    The pickle module seems to use string escape characters when pickling; this becomes inefficient, e.g. on numpy arrays. Consider the following:

        z = numpy.zeros(1000, numpy.uint8)
        len(z.dumps())
        len(cPickle.dumps(z.dumps()))

    The lengths are 1133 characters and 4249 characters respectively. z.dumps() reveals something like "\x00\x00" (actual zeros in the string), but pickle seems to be using the string's repr() function, yielding "'\x00\x00'" (zeros being ASCII zeros), i.e. ("0" in z.dumps()) == False and ("0" in cPickle.dumps(z.dumps())) == True.
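    A sketch of the usual way around this: pickle the array object itself with a binary protocol instead of re-pickling the ASCII output of z.dumps(), so the raw bytes are stored without repr()-style escaping. (numpy.save is another option when only the array needs to persist.)

        import cPickle
        import numpy

        z = numpy.zeros(1000, numpy.uint8)

        s = cPickle.dumps(z, protocol=2)   # binary protocol: no character escaping
        print len(s)                       # close to the raw 1000 bytes plus a header

        restored = cPickle.loads(s)
        print (restored == z).all()        # True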

    Read the article

  • Counting amount of items in Pythons 'for'

    - by Markum
    Kind of hard to explain, but when I run something like this:

        fruits = ['apple', 'orange', 'banana', 'strawberry', 'kiwi']
        for fruit in fruits:
            print fruit.capitalize()

    it gives me this, as expected:

        Apple
        Orange
        Banana
        Strawberry
        Kiwi

    How would I edit that code so that it would "count" the number of times it goes through the for loop, and print this?

        1 Apple
        2 Orange
        3 Banana
        4 Strawberry
        5 Kiwi
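    For reference, a minimal sketch using the built-in enumerate(), which yields a counter alongside each item (the counter starts at 0, so 1 is added for display):

        fruits = ['apple', 'orange', 'banana', 'strawberry', 'kiwi']
        for i, fruit in enumerate(fruits):
            print i + 1, fruit.capitalize()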

    Read the article

  • Finding a list of indices from master array using secondary array with non-unique entries

    - by fideli
    I have a master array of length n of id numbers that apply to other analogous arrays with corresponding data for elements in my simulation that belong to those id numbers (e.g. data[id]). If I generate a list of id numbers of length m separately and need the information in the data array for those ids, what is the best method of getting a list of indices idx into the original array of ids in order to extract data[idx]? That is, given:

        a = numpy.array([1,3,4,5,6])        # master array
        b = numpy.array([3,4,3,6,4,1,5])    # secondary array

    I would like to generate

        idx = numpy.array([1,2,1,4,2,0,3])

    The array a is typically in sequential order, but that's not a requirement. Also, array b will most definitely have repeats and will not be in any order. My current method of doing this is:

        idx = numpy.array([numpy.where(a==bi)[0][0] for bi in b])

    I timed it using the following test:

        a = (numpy.random.uniform(100, size=100)).astype('int')
        b = numpy.repeat(a, 100)
        timeit method1(a, b)
        10 loops, best of 3: 53.1 ms per loop

    Is there a better way of doing this?
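    A sketch of the usual vectorized replacement: numpy.searchsorted does the lookup in a single call, with an argsort() step included so it still works when the master array isn't sorted. It uses the arrays from the question and assumes every value in b actually occurs in a.

        import numpy

        a = numpy.array([1, 3, 4, 5, 6])          # master array
        b = numpy.array([3, 4, 3, 6, 4, 1, 5])    # secondary array, repeats allowed

        order = numpy.argsort(a)                  # identity when a is already sorted
        idx = order[numpy.searchsorted(a[order], b)]
        print idx                                 # [1 2 1 4 2 0 3]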

    Read the article

  • Increasing figure size in Matplotlib

    - by Anirudh
    I am trying to plot a graph from a distance matrix. The code works fine and gives me an image at 800 x 600 pixels. The image being too small, all the nodes are packed together. I want to increase the size of the image, so I added the following line to my code:

        figure(num=None, figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k')

    After this, all I get is a blank 1000 x 1000 image file. My overall code:

        import networkx as nx
        import pickle
        import matplotlib.pyplot as plt

        print "Reading from pickle."
        p_file = open('pickles/names')
        Names = pickle.load(p_file)
        p_file.close()

        p_file = open('pickles/distance')
        Dist = pickle.load(p_file)
        p_file.close()

        G = nx.Graph()

        print "Inserting Nodes."
        for n in Names:
            G.add_node(n)

        print "Inserting Edges."
        for i in range(601):
            for j in range(601):
                G.add_edge(Names[i], Names[j], weight=Dist[i][j])

        print "Drawing Graph."
        nx.draw(G)

        print "Saving Figure."
        #plt.figure(num=None, figsize=(10, 10))
        plt.savefig('new.png')

        print "Success!"
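    The likely explanation, shown as a sketch: plt.figure(...) creates a new, empty figure, so calling it after nx.draw(G) means savefig() writes that empty canvas. Creating the larger figure before drawing keeps the graph on it; the small path graph here is a stand-in for the real data.

        import matplotlib.pyplot as plt
        import networkx as nx

        G = nx.path_graph(10)                  # stand-in for the real graph

        plt.figure(figsize=(10, 10), dpi=80)   # create the larger canvas first...
        nx.draw(G)                             # ...then draw onto it
        plt.savefig('new.png')                 # 800 x 800 pixels at this size and dpi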

    Read the article
