Search Results

Search found 17845 results on 714 pages for 'python social auth'.


  • Added tagging to existing model, now how does its admin work?

    - by Oli
    I wanted to add a StackOverflow-style tag input to a blog model of mine. This is a model that already has a lot of data in it:

        class BlogPost(models.Model):
            # my blog fields
            pass

        try:
            tagging.register(BlogPost)
        except tagging.AlreadyRegistered:
            pass

    I thought that was all I needed, so I went through my old database of blog posts (this is a newly ported blog) and copied the tags in. It worked, and I could display tags and filter by tag. However, I just wrote a new BlogPost and realised there's no tag field there. Reading the documentation (coincidentally, dry enough to be used as an antiperspirant), I found the TagField. Thinking this would just be a manager-style layer over the existing tagging registration, I added it. It complained about there not being a Tag column. I'd rather not denormalise on tags just to create an interface for inputting them. Is there a TagManager class that I can just set on the model?

        tags = TagManager()  # or somesuch
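
    One possible approach, sketched under the assumption that this is django-tagging and that tags should stay out of the model's table: expose a plain text field in the admin form and sync it with the tagging registry when saving. TagAdminForm and BlogPostAdmin are illustrative names, not part of the library.

        from django import forms
        from django.contrib import admin
        from tagging.models import Tag
        from tagging.utils import edit_string_for_tags

        class TagAdminForm(forms.ModelForm):
            tags = forms.CharField(required=False)

            class Meta:
                model = BlogPost

            def __init__(self, *args, **kwargs):
                super(TagAdminForm, self).__init__(*args, **kwargs)
                if self.instance.pk:
                    # Pre-fill from the Tag objects already attached
                    self.initial['tags'] = edit_string_for_tags(
                        Tag.objects.get_for_object(self.instance))

        class BlogPostAdmin(admin.ModelAdmin):
            form = TagAdminForm

            def save_model(self, request, obj, form, change):
                obj.save()
                # Store tags through the registry, not a model column
                Tag.objects.update_tags(obj, form.cleaned_data['tags'])

        admin.site.register(BlogPost, BlogPostAdmin)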

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al. (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data, as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data, which is in
        ### [time (5-day averages for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc', 'r')
        cflux_southern_ocean = nc.variables['Cflx'][:, 10:50, :]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean, 1e+20)  # mask land
        nc.close()
        cflux = cflux_southern_ocean * 1e08  # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array)  # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0, 9, size=(730, 182)) * 1e-05

    EDIT (adding images and clarifying meaning): the output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the timeseries for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, using the following code:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    Here is the result, using the following code:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value; the other pixels (log_ps[2, 0], log_ps[4, 0], etc.) have very low values.
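
    For what it's worth, zero bins are exactly what a tiled (perfectly periodic) signal produces: repeating one period N times concentrates all the energy on harmonics that are multiples of N, and every other frequency bin is identically zero (up to roundoff). A minimal self-contained demonstration with synthetic data, not the flux fields above:

        import numpy as np

        one_period = np.random.rand(73)
        tiled = np.tile(one_period, 10)        # length 730, exactly periodic

        spectrum = np.abs(np.fft.fft(tiled))
        nonzero = np.nonzero(spectrum > 1e-8)[0]
        print(nonzero[:8])                     # only multiples of 10: [ 0 10 20 ...]

    Adding a small random number would hide the zeros rather than explain them; they are a property of the constructed signal, not an artifact of the FFT.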

  • What is going on with the "return fibonacci( number-1 ) + fibonacci( number-2 )"?

    - by user1478598
    I have a problem understanding what return fibonacci( number-1 ) + fibonacci( number-2 ) does in the following program:

        import sys

        def fibonacci(number):
            if number <= 2:
                return 1
            else:
                return fibonacci(number - 1) + fibonacci(number - 2)

    The problem is that I can't imagine how this line works:

        return fibonacci(number - 1) + fibonacci(number - 2)

    Are both fibonacci(number - 1) and fibonacci(number - 2) processed at the same time, or is fibonacci(number - 1) processed first and then the second one? I only see that processing both of them would eventually return 1, so the last result I expect to see is 1 + 1 = 2. I would appreciate it a lot if someone could explain the process of its calculation in detail. I think this is a very newbie question, but I can't really get a picture of the process.
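
    For the record: Python evaluates the left operand fully before the right one, so each fibonacci(number - 1) call runs to completion (including all of its own recursive calls) before fibonacci(number - 2) even starts. A small instrumented sketch makes the order visible:

        def fibonacci(number, depth=0):
            print('  ' * depth + 'fibonacci(%d)' % number)
            if number <= 2:
                return 1
            return fibonacci(number - 1, depth + 1) + fibonacci(number - 2, depth + 1)

        print(fibonacci(4))
        # fibonacci(4)
        #   fibonacci(3)        <- left branch first, depth-first
        #     fibonacci(2)
        #     fibonacci(1)
        #   fibonacci(2)        <- right branch only after the left returned
        # 3                     <- (1 + 1) + 1, not 1 + 1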

  • PDB: exception when in console - full stack trace

    - by EoghanM
    When at the pdb console, entering a statement which causes an exception results in just a single-line stack trace, e.g.

        (Pdb) someFunc()
        *** TypeError: __init__() takes exactly 2 arguments (1 given)

    However, I'd like to figure out where exactly in someFunc the error originates, i.e. in this case, which class the __init__ is attached to. Is there a way to get a full stack trace in pdb?
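
    One workaround, sketched here rather than a pdb built-in: call the function through a wrapper that prints the traceback instead of letting pdb swallow it. trace_call is a hypothetical helper name; recent pdb versions also have a "debug" command that recursively debugs an expression, which can serve a similar purpose.

        import traceback

        def trace_call(func, *args, **kwargs):
            """Call func and print the full traceback if it raises."""
            try:
                return func(*args, **kwargs)
            except Exception:
                traceback.print_exc()

        # At the prompt:
        #   (Pdb) from mydebug import trace_call   # mydebug: your own module
        #   (Pdb) trace_call(someFunc)
        #   ...full traceback, showing which __init__ raised...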

  • cannot output a json encoded dict containing accents (noob inside)

    - by user296546
    Hi all, here is a fairly simple example which has been driving me nuts for a couple of days. Consider the following script:

        # -*- coding: utf-8 -*-
        from json import dumps as json_dumps

        machaine = u"une personne émérite"
        print(machaine)
        output = {}
        output[1] = machaine
        jsonoutput = json_dumps(output)
        print(jsonoutput)

    The result of this from the CLI:

        une personne émérite
        {"1": "une personne \u00e9m\u00e9rite"}

    I don't understand why there is such a difference between the two strings. I have been trying all sorts of encode, decode, etc., but I can't seem to find the right way to do it. Does anybody have an idea? Thanks in advance. Matthieu
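
    This looks like json's ensure_ascii default rather than an encoding problem: dumps escapes every non-ASCII character by default so that its output is plain ASCII, and "\u00e9" is still a perfectly valid JSON encoding of "é". A small sketch of keeping the accents visible:

        # -*- coding: utf-8 -*-
        import json

        machaine = u"une personne émérite"

        # Default: non-ASCII escaped -> {"1": "une personne \u00e9m\u00e9rite"}
        print(json.dumps({1: machaine}))

        # ensure_ascii=False keeps the accented characters; the result is a
        # unicode string, so encode it before writing to a byte stream.
        print(json.dumps({1: machaine}, ensure_ascii=False).encode('utf-8'))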

  • Favorite Django Tips & Features?

    - by Haes
    Inspired by the question series 'Hidden features of ...', I am curious to hear about your favorite Django tips or lesser-known but useful features you know of. Please include only one tip per answer, and add Django version requirements if there are any.

  • How to get "paster request" to use config host value instead of localhost?

    - by mmartinez
    I'm trying to access my Pylons application via a cron job to send notifications to my users. The way I'm doing this is by running the application using something like:

        paster request myconfig.ini /maintenance/do

    In the actual controller I check for "paste.command_request" to block public access. Everything works, but the only problem is that within the notifications that I send to my users there is a link to their profile, and the host is "localhost" when it should instead be the domain name of the application. When the notifications are sent from within the served application (say, a user modifies their settings on the site), the notifications have the correct URL. I am using Mako to render my email templates, and within the template I am using the "pylons.url" method with "qualified" set to "True". Am I missing something here? Thanks in advance.
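
    A sketch of one possible workaround, assuming your Routes version accepts a host override and that you add a setting such as base_host to myconfig.ini yourself (both are assumptions, not documented Pylons behavior): "paster request" fabricates a WSGI environ with localhost as the host, so qualified URLs have nothing better to build on unless you supply the real domain.

        from pylons import config, url

        def profile_link(user_id):
            host = config.get('base_host')   # e.g. base_host = www.example.com
            if host:
                # Build against the configured host instead of the localhost
                # value that "paster request" puts into the environ.
                return url('profile', id=user_id, qualified=True, host=host)
            return url('profile', id=user_id, qualified=True)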

  • Sort and limit queryset by comment count and date using queryset.extra() (django)

    - by thornomad
    I am trying to sort/narrow a queryset of objects based on the number of comments each object has, as well as by the timeframe during which the comments were posted. I am using a queryset.extra() method (with django_comments, which uses generic foreign keys). I got the idea for using queryset.extra() (and the code) from here. This is a follow-up question to my initial question yesterday (which shows I am making some progress).

    Current code: what I have so far works, in that it will sort by the number of comments; however, I want to extend the functionality and also be able to pass a timeframe argument (e.g. 7 days) and return an ordered list of the most commented posts in that time frame. Here is what my view looks like with the basic functionality intact:

        import datetime
        from django.contrib.comments.models import Comment
        from django.contrib.contenttypes.models import ContentType
        from django.db.models import Count, Sum
        from django.views.generic.list_detail import object_list

        def custom_object_list(request, queryset, *args, **kwargs):
            '''Extending the list_detail.object_list to allow some sorting.

            Example: http://example.com/video?sort_by=comments&days=7
            would get a list of the videos sorted by most comments in the
            last seven days.
            '''
            try:
                # this is where I started working on the date business ...
                days = int(request.GET.get('days', None))
                period = datetime.datetime.utcnow() - datetime.timedelta(days=int(days))
            except (ValueError, TypeError):
                days = None
                period = None
            sort_by = request.GET.get('sort_by', None)
            ctype = ContentType.objects.get_for_model(queryset.model)
            if sort_by == 'comments':
                queryset = queryset.extra(select={
                    'count': """
                        SELECT COUNT(*) AS comment_count
                        FROM django_comments
                        WHERE content_type_id=%s AND object_pk=%s.%s
                    """ % (
                        ctype.pk,
                        queryset.model._meta.db_table,
                        queryset.model._meta.pk.name
                    ),
                }, order_by=['-count']).order_by('-count', '-created')
            return object_list(request, queryset, *args, **kwargs)

    What I've tried: I am not well versed in SQL, but I did try to add another WHERE criterion by hand to see if I could make some progress:

        SELECT COUNT(*) AS comment_count
        FROM django_comments
        WHERE content_type_id=%s AND object_pk=%s.%s
        AND submit_date='2010-05-01 12:00:00'

    But that didn't do anything except mess around with my sort order. Any ideas on how I can add this extra layer of functionality? Thanks for any help or insight.
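
    A sketch of one way to fold the time window in: compare submit_date against the cutoff instead of testing equality, and pass the value through extra()'s select_params so no dates are spliced into the SQL by hand (column names follow the question; treat the details as an assumption to verify against your schema):

        table = queryset.model._meta.db_table
        pk_name = queryset.model._meta.pk.name
        sql = ("SELECT COUNT(*) FROM django_comments "
               "WHERE content_type_id = %s AND object_pk = " + table + "." + pk_name)
        params = [ctype.pk]
        if period is not None:
            # ">=" counts comments since the cutoff; "=" matched one instant
            sql += " AND submit_date >= %s"
            params.append(period)
        queryset = queryset.extra(
            select={'count': sql},
            select_params=params,
        ).order_by('-count', '-created')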

  • Group Chat XMPP with Google App Engine

    - by David Shellabarger
    Google App Engine has a great XMPP service built in. One of the few limitations it has is that it doesn't support receiving messages from a group chat. That's the one thing I want to do with it. :( Can I run a 3rd party XMPP/Jabber server on App Engine that supports group chat? If so, which one?

  • socket.shutdown vs socket.close

    - by Jason Baker
    I recently saw a bit of code that looked like this (with sock being a socket object, of course):

        sock.shutdown(socket.SHUT_RDWR)
        sock.close()

    What exactly is the purpose of calling shutdown on the socket and then closing it? If it makes a difference, this socket is being used for non-blocking IO.
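
    Roughly: shutdown() ends the conversation at the TCP level (it sends a FIN and affects every descriptor referring to the connection, including those held by a forked child), while close() merely releases this process's descriptor. A sketch showing the common half-close pattern:

        import socket

        sock = socket.create_connection(('example.com', 80))
        sock.sendall(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')

        # SHUT_WR sends a FIN: the server sees end-of-input and can reply,
        # while we can still read everything it sends back.
        sock.shutdown(socket.SHUT_WR)

        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)

        # close() then just frees the file descriptor.
        sock.close()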

  • JSON serialization of Google App Engine models

    - by user111677
    I've been searching for quite a while with no success. My project isn't using Django; is there a simple way to serialize App Engine models (google.appengine.ext.db.Model) into JSON, or do I need to write my own serializer? My model class is fairly simple. For instance:

        class Photo(db.Model):
            filename = db.StringProperty()
            title = db.StringProperty()
            description = db.StringProperty(multiline=True)
            date_taken = db.DateTimeProperty()
            date_uploaded = db.DateTimeProperty(auto_now_add=True)
            album = db.ReferenceProperty(Album, collection_name='photo')

    Thanks in advance.
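
    If it comes to hand-rolling one, a serializer over db.Model.properties() is only a few lines; this sketch special-cases just the types in the example (datetimes and the reference) and is an illustration, not a complete type map:

        import datetime
        import json  # or simplejson on older runtimes
        from google.appengine.ext import db

        def model_to_dict(entity):
            result = {}
            for name in entity.properties():
                value = getattr(entity, name)
                if isinstance(value, datetime.datetime):
                    value = value.isoformat()
                elif isinstance(value, db.Model):
                    # ReferenceProperty: emit the key, not the whole entity
                    value = str(value.key())
                result[name] = value
            return result

        # photo being a fetched Photo entity:
        # photo_json = json.dumps(model_to_dict(photo))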

  • How to run unittest for Django?

    - by photon
    I configured the properties for my Django project under PyDev. I can run the Django app under PyDev or from a console window, but I have problems running unittests under PyDev, and I cannot run unittests for the app from a console window either. I guessed it's something related to the run configurations of PyDev, so I made several trials, but with no success. Once I got messages like this:

        ImportError: Could not import settings 'D:\django_projects\MyProject'
        (Is it on sys.path? Does it have syntax errors?):
        No module named D:\django_projects\MyProject
        ERROR: Module: MyUnittestFile could not be imported.

    Another time I got messages like this:

        ImportError: Could not import settings 'MyProject.settngs'
        (Is it on sys.path? Does it have syntax errors?): No module named settngs
        ERROR: Module: MyUnittestFile could not be imported.

    I use PyDev 1.5.6 on Eclipse and Windows XP. Any ideas about this problem? Now I think it's not something related to PyDev; thanks to Xavier Ho for the suggestion.

  • How to call Twitter's Streaming/Filter Feed with urllib2/httplib?

    - by Simon
    Update: I switched this back from answered, as I tried the solution posed in Nick's cogent answer and switched to Google's urlfetch:

        logging.debug("starting urlfetch for http://%s%s" % (self.host, self.url))
        result = urlfetch.fetch("http://%s%s" % (self.host, self.url),
                                payload=self.body, method="POST",
                                headers=self.headers, allow_truncated=True,
                                deadline=5)
        logging.debug("finished urlfetch")

    Unfortunately, "finished urlfetch" is never printed - I see the timeout happen in the logs (it returns 200 after 5 seconds), but execution doesn't seem to return.

    Hi all - I'm attempting to play around with Twitter's Streaming (aka firehose) API with Google App Engine (I'm aware this probably isn't a great long-term play, as you can't keep the connection perpetually open with GAE), but so far I haven't had any luck getting my program to actually parse the results returned by Twitter. Some code:

        logging.debug("firing up urllib2")
        req = urllib2.Request(url="http://%s%s" % (self.host, self.url),
                              data=self.body, headers=self.headers)
        logging.debug("built request for %s %s, about to call urlopen" % (self.host, self.url))
        fobj = urllib2.urlopen(req)
        logging.debug("called urlopen")

    When this executes, unfortunately, my debug output never shows the "called urlopen" line printed. I suspect what's happening is that Twitter keeps the connection open and urllib2 doesn't return because the server doesn't terminate the connection. Wireshark shows the request being sent properly and a response returned with results. I tried adding Connection: close to my request header, but that didn't yield a successful result. Any ideas on how to get this to work? Thanks - Simon

  • Convert list to sequence of variables

    - by wtzolt
    I was wondering if this was possible... I have a sequence of value pairs that have to be assigned, in order, to the a and b parameters of a do.something(a, b) call. Something like this:

        # Have a list of sequenced variables.
        list = 2:90, 1:140, 3:-40, 4:60

        # "Template" on where to assign the variables from the list.
        do.something(a, b)

        # Assign the variables from the list in a sequence, with the
        # possibility of "in between" functions like print and
        # time.sleep() added.
        do.something(2, 90)
        time.sleep(1)
        print "Did something (%d,%d)" % (...)  # vars from list?
        do.something(1, 140)
        time.sleep(1)
        print "Did something (%d,%d)" % (...)  # vars from list?
        do.something(3, -40)
        time.sleep(1)
        print "Did something (%d,%d)" % (...)  # vars from list?
        do.something(4, 60)
        time.sleep(1)
        print "Did something (%d,%d)" % (...)  # vars from list?

    Any ideas?
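
    The usual pattern here is a list of tuples unpacked in a loop; a minimal sketch (do_something stands in for the real callable, and the name "pairs" avoids shadowing the built-in list):

        import time

        def do_something(a, b):
            pass  # stand-in for the real do.something

        pairs = [(2, 90), (1, 140), (3, -40), (4, 60)]

        for a, b in pairs:
            do_something(a, b)
            time.sleep(1)
            print("Did something (%d,%d)" % (a, b))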

  • Reliable and fast way to convert a zillion ODT files to PDF?

    - by Marco Mariani
    I need to pre-produce a million or two PDF files from a simple template (a few pages and tables) with embedded fonts. Usually I would stay low-level in a case like this and compose everything with a library like ReportLab, but I joined the project late. Currently, I have a template.odt and use markers in the content.xml files to fill in data from a DB. I can smoothly create the ODT files; they always look right. For the ODT-to-PDF conversion, I'm using openoffice in server mode (and PyODConverter with a named pipe), but it's not very reliable: in a batch of documents, there is eventually a point after which all the processed files are converted into garbage (wrong fonts and letters sprawled all over the page). The problem is not predictably reproducible (it does not depend on the data) and happens in OOo 2.3 and 3.2, on Ubuntu, XP, Server 2003 and Windows 7. My Heisenbug detector is ticking. I tried to reduce the size of the batches and restart OOo after each one; still, a small percentage of the documents are messed up. Of course I'll write about this on the OOo mailing lists, but in the meanwhile, I have a delivery and have lost too much time already. Where do I go?

    - Completely avoid the ODT format and go for another template system. Suggestions? Anything that takes a few seconds to run is way too slow. OOo takes around a second, and that sums to 15 days of processing time; I had to write a program for clustering the jobs over several clients.
    - Keep the format but go for another tool/program for the conversion. Which one? There are many apps in the shareware or commercial repositories for Windows, but trying each one is a daunting task. Some are too slow, some cannot be run in batch without buying them first, some cannot work from the command line, etc. Open source tools tend not to reinvent the wheel and often depend on openoffice.
    - Convert to an intermediate .DOC format. This could help to avoid the OOo bug, but it would double the processing time and complicate a task that is already too hairy.
    - Produce the PDFs twice and compare them, discarding the whole batch if there's something wrong. Although the documents look equal, I know of no way to compare the binary content.
    - Restart OOo after processing each document. It would take a lot more time to produce them; it would lower the percentage of wrong files, but make it very hard to identify them.
    - Go for ReportLab and recreate the pages programmatically. This is the approach I'm going to try in a few minutes.
    - Learn to properly format bulleted lists.

    Thanks a lot.
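
    Since ReportLab is the route about to be tried (the second-to-last option above): the direct-to-PDF path is one page description per record, with no external process to wedge. A minimal sketch with illustrative field names:

        from reportlab.lib.pagesizes import A4
        from reportlab.pdfgen import canvas

        def render_pdf(path, record):
            """Draw one simple templated page; record is a dict from the DB."""
            c = canvas.Canvas(path, pagesize=A4)
            c.setFont('Helvetica', 12)
            c.drawString(72, 800, u"Document for %s" % record['name'])
            y = 770
            for label, value in record['lines']:
                c.drawString(72, y, u"%s: %s" % (label, value))
                y -= 18
            c.showPage()
            c.save()

        render_pdf('out.pdf', {'name': u'ACME', 'lines': [(u'Total', u'42.00')]})

    Embedding fonts would mean registering a TTFont with reportlab.pdfbase.pdfmetrics, which is omitted here.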

  • Refining data stored in SQLite - how to join several contacts?

    - by Krab
    Problem background: imagine this situation. You have a water molecule which is in contact with other molecules (if the contact is a hydrogen bond, there can be 4 other molecules around my water), like in the following picture (A, B, C, D are some other atoms, and the dots mean the contacts):

        A   B
         . .
          O
         / \
        H   H
        .     .
       C       D

    I have the information about all the dots, and I need to eliminate the water in the center and create records describing the contacts A-C, A-D, A-B, B-C, B-D, and C-D.

    Database structure: currently I have the following structure in the database.

    Table atoms:

        "id" integer PRIMARY KEY,
        "amino" char(3) NOT NULL,        -- HOH for water, or another value
        -- other columns identifying the atom

    Table contacts:

        "acceptor_id" integer NOT NULL,  -- the atom near to my hydrogen (here C or D)
        "donor_id" integer NOT NULL,     -- here A or B
        "directness" char(1) NOT NULL,   -- D for direct, W for water-mediated
        -- other columns about the contact, such as the distance

    Current solution (insufficient): I go through all the contacts which have donor.amino = "HOH". In this sample case, this would select the contacts from C and D. For each of these selected contacts, I look up the contacts having the same acceptor_id as the donor_id of the currently selected contact. From this information, I create the new contact. At the end, I delete all contacts to or from HOH. This way, I am obviously unable to create the C-D and A-B contacts (the other 4 are OK). If I try a similar approach - finding two contacts having the same donor_id - I end up with duplicate contacts (C-D and D-C). Is there a simple way to retrieve all six contacts without duplicates? I'm dreaming about some one-page-long SQL query which retrieves just these six wanted rows. :-) It is preferable to conserve the information about who is the donor where possible, but that is not strictly necessary. Big thanks to all of you who read this question to this point.
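
    A sketch of the self-join idea in SQL, run here through Python's sqlite3 (table and column names follow the question; the WITH clause needs a reasonably recent SQLite). The duplicate problem disappears once each unordered pair is kept only when the first id is smaller than the second:

        import sqlite3

        conn = sqlite3.connect('contacts.db')  # hypothetical database file

        query = """
        WITH touching AS (
            -- every (water, other-atom) pair, whichever side the water is on
            SELECT donor_id AS water, acceptor_id AS other FROM contacts
             WHERE donor_id IN (SELECT id FROM atoms WHERE amino = 'HOH')
            UNION
            SELECT acceptor_id AS water, donor_id AS other FROM contacts
             WHERE acceptor_id IN (SELECT id FROM atoms WHERE amino = 'HOH')
        )
        SELECT t1.other, t2.other, 'W'
          FROM touching t1
          JOIN touching t2
            ON t1.water = t2.water
           AND t1.other < t2.other   -- each pair once: no C-D / D-C duplicates
        """
        for atom_a, atom_b, directness in conn.execute(query):
            print(atom_a, atom_b, directness)

    Note this gives up the donor/acceptor distinction for the new pairs, which the question says is acceptable where it cannot be conserved.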

  • Displaying Google Calendar event data on FullCalendar

    - by aurealus
    I am using Google Calendar as a storage engine for a calendar system I am building; however, I am using a single Google user account with multiple calendars, i.e. each user on my system has their own calendar within the one user account. I'm able to create a calendar per user just fine, but I would like to have FullCalendar retrieve the events for display purposes without manually getting the magic-cookie URL from the Google Calendar settings. I would like to be able to retrieve it programmatically, or 'proxy' the feed via an authenticated call to get event data, which I'm doing in Django.

        $('#calendar').fullCalendar({
            events: $.fullCalendar.gcalFeed(
                "http://www.google.com/calendar_url/"  // <-- or /my/event/feed/url
            )
        });
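
    A sketch of the proxy half on the Django side (the per-user feed URL lookup and any authentication against Google are placeholders to fill in): the browser only ever sees your own URL, so the magic-cookie address stays on the server.

        import urllib2
        from django.contrib.auth.decorators import login_required
        from django.http import HttpResponse

        @login_required
        def event_feed(request):
            """Relay the current user's private calendar feed."""
            feed_url = request.user.get_profile().gcal_feed_url  # hypothetical field
            data = urllib2.urlopen(feed_url).read()
            return HttpResponse(data, mimetype='application/json')

    FullCalendar would then point events at this view's URL instead of google.com.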

  • NumPy: how to quickly normalize many vectors?

    - by EOL
    How can a list of vectors be elegantly normalized in NumPy? Here is an example that does not work:

        from numpy import *

        vectors = array([arange(10), arange(10)])  # All x's, then all y's
        norms = apply_along_axis(linalg.norm, 0, vectors)

        # Now, what I was expecting would work:
        # vectors.T has 10 elements, as does norms, but this does not work
        print vectors.T / norms

    The last operation yields "shape mismatch: objects cannot be broadcast to a single shape". How can the normalization of the 2D vectors in vectors be elegantly done with NumPy? Edit: why does the above not work, while adding a dimension to norms does work (as per my answer below)?
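
    The reason, as a sketch: broadcasting aligns trailing axes first, so (10, 2) against (10,) compares 2 with 10 and fails, while (10, 2) against (10, 1) stretches the length-1 axis. Adding the axis explicitly:

        import numpy as np

        vectors = np.array([np.arange(10), np.arange(10)], dtype=float)
        norms = np.sqrt((vectors ** 2).sum(axis=0))   # one norm per column

        # (10, 2) / (10, 1): each row of vectors.T divided by its own norm.
        # (The first column has norm 0 here, so it yields nan, exactly as
        # the original example would.)
        normalized = vectors.T / norms[:, np.newaxis]

        # Equivalent, keeping the original layout: (2, 10) / (1, 10)
        normalized2 = vectors / norms[np.newaxis, :]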

  • How to limit choice field options based on another choice field in django admin

    - by umnik700
    I have the following models:

        class Category(models.Model):
            name = models.CharField(max_length=40)

        class Item(models.Model):
            name = models.CharField(max_length=40)
            category = models.ForeignKey(Category)

        class Demo(models.Model):
            name = models.CharField(max_length=40)
            category = models.ForeignKey(Category)
            item = models.ForeignKey(Item)

    In the admin interface, when creating a new Demo, after the user picks a category from the dropdown, I would like to limit the number of choices in the "items" dropdown. If the user selects another category, then the item choices should update accordingly. I would like to limit the item choices right on the client, before it even hits the form validation on the server. This is for usability: the list of items could be 1000+, so being able to narrow it down by category would help make it more manageable. Is there a "django-way" of doing it, or is custom JavaScript the only option here?
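
    The admin has no built-in dependent-dropdown widget, so the usual recipe is a small JSON view plus a few lines of JavaScript on the change event of the category select. A sketch of the server half (URL wiring and the JS are left out; names are illustrative):

        from django.http import HttpResponse
        from django.utils import simplejson

        def items_for_category(request, category_id):
            """Return [{'id': ..., 'name': ...}, ...] for one category."""
            items = Item.objects.filter(category__pk=category_id).values('id', 'name')
            return HttpResponse(simplejson.dumps(list(items)),
                                mimetype='application/json')

        # The admin page's JS then listens on #id_category, fetches this view,
        # and rebuilds the #id_item <select> options from the JSON.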

  • Display additional data while iterating over a Django formset

    - by Jannis
    Hi, I have a list of soccer matches for which I'd like to display forms. The list comes from a remote source.

        matches = ["A vs. B", "C vs. D", "E vs. F"]
        MatchFormset = formset_factory(MatchForm, extra=len(matches))
        formset = MatchFormset()

    On the template side, I would like to display the formset with the matching title (i.e. "A vs. B"):

        {% for form in formset.forms %}
        <fieldset>
            <legend>{{ TITLE }}</legend>
            {{ form.team1 }} : {{ form.team2 }}
        </fieldset>
        {% endfor %}

    Now how do I get TITLE to contain the right title for the current form? Or, asked in a different way: how do I iterate over matches with the same index as the iteration over formset.forms? Thanks for your input!
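
    The common pattern (a sketch): zip the titles with the forms in the view, hand the pairs to the template, and unpack them there with {% for title, form in pairs %} - the template for tag can unpack tuples, but cannot index two lists in parallel.

        from django.forms.formsets import formset_factory
        from django.shortcuts import render_to_response

        def match_list(request):
            matches = ["A vs. B", "C vs. D", "E vs. F"]
            MatchFormset = formset_factory(MatchForm, extra=len(matches))
            formset = MatchFormset()
            pairs = zip(matches, formset.forms)
            # Pass formset too: the template still needs
            # {{ formset.management_form }} before the loop.
            return render_to_response('matches.html',
                                      {'formset': formset, 'pairs': pairs})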

  • Django model field value preprocessing before returning

    - by Satoru.Logic
    Hi, all. I have a Note model class like this:

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = NoteContentField(max_length=256)

    NoteContentField is a custom subclass of CharField that overrides the to_python method, for the purpose of doing some twitter-text-conversion processing:

        class NoteContentField(models.CharField):
            __metaclass__ = models.SubfieldBase

            def to_python(self, value):
                value = super(NoteContentField, self).to_python(value)
                from ..utils import linkify
                return mark_safe(linkify(value))

    However, this doesn't work. When I save a Note object like this:

        note = Note(author=request.user, content=form.cleaned_data['content'])

    the converted value is saved into the database, which is not what I want to see. Would you please tell me what's wrong with this? Thanks in advance.
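
    What's going on, briefly: models.SubfieldBase installs a descriptor that runs to_python on every assignment to the attribute, not only when loading from the database, so by save time the converted text is all the field ever sees. One hedged way out is to keep the column a plain CharField and convert on display instead, e.g. with a property:

        from django.db import models
        from django.utils.safestring import mark_safe

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = models.CharField(max_length=256)   # raw text in the DB

            @property
            def content_html(self):
                # Convert on read only; the stored value stays untouched.
                from ..utils import linkify
                return mark_safe(linkify(self.content))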

  • How to obtain the root of a tree without parsing the entire file?

    - by Matt.
    I'm making an XML parser to parse XML reports from different tools, and each tool generates different reports with different tags. For example:

    - Arachni generates an XML report with <arachni_report></arachni_report> as the tree root tag.
    - nmap generates an XML report with <nmaprun></nmaprun> as the tree root tag.

    I'm trying not to parse the entire file unless it's a valid report from one of the tools I want. The first thing I thought of was to use ElementTree, parse the entire XML file (supposing it contains valid XML), and then check, based on the tree root, whether the report belongs to Arachni or nmap. I'm currently using cElementTree, and as far as I know getroot() is not an option here, but my goal is to make this parser operate on recognized files only, without parsing unnecessary files. By the way, I'm still learning about XML parsing; thanks in advance.
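
    iterparse can hand back the root element as soon as its start tag has been read, so unrecognized files can be rejected after a few bytes; a minimal sketch:

        import xml.etree.cElementTree as ET

        KNOWN_ROOTS = {'arachni_report': 'arachni', 'nmaprun': 'nmap'}

        def identify_report(path):
            """Return the tool name or None, reading only up to the root tag."""
            for event, element in ET.iterparse(path, events=('start',)):
                return KNOWN_ROOTS.get(element.tag)  # first start event = root
            return None

        tool = identify_report('report.xml')
        if tool is not None:
            tree = ET.parse('report.xml')  # parse fully only when recognized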
