Search Results

Search found 15380 results on 616 pages for 'man with python'.

Page 395 of 616

  • Keeping filters in Django Admin

    - by Tomo
    What I would like to achieve is:

    1) I go to the admin site and apply some filters to the list of objects.
    2) I click an object to edit it, edit, edit, and hit 'Save'.
    3) The site takes me back to the list of objects... unfiltered.

    I'd like to have the filter from step 1 remembered and applied. Is there an easy way to do it?
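    One possible approach (a hedged sketch, not from the original question; it assumes a ModelAdmin subclass, the pre-1.4 change_view signature, and that the browser sends a Referer header) is to remember the filtered changelist URL when the edit page is opened and redirect back to it after a plain 'Save':

        from django.contrib import admin
        from django.http import HttpResponseRedirect

        class FilterPreservingAdmin(admin.ModelAdmin):
            def change_view(self, request, object_id, extra_context=None):
                referer = request.META.get('HTTP_REFERER', '')
                if request.method == 'GET' and '?' in referer:
                    # Remember the filtered changelist the user came from.
                    request.session['changelist_filters'] = referer
                return super(FilterPreservingAdmin, self).change_view(
                    request, object_id, extra_context)

            def response_change(self, request, obj):
                url = request.session.get('changelist_filters')
                done = not ('_continue' in request.POST or '_saveasnew' in request.POST
                            or '_addanother' in request.POST)
                if url and done:
                    # Plain 'Save': go back to the remembered, filtered list.
                    del request.session['changelist_filters']
                    return HttpResponseRedirect(url)
                return super(FilterPreservingAdmin, self).response_change(request, obj)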

    Read the article

  • Possible to change function name in definition?

    - by Bird Jaguar IV
    I tried several ways to change the function name in the definition, but they failed.

        >>> def f(): pass
        >>> f.__name__
        'f'
        >>> def f(): f.__name__ = 'new name'
        >>> f.__name__
        'f'
        >>> def f(): self.__name__ = 'new name'
        >>> f.__name__
        'f'

    But I can change the name attribute after defining it.

        >>> def f(): pass
        >>> f.__name__ = 'new name'
        >>> f.__name__
        'new name'

    Any way to change/set it in the definition (other than using a decorator)?
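    An added note, not part of the original question: the statements inside a def body only run when the function is called, which is why the middle two attempts have no effect at definition time. A hedged sketch of one decorator-free workaround, building a renamed copy with types.FunctionType:

        import types

        def f():
            pass

        # Build a new function object that reuses f's code but carries a different name.
        g = types.FunctionType(f.__code__, f.__globals__, 'new name')
        print(g.__name__)  # 'new name'
        # The underlying code object still has co_name 'f', so tracebacks keep showing 'f'.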

    Read the article

  • How can I make a dashboard with all pending tasks using Celery?

    - by e-satis
    I want to have some place where I can watch all the pending tasks. I'm not talking about the registered functions/classes as tasks, but the actual scheduled jobs, for which I could display: name, task_id, eta, worker, etc. Using Celery 2.0.2 and djcelery, I found `inspect` in the documentation. I tried:

        from celery.task.control import inspect

        def get_scheduled_tasks(nodes=None):
            if nodes:
                i = inspect(nodes)
            else:
                i = inspect()
            scheduled_tasks = []
            dump = i.scheduled()
            if dump:
                for worker, tasks in dump:
                    for task in tasks:
                        scheduled_task = {}
                        scheduled_task.update(task["request"])
                        del task["request"]
                        scheduled_task.update(task)
                        scheduled_task["worker"] = worker
                        scheduled_tasks.append(scheduled_task)
            return scheduled_tasks

    But it hangs forever on dump = i.scheduled(). Strange, because otherwise everything works. Using Ubuntu 10.04, Django 1.0 and virtualenv.
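    An added, hedged note (not from the original post): the broadcast-based inspect calls wait for replies from live workers over the broker, so they can appear to hang when no worker is reachable from the broker the Django process is configured to use. A sketch with an explicit timeout and dictionary iteration, assuming the inspect class of this Celery version accepts a timeout argument:

        from celery.task.control import inspect

        # Return even if no worker answers within 5 seconds.
        i = inspect(timeout=5)
        dump = i.scheduled() or {}
        for worker, tasks in dump.items():  # scheduled() maps worker name -> list of tasks
            for task in tasks:
                print "%s  eta=%s  id=%s" % (worker, task.get("eta"), task["request"]["id"])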

    Read the article

  • Saving a Django form with a Many2Many field with through table

    - by PhilGo20
    So I have this model with multiple ManyToMany relationships. Two of those (EventCategorizing and EventLocation) are through tables/intermediary models:

        class Event(models.Model):
            """Event information for Way-finding and Navigator application"""
            # categories associated with the location
            categories = models.ManyToManyField('EventCategorizing', null=True, blank=True,
                                                help_text="categories associated with the location")
            # images related to the event
            images = models.ManyToManyField(KMSImageP, null=True, blank=True)
            creator = models.ForeignKey(User, verbose_name=_('creator'),
                                        related_name="%(class)s_created")
            locations = models.ManyToManyField('EventLocation', null=True, blank=True)

    In my view, I first need to save the creator as the request user, so I use the commit=False parameter to get the form values:

        if event_form.is_valid():
            event = event_form.save(commit=False)
            # we save the request user as the creator
            event.creator = request.user
            event.save()
            event = event_form.save_m2m()
            event.save()

    I get the following error:

        *** TypeError: 'EventCategorizing' instance expected

    I can manually add the M2M relationship to my "event" instance, but I am sure there is a simpler way. Am I missing something?
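    A hedged sketch of the usual pattern when a ManyToManyField is declared with a through model (not from the original post; the field names on the intermediary model are assumed, since they are not shown): save_m2m() cannot populate such a relation, so the intermediary rows are created explicitly:

        if event_form.is_valid():
            event = event_form.save(commit=False)
            event.creator = request.user
            event.save()
            event_form.save_m2m()  # fills the plain M2M fields (e.g. images)

            # Relations that go through an intermediary model must be created by hand.
            for category in event_form.cleaned_data.get('categories', []):
                EventCategorizing.objects.create(event=event, category=category)  # assumed field names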

    Read the article

  • Django - partially validating form

    - by aeter
    I'm new to Django, trying to process some forms. I have this form for entering information (creating a new ad) in one template:

        class Ad(models.Model):
            ...
            category = models.CharField("Category", max_length=30, choices=CATEGORIES)
            sub_category = models.CharField("Subcategory", max_length=4, choices=SUBCATEGORIES)
            location = models.CharField("Location", max_length=30, blank=True)
            title = models.CharField("Title", max_length=50)
            ...

    I validate it with is_valid() just fine. In a second template I want to use 2 fields from the same form ("category" and "sub_category") for filtering information, and here is_valid() would not work correctly, because it validates the entire form and I need to validate only those 2 fields. I have tried the following:

        ...
        if request.method == 'POST':
            # If a filter for data has been submitted:
            form = AdForm(request.POST)
            try:
                form = form.clean()
                category = form.category
                sub_category = form.sub_category
                latest_ads_list = Ad.objects.filter(category=category)
            except ValidationError:
                latest_ads_list = Ad.objects.all().order_by('pub_date')
        else:
            latest_ads_list = Ad.objects.all().order_by('pub_date')
            form = AdForm()
        ...

    but it doesn't work. How can I validate only the 2 fields category and sub_category?
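    A hedged sketch of one common approach (not from the original post; it assumes the same CATEGORIES/SUBCATEGORIES choices are importable): define a second, smaller form that holds only the two filter fields and validate that instead of the full AdForm:

        from django import forms

        class AdFilterForm(forms.Form):
            # Only the two fields needed for filtering.
            category = forms.ChoiceField(choices=CATEGORIES)
            sub_category = forms.ChoiceField(choices=SUBCATEGORIES)

    and in the view:

        latest_ads_list = Ad.objects.all().order_by('pub_date')
        if request.method == 'POST':
            filter_form = AdFilterForm(request.POST)
            if filter_form.is_valid():
                cd = filter_form.cleaned_data
                latest_ads_list = Ad.objects.filter(category=cd['category'],
                                                    sub_category=cd['sub_category'])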

    Read the article

  • Lucene: Fastest way to return the document occurrence of a phrase?

    - by dont say the kid's name
    Hi Guys, I am trying to use Lucene (actually PyLucene!) to find out how many documents contain my exact phrase. My code currently looks like this... but it runs rather slow. Does anyone know a faster way to return document counts?

        phraseList = ["some phrase 1", "some phrase 2"]  # etc, a list of phrases...
        countsearcher = IndexSearcher(SimpleFSDirectory(File(STORE_DIR)), True)
        analyzer = StandardAnalyzer(Version.LUCENE_CURRENT)

        for phrase in phraseList:
            query = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse("\"" + phrase + "\"")
            scoreDocs = countsearcher.search(query, 200).scoreDocs
            print "count is: " + str(len(scoreDocs))
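    A hedged sketch (not from the original post) of one way to count matches without collecting a window of score documents: the TopDocs object returned by search() carries a totalHits field, so the count does not depend on the 200 cap. Assuming the same PyLucene setup as above:

        for phrase in phraseList:
            query = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse('"%s"' % phrase)
            # totalHits is the number of matching documents; asking for a single
            # result avoids building a large scored result window.
            top_docs = countsearcher.search(query, 1)
            print "count is: %d" % top_docs.totalHits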

    Read the article

  • Can I db.put models without db.getting them first?

    - by Liron
    I tried to do something like:

        ss = Screenshot(key=db.Key.from_path('myapp_screenshot', 123), name='flowers')
        db.put([ss, ...])

    It seems to work on my dev_appserver, but on live I get this traceback:

        05-07 09:50PM 19.964 File "/base/data/home/apps/quixeydev3/12.341796548761906563/common/appenginepatch/appenginepatcher/patch.py", line 600, in put
        E 05-07 09:50PM 19.964   result = old_db_put(models, *args, **kwargs)
        E 05-07 09:50PM 19.964 File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1278, in put
        E 05-07 09:50PM 19.964   keys = datastore.Put(entities, rpc=rpc)
        E 05-07 09:50PM 19.964 File "/base/python_runtime/python_lib/versions/1/google/appengine/api/datastore.py", line 284, in Put
        E 05-07 09:50PM 19.965   raise _ToDatastoreError(err)
        E 05-07 09:50PM 19.965 InternalError: the new entity or index you tried to insert already exists

    I happen to know just the ID of an existing Screenshot entity I want to update; that's why I was manually constructing its key. Am I doing it wrong?

    Read the article

  • Why does SQLAlchemy with psycopg2 use_native_unicode have poor performance?

    - by Bob Dover
    I'm having a difficult time figuring out why a simple SELECT query is taking such a long time with sqlalchemy using raw SQL (I'm getting 14600 rows/sec, but when running the same query through psycopg2 without sqlalchemy, I'm getting 38421 rows/sec). After some poking around, I realized that toggling sqlalchemy's use_native_unicode parameter in the create_engine call actually makes a huge difference. This query takes 0.5 secs to retrieve 7300 rows:

        from sqlalchemy import create_engine

        engine = create_engine("postgresql+psycopg2://localhost...", use_native_unicode=True)
        r = engine.execute("SELECT * FROM logtable")
        fetched_results = r.fetchall()

    This query takes 0.19 secs to retrieve the same 7300 rows:

        engine = create_engine("postgresql+psycopg2://localhost...", use_native_unicode=False)
        r = engine.execute("SELECT * FROM logtable")
        fetched_results = r.fetchall()

    The only difference between the 2 queries is use_native_unicode. But sqlalchemy's own docs state that it is better to keep use_native_unicode=True (http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html). Does anyone know why use_native_unicode is making such a big performance difference? And what are the ramifications of turning it off?

    Read the article

  • How do I specify a null relation in SQLAlchemy?

    - by Jesse
    Not sure what the correct title for this question should be. I have the following schema: Matters have a one-many relationship to WorkItems. WorkItems have a one-one (or one-zero) relationship to LineItems. I am trying to create the following relation between Matters and WorkItems:

        Matter.unbilled_work_items = orm.relation(WorkItem,
            primaryjoin = (Matter.id == WorkItem.matter_id) and (WorkItem.line_item_id == None),
            foreign_keys = [WorkItem.matter_id, WorkItem.line_item_id],
            viewonly = True
        )

    This throws:

        AttributeError: '_Null' object has no attribute 'table'

    That seems to be saying that the second clause in the primaryjoin returns an object of type _Null, but it seems to be expecting something with a "table" attribute. This seems like it should be pretty straightforward to me, am I missing something obvious?
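    A hedged sketch of the usual fix (not from the original post): Python's and keyword cannot combine two SQL expressions, so the primaryjoin needs SQLAlchemy's and_() construct instead:

        from sqlalchemy import and_, orm

        Matter.unbilled_work_items = orm.relation(
            WorkItem,
            # and_() emits a SQL AND; the plain `and` keyword just evaluates to
            # one of its operands at Python level, which breaks the join condition.
            primaryjoin=and_(Matter.id == WorkItem.matter_id,
                             WorkItem.line_item_id == None),
            foreign_keys=[WorkItem.matter_id, WorkItem.line_item_id],
            viewonly=True,
        )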

    Read the article

  • Interrupting `while loop` with keyboard in Cython

    - by linello
    I want to be able to interrupt a long function with Cython, using the usual CTRL+C interrupt command. My C++ long function is repeatedly called inside a while loop from Cython code, but I want to be able, during the loop, to send an "interrupt" and stop the while loop. The interrupt should also wait for longFunction() to finish, so that no data are lost or left in an unknown state. This is one of my first implementations, which obviously doesn't work:

        computed = 0
        print "Computing long function..."
        while computed == 0:
            try:
                computed = self.thisptr.aLongFunction()
            except (KeyboardInterrupt, SystemExit):
                computed = 1
                print '\n! Received keyboard interrupt.\n'
                break

    (p.s. self.thisptr is the pointer to the current class which implements aLongFunction())
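    A hedged sketch of one alternative (not from the original post): install a SIGINT handler that only records the request, let the current call to aLongFunction() finish, and test the flag between iterations. This replaces the loop in the same Cython method and assumes the call itself does not need to be aborted mid-run:

        import signal

        stop_requested = [False]

        def _handle_sigint(signum, frame):
            # Only record the request; the loop exits after the current call returns.
            stop_requested[0] = True

        signal.signal(signal.SIGINT, _handle_sigint)

        print "Computing long function..."
        computed = 0
        while computed == 0 and not stop_requested[0]:
            computed = self.thisptr.aLongFunction()

        if stop_requested[0]:
            print '\n! Received keyboard interrupt.\n'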

    Read the article

  • socket.error: [Errno 10054]

    - by C0d3r
    import socket, sys

        if len(sys.argv) != 3:
            print "Usage: ./supabot.py <host> <port>"
            sys.exit(1)

        irc = sys.argv[1]
        port = int(sys.argv[2])
        sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sck.connect((irc, port))
        sck.send('NICK supaBOT\r\n')
        sck.send('USER supaBOT supaBOT supaBOT :supaBOT Script\r\n')
        sck.send('JOIN #darkunderground' + '\r\n')
        data = ''
        while True:
            data = sck.recv(1024)
            if data.find('PING') != -1:
                sck.send('PONG ' + data.split()[1] + '\r\n')
                print data
            elif data.find('!info') != -1:
                sck.send('PRIVMSG #darkunderground supaBOT v1.0 by sourD' + '\r\n')
                print sck.recv(1024)

    When I run this code I get this error:

        socket.error: [Errno 10054] An existing connection was forcibly closed by the remote host

    It says that the error is on line 16, in data = sck.recv(1024).
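    An added, hedged observation (not from the original post): in the IRC protocol a trailing parameter that contains spaces must be prefixed with a colon, so the PRIVMSG above is malformed; whether that alone explains the disconnect is uncertain, but the corrected send would be:

        # The ':' marks the rest of the line as one trailing parameter (RFC 1459).
        sck.send('PRIVMSG #darkunderground :supaBOT v1.0 by sourD\r\n')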

    Read the article

  • Wrong values reported by pyPDF for various box regions

    - by romor
    Using pyPdf, for most files I get matched results concerning various box dimensions compared to what Acrobat reports. However, for some files I get different values reported by pyPdf and Acrobat, like:

        pyPdf:
            artBox:   595.3 x 841.9
            bleedBox: 595.3 x 841.9
            cropBox:  595.3 x 841.9
            trimBox:  517.3 x 754

        Acrobat:
            artBox:   439.35 x 666.13 pt
            bleedBox: 439.35 x 666.13 pt
            cropBox:  439.35 x 666.13 pt
            trimBox:  439.35 x 666.13 pt

    I thought it was a units issue, but then the ratio between widths and heights doesn't match either, not to mention the trimBox mismatch. The correct results are those reported by Acrobat, of course. Does someone know why this is, and is there a way to get the correct dimensions using pyPdf? Thanks.

    A couple of minutes later... After reading this question: "Are PDF box coordinates relative or absolute?" I figured I hadn't considered that the upper left corner can be different from 0 (zero). It turned out that the box starts at 77.95 x 87.87, so if we reduce the reported trimBox values by these amounts the correct result is obtained.

        artBox:   0 x 0
        bleedBox: 0 x 0
        cropBox:  0 x 0
        trimBox:  77.95 x 87.87

    The other boxes seem to have misleading values, or I misinterpret them. Snippet:

        from pyPdf import PdfFileReader

        pdfread = PdfFileReader(file('my.pdf', 'rb'))
        page = 1
        width = pdfread.getPage(page).trimBox[2] - pdfread.getPage(page).trimBox[0]
        height = pdfread.getPage(page).trimBox[3] - pdfread.getPage(page).trimBox[1]
        print width, height

    Read the article

  • Any way to set or overwrite the __line__ and __file__ metadata?

    - by charles.merriam
    I'm writing some code that needs to change function signatures. Right now, I'm using Simionato's FunctionMaker class, which uses the (hacky) inspect module and does a compile. Unfortunately, this still loses the line and file metadata. Does anyone know:

    1) Is it possible to overwrite these values in some odd way?
    2) Is hacking up a class with a complex __getattribute__() to intercept the values, while also trying to make the class look like a function, any more possible than a moose with a flying nun hat?
    3) Is there an alternative to the (hacky) inspect module?
    4) Is PEP 362 dead, dead, dead?

    I know decorators and cPickle users fight with this. What other situations is the read-only metadata in people's way? I appreciate any insights. Thank you.

    Read the article

  • How to stream an HttpResponse with Django

    - by muudscope
    I'm trying to get the 'hello world' of streaming responses working for Django (1.2). I figured out how to use a generator and the yield statement. But the response is still not streaming. I suspect there's a middleware that's mucking with it, maybe the ETag calculator? But I'm not sure how to disable it. Can somebody please help? Here's the "hello world" of streaming that I have so far:

        def stream_response(request):
            resp = HttpResponse(stream_response_generator())
            return resp

        def stream_response_generator():
            for x in range(1, 11):
                yield "%s\n" % x  # Returns a chunk of the response to the browser
                time.sleep(1)
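    An added, hedged note (not from the original post): any middleware that reads response.content, such as the gzip, ETag/common, or conditional GET middleware, will exhaust the generator before anything reaches the browser. A sketch of a Django 1.2-era settings module with the usual buffering suspects removed:

        USE_ETAGS = False

        MIDDLEWARE_CLASSES = (
            # 'django.middleware.gzip.GZipMiddleware',           # buffers the whole body
            # 'django.middleware.http.ConditionalGetMiddleware', # needs Content-Length / ETag
            'django.middleware.common.CommonMiddleware',
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
        )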

    Read the article

  • Django context processor gets AnonymousUser

    - by myfreeweb
    instead of User.

        def myview(request):
            return render_to_response('tmpl.html', {'user': User.objects.get(id=1)})

    works fine and passes User to the template. But

        def myview(request):
            return render_to_response('tmpl.html', {}, context_instance=RequestContext(request))

    with a context processor

        def user(request):
            from django.contrib.auth.models import User
            return {'user': User.objects.get(id=1)}

    passes AnonymousUser, so I can't get the variables I need :( What's wrong?
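    An added, hedged guess (not from the original post): the built-in auth context processor also populates a 'user' variable with request.user, and whichever processor runs later in TEMPLATE_CONTEXT_PROCESSORS wins, so the custom value can be replaced by AnonymousUser. A minimal sketch that sidesteps the clash by using a different key:

        # myapp/context_processors.py
        from django.contrib.auth.models import User

        def fixed_user(request):
            # A key the auth context processor does not overwrite.
            return {'fixed_user': User.objects.get(id=1)}

    The template would then refer to {{ fixed_user }} instead of {{ user }}.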

    Read the article

  • Filter objects within two seconds of one another using SQLAlchemy

    - by Arrieta
    Hello: I have two tables with a column 'date'. One holds (name, date) and the other holds (date, p1, p2). Given a name, I want to use the date in table 1 to query p1 and p2 from table 2; the match should happen if the date in table 1 is within two seconds of the date in table 2. How can you accomplish this using SQLAlchemy? I've tried (unsuccessfully) to use the between operator with a clause like:

        td = datetime.timedelta(seconds=2)
        q = session.query(table1, table2).filter(table1.name == 'my_name').\
            filter(between(table1.date, table2.date - td, table2.date + td))

    Any thoughts?

    Read the article

  • Django startup importing causes reverse to happen

    - by nicknack
    This might be an isolated problem, but I figured I'd ask in case someone has thoughts on a graceful approach to address it. Here's the setup:

    views.py:

        from django.http import HttpResponse
        import shortcuts

        def mood_dispatcher(request):
            mood = magic_function_to_guess_my_mood(request)
            return HttpResponse('Please go to %s' % shortcuts.MOODS.get(mood, somedefault))

    shortcuts.py:

        MOODS = # expensive load that causes a reverse to happen

    The issue is that shortcuts.py causes an exception to be thrown when a reverse is attempted before Django is done building the URLs. However, views.py doesn't actually need shortcuts.py at import time (it is used only when mood_dispatcher is actually called). Obvious initial solutions are:

    1) Import shortcuts inline (just not very nice stylistically)
    2) Make shortcuts.py build MOODS lazily (just more work)

    What I ideally would like is to be able to say, at the top of views.py, "import shortcuts except when loading urls".

    Read the article

  • Attribute Address getting displayed instead of Attribute Value

    - by Manish
    I am trying to create the following. I want to have one drop down menu. Depending on the option selected in the first drop down menu, the options in the second drop down menu will be displayed. The options in the 2nd drop down menu are supposed to be dynamic, i.e., the options change when the value in the first menu changes. Here, instead of getting the drop down menus, I am getting only the following:

        Choose your Option1:
        Choose your Option2:

    Note: I strictly don't want to use javascript.

    home_form.py:

        class HomeForm(forms.Form):
            def __init__(self, *args, **kwargs):
                var_filter_con = kwargs.pop('filter_con', None)
                super(HomeForm, self).__init__(*args, **kwargs)
                if var_filter_con == '***':
                    var_empty_label = None
                else:
                    var_empty_label = ' '
                self.option2 = forms.ModelChoiceField(
                    queryset=db_option2.objects.filter(option1_id=var_filter_con).order_by("name"),
                    empty_label=var_empty_label,
                    widget=forms.Select(attrs={"onChange": 'this.form.submit();'}))
                self.option1 = forms.ModelChoiceField(
                    queryset=db_option1.objects.all().order_by("name"),
                    empty_label=None,
                    widget=forms.Select(attrs={"onChange": 'this.form.submit();'}))

    view.py:

        def option_view(request):
            if request.method == 'POST':
                form = HomeForm(request.POST)
                if form.is_valid():
                    cd = form.cleaned_data
                    if cd.has_key('option1'):
                        f = HomeForm(filter_con=cd.get('option1'))
                        return render_to_response('homepage.html', {'home_form': f},
                                                  context_instance=RequestContext(request))
                return render_to_response('invalid_data.html', {'form': form},
                                          context_instance=RequestContext(request))
            else:
                f = HomeForm(filter_con='***')
                return render_to_response('homepage.html', {'home_form': f},
                                          context_instance=RequestContext(request))

    homepage.html:

        <!DOCTYPE HTML>
        <head>
            <title>Nivaaran</title>
        </head>
        <body>
            <form method="post" name="choose_opt" action="">
                {% csrf_token %}
                Choose your Option1: {{ home_form.option1 }} <br/>
                Choose your Option2: {{ home_form.option2 }}
            </form>
        </body>
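    A hedged sketch of the likely fix (not from the original post): fields created inside __init__ have to be registered in self.fields; setting plain attributes like self.option1 leaves the form without those fields, so the template renders nothing (or the field object's repr) instead of a select widget:

        class HomeForm(forms.Form):
            def __init__(self, *args, **kwargs):
                var_filter_con = kwargs.pop('filter_con', None)
                super(HomeForm, self).__init__(*args, **kwargs)
                empty_label = None if var_filter_con == '***' else ' '
                # Register the dynamic fields with the form, not as attributes.
                self.fields['option1'] = forms.ModelChoiceField(
                    queryset=db_option1.objects.all().order_by("name"),
                    empty_label=None,
                    widget=forms.Select(attrs={"onChange": 'this.form.submit();'}))
                self.fields['option2'] = forms.ModelChoiceField(
                    queryset=db_option2.objects.filter(option1_id=var_filter_con).order_by("name"),
                    empty_label=empty_label,
                    widget=forms.Select(attrs={"onChange": 'this.form.submit();'}))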

    Read the article

  • Adding a child node to a JSON node dynamically

    - by Sai
    I have to create a nested multi-level JSON structure depending on the result set that I get from MySQL. I created a JSON object initially. Now I want to add child nodes to the already existing child nodes in the object.

        d = collections.OrderedDict()
        jsonobj = {"test": dict(updated_at="today", ID="ID", ads=[])}
        for rows1 in rs:
            jsonobj['list']["ads"].append(dict(unit="1", type="ad_type", id="123",
                                               updated_at="today", x_id="111", x_name="test"))
        cur.execute("SELECT * from f_test")
        rs1 = cur.fetchall()
        for rows2 in rs1:
            propertiesObj = []
            d["name"] = "propName"
            d["type"] = "TypeName"
            d["value"] = "Value1"
            propertiesObj.append(d)
            jsonobj['play_list']["ads"].append()

    In the last line above I want to add another child node to [play_list].[ads], which is again an array/list. The output should look like the following: [list].[ads].[preferences].

    Read the article

  • How to convert binary data into integers?

    - by kaki
    When I am using wave_read.readframes() I am getting the result as binary data such as /x00/x00/x00:/x16#/x05" etc., a very long string. When asked for a single frame it gives @/x00 or \xe3\xff or so. I want this individual frame data as integers. How can I convert them into integers to store them in an array?
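    A hedged sketch (not from the original question) of the usual conversion, assuming a mono file with 16-bit signed little-endian samples, which is the most common WAV layout; the file name is hypothetical:

        import array
        import struct
        import wave

        w = wave.open('sound.wav', 'rb')
        frames = w.readframes(w.getnframes())   # raw bytes, 2 bytes per sample here

        # Option 1: convert the whole buffer at once with the array module.
        samples = array.array('h')              # 'h' = signed 16-bit integers
        samples.fromstring(frames)              # use frombytes() on Python 3

        # Option 2: unpack a single 2-byte frame with struct.
        (value,) = struct.unpack('<h', frames[:2])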

    Read the article

  • How to create and restore a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and use to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    In order to test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data:

        from myproject.model import meta

        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling Session.commit after the fact, and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and directly touch the database, since SqlAlchemy supports different database engines.
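    A hedged sketch (not from the original post) of what the restore step usually looks like: loads() only rebuilds detached objects, so each one still has to be merged into a session and the transaction committed. Assuming the same Session and metadata as above:

        from sqlalchemy.ext.serializer import loads

        restored = loads(serialized_data, meta.metadata, Session)
        for obj in restored:
            # merge() reattaches the detached object and issues an INSERT or UPDATE.
            Session.merge(obj)
        Session.commit()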

    Read the article
