Search Results

Search found 16680 results on 668 pages for 'python datetime'.

Page 362/668 | < Previous Page | 358 359 360 361 362 363 364 365 366 367 368 369  | Next Page >

  • subprocess.Popen doesn't work when args is sequence

    - by pero
    I'm having a problem with subprocess.Popen when the args parameter is given as a sequence. For example, this works (it prints the correct size of the /home/support/Maildir dir):

        import subprocess

        maildir = "/home/support/Maildir"
        size = subprocess.Popen(["du -s -b " + maildir], shell=True,
                                stdout=subprocess.PIPE).communicate()[0].split()[0]
        print size

    But this doesn't work (try it):

        size = subprocess.Popen(["du", "-s -b", maildir], shell=True,
                                stdout=subprocess.PIPE).communicate()[0].split()[0]
        print size

    What's wrong?
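
    A minimal sketch of the sequence form, assuming GNU du is on the PATH: with shell=True the first list element is the whole command string and any extra elements become arguments to the shell itself, so to pass a sequence drop shell=True and give every option its own element ("-s -b" as a single string is treated as one unknown option).

        # Hedged sketch: sequence args, no shell; "-s" and "-b" are separate elements.
        import subprocess

        maildir = "/home/support/Maildir"
        proc = subprocess.Popen(["du", "-s", "-b", maildir],
                                stdout=subprocess.PIPE)
        size = proc.communicate()[0].split()[0]
        print size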

    Read the article

  • Django syncdb error

    - by Hulk
    In /mysite/project4 I have:

        class notes(models.Model):
            created_by = models.ForeignKey(User)
            detail = models.ForeignKey(Details)

    Details and User are defined in the same module, i.e. /mysite/project1. In the project1 models I have defined:

        class User():
            ......

        class Details():
            ......

    When the DB is synced, there is an error saying:

        Error: One or more models did not validate:
        project4: Accessor for field 'detail' clashes with related field.
        Add a related_name argument to the definition for 'detail'.

    How can this be solved? Thanks.
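
    A minimal sketch of the fix the error message points to, assuming the model names from the question: give each ForeignKey an explicit related_name so the reverse accessors Django generates on User and Details no longer clash. The related_name values below are illustrative choices.

        # Hedged sketch; User and Details are the models from project1.
        from django.db import models

        class notes(models.Model):
            created_by = models.ForeignKey(User, related_name='notes_created')
            detail = models.ForeignKey(Details, related_name='notes')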

    Read the article

  • How to import blog.py (the import picks up the 'blog' folder instead)

    - by zjm1126
    My directory layout (I am in a.py):

        my_Project
        |---- blog
              |----- __init__.py
              |----- a.py
              |----- blog.py

    When I do 'from blog import something' in a.py, it shows an error:

        from blog import BaseRequestHandler
        ImportError: cannot import name BaseRequestHandler

    I think it imports the blog folder, not blog.py, so how do I import blog.py?
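
    A minimal sketch of one common fix, assuming the layout above and that BaseRequestHandler lives in blog/blog.py: import through the full dotted path so the module inside the package is picked up instead of the package itself (renaming either the package or the module also avoids the clash).

        # Hedged sketch: qualify the module with its package name.
        from blog.blog import BaseRequestHandler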

    Read the article

  • Help calling a def from a class

    - by wtzolt
    Hello, noob question...

        class msgbox:
            def __init__(self, lbl_msg = '', dlg_title = ''):
                self.wTree = gtk.glade.XML('msgbox.glade')
                self.wTree.get_widget('dialog1').set_title(dlg_title)
                self.wTree.get_widget('label1').set_text(lbl_msg)
                self.wTree.signal_autoconnect( {'on_okbutton1_clicked': self.done} )

            def done(self, w):
                self.wTree.get_widget('dialog1').destroy()

        class Fun(object):
            wTree = None

            def __init__(self):
                self.wTree = gtk.glade.XML("main.glade")
                self.wTree.signal_autoconnect( {'on_buttonOne': self.one,} )
                gtk.main()

            @yieldsleep
            def one(self, widget, data=None):
                self.msg = msgbox('Please wait...', '')
                yield 500
                self.msg = msgbox().done()  # <----------------???
                self.msg = msgbox('Done!', '')

    With this I get an error:

        messageBox().done()
        TypeError: done() takes exactly 2 arguments (1 given)

    How can I make the dialog box with "please wait" close before the second dialog box with "done" appears? Thank you.
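
    A minimal sketch of the likely fix, assuming the classes above: msgbox().done() builds a brand-new empty dialog and calls done without the widget argument its signature expects; calling done on the dialog that is already open (passing None for the unused widget) destroys the "please wait" box before the "Done!" one is created.

        # Hedged sketch of the body of Fun.one(); self.msg is the dialog created earlier.
        self.msg = msgbox('Please wait...', '')
        yield 500
        self.msg.done(None)            # close the existing dialog
        self.msg = msgbox('Done!', '')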

    Read the article

  • Django JSON serializable error

    - by Hulk
    With the code below, there is an error saying:

        File "/home/user/web_pro/info/views.py", line 184, in headerview
          raise TypeError("%r is not JSON serializable" % (o,))
        TypeError: <lastname: jerry> is not JSON serializable

    In the models code:

        class header(models.Model):
            firstname = models.ForeignKey(Firstname)
            lastname = models.ForeignKey(Lastname)

    In the views code:

        def headerview(request):
            header = header.objects.filter(created_by=my_id).order_by(order_by)[offset:limit]
            l_array = []
            l_array_obj = []
            for obj in header:
                l_array_obj = [obj.title, obj.lastname, obj.firstname]
                l_array.append(l_array_obj)
            dictionary_l.update({'Data': l_array})
            return HttpResponse(simplejson.dumps(dictionary_l), mimetype='application/javascript')

    What is this error and how do I resolve it? Thanks.
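
    A minimal sketch of the usual resolution, assuming the view above: simplejson cannot serialize model instances such as obj.lastname and obj.firstname, so convert each related object to a plain string (or pull out a concrete field) before dumping.

        # Hedged sketch; unicode() is used because the surrounding code is Python 2.
        l_array = []
        for obj in header:
            l_array.append([obj.title, unicode(obj.lastname), unicode(obj.firstname)])
        return HttpResponse(simplejson.dumps({'Data': l_array}),
                            mimetype='application/javascript')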

    Read the article

  • Node.js as a custom (streaming) upload handler for Django

    - by Gijs
    I want to build an upload-centric app using Django. One way to do this is with nginx's upload module (nonblocking), but it has its problems. Node.js is supposed to be a good candidate for this type of application. But how can I make Node.js act as an upload_handler() for Django (http://docs.djangoproject.com/en/1.1/topics/http/file-uploads/#modifying-upload-handlers-on-the-fly)? I'm not sure where to look for examples.
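
    For reference, a minimal sketch of what a custom Django upload handler looks like (the piece a Node.js front end would stand in for or hand off to); the class and method names follow Django's documented FileUploadHandler interface, while the handler name and the forwarding target are purely illustrative.

        # Hedged sketch: a streaming handler skeleton; a real handler would
        # forward raw_data to whatever service actually receives the upload.
        from django.core.files.uploadhandler import FileUploadHandler

        class PassThroughUploadHandler(FileUploadHandler):
            def receive_data_chunk(self, raw_data, start):
                # forward raw_data here; returning None keeps later handlers
                # from re-processing the chunk
                return None

            def file_complete(self, file_size):
                # returning None means this handler produced no file object
                return None

        # In a view, before request.POST/FILES is touched:
        # request.upload_handlers.insert(0, PassThroughUploadHandler())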

    Read the article

  • Queries in Django

    - by Hulk
    How do I query Employee to get all the addresses related to an employee? Employee.Add.all() does not work.

        class Employee():
            Add = models.ManyToManyField(Address)
            parent = models.ManyToManyField(Parent, blank=True, null=True)

        class Address(models.Model):
            address_emp = models.CharField(max_length=512)
            description = models.TextField()

            def __unicode__(self):
                return self.name()
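
    A minimal sketch of the usual pattern, assuming the models above: Employee must subclass models.Model, and a ManyToManyField is queried on an instance rather than on the class.

        # Hedged sketch; Address and Parent are the models from the question,
        # and the pk used in the lookup is illustrative.
        from django.db import models

        class Employee(models.Model):
            Add = models.ManyToManyField(Address)
            parent = models.ManyToManyField(Parent, blank=True, null=True)

        emp = Employee.objects.get(pk=1)
        addresses = emp.Add.all()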

    Read the article

  • Eventlet client-server

    - by johannix
    I'm trying to set up a server-client interaction: the server gets a string from the client, then the server returns data to the client. I got this working with socket.send and socket.recv, but was wondering how to get it working with files through socket.makefile. I've got most of the pieces working, but don't know how to force each of the pieces to read at the appropriate time. What appears to be happening is that both the client and the server are reading on the socket at the same time. Any ideas on how to let the server complete the original read before the client starts waiting for its information? Below are the core pieces that reproduce the read lock.

    Server:

        def handle(green_socket):
            file_handler = green_socket.makefile('rw')
            print 'before server read'
            lines = file_handler.readlines()
            print "readlines: %s" % (lines)

        print '** Server started **'
        server = eventlet.listen((HOST, PORT))
        pool = eventlet.GreenPool()
        while True:
            new_sock, address = server.accept()
            pool.spawn_n(handle, new_sock)

    Client:

        from eventlet.green import socket

        green_socket = eventlet.connect((HOST, PORT))
        file_handler = green_socket.makefile('rw')
        print 'before write'
        file_handler.writelines('client message')
        file_handler.flush()
        print 'before client read'
        lines = file_handler.readlines()
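
    A minimal sketch of one way to break the deadlock, assuming the file-object approach from the question: readlines() blocks until the peer closes its end, so both sides wait forever; reading a single newline-terminated line (and terminating each message with '\n') lets the server finish its read and reply.

        # Hedged sketch. Server handler: read one line instead of readlines().
        def handle(green_socket):
            file_handler = green_socket.makefile('rw')
            line = file_handler.readline()             # returns at the first '\n'
            file_handler.write('reply to: %s' % line)  # echoed line keeps its '\n'
            file_handler.flush()

        # Client side: terminate the message so readline() can return.
        file_handler.write('client message\n')
        file_handler.flush()
        reply = file_handler.readline()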

    Read the article

  • How to implement full text search in Django?

    - by Jannis
    I would like to implement a search function in a Django blogging application. The status quo is that I have a list of strings supplied by the user and the queryset is narrowed down by each string to include only those objects that match the string. See:

        if request.method == "POST":
            form = SearchForm(request.POST)
            if form.is_valid():
                posts = Post.objects.all()
                for string in form.cleaned_data['query'].split():
                    posts = posts.filter(
                        Q(title__icontains=string) |
                        Q(text__icontains=string) |
                        Q(tags__name__exact=string)
                    )
                return archive_index(request, queryset=posts, date_field='date')

    Now, what if I didn't want to concatenate each word that is searched for by a logical AND but with a logical OR? How would I do that? Is there a way to do that with Django's own QuerySet methods, or does one have to fall back to raw SQL queries? In general, is it a proper solution to do full text search like this, or would you recommend using a search engine like Solr, Whoosh or Xapian? What are their benefits? Thanks for taking the time.
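
    A minimal sketch of the OR variant, assuming the same form and fields: build a single Q object by OR-ing the per-string conditions together, then filter once.

        # Hedged sketch; assumes at least one search term. reduce() is a builtin
        # in the Python 2 used throughout this code.
        import operator
        from django.db.models import Q

        strings = form.cleaned_data['query'].split()
        query = reduce(operator.or_,
                       (Q(title__icontains=s) | Q(text__icontains=s) | Q(tags__name__exact=s)
                        for s in strings))
        posts = Post.objects.filter(query).distinct()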

    Read the article

  • Django exclude(**kwargs) help

    - by shawnjan
    Hey guys/gals! I have a question, something that I can't seem to find the solution for... Basically, I have a model called Environment, and I am passing all of them to a view, and there are particular environments that I would like to exclude. Now, I know there is an exclude function on a queryset, but I can't seem to figure out how to use it for multiple options... For example, I tried this but it didn't work:

        kwargs = {"name": "env1", "name": "env2"}
        envs = Environment.objects.exclude(**kwargs)

    But the only thing that it will exclude is the last "name" value in the dict of kwargs. I understand why it does that now (duplicate dict keys collapse into one), but I still can't seem to exclude multiple objects with one command. Any help is much appreciated! Shawn
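
    A minimal sketch of the usual answer, assuming the Environment model from the question: use the __in lookup so a single exclude() call drops every listed name.

        # Hedged sketch; the names are the examples from the question.
        envs = Environment.objects.exclude(name__in=["env1", "env2"])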

    Read the article

  • Vectorizing a for loop in numpy/scipy?

    - by user248237
    I'm trying to vectorize a for loop that I have inside of a class method. The for loop has the following form: it iterates through a bunch of points and, depending on whether a certain variable (called "self.condition_met" below) is true, calls a pair of functions on the point and adds the result to a list. Each point here is an element in a vector of lists, i.e. a data structure that looks like array([[1,2,3], [4,5,6], ...]). Here is the problematic function:

        def myClass:
            def my_inefficient_method(self):
                final_vector = []
                # Assume 'my_vector' and 'my_other_vector' are defined numpy arrays
                for point in all_points:
                    if not self.condition_met:
                        a = self.my_func1(point, my_vector)
                        b = self.my_func2(point, my_other_vector)
                    else:
                        a = self.my_func3(point, my_vector)
                        b = self.my_func4(point, my_other_vector)
                    c = a + b
                    final_vector.append(c)
                # Choose random element from resulting vector 'final_vector'

    self.condition_met is set before my_inefficient_method is called, so it seems unnecessary to check it each time, but I am not sure how to better write this. Since there are no destructive operations here, it seems like I could rewrite this entire thing as a vectorized operation; is that possible? Any ideas how to do this?
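
    A minimal sketch of one partial improvement, assuming the hypothetical names from the question (my_func1..my_func4, my_vector, my_other_vector, all_points): hoist the condition check out of the loop by selecting the function pair once; if the functions themselves accept whole arrays, the remaining loop can then be replaced by a single call per function.

        # Hedged sketch: pick the pair once, then build the result in one pass.
        if not self.condition_met:
            f, g = self.my_func1, self.my_func2
        else:
            f, g = self.my_func3, self.my_func4

        final_vector = [f(p, my_vector) + g(p, my_other_vector) for p in all_points]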

    Read the article

  • Why is my implementation of the Sieve of Atkin overlooking numbers close to the specified limit?

    - by Ross G
    My implementation either overlooks primes near the limit or composites near the limit; some limits work and others don't. I'm completely confused as to what is wrong.

        def AtkinSieve(limit):
            results = [2, 3, 5]
            sieve = [False] * limit
            factor = int(math.sqrt(lim))
            for i in range(1, factor):
                for j in range(1, factor):
                    n = 4*i**2 + j**2
                    if (n <= lim) and (n % 12 == 1 or n % 12 == 5):
                        sieve[n] = not sieve[n]
                    n = 3*i**2 + j**2
                    if (n <= lim) and (n % 12 == 7):
                        sieve[n] = not sieve[n]
                    if i > j:
                        n = 3*i**2 - j**2
                        if (n <= lim) and (n % 12 == 11):
                            sieve[n] = not sieve[n]
            for index in range(5, factor):
                if sieve[index]:
                    for jndex in range(index**2, limit, index**2):
                        sieve[jndex] = False
            for index in range(7, limit):
                if sieve[index]:
                    results.append(index)
            return results

    For example, when I generate primes up to the limit of 1000, the Atkin sieve misses the prime 997 but includes the composite 965. But if I generate up to the limit of 5000, the list it returns is completely correct.
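
    A hedged sketch of the likely culprit, based only on the code above: range() excludes its upper bound, so i and j never reach int(sqrt(limit)); with limit=1000 the representation 997 = 4*3**2 + 31**2 is never toggled, and 965 is toggled once instead of twice, which matches the symptoms. The sketch below sizes the sieve to limit + 1, computes the bound from limit (the posted code refers to an undefined lim), and extends the loop ranges by one.

        # Hedged corrected sketch; loop bodies are otherwise unchanged.
        import math

        def atkin_sieve(limit):
            results = [2, 3, 5]
            sieve = [False] * (limit + 1)
            factor = int(math.sqrt(limit))
            for i in range(1, factor + 1):
                for j in range(1, factor + 1):
                    n = 4*i**2 + j**2
                    if n <= limit and n % 12 in (1, 5):
                        sieve[n] = not sieve[n]
                    n = 3*i**2 + j**2
                    if n <= limit and n % 12 == 7:
                        sieve[n] = not sieve[n]
                    if i > j:
                        n = 3*i**2 - j**2
                        if n <= limit and n % 12 == 11:
                            sieve[n] = not sieve[n]
            for index in range(5, factor + 1):
                if sieve[index]:
                    for multiple in range(index**2, limit + 1, index**2):
                        sieve[multiple] = False
            for index in range(7, limit + 1):
                if sieve[index]:
                    results.append(index)
            return results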

    Read the article

  • Copying contents of a model

    - by Hulk
    If there exists old data for a model, say

        query = Emp.objects.filter(pk=profile.id)

    is there an easy way to copy the same values into the same model again, as a new row (so the id will be different)? I have this requirement. Thanks.
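
    A minimal sketch of the common idiom, assuming the Emp model from the question: clear the primary key on a fetched instance and save it, which inserts a copy with a fresh id.

        # Hedged sketch; related objects are not copied automatically.
        original = Emp.objects.get(pk=profile.id)
        original.pk = None
        original.save()   # saved as a new row with a new id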

    Read the article

  • How do I use m2crypto to validate a X509 certificate chain in a non-SSL setting

    - by Brock Pytlik
    I'm trying to figure out how, using m2crypto, to validate the chain of trust from a public-key version of an X509 certificate back to one of a set of known root CAs, when the chain may be arbitrarily long. The SSL.Context module looks promising, except that I'm not doing this in the context of an SSL connection and I can't see how the information passed to load_verify_locations is used. Essentially, I'm looking for the interface that's equivalent to:

        openssl verify pub_key_x509_cert

    Is there something like that in m2crypto? Thanks.
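
    Not an m2crypto answer, but a hedged fallback sketch: the openssl CLI command the question already names can be driven from Python, with the trusted roots supplied via -CAfile (the file names here are illustrative).

        # Hedged sketch; requires the openssl binary on the PATH.
        import subprocess

        proc = subprocess.Popen(
            ["openssl", "verify", "-CAfile", "trusted_roots.pem", "pub_key_x509_cert"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        chain_ok = proc.returncode == 0 and out.strip().endswith("OK")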

    Read the article

  • Deploying Pylons with Nginx reverse proxy?

    - by resopollution
    Is there a tutorial on how to deploy Pylons with Nginx? I've been able to start nginx and then serve Pylons on :8080 with

        paster serve development.ini

    However, I can't seem to do other stuff, as Pylons locks me into that serve mode. If I try to Ctrl+Z out of the running server to do other stuff on my server, Pylons goes down. There must be a different method of deployment. PS - I've done all of this: http://wiki.pylonshq.com/display/pylonscookbook/Running+Pylons+with+NGINX?showComments=true#comments I just have no clue what to do with the Pylons app other than paster serve. Not sure if there is a different method.

    Read the article

  • Django url parameters

    - by Hulk
    How do I pass two parameters in URLs in Django?

        <script>
            url = "/toolbox/display/" + id + "2";
            window.location = url;
        </script>

    Also, how is this handled in urls.py?

        (r'^display/(?P<rid>\d+)/(?P<param>\d+)/$', 'table_display'),

    In views:

        def table_display(request, rid, param):
            print param  # This should print 2
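
    A minimal sketch of the usual shape, assuming the pattern above: the URL has to carry the two values separated (and followed) by slashes, so the browser-side URL should be built as "/toolbox/display/" + id + "/2/" for the view to receive both parameters. The dotted view path below is illustrative.

        # Hedged sketch of the Python side.
        from django.conf.urls.defaults import patterns

        urlpatterns = patterns('',
            (r'^display/(?P<rid>\d+)/(?P<param>\d+)/$', 'toolbox.views.table_display'),
        )

        def table_display(request, rid, param):
            print param   # prints 2 for /toolbox/display/<rid>/2/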

    Read the article

  • Undefined variable from import when using wxPython in pydev

    - by Bibendum
    I just downloaded wxPython and was running some of the sample programs from here. However, on every line that uses a variable from wx.*, I get an "Undefined variable from import" error. For example, the following program generates five errors, on lines 1, 4, 8, and two on line 5:

        import wx

        class MyFrame(wx.Frame):
            """ We simply derive a new class of Frame. """
            def __init__(self, parent, title):
                wx.Frame.__init__(self, parent, title=title, size=(200,100))
                self.control = wx.TextCtrl(self, style=wx.TE_MULTILINE)
                self.Show(True)

        app = wx.App(False)
        frame = MyFrame(None, 'Small editor')
        app.MainLoop()

    The program, however, compiles and runs perfectly. I haven't made any significant modifications to PyDev or Eclipse, and the wxPython install is fresh.

    Read the article

  • Problem with hash function: hash(1) == hash(1.0)

    - by mtasic
    I have a dict with ints, floats, and strings as keys. The problem is that when there are an int a and a float b with float(a) == b, their hash values are the same, and that is what I do NOT want, because I need distinct keys in these cases in order to get the corresponding values. Example:

        d = {1: '1', 1.0: '1.0', '1': 1, '1.0': 1.0}
        d[1] == '1.0'
        d[1.0] == '1.0'
        d['1'] == 1
        d['1.0'] == 1.0

    What I need is:

        d = {1: '1', 1.0: '1.0', '1': 1, '1.0': 1.0}
        d[1] == '1'
        d[1.0] == '1.0'
        d['1'] == 1
        d['1.0'] == 1.0
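
    A minimal sketch of one workaround, assuming the dict above: Python defines 1 == 1.0, and equal objects must hash equally, so the literals 1 and 1.0 are the same dict key by design; keying on (type, value) pairs keeps an int and the numerically equal float apart.

        # Hedged sketch; wrap keys on the way in and on the way out.
        def typed_key(k):
            return (type(k).__name__, k)

        d = {}
        for k, v in [(1, '1'), (1.0, '1.0'), ('1', 1), ('1.0', 1.0)]:
            d[typed_key(k)] = v

        print d[typed_key(1)]     # '1'
        print d[typed_key(1.0)]   # '1.0'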

    Read the article

  • How to digitally sign a message with M2Crypto using the keys within a DER format certificate

    - by Pablo Santos
    Hi everyone. I am working on a project to implement digital signatures for outgoing messages and decided to use M2Crypto for that. I have a certificate (in DER format) from which I extract the key to sign the message. For some reason I keep getting an ugly segmentation fault error when I call the "sign_update" method. Given the previous examples I have read here, I am clearly missing something. Here is the example I am working on:

        from M2Crypto.X509 import *

        cert = load_cert('certificate.cer', format=0)
        Pub_key = cert.get_pubkey()
        Pub_key.reset_context(md='sha1')
        Pub_key.sign_init()
        Pub_key.sign_update("This should be good.")
        print Pub_key.sign_final()

    Thanks in advance for the help, Pablo
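
    A hedged observation with a sketch, assuming the snippet above: a certificate only carries the public key, and signing needs the private key, so the sign_* calls on cert.get_pubkey() have nothing to sign with; the usual pattern is to load the matching private key (here from a hypothetical key.pem) and sign with it, while the certificate's public key is what a recipient would use to verify.

        # Hedged sketch; 'key.pem' is an illustrative path to the private key
        # that matches the certificate.
        from M2Crypto import EVP

        pkey = EVP.load_key('key.pem')
        pkey.reset_context(md='sha1')
        pkey.sign_init()
        pkey.sign_update("This should be good.")
        signature = pkey.sign_final()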

    Read the article

  • Cache the result of a MySQLdb database query in memory

    - by ensnare
    Our application fetches the correct database server from a pool of database servers. So each query is really two queries, which look like this:

        1. Fetch the correct DB server
        2. Execute the query

    We do this so we can take DB servers online and offline as necessary, as well as for load-balancing. But the result of the first query seems like it could be cached in memory, so it only actually queries the database every 5 or 10 minutes or so. What's the best way to do this? Thanks.
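
    A minimal sketch of a small time-based cache, assuming a hypothetical lookup_server() that runs the first query: remember the result alongside the time it was fetched, and only hit the database again once the entry is older than the chosen TTL.

        # Hedged sketch; lookup_server() stands in for the real
        # "fetch the correct DB server" query.
        import time

        _cache = {}          # key -> (timestamp, value)
        CACHE_TTL = 300      # seconds, i.e. 5 minutes

        def cached_server(key):
            now = time.time()
            entry = _cache.get(key)
            if entry is not None and now - entry[0] < CACHE_TTL:
                return entry[1]
            value = lookup_server(key)   # the real DB query
            _cache[key] = (now, value)
            return value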

    Read the article

  • Django QuerySet ordering by number of reverse ForeignKey matches

    - by msanders
    I have the following Django models:

        class Foo(models.Model):
            title = models.CharField(_(u'Title'), max_length=600)

        class Bar(models.Model):
            foo = models.ForeignKey(Foo)
            eg_id = models.PositiveIntegerField(_(u'Example ID'), default=0)

    I wish to return a list of Foo objects which have a reverse relationship with Bar objects that have an eg_id value contained in a list of values. So I have:

        id_list = [7, 8, 9, 10]
        qs = Foo.objects.filter(bar__eg_id__in=id_list)

    How do I order the matching Foo objects according to the number of related Bar objects which have an eg_id value in the id_list?
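
    A minimal sketch using Django's aggregation support, assuming the models above: because the filter() comes before the annotate(), the Count over the reverse relation only counts the Bar rows that matched the filter, so ordering by that annotation ranks each Foo by its number of matching Bars.

        # Hedged sketch; the annotation name is an illustrative choice.
        from django.db.models import Count

        id_list = [7, 8, 9, 10]
        qs = (Foo.objects.filter(bar__eg_id__in=id_list)
                         .annotate(num_matches=Count('bar'))
                         .order_by('-num_matches'))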

    Read the article

  • Django urls on json request

    - by Hulk
    When making a Django request through JSON as:

        var info = id + "##" + name + "##";
        $.post("/supervise/activity/" + info, [], function Handler(data, arr) {
        });

    In urls.py:

        (r'^activity/(?P<info>\d+)/$', 'activity'),

    In views:

        def activity(request, info):
            print info

    The request does not go through; info is a string. How can this be resolved? Thanks.
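
    A minimal sketch of the likely fix, assuming the pieces above: the pattern \d+ only matches digits, while info contains letters and "##", so the URL never resolves; loosening the pattern (and noting that the posted URL also lacks the trailing slash the pattern requires) lets the string reach the view. The dotted view path below is illustrative.

        # Hedged sketch; [^/]+ accepts any characters except a slash, and the
        # JavaScript would post to "/supervise/activity/" + info + "/".
        from django.conf.urls.defaults import patterns

        urlpatterns = patterns('',
            (r'^activity/(?P<info>[^/]+)/$', 'supervise.views.activity'),
        )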

    Read the article
