Search Results

Search found 34110 results on 1365 pages for 'gdata python client'.

  • Custom header using PHP soap functions

    - by Dees
    I am having trouble getting a custom SOAP header to work with PHP5. Can anybody guide me, please? What I need is something like this:

        <SOAP-ENV:Header>
          <USER>myusername</USER>
          <PASSWORD>mypassword</PASSWORD>
        </SOAP-ENV:Header>

    What I get instead is:

        <SOAP-ENV:Header>
          <ns2:null>
            <USER>myusername</USER>
            <PASSWORD>mypassword</PASSWORD>
          </ns2:null>
        </SOAP-ENV:Header>

    I would like to remove the namespace tags. The code I use to get this is:

        class Authstuff {
            public $USER;
            public $PASSWORD;
            public function __construct($user, $pass) {
                $this->USER = $user;
                $this->PASSWORD = $pass;
            }
        }

        $auth = new Authstuff('myusername', 'mypassword');
        $param = array('Authstuff' => $auth);
        $authvalues = new SoapVar($auth, SOAP_ENC_OBJECT);
        $header = new SoapHeader('http://soapinterop.org/echoheader/', "null", $authvalues);

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end.

    I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this were a fairly straightforward load, it could be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, where each subsequent model depends on the previous one. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

        - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that have a thread-local connection to the database
        - read a line from the file using the csv DictReader
        - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and persists them in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it as SQL statements; running them took 5 minutes, but writing them took me a couple of hours.

    So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my follow-up question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load that file into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
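
    A minimal sketch (not from the original post) of the usual workaround: drop to SQLAlchemy Core for the final write and feed one insert() construct a list of parameter dicts, which executes as a single executemany and is typically much closer to raw-SQL speed. The connection URL, table name, and columns here are hypothetical:

        from sqlalchemy import create_engine, MetaData, Table

        engine = create_engine('postgresql://user:pass@localhost/mydb')  # hypothetical URL
        metadata = MetaData(bind=engine)
        model_a = Table('model_a', metadata, autoload=True)  # hypothetical table

        # Build plain dicts (e.g. from the ORM objects already assembled),
        # then issue one INSERT with many parameter sets.
        rows = [{'id': i, 'name': 'row-%d' % i} for i in range(100000)]
        engine.execute(model_a.insert(), rows)

    The ORM can still be used for the querying stage; only the final persist step needs to bypass it.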

  • Why is numpy's einsum faster than numpy's built-in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note that the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; those differences would show up in microseconds, not milliseconds:

        arr_1D = np.arange(500, dtype=np.double)
        large_arr_1D = np.arange(100000, dtype=np.double)
        arr_2D = np.arange(500**2, dtype=np.double).reshape(500, 500)
        arr_3D = np.arange(500**3, dtype=np.double).reshape(500, 500, 500)

    First let's look at the np.sum function:

        np.all(np.sum(arr_3D) == np.einsum('ijk->', arr_3D))
        True

        %timeit np.sum(arr_3D)
        10 loops, best of 3: 142 ms per loop

        %timeit np.einsum('ijk->', arr_3D)
        10 loops, best of 3: 70.2 ms per loop

    Powers:

        np.allclose(arr_3D*arr_3D*arr_3D, np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D))
        True

        %timeit arr_3D*arr_3D*arr_3D
        1 loops, best of 3: 1.32 s per loop

        %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
        1 loops, best of 3: 694 ms per loop

    Outer product:

        np.all(np.outer(arr_1D, arr_1D) == np.einsum('i,k->ik', arr_1D, arr_1D))
        True

        %timeit np.outer(arr_1D, arr_1D)
        1000 loops, best of 3: 411 us per loop

        %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
        1000 loops, best of 3: 245 us per loop

    All of the above are about twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speed-up in an operation like this:

        np.allclose(np.sum(arr_2D*arr_3D), np.einsum('ij,oij->', arr_2D, arr_3D))
        True

        %timeit np.sum(arr_2D*arr_3D)
        1 loops, best of 3: 813 ms per loop

        %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
        10 loops, best of 3: 85.1 ms per loop

    einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axis selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent?

    The DGEMM case, for completeness:

        np.allclose(np.dot(arr_2D, arr_2D), np.einsum('ij,jk', arr_2D, arr_2D))
        True

        %timeit np.einsum('ij,jk', arr_2D, arr_2D)
        10 loops, best of 3: 56.1 ms per loop

        %timeit np.dot(arr_2D, arr_2D)
        100 loops, best of 3: 5.17 ms per loop

    The leading theory, from @seberg's comment, is that np.einsum can make use of SSE2, while numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed differences, and in the fact that not everyone observes the same trends in timings.

  • How To Create Per-Request Singleton in Pylons?

    - by dave mankoff
    In our Pylons-based web app, we're creating a class that essentially provides some logging functionality. We need a new instance of this class for each HTTP request that comes in, but only one per request. What is the proper way to go about this? Should we just create the object in middleware and store it in request.environ? Is there a more appropriate way?
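
    A minimal sketch of the middleware approach, assuming a hypothetical RequestLogger class: a fresh instance is created per request and stashed in the WSGI environ for anything downstream to pick up.

        class RequestLogger(object):
            """Hypothetical per-request logging helper."""
            def log(self, msg):
                print msg

        class RequestLoggerMiddleware(object):
            def __init__(self, app):
                self.app = app

            def __call__(self, environ, start_response):
                # one instance per request, shared by everything in this request
                environ['myapp.logger'] = RequestLogger()
                return self.app(environ, start_response)

    In a controller the instance is then reachable as request.environ['myapp.logger'].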

  • Implementing __concat__

    - by Casebash
    I tried to implement __concat__, but it didn't work:

        >>> class lHolder():
        ...     def __init__(self, l):
        ...         self.l = l
        ...     def __concat__(self, l2):
        ...         return self.l + l2
        ...     def __iter__(self):
        ...         return self.l.__iter__()
        ...
        >>> lHolder([1]) + [2]
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unsupported operand type(s) for +: 'lHolder' and 'list'

    How can I fix this?
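
    A sketch of the likely fix: the + operator on sequences is driven by __add__ and __radd__, not __concat__ (operator.concat is a plain function, not a dunder hook that + looks up), so defining __add__ makes the expression work:

        class lHolder(object):
            def __init__(self, l):
                self.l = l
            def __add__(self, other):       # lHolder([1]) + [2]
                return self.l + list(other)
            def __radd__(self, other):      # [2] + lHolder([1])
                return list(other) + self.l
            def __iter__(self):
                return iter(self.l)

        print lHolder([1]) + [2]   # -> [1, 2]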

  • Iterating through nested dictionaries

    - by Framester
    I want to write an iterator for my 'toy' Trie implementation. Adding already works like this (nops is a global counter used for instrumentation):

        class Trie:
            def __init__(self):
                self.root = dict()

            def add(self, string, value):
                global nops
                current_dict = self.root
                for letter in string:
                    nops += 1
                    current_dict = current_dict.setdefault(letter, {})
                current_dict = current_dict.setdefault('value', value)

    The output of the adding looks like this:

        trie = Trie()
        trie.add("hello", 1)
        trie.add("world", 2)
        trie.add("worlds", 12)
        print trie.root
        {'h': {'e': {'l': {'l': {'o': {'value': 1}}}}},
         'w': {'o': {'r': {'l': {'d': {'s': {'value': 12}, 'value': 2}}}}}}

    I know that I need __iter__ and next methods:

        def __iter__(self):
            self.root.__iter__()

        def next(self):
            print self.root.next()

    But I get AttributeError: 'dict' object has no attribute 'next'. How should I do it?

    [Update] In the perfect world I would like the output to be one dict with all the words/entries and their corresponding values.
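
    A sketch of one way to do this (not from the original post): a recursive generator that walks the nested dicts and yields (word, value) pairs, which __iter__ can simply delegate to:

        def iter_trie(node, prefix=''):
            for key, child in node.items():
                if key == 'value':
                    yield prefix, child               # a complete word ends here
                else:
                    for pair in iter_trie(child, prefix + key):
                        yield pair

        # Inside Trie:
        #     def __iter__(self):
        #         return iter_trie(self.root)

        print dict(iter_trie(trie.root))   # -> {'hello': 1, 'world': 2, 'worlds': 12}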

  • HttpError 502 with Google Wave Active Robot API fetch_wavelet()

    - by Drew LeSueur
    I am trying to use the Google Wave Active Robot API's fetch_wavelet() and I get an HTTP 502 error. Example:

        from waveapi import robot
        import passwords

        robot = robot.Robot('gae-run', 'http://images.com/fake-image.jpg')
        robot.setup_oauth(passwords.CONSUMER_KEY, passwords.CONSUMER_SECRET,
                          server_rpc_base='http://www-opensocial.googleusercontent.com/api/rpc')
        wavelet = robot.fetch_wavelet('googlewave.com!w+dtuZi6t3C', 'googlewave.com!conv+root')
        robot.submit(wavelet)
        self.response.out.write(wavelet.creator)

    But the error I get is this:

        Traceback (most recent call last):
          File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 511, in __call__
            handler.get(*groups)
          File "/base/data/home/apps/clstff/gae-run.342467577023864664/main.py", line 23, in get
            robot.submit(wavelet)
          File "/base/data/home/apps/clstff/gae-run.342467577023864664/waveapi/robot.py", line 486, in submit
            res = self.make_rpc(pending)
          File "/base/data/home/apps/clstff/gae-run.342467577023864664/waveapi/robot.py", line 251, in make_rpc
            raise IOError('HttpError ' + str(code))
        IOError: HttpError 502

    Any ideas?

    Edit: When [email protected] is not a member of the wave, I get the correct error message:

        Error: RPC Error500: internalError: [email protected] is not a participant of wave id: [WaveId:googlewave.com!w+Pq1HgvssD] wavelet id: [WaveletId:googlewave.com!conv+root]. Unable to apply operation: {'method':'robot.fetchWave','id':'655720','waveId':'googlewave.com!w+Pq1HgvssD','waveletId':'googlewave.com!conv+root','blipId':'null','parameters':{}}

    But when [email protected] is a member of the wave, I get the HTTP 502 error:

        IOError: HttpError 502

  • httplib2 giving internal server error 500 with proxy

    - by NJTechie
    Following are the code and the error it throws. It works fine without the proxy (http = httplib2.Http()). Any pointers are highly appreciated!

    Usage:

        http = httplib2.Http(proxy_info=httplib2.ProxyInfo(socks.PROXY_TYPE_HTTP, '74.115.1.11', 80))
        main_url = 'http://www.mywebsite.com'
        response, content = http.request(main_url, 'GET')

    Error:

        File "testproxy.py", line 17, in <module>
          response, content = http.request(main_url, 'GET')
        File "/home/kk/bin/pythonlib/httplib2/__init__.py", line 1129, in request
          (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
        File "/home/kk/bin/pythonlib/httplib2/__init__.py", line 901, in _request
          (response, content) = self._conn_request(conn, request_uri, method, body, headers)
        File "/home/kk/bin/pythonlib/httplib2/__init__.py", line 862, in _conn_request
          conn.request(method, request_uri, body, headers)
        File "/usr/lib/python2.5/httplib.py", line 866, in request
          self._send_request(method, url, body, headers)
        File "/usr/lib/python2.5/httplib.py", line 889, in _send_request
          self.endheaders()
        File "/usr/lib/python2.5/httplib.py", line 860, in endheaders
          self._send_output()
        File "/usr/lib/python2.5/httplib.py", line 732, in _send_output
          self.send(msg)
        File "/usr/lib/python2.5/httplib.py", line 699, in send
          self.connect()
        File "/home/kk/bin/pythonlib/httplib2/__init__.py", line 740, in connect
          self.sock.connect(sa)
        File "/home/kk/bin/pythonlib/socks.py", line 383, in connect
          self.__negotiatehttp(destpair[0], destpair[1])
        File "/home/kk/bin/pythonlib/socks.py", line 349, in __negotiatehttp
          raise HTTPError((statuscode, statusline[2]))
        socks.HTTPError: (500, 'Internal Server Error')

  • Extracting value in Beautifulsoup

    - by Seth
    I have the following code:

        f = open(path, 'r')
        html = f.read()  # no parameters => reads to EOF and returns a string
        soup = BeautifulSoup(html)
        schoolname = soup.findAll(attrs={'id': 'ctl00_ContentPlaceHolder1_SchoolProfileUserControl_SchoolHeaderLabel'})
        print schoolname

    which gives:

        [<span id="ctl00_ContentPlaceHolder1_SchoolProfileUserControl_SchoolHeaderLabel">A B Paterson College, Arundel, QLD</span>]

    When I try to access the value (i.e. 'A B Paterson College, Arundel, QLD') using schoolname['value'], I get the following error:

        print schoolname['value']
        TypeError: list indices must be integers, not str

    What am I doing wrong to get that value?
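
    A sketch of the likely fix: findAll returns a list of tags, so there is no 'value' key to index; take the first match (or use find) and read the tag's text instead. BeautifulSoup 3.x is assumed here, matching the code in the question:

        from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3.x

        html = open(path, 'r').read()             # path as in the question
        soup = BeautifulSoup(html)
        tag = soup.find(attrs={'id': 'ctl00_ContentPlaceHolder1_SchoolProfileUserControl_SchoolHeaderLabel'})
        if tag is not None:
            print tag.string    # -> A B Paterson College, Arundel, QLD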

  • Problem running a simple EJB application

    - by Spi1988
    I am currently running a simple EJB application using a stateless session bean. I am working in NetBeans 6.8 with Personal GlassFish 3.0, and I have installed both Java EE and Java SE on my system. I don't know whether it is relevant, but I am running the 64-bit version of Windows 7. The session bean I implemented has just one method, sayHello(), which prints "hello" on the screen. When I try to run the application, I get the following output:

        pre-init: init-private: init-userdir: init-user: init-project: do-init: post-init: init-check: init: deps-jar: deps-j2ee-archive:
        MyEnterprise-app-client.init: MyEnterprise-ejb.init: MyEnterprise-ejb.deps-jar: MyEnterprise-ejb.compile:
        MyEnterprise-ejb.library-inclusion-in-manifest: MyEnterprise-ejb.dist-ear: MyEnterprise-app-client.deps-jar:
        MyEnterprise-app-client.compile: MyEnterprise-app-client.library-inclusion-in-manifest: MyEnterprise-app-client.dist-ear:
        MyEnterprise-ejb.init: MyEnterprise-ejb.deps-jar: MyEnterprise-ejb.compile: MyEnterprise-ejb.library-inclusion-in-manifest:
        MyEnterprise-ejb.dist-ear: pre-pre-compile: pre-compile: do-compile: post-compile: compile: pre-dist: post-dist:
        dist-directory-deploy: pre-run-deploy:
        Starting Personal GlassFish v3 Domain
        Personal GlassFish v3 Domain is running.
        Undeploying ...
        Initializing...
        Initial deploying MyEnterprise to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist\gfdeploy\MyEnterprise
        Completed initial distribution of MyEnterprise
        post-run-deploy: run-deploy: run-display-browser: run-ac:
        pre-init: init-private: init-userdir: init-user: init-project: do-init: post-init: init-check: init: deps-jar: deps-j2ee-archive:
        MyEnterprise-app-client.init: MyEnterprise-ejb.init: MyEnterprise-ejb.deps-jar: MyEnterprise-ejb.compile:
        MyEnterprise-ejb.library-inclusion-in-manifest: MyEnterprise-ejb.dist-ear: MyEnterprise-app-client.deps-jar:
        MyEnterprise-app-client.compile: MyEnterprise-app-client.library-inclusion-in-manifest: MyEnterprise-app-client.dist-ear:
        MyEnterprise-ejb.init: MyEnterprise-ejb.deps-jar: MyEnterprise-ejb.compile: MyEnterprise-ejb.library-inclusion-in-manifest:
        MyEnterprise-ejb.dist-ear: pre-pre-compile: pre-compile: do-compile: post-compile: compile: pre-dist: post-dist:
        dist-directory-deploy: pre-run-deploy:
        Undeploying ...
        Initial deploying MyEnterprise to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist\gfdeploy\MyEnterprise
        Completed initial distribution of MyEnterprise
        post-run-deploy: run-deploy:
        Warning: Could not find file C:\Users\Naqsam\.netbeans\6.8\GlassFish_v3\generated\xml\MyEnterprise\MyEnterpriseClient.jar to copy.
        Copying 1 file to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist
        Copying 4 files to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist\MyEnterpriseClient
        Copying 1 file to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist\MyEnterpriseClient
        java.lang.NullPointerException
            at org.glassfish.appclient.client.acc.ACCLogger$1.run(ACCLogger.java:149)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.glassfish.appclient.client.acc.ACCLogger.reviseLogger(ACCLogger.java:146)
            at org.glassfish.appclient.client.acc.ACCLogger.init(ACCLogger.java:93)
            at org.glassfish.appclient.client.acc.ACCLogger.<init>(ACCLogger.java:80)
            at org.glassfish.appclient.client.AppClientFacade.createBuilder(AppClientFacade.java:360)
            at org.glassfish.appclient.client.AppClientFacade.prepareACC(AppClientFacade.java:247)
            at org.glassfish.appclient.client.acc.agent.AppClientContainerAgent.premain(AppClientContainerAgent.java:75)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:323)
            at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:338)
        Java Result: 1
        run-MyEnterprise-app-client: run:
        BUILD SUCCESSFUL (total time: 1 minute 59 seconds)

    See next post.

  • PyGTK/GIO: monitor directory for changes recursively

    - by detly
    Take the following demo code (from the GIO answer to this question), which uses a GIO FileMonitor to monitor a directory for changes:

        import gio

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type

        gfile = gio.File(".")
        monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
        monitor.connect("changed", directory_changed)

        import glib
        ml = glib.MainLoop()
        ml.run()

    After running this code, I can then create and modify child nodes and be notified of the changes. However, this only works for immediate children (I am aware that the docs don't say otherwise). The last of the following shell commands will not result in a notification:

        touch one
        mkdir two
        touch two/three

    Is there an easy way to make it recursive? I'd rather not manually code something that looks for directory creation and adds a monitor, removing them on deletion, etc.

    The intended use is for a VCS file browser extension, to be able to cache the statuses of files in a working copy and update them individually on changes. So there might be anywhere from tens to thousands (or more) of directories to monitor. I'd like to just find the root of the working copy and add the file monitor there.

    I know about pyinotify, but I'm avoiding it so that this works under non-Linux kernels such as FreeBSD or... others. As far as I'm aware, the GIO FileMonitor uses inotify underneath where available, and I can understand not emphasising the implementation to maintain some degree of abstraction, but it suggested to me that it should be possible.

    (In case it matters, I originally posted this on the PyGTK mailing list.)
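
    For reference, a sketch of the manual approach the poster hoped to avoid (an assumption, not a confirmed recipe): walk the tree once with os.walk and monitor every directory, then add a new monitor whenever a created event turns out to be a directory:

        import os
        import gio
        import glib

        monitors = {}  # keep monitor references alive, keyed by path

        def watch(path):
            gfile = gio.File(path)
            monitor = gfile.monitor_directory(gio.FILE_MONITOR_NONE, None)
            monitor.connect("changed", directory_changed)
            monitors[path] = monitor

        def directory_changed(monitor, file1, file2, evt_type):
            print "Changed:", file1, file2, evt_type
            if (evt_type == gio.FILE_MONITOR_EVENT_CREATED
                    and os.path.isdir(file1.get_path())):
                watch(file1.get_path())   # start watching the new subdirectory

        watch(".")
        for dirpath, dirnames, filenames in os.walk("."):
            for name in dirnames:
                watch(os.path.join(dirpath, name))

        glib.MainLoop().run()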

  • Non-blocking IO call from a Django controller in a Windows service

    - by Anders
    Hi all, I have a CherryPy server with a Django application running as a Windows service. Inside a controller I need to make a call to wmic. The problem is that, so far, I have only been able to implement a blocking operation. Does anyone have a recommendation for a non-blocking approach, so that at least more than one person at a time can access this controller and extract information from wmic? Thanks in advance, Anders
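
    A sketch of one possible approach (assuming the wmic call is made via subprocess): run the command on a worker thread so the controller can return immediately instead of blocking on the child process:

        import subprocess
        import threading

        results = {}

        def run_wmic(key, args):
            proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            results[key] = proc.communicate()[0]   # blocks only this worker thread

        # 'wmic os get caption' is a hypothetical example query
        worker = threading.Thread(target=run_wmic,
                                  args=('os', ['wmic', 'os', 'get', 'caption']))
        worker.start()
        # the controller can return now; a later request can read results['os']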

  • [numpy] storing record arrays in object arrays

    - by Peter Prettenhofer
    I'd like to convert a list of record arrays -- dtype is (uint32, float32) -- into a numpy array of dtype np.object:

        X = np.array(instances, dtype=np.object)

    where instances is a list of arrays with data type np.dtype([('f0', '<u4'), ('f1', '<f4')]). However, the above statement results in an array whose elements are also of type np.object:

        >>> X[0]
        array([(67111L, 1.0), (104242L, 1.0)], dtype=object)

    Does anybody know why? The following statement should be equivalent to the above, but gives the desired result:

        >>> X = np.empty((len(instances),), dtype=np.object)
        >>> X[:] = instances
        >>> X[0]
        array([(67111L, 1.0), (104242L, 1.0)], dtype=[('f0', '<u4'), ('f1', '<f4')])

    thanks & best regards, peter
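
    A small runnable sketch of the working pattern from the question, for reference; the sample values are taken from the post:

        import numpy as np

        dt = np.dtype([('f0', '<u4'), ('f1', '<f4')])
        instances = [np.array([(67111, 1.0), (104242, 1.0)], dtype=dt)]

        # np.array(instances, dtype=np.object) coerces element-wise;
        # pre-allocating and assigning preserves each record array intact.
        X = np.empty((len(instances),), dtype=np.object)
        X[:] = instances
        print X[0].dtype   # -> [('f0', '<u4'), ('f1', '<f4')]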

  • Problem trying to achieve a join using the `comments` contrib in Django

    - by NiKo
    Hi, Django rookie here. I have this model; comments are managed with the django_comments contrib:

        class Fortune(models.Model):
            author = models.CharField(max_length=45, blank=False)
            title = models.CharField(max_length=200, blank=False)
            slug = models.SlugField(_('slug'), db_index=True, max_length=255, unique_for_date='pub_date')
            content = models.TextField(blank=False)
            pub_date = models.DateTimeField(_('published date'), db_index=True, default=datetime.now())
            votes = models.IntegerField(default=0)
            comments = generic.GenericRelation(
                Comment,
                content_type_field='content_type',
                object_id_field='object_pk'
            )

    I want to retrieve Fortune objects with a supplementary nb_comments value for each, counting their respective number of comments. I try this query from the shell:

        >>> from django_fortunes.models import Fortune
        >>> from django.db.models import Count
        >>> Fortune.objects.annotate(nb_comments=Count('comments'))
        [<Fortune: My first fortune, from NiKo>, <Fortune: Another One, from Dude>, <Fortune: A funny one, from NiKo>]
        >>> from django.db import connection
        >>> connection.queries.pop()
        {'time': '0.000', 'sql': u'SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21'}

    Below is the properly formatted SQL query:

        SELECT "django_fortunes_fortune"."id",
               "django_fortunes_fortune"."author",
               "django_fortunes_fortune"."title",
               "django_fortunes_fortune"."slug",
               "django_fortunes_fortune"."content",
               "django_fortunes_fortune"."pub_date",
               "django_fortunes_fortune"."votes",
               COUNT("django_comments"."id") AS "nb_comments"
        FROM "django_fortunes_fortune"
        LEFT OUTER JOIN "django_comments"
            ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk")
        GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author",
                 "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug",
                 "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date",
                 "django_fortunes_fortune"."votes"
        LIMIT 21

    Can you spot the problem? Django won't LEFT JOIN the django_comments table with the content_type data (which contains a reference to the fortune one). This is the kind of query I'd like to be able to generate using the ORM:

        SELECT "django_fortunes_fortune"."id",
               "django_fortunes_fortune"."author",
               "django_fortunes_fortune"."title",
               COUNT("django_comments"."id") AS "nb_comments"
        FROM "django_fortunes_fortune"
        LEFT OUTER JOIN "django_comments"
            ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk")
        LEFT OUTER JOIN "django_content_type"
            ON ("django_comments"."content_type_id" = "django_content_type"."id")
        GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author",
                 "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug",
                 "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date",
                 "django_fortunes_fortune"."votes"
        LIMIT 21

    But I don't manage to do it, so help from Django veterans would be much appreciated :)

    Hint: I'm using Django 1.2-DEV. Thanks in advance for your help.
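
    A hedged workaround (not from the original post): scope the comment counts by content type yourself and attach them by hand, sidestepping the generic-relation JOIN entirely. Import paths are assumed for the Django 1.2 era:

        from django.contrib.comments.models import Comment
        from django.contrib.contenttypes.models import ContentType
        from django.db.models import Count
        from django_fortunes.models import Fortune

        ct = ContentType.objects.get_for_model(Fortune)
        counts = {}
        for row in (Comment.objects.filter(content_type=ct)
                                   .values('object_pk')
                                   .annotate(n=Count('id'))):
            counts[row['object_pk']] = row['n']

        fortunes = list(Fortune.objects.all())
        for fortune in fortunes:
            # object_pk is stored as text, hence str()
            fortune.nb_comments = counts.get(str(fortune.pk), 0)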

  • Cookies with urllib

    - by CMC
    This will probably seem like a really simple question, and I am quite confused as to why this is so difficult for me. I would like to write a function that takes three inputs -- [url, data, cookies] -- and uses urllib (not urllib2) to get the contents of the requested URL. I figured it'd be simple, so I wrote the following:

        def fetch(url, data=None, cookies=None):
            if isinstance(data, dict):
                data = urllib.urlencode(data)
            if isinstance(cookies, dict):
                # TODO: find a better way to do this
                cookies = "; ".join([str(key) + "=" + str(cookies[key]) for key in cookies])
            opener = urllib.FancyURLopener()
            opener.addheader("Cookie", cookies)
            obj = opener.open(url, data)
            result = obj.read()
            obj.close()
            return result

    This doesn't work, as far as I can tell (can anyone confirm that?), and I'm stumped.
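
    A hedged guess at one failure mode (an assumption, not a confirmed diagnosis): when cookies is None, the function still sends a Cookie header whose value is None. A sketch that only adds the header when there is something to send:

        import urllib

        def fetch(url, data=None, cookies=None):
            if isinstance(data, dict):
                data = urllib.urlencode(data)
            opener = urllib.FancyURLopener()
            if isinstance(cookies, dict):
                cookie_str = "; ".join("%s=%s" % (k, v) for k, v in cookies.items())
                opener.addheader("Cookie", cookie_str)   # only when cookies were given
            obj = opener.open(url, data)
            try:
                return obj.read()
            finally:
                obj.close()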

  • How to get HTTP status message in (py)curl?

    - by mykhal
    After spending some time studying the pycurl and libcurl documentation, I still can't find a (simple) way to get the HTTP status message (reason phrase) in pycurl. The status code is easy:

        import pycurl
        import cStringIO

        curl = pycurl.Curl()
        buff = cStringIO.StringIO()
        curl.setopt(pycurl.URL, 'http://example.org')
        curl.setopt(pycurl.WRITEFUNCTION, buff.write)
        curl.perform()
        print "status code: %s" % curl.getinfo(pycurl.HTTP_CODE)  # -> 200
        # print "status message: %s" % ???                        # -> "OK"
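
    A sketch of one possible workaround (not from the original post): capture the raw response headers with pycurl's HEADERFUNCTION option and parse the reason phrase out of the status line:

        import pycurl
        import cStringIO

        body = cStringIO.StringIO()
        headers = cStringIO.StringIO()

        curl = pycurl.Curl()
        curl.setopt(pycurl.URL, 'http://example.org')
        curl.setopt(pycurl.WRITEFUNCTION, body.write)
        curl.setopt(pycurl.HEADERFUNCTION, headers.write)
        curl.perform()

        status_line = headers.getvalue().splitlines()[0]  # e.g. 'HTTP/1.1 200 OK'
        reason = status_line.split(' ', 2)[2]
        print "status message: %s" % reason               # -> 'OK'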

  • How to chroot Django

    - by Brian M. Hunt
    Can one run Django in a chroot? Notably, what's necessary in order to set up (for example) /var/www as a chroot'd directory and then have Django run in that chroot'd directory? Thank you - I'm grateful for any input.

  • Add data to Django form class using modelformset_factory

    - by dean
    I have a problem where I need to display a lot of forms for detail data in a hierarchical data set, and I want to display some relational fields as labels for the forms. I'm struggling to find a robust way to do this. Here is the code:

        class Category(models.Model):
            name = models.CharField(max_length=160)

        class Item(models.Model):
            category = models.ForeignKey('Category')
            name = models.CharField(max_length=160)
            weight = models.IntegerField(default=0)

            class Meta:
                ordering = ('category', 'weight', 'name')

        class BudgetValue(models.Model):
            value = models.IntegerField()
            plan = models.ForeignKey('Plan')
            item = models.ForeignKey('Item')

    I use modelformset_factory to create a formset of BudgetValue forms for a particular plan. What I'd like is the item name and category name for each BudgetValue, so that when I iterate through the forms each one is labeled properly:

        class BudgetValueForm(forms.ModelForm):
            item = forms.ModelChoiceField(queryset=Item.objects.all(), widget=forms.HiddenInput())
            plan = forms.ModelChoiceField(queryset=Plan.objects.all(), widget=forms.HiddenInput())
            category = ""  # <assign dynamically on form creation>
            item = ""      # <assign dynamically on form creation>

            class Meta:
                model = BudgetValue
                fields = ('item', 'plan', 'value')

    What I started out with is just creating a dictionary of budgetvalue.item.category.name, budgetvalue.item.name, and the form for each BudgetValue. This gets passed to the template and rendered as intended. I'm assuming that the ordering of the forms in the formset and the queryset used to generate the formset keep the BudgetValues in the same order, that is, that budgetvalue.item.name is associated with the correct form. This scares me, and I'm thinking there has to be a better way. Any help would be greatly appreciated.
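
    A hedged sketch of one alternative to the parallel dictionary: let each form pull the labels from its own instance in __init__, so the names cannot drift out of sync with the formset ordering. This assumes the formset is built from a queryset of existing BudgetValue rows:

        from django import forms

        class BudgetValueForm(forms.ModelForm):
            class Meta:
                model = BudgetValue
                fields = ('item', 'plan', 'value')

            def __init__(self, *args, **kwargs):
                super(BudgetValueForm, self).__init__(*args, **kwargs)
                if self.instance is not None and self.instance.pk:
                    # labels derived from this form's own instance
                    self.category_name = self.instance.item.category.name
                    self.item_name = self.instance.item.name
                    self.fields['value'].label = self.instance.item.name

    The template can then read form.category_name and form.item_name directly off each form.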

  • Django select max id

    - by pistacchio
    Hi, given a standard model (called Image) with an auto-set 'id', how do I get the max id? So far I've tried:

        max_id = Image.objects.all().aggregate(Max('id'))

    but I get an 'id__max' KeyError. Trying

        max_id = Image.objects.order_by('id')[0].id

    gives an 'argument 2 to map() must support iteration' exception. Any help?
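
    A sketch of the usual patterns, assuming Image is importable; aggregate() returns a dict, so the key has to match what it actually produced (or be named explicitly):

        from django.db.models import Max

        max_id = Image.objects.aggregate(Max('id'))['id__max']        # default key
        max_id = Image.objects.aggregate(max_id=Max('id'))['max_id']  # explicit key

        # Without aggregation (note the descending order):
        max_id = Image.objects.order_by('-id')[0].id
        max_id = Image.objects.latest('id').id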

  • PyQt: Get the position of QGraphicsWidgets in a QGraphicsGridLayout

    - by Chris Phillips
    I have a fairly simple PyQt application in which I'm placing instances of a QGraphicsWidget in a QGraphicsGridLayout and want to connect the widgets with lines drawn with a QGraphicsPath. Unfortunately, no matter what I try, I always get (0, 0) back as the position for both the start and end widgets. I'm constructing the graph with a recursive function that adds widgets to the scene and layout. Once the recursive function is complete, the layout is added to a new widget, which is added to the scene to show everything. The edges are added to the scene as widgets are created. How do I get a non-zero position of any of the widgets in the grid layout?
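
    A hedged sketch of one thing worth checking (an assumption, not a confirmed fix): layout geometry is only computed when the layout is activated (or the widget is first shown), so positions read before that come back as (0, 0):

        import sys
        from PyQt4 import QtGui

        app = QtGui.QApplication(sys.argv)
        scene = QtGui.QGraphicsScene()

        layout = QtGui.QGraphicsGridLayout()
        w1 = QtGui.QGraphicsWidget()
        w2 = QtGui.QGraphicsWidget()
        w1.setMinimumSize(50, 50)
        w2.setMinimumSize(50, 50)
        layout.addItem(w1, 0, 0)
        layout.addItem(w2, 1, 1)

        container = QtGui.QGraphicsWidget()
        container.setLayout(layout)
        scene.addItem(container)

        layout.activate()   # force geometry computation before reading positions
        print w1.scenePos(), w2.scenePos()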

  • 404 not found in telnet, works fine in browser

    - by Viranch Mehta
    I am having a very irritating problem: when I open a URL ( http://celebs.widewallpapers.net/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg ) in a browser, it works fine, but when I try to access it by telnet from bash, I get 404 Not Found! My exact terminal session:

        $ telnet celebs.widewallpapers.net 80
        HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0
        [enter]
        [enter]
        HTTP/1.1 404 Not Found
        Server: nginx
        Date: Sun, 23 May 2010 21:36:05 GMT
        Content-Type: text/html; charset=windows-1251
        Content-Length: 166
        Connection: close

    Please help me with this, as I am trying to make a C batch-downloader that works almost the same way as the telnet session.
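
    A hedged diagnosis (an assumption, not from the original post): the HTTP/1.0 request carries no Host header, and a server doing name-based virtual hosting cannot route the request without one; adding "Host: celebs.widewallpapers.net" after the HEAD line in the telnet session should be enough. A sketch of the same request with httplib, which sends Host automatically:

        import httplib

        conn = httplib.HTTPConnection('celebs.widewallpapers.net')
        conn.request('HEAD', '/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg')
        resp = conn.getresponse()
        print resp.status, resp.reason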

  • Django Haystack exact filtering

    - by blackrobot
    I have a Haystack search with the following SearchIndex:

        class GrantIndex(indexes.SearchIndex):
            """
            This provides the search index for the Grant application.
            """
            text = indexes.CharField(document=True, use_template=True)
            year = indexes.IntegerField(model_attr='year__year')
            date = indexes.DateField(model_attr='date')
            program = indexes.CharField(model_attr='program__area')
            grantee = indexes.CharField(model_attr='grantee')
            amount = indexes.IntegerField(model_attr='amount')

        site.register(Grant, GrantIndex)

    If I want a search that filters out any programs that ARE NOT 'Health', I run the following query:

        from haystack.query import SearchQuerySet
        sqs = SearchQuerySet()
        sqs = sqs.filter(program='Health')

    Unfortunately, this also produces objects from the programs 'Health\Other' and 'Health\Cardiovascular'. How do I stop the search from allowing those other programs in? I run Ubuntu 9.10 with Xapian as my search back-end.
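
    A hedged sketch of one thing to try (an assumption, not a confirmed fix): the default filter does a tokenized match, so 'Health\Other' can match a query for 'Health'; an __exact lookup asks the back-end for the unanalyzed value instead:

        from haystack.query import SearchQuerySet

        sqs = SearchQuerySet().filter(program__exact='Health')

    If the back-end still tokenizes the stored value, another common route is to index the program name into a separate non-analyzed (e.g. faceted) field and filter on that.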
