Search Results

Search found 14399 results on 576 pages for 'python noob'.

Page 315/576

  • Django install on a shared host, .htaccess help

    - by redconservatory
    I am trying to install Django on a shared host using the following instructions:
    docs.google.com/View?docid=dhhpr5xs_463522g

    My problem is with the following line on my root .htaccess:

        RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]

    When I include this line I get a 500 error with almost all of my domains on this
    account. My cgi-bin directory is home/my-username/public_html/cgi-bin/

    The wcgi.py file contains:

        #!/usr/local/bin/python
        import os, sys

        sys.path.insert(0, "/home/username/django/")
        sys.path.insert(0, "/home/username/django/projects")
        sys.path.insert(0, "/home/username/django/projects/newprojects")

        import django.core.handlers.wsgi

        os.chdir("/home/username/django/projects/newproject")  # optional
        os.environ['DJANGO_SETTINGS_MODULE'] = "newproject.settings"

        def runcgi():
            environ = dict(os.environ.items())
            environ['wsgi.input'] = sys.stdin
            environ['wsgi.errors'] = sys.stderr
            environ['wsgi.version'] = (1,0)
            environ['wsgi.multithread'] = False
            environ['wsgi.multiprocess'] = True
            environ['wsgi.run_once'] = True
            application = django.core.handlers.wsgi.WSGIHandler()
            if environ.get('HTTPS','off') in ('on','1'):
                environ['wsgi.url_scheme'] = 'https'
            else:
                environ['wsgi.url_scheme'] = 'http'
            headers_set = []
            headers_sent = []

            def write(data):
                if not headers_set:
                    raise AssertionError("write() before start_response()")
                elif not headers_sent:
                    # Before the first output, send the stored headers
                    status, response_headers = headers_sent[:] = headers_set
                    sys.stdout.write('Status: %s\r\n' % status)
                    for header in response_headers:
                        sys.stdout.write('%s: %s\r\n' % header)
                    sys.stdout.write('\r\n')
                sys.stdout.write(data)
                sys.stdout.flush()

            def start_response(status, response_headers, exc_info=None):
                if exc_info:
                    try:
                        if headers_sent:
                            # Re-raise original exception if headers sent
                            raise exc_info[0], exc_info[1], exc_info[2]
                    finally:
                        exc_info = None  # avoid dangling circular ref
                elif headers_set:
                    raise AssertionError("Headers already set!")
                headers_set[:] = [status, response_headers]
                return write

            result = application(environ, start_response)
            try:
                for data in result:
                    if data:  # don't send headers until body appears
                        write(data)
                if not headers_sent:
                    write('')  # send headers now if body was empty
            finally:
                if hasattr(result, 'close'):
                    result.close()

        runcgi()

    Only I changed the "username" to my username...

  • Comparing dicts and updating a list of results

    - by lmnt
    Hello, I have a list of dicts and I want to compare each dict in that list with a
    dict in a resulting list, add it to the result list if it's not there, and if it is
    there, update a counter associated with that dict.

    At first I wanted to use the solution described at
    http://stackoverflow.com/questions/1692388/python-list-of-dict-if-exists-increment-a-dict-value-if-not-append-a-new-dict
    but I got an error because one dict cannot be used as a key to another dict. So the
    data structure I opted for is a list where each entry is a dict and an int:

        r = [[{'src': '', 'dst': '', 'cmd': ''}, 0]]

    The original dataset (that should be compared to the resulting dataset) is a list
    of dicts:

        d1 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}
        d2 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}
        d3 = {'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}
        d4 = {'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}
        o = [d1, d2, d3, d4]

    The result should be:

        r = [[{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd1'}, 2],
             [{'src': '192.168.0.1', 'dst': '192.168.0.2', 'cmd': 'cmd2'}, 1],
             [{'src': '192.168.0.2', 'dst': '192.168.0.1', 'cmd': 'cmd1'}, 1]]

    What is the best way to accomplish this? I have a few code examples but none of
    them is really good and most are not working correctly. Thanks for any input on
    this!

    UPDATE: The final code after Tamás's comments is:

        from collections import namedtuple, defaultdict

        DataClass = namedtuple("DataClass", "src dst cmd")

        d1 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1')
        d2 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd2')
        d3 = DataClass(src='192.168.0.2', dst='192.168.0.1', cmd='cmd1')
        d4 = DataClass(src='192.168.0.1', dst='192.168.0.2', cmd='cmd1')
        ds = d1, d2, d3, d4

        r = defaultdict(int)
        for d in ds:
            r[d] += 1

        print "list to compare"
        for d in ds:
            print d

        print "result after merge"
        for k, v in r.iteritems():
            print("%s: %s" % (k, v))
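
    For completeness, a minimal sketch of an alternative that keeps plain dicts,
    assuming Python 2.7+ so that collections.Counter is available: the dict items are
    turned into a hashable frozenset for counting, and the [dict, count] list described
    above is rebuilt afterwards.

        from collections import Counter

        o = [d1, d2, d3, d4]          # the original list of dicts from above

        # frozenset(d.items()) is hashable, so it can serve as a counting key
        counts = Counter(frozenset(d.items()) for d in o)

        # rebuild the [dict, count] structure
        r = [[dict(key), n] for key, n in counts.items()]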

  • How do you store accented characters coming from a web service into a database?

    - by Thierry Lam
    I have the following word that I fetch via a web service: André

    From Python, the value looks like "Andr\u00c3\u00a9". The input is then decoded
    using json.loads:

        >>> import json
        >>> json.loads('{"name":"Andr\\u00c3\\u00a9"}')
        {u'name': u'Andr\xc3\xa9'}

    When I store the above in a utf8 MySQL database, the data is stored like the
    following using Django:

        SomeObject.objects.create(name=u'Andr\xc3\xa9')

    Querying the name column from a mysql shell or displaying it in a web page gives:

        André

    The web page displays in utf8:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    My database is configured in utf8:

        mysql> SHOW VARIABLES LIKE 'collation%';
        +----------------------+-----------------+
        | Variable_name        | Value           |
        +----------------------+-----------------+
        | collation_connection | utf8_general_ci |
        | collation_database   | utf8_unicode_ci |
        | collation_server     | utf8_unicode_ci |
        +----------------------+-----------------+
        3 rows in set (0.00 sec)

        mysql> SHOW VARIABLES LIKE 'character_set%';
        +--------------------------+----------------------------+
        | Variable_name            | Value                      |
        +--------------------------+----------------------------+
        | character_set_client     | utf8                       |
        | character_set_connection | utf8                       |
        | character_set_database   | utf8                       |
        | character_set_filesystem | binary                     |
        | character_set_results    | utf8                       |
        | character_set_server     | utf8                       |
        | character_set_system     | utf8                       |
        | character_sets_dir       | /usr/share/mysql/charsets/ |
        +--------------------------+----------------------------+
        8 rows in set (0.00 sec)

    How can I retrieve the word André from a web service, store it properly in a
    database with no data loss and display it on a web page in its original form?
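
    The value u'Andr\xc3\xa9' is the UTF-8 byte sequence for "André" read back as two
    Latin-1 characters, so a hedged sketch of the usual round-trip repair, assuming the
    service is actually sending correct UTF-8 that got decoded one time too many
    (Python 2 shown):

        # -*- coding: utf-8 -*-
        name = u'Andr\xc3\xa9'            # what json.loads returned

        # re-interpret the code points as raw bytes, then decode them as UTF-8
        fixed = name.encode('latin-1').decode('utf-8')

        print repr(fixed)                 # u'Andr\xe9'  ->  André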

  • How do I generate a connection reset programmatically?

    - by Brock Adams
    Hi, I'm sure you've seen the "the connection was reset" message displayed when
    trying to browse web pages. (The text is from Firefox; other browsers differ.) I
    need to generate that message/error/condition on demand, to test workarounds. So,
    how do I generate that condition programmatically? (How do I generate a TCP RST
    from PHP -- or one of the other web-app languages?)

    Caveats and conditions: it cannot be a general IP block; the test client must still
    be able to see the test server when not triggering the condition. Ideally, it would
    be done at the web-application level (Python, PHP, ColdFusion, Javascript, etc.) --
    access to routers is problematic and access to the Apache config is a pain.
    Ideally, it would be triggered by fetching a specific web page. Bonus if it works
    on a standard, commercial web host.

    Update: Sending RST is not enough to cause this condition. See my partial answer,
    below. I have a solution that works on a local machine; now I need to get it
    working on a remote host.
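
    For the narrower sub-question of emitting a TCP RST from ordinary application code,
    a minimal sketch in Python (assuming you can run a small listener on the server
    side; as the update above notes, a bare RST may still not reproduce the browser
    message):

        import socket, struct

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(('', 8080))
        srv.listen(1)

        conn, addr = srv.accept()
        conn.recv(1024)                       # read the request, send no reply

        # SO_LINGER with a zero timeout makes close() send RST instead of FIN
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack('ii', 1, 0))
        conn.close()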

  • Given a date range how to calculate the number of weekends partially or wholly within that range?

    - by andybak
    Given a date range, how do you calculate the number of weekends partially or wholly
    within that range?

    (A few definitions as requested: take 'weekend' to mean Saturday and Sunday. The
    date range is inclusive, i.e. the end date is part of the range. 'Wholly or
    partially' means that any part of the weekend falling within the date range means
    the whole weekend is counted.)

    To simplify, I imagine you only actually need to know the duration and what day of
    the week the initial day is... I darn well know it's going to involve doing integer
    division by 7 and some logic to add 1 depending on the remainder, but I can't quite
    work out what... Extra points for answers in Python ;-)

    Edit: Here's my final code. Weekends are Friday and Saturday (as we are counting
    nights stayed) and days are 0-indexed starting from Monday. I used onebyone's
    algorithm and Tom's code layout. Thanks a lot folks.

        def calc_weekends(start_day, duration):
            days_until_weekend = [5, 4, 3, 2, 1, 1, 6]
            adjusted_duration = duration - days_until_weekend[start_day]
            if adjusted_duration < 0:
                weekends = 0
            else:
                weekends = (adjusted_duration/7)+1
            if start_day == 5 and duration % 7 == 0:
                # Saturday to Saturday is an exception
                weekends += 1
            return weekends

        if __name__ == "__main__":
            days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
            for start_day in range(0,7):
                for duration in range(1,16):
                    print "%s to %s (%s days): %s weekends" % (days[start_day],
                        days[(start_day+duration) % 7], duration,
                        calc_weekends(start_day, duration))
                print

  • how to compare the checksums in a list corresponding to a file path with the file path in the operating system

    - by surab
    Hi all, how do I compare the checksums in a list, each corresponding to a file
    path, with the actual files at those paths in the operating system, in Python?

        import os, sys, libxml2

        files = []
        sha1s = []

        doc = libxml2.parseFile('files.xml')
        for path in doc.xpathEval('//File/Path'):
            files.append(path.content)
        for sha1 in doc.xpathEval('//File/Hash'):
            sha1s.append(sha1.content)

        for entry in zip(files, sha1s):
            print entry

    The files.xml contains:

        <Files>
          <File>
            <Path>usr/share/doc/dialog/samples/form1</Path>
            <Type>doc</Type>
            <Size>1222</Size>
            <Uid>0</Uid>
            <Gid>0</Gid>
            <Mode>0755</Mode>
            <Hash>49744d73e8667d0e353923c0241891d46ebb9032</Hash>
          </File>
          <File>
            <Path>usr/share/doc/dialog/samples/form3</Path>
            <Type>doc</Type>
            <Size>1294</Size>
            <Uid>0</Uid>
            <Gid>0</Gid>
            <Mode>0755</Mode>
            <Hash>f30277f73e468232c59a526baf3a5ce49519b959</Hash>
          </File>
        </Files>

    I need to compare the sha1 checksum between the <Hash> tags, corresponding to the
    file specified between the <Path> tags, with the same file path in the base
    operating system.
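
    A minimal sketch of the comparison step, reusing the files and sha1s lists built
    above and assuming the paths in files.xml are relative to the filesystem root (so
    they are joined onto '/'):

        import hashlib, os

        def sha1_of_file(path, chunk_size=65536):
            # stream the file through hashlib so large files stay cheap
            h = hashlib.sha1()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(chunk_size), b''):
                    h.update(chunk)
            return h.hexdigest()

        for path, expected in zip(files, sha1s):
            full_path = os.path.join('/', path)
            status = 'OK' if sha1_of_file(full_path) == expected else 'MISMATCH'
            print full_path, status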

  • Need a way to determine if a file is done being written to.

    - by Khorkrak
    The situation I'm in is this: there's a process that's writing to a file, and
    sometimes the file is rather large, say 400-500MB. I need to know when it's done
    writing. How can I determine this? If I look in the directory I'll see the file
    there, but it might not be done being written. Plus this needs to be done remotely
    -- as in on the same internal LAN but not on the same computer. Typically the
    process that wants to know when the file writing is done is running on a Linux box,
    with the process that's writing the file, and the file itself, on a Windows box.
    No, Samba isn't an option. xmlrpc communication to a service on that Windows box is
    an option, as is using snmp to check, if that's viable.

    Ideally: works on either Linux or Windows, meaning the solution is OS independent,
    and works for any type of file.

    Good enough: works just on Windows but can be done through some library or whatever
    that can be accessed with Python, and works only for PDF files.

    The current best idea is to periodically open the file in question from some
    process on the Windows box and look at the last bytes, checking for the PDF end tag
    and accounting for the EOL differences, because the file may have been created on
    Linux or Windows.
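
    A minimal sketch of one common workaround, assuming the watching side can at least
    stat the file (for example via a small xmlrpc helper running on the Windows box):
    treat the file as finished once its size has stopped changing for a while.

        import os, time

        def wait_until_stable(path, quiet_seconds=10, poll_interval=2):
            # return once the file size has not changed for quiet_seconds
            last_size = -1
            stable_since = time.time()
            while True:
                size = os.path.getsize(path)
                if size != last_size:
                    last_size = size
                    stable_since = time.time()
                elif time.time() - stable_since >= quiet_seconds:
                    return
                time.sleep(poll_interval)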

  • Extract domain from body of email

    - by iman453
    Hi, I was wondering if there is any way I could extract domain names from the body
    of email messages in Python. I was thinking of using regular expressions, but I am
    not too great at writing them, and was wondering if someone could help me out.
    Here's a sample email body:

        <tr><td colspan="5"><font face="verdana" size="4" color="#999999"><b>Resource Links - </b></font><span class="snv"><a href="http://clk.about.com/?zi=4/RZ">Get Listed Here</a></span></td><td class="snv" valign="bottom" align="right"><a href="http://sprinks.about.com/faq/index.htm">What Is This?</a></td></tr>
        <tr><td colspan="6" bgcolor="#999999"><img height="1" width="1"></td></tr>
        <tr><td colspan="6"><map name="sgmap"><area href="http://x.about.com/sg/r/3412.htm?p=0&amp;ref=fooddrinksl_sg" shape="rect" coords="0, 0, 600, 20"><area href="http://x.about.com/sg/r/3412.htm?p=1&amp;ref=fooddrinksl_sg" shape="rect" coords="0, 55, 600, 75"><area href="http://x.about.com/sg/r/3412.htm?p=2&amp;ref=fooddrinksl_sg" shape="rect" coords="0, 110, 600, 130"></map><img border="0" src="http://z.about.com/sg/sg.gif?cuni=3412" usemap="#sgmap" width="600" height="160"></td></tr>
        <tr><td colspan="6">&nbsp;</td></tr>
        <tr><td colspan="6"><a name="d"><font face="verdana" size="4" color="#cc0000"><b>Top Picks - </b></font></a><a href="http://slclk.about.com/?zi=1/BAO" class="srvb">Fun Gift Ideas</a><span class="snv"> from your <a href="http://chinesefood.about.com">Chinese Cuisine</a> Guide</span></td></tr>
        <tr><td colspan="6" bgcolor="cc0000"><img height="1" width="1"></td></tr>
        <tr><td colspan="6" class="snv">

    So I would need "clk.about.com" etc. Thanks!
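
    A minimal sketch of one way to do it, assuming the domains of interest all appear
    inside href or src attributes (the input file name is a placeholder):

        import re
        from urlparse import urlparse        # urllib.parse on Python 3

        sample_body = open('email_body.html').read()   # hypothetical input

        # pull out every http URL used in an href or src attribute
        urls = re.findall(r'''(?:href|src)=["'](http[^"']+)["']''', sample_body)

        # keep just the host part of each URL, de-duplicated
        domains = sorted(set(urlparse(u).netloc for u in urls))

        print domains    # e.g. ['chinesefood.about.com', 'clk.about.com', ...]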

  • How can I reorder an mbox file chronologically?

    - by Joshxtothe4
    Hello, I have a single spool mbox file that was created with Evolution, containing
    a selection of emails that I wish to print. My problem is that the emails are not
    placed into the mbox file chronologically. I would like to know the best way to
    order them from first to last using bash, perl or python. I would like to order by
    received date for mail addressed to me, and by sent date for mail sent by me. Would
    it perhaps be easier to use maildir files or such?

    The emails currently exist in the format:

        From [email protected] Fri Aug 12 09:34:09 2005
        Message-ID: <[email protected]>
        Date: Fri, 12 Aug 2005 09:34:09 +0900
        From: me <[email protected]>
        User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)
        X-Accept-Language: en-us, en
        MIME-Version: 1.0
        To: someone <[email protected]>
        Subject: Re: (no subject)
        References: <[email protected]>
        In-Reply-To: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 8bit
        Status: RO
        X-Status:
        X-Keywords:
        X-UID: 371
        X-Evolution-Source: imap://[email protected]/
        X-Evolution: 00000002-0010

        Hey the actual content of the email

        someone wrote:
        > lines of quoted text

    I am wondering if there is a way to use this information to easily reorganize the
    file, perhaps with perl or such.
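
    A minimal sketch of the Python route, assuming sorting on the Date: header is
    acceptable as a first pass (the mbox file names are placeholders):

        import mailbox
        from email.utils import parsedate_tz, mktime_tz

        def sort_key(msg):
            # turn the Date: header into a Unix timestamp; unparsable dates sort first
            parsed = parsedate_tz(msg.get('Date', ''))
            return mktime_tz(parsed) if parsed else 0

        inbox = mailbox.mbox('unsorted.mbox')      # hypothetical input
        ordered = sorted(inbox, key=sort_key)

        out = mailbox.mbox('sorted.mbox')          # hypothetical output
        for msg in ordered:
            out.add(msg)
        out.flush()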

  • Use Django ORM as standalone [closed]

    - by KeyboardInterrupt
    Possible duplicates: "Use only some parts of Django?", "Using only the DB part of
    Django".

    I want to use the Django ORM as standalone. Despite an hour of searching Google,
    I'm still left with several questions: Does it require me to set up my Python
    project with a settings.py, a /myApp/ directory, and a models.py file? Can I create
    a new models.py and run syncdb to have it automatically set up the tables and
    relationships, or can I only use models from existing Django projects? There seem
    to be a lot of questions regarding PYTHONPATH. If you're not calling existing
    models, is this needed?

    I guess the easiest thing would be for someone to just post a basic template or
    walkthrough of the process, clarifying the organization of the files, e.g.:

        db/
            __init__.py
            settings.py
            myScript.py
        orm/
            __init__.py
            models.py

    And the basic essentials:

        # settings.py
        from django.conf import settings

        settings.configure(
            DATABASE_ENGINE = "postgresql_psycopg2",
            DATABASE_HOST = "localhost",
            DATABASE_NAME = "dbName",
            DATABASE_USER = "user",
            DATABASE_PASSWORD = "pass",
            DATABASE_PORT = "5432"
        )

        # orm/models.py
        # ...

        # myScript.py
        # import models..

    And whether you need to run something like:

        django-admin.py inspectdb ...

    (Oh, I'm running Windows, if that changes anything regarding command-line
    arguments.)

  • How to make filenames with increasing numbers in C?

    - by zaplec
    Hi, I have a little problem. I need to do some little operations on quite a few
    files in one little program. So far I have decided to operate on them in a single
    loop where I just change the number after the name. The files are all named
    TFxx.txt where xx is an increasing number from 1 to 80. So how can I open them all
    in a single loop, one after another? I have tried this:

        for(i=0; i<=80; i++) {
            char name[8] = "TF"+i+".txt";
            FILE = open(name, r);
            /* Do something */
        }

    As you can see, the second line would work in Python but not in C. I have tried to
    do similar running numbering in C, but I haven't found out yet how to do that. The
    format doesn't need to be as it is on the second line, but I'd like some advice on
    how I can solve this problem. All I need to do is just be able to open many files
    and do the same operations to them.

  • ISBNs are used as primary key, now I want to add non-book things to the DB - should I migrate to EAN

    - by fish2000
    I built an inventory database where ISBN numbers are the primary keys for the
    items. This worked great for a while, as the items were books. Now I want to add
    non-books. Some of the non-books have EANs or ISSNs, some do not.

    It's in PostgreSQL with Django apps for the frontend and JSON API, plus a few
    supporting Python command-line tools for management. The items in question are
    mostly books and artist prints, some of which are self-published.

    What is nice about using ISBNs as primary keys is that on top of relational
    integrity, you get lots of handy utilities for validating ISBNs, automatically
    looking up missing or additional information on the book items, etcetera, many of
    which I've taken advantage of. Some such tools are off-the-shelf (PyISBN, PyAWS
    etc) and some are hand-rolled -- I tried to keep all of these parts nice and
    decoupled, but you know how things can get.

    I couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs', but
    that's the sort of thing I was interested in doing. I doubt that's what I'll settle
    on, since there is already an apparent run on ISBN numbers.

    Should I retool everything for EAN numbers, or migrate off ISBNs as primary keys in
    general? If anyone has any experience with working with these systems, I'd love to
    hear about it; your advice is most welcome.

  • Pairs from single list

    - by Apalala
    Often enough, I've found the need to process a list by pairs. I was wondering which
    would be the pythonic and efficient way to do it, and found this on Google:

        pairs = zip(t[::2], t[1::2])

    I thought that was pythonic enough, but after a recent discussion involving idioms
    versus efficiency, I decided to do some tests:

        import time
        from itertools import islice, izip

        def pairs_1(t):
            return zip(t[::2], t[1::2])

        def pairs_2(t):
            return izip(t[::2], t[1::2])

        def pairs_3(t):
            return izip(islice(t,None,None,2), islice(t,1,None,2))

        A = range(10000)
        B = xrange(len(A))

        def pairs_4(t):
            # ignore value of t!
            t = B
            return izip(islice(t,None,None,2), islice(t,1,None,2))

        for f in pairs_1, pairs_2, pairs_3, pairs_4:
            # time the pairing
            s = time.time()
            for i in range(1000):
                p = f(A)
            t1 = time.time() - s
            # time using the pairs
            s = time.time()
            for i in range(1000):
                p = f(A)
                for a, b in p:
                    pass
            t2 = time.time() - s
            print t1, t2, t2-t1

    These were the results on my computer:

        1.48668909073    2.63187503815    1.14518594742
        0.105381965637   1.35109519958    1.24571323395
        0.00257992744446 1.46182489395    1.45924496651
        0.00251388549805 1.70076990128    1.69825601578

    If I'm interpreting them correctly, that should mean that the implementation of
    lists, list indexing, and list slicing in Python is very efficient. It's a result
    both comforting and unexpected.

    Is there another, "better" way of traversing a list in pairs? Note that if the list
    has an odd number of elements then the last one will not be in any of the pairs.
    Which would be the right way to ensure that all elements are included?

    I added these two suggestions from the answers to the tests:

        def pairwise(t):
            it = iter(t)
            return izip(it, it)

        def chunkwise(t, size=2):
            it = iter(t)
            return izip(*[it]*size)

    These are the results:

        0.00159502029419 1.25745987892 1.25586485863
        0.00222492218018 1.23795199394 1.23572707176

    Results so far. Most pythonic and very efficient:

        pairs = izip(t[::2], t[1::2])

    Most efficient and very pythonic:

        pairs = izip(*[iter(t)]*2)

    It took me a moment to grok that the first answer uses two iterators while the
    second uses a single one. To deal with sequences with an odd number of elements,
    the suggestion has been to augment the original sequence adding one element (None)
    that gets paired with the previous last element, something that can be achieved
    with itertools.izip_longest().

  • creating matrix with probabilities

    - by John Chan
    Hi all, I want to generate an NxN matrix to test some code that I have, where each
    row contains floats as the elements and has to add up to 1 (i.e. a row with a set
    of probabilities). Where it gets tricky is that I want to make sure that, randomly,
    some of the elements are 0 (in fact most of the elements should be 0, except for
    some random ones which are the probabilities). I need the probabilities to be 1/m
    where m is the number of elements that are not 0 within a single row.

    I tried to think of ways to output this, but essentially I would need it stored in
    a C++ array. So even if I output to a file I would still have the issue of not
    having it in an array as I need it. At the end of it all I need that array because
    I want to generate a Matrix Market file. I found an implementation in C++ to take
    an array and convert it to the Matrix Market file format, so this is what I am
    basing my findings on. My input for the rest of the code takes in this Matrix
    Market file, so I need that to be the primary form of output. The language does not
    matter, I just want to generate the file at the end (I found mmwrite and mmread in
    Python as well).

    Please help, I am stuck and not really sure how to implement this.
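
    A minimal sketch of one way to build such a matrix in Python and write it out,
    assuming scipy is available for its Matrix Market writer (N, the per-row density
    and the output file name are placeholders):

        import numpy as np
        import scipy.io as sio

        N = 10
        rng = np.random.RandomState(0)
        mat = np.zeros((N, N))

        for row in range(N):
            m = rng.randint(1, 4)                       # how many non-zero entries
            cols = rng.choice(N, size=m, replace=False) # which columns get them
            mat[row, cols] = 1.0 / m                    # so each row sums to 1

        sio.mmwrite('probabilities.mtx', mat)           # Matrix Market output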

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific
    application, and received an excellent, smart way of recoding it within NumPy which
    reduced execution time by a factor of around 100 for me!

    However, calculation of the B value is actually nested within a few other loops,
    because it is evaluated at a regular grid of positions. Is there a similarly smart
    NumPy rewrite to shave time off this procedure? I suspect the performance gain for
    this part would be less marked, and the disadvantages would presumably be that it
    would not be possible to report back to the user on the progress of the
    calculation, that the results could not be written to the output file until the end
    of the calculation, and possibly that doing this in one enormous step would have
    memory implications? Is it possible to circumvent any of these?

        import numpy as np
        import time

        def reshape_vector(v):
            b = np.empty((3,1))
            for i in range(3):
                b[i][0] = v[i]
            return b

        def unit_vectors(r):
            return r / np.sqrt((r*r).sum(0))

        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            A = 1e-7
            num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
            den = np.sqrt(np.sum(relative*relative, 0))**3
            B = np.sum(num/den, 1)
            return B

        N = 20000                         # number of dipoles
        r_i = np.random.random((3,N))     # positions of dipoles
        mom_i = np.random.random((3,N))   # moments of dipoles
        a = np.random.random((3,3))       # three basis vectors for this crystal
        n = [10,10,10]                    # points at which to evaluate sum
        gamma_mu = 135.5                  # a constant

        t_start = time.clock()
        for i in range(n[0]):
            r_frac_x = np.float(i)/np.float(n[0])
            r_test_x = r_frac_x * a[0]
            for j in range(n[1]):
                r_frac_y = np.float(j)/np.float(n[1])
                r_test_y = r_frac_y * a[1]
                for k in range(n[2]):
                    r_frac_z = np.float(k)/np.float(n[2])
                    r_test = r_test_x + r_test_y + r_frac_z * a[2]
                    r_test_fast = reshape_vector(r_test)
                    B = calculate_dipole(r_test_fast, r_i, mom_i)
                    omega = gamma_mu*np.sqrt(np.dot(B,B))
                    # write r_test, B and omega to a file
            frac_done = np.float(i+1)/(n[0]+1)
            t_elapsed = (time.clock()-t_start)
            t_remain = (1-frac_done)*t_elapsed/frac_done
            print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
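
    One hedged sketch of removing the two inner loops: build every fractional grid
    point at once and loop over the flattened list, so the heavy work stays inside
    calculate_dipole and per-point progress reporting or file writing is still
    possible. This assumes the variables defined above (a, n, r_i, mom_i, gamma_mu).

        # rows of frac are every (i/n0, j/n1, k/n2) triple, shape (n0*n1*n2, 3)
        frac = np.mgrid[0:n[0], 0:n[1], 0:n[2]].reshape(3, -1).T / np.array(n, dtype=float)

        # each test position is the same linear combination of the basis vectors in a
        r_test_all = frac.dot(a)

        for idx, r_test in enumerate(r_test_all):
            B = calculate_dipole(r_test.reshape(3, 1), r_i, mom_i)
            omega = gamma_mu * np.sqrt(np.dot(B, B))
            # write r_test, B and omega to a file, report progress every so often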

  • string manipulations, oops, how do I replace parts of a string

    - by Joe Gibson
    I am very new to python. Could someone explain how I can manipulate a string like
    this? This function receives three inputs:

    complete_fmla: a string with digits and symbols but no hyphens ('-') nor spaces.

    partial_fmla: a combination of hyphens and possibly some digits or symbols, where
    the digits and symbols that are in it (other than hyphens) are in the same position
    as in the complete formula.

    symbol: one character.

    The output that should be returned is: if the symbol is not in the complete
    formula, or if the symbol is already in the partial formula, the function should
    return the same formula as the input partial_formula. If the symbol is in the
    complete_formula and not in the partial formula, the function should return the
    partial_formula with the symbol substituting the hyphens in the positions where the
    symbol is, in all the occurrences of the symbol in the complete_formula.

    For example:

        generate_next_fmla('abcdeeaa', '- - - - - - - -', 'd')  should return  '- - - d - - - -'
        generate_next_fmla('abcdeeaa', '- - - d - - - -', 'e')  should return  '- - - d e e - -'
        generate_next_fmla('abcdeeaa', '- - - d e e - -', 'a')  should return  'a - - d e e a a'

    Basically, I'm working with the definition:

        def generate_next_fmla(complete_fmla, partial_fmla, symbol):

    Do I turn them into lists and then append? Also, should I find out the index number
    for the symbol in the complete_fmla so that I know where to append it in the string
    with hyphens?
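
    A minimal sketch of one way to write it, assuming partial_fmla has the same length
    as complete_fmla (as in the examples, once the display spaces are ignored), so the
    two strings can simply be walked in parallel:

        def generate_next_fmla(complete_fmla, partial_fmla, symbol):
            # keep the revealed character wherever the complete formula has the
            # symbol, otherwise keep whatever the partial formula already shows
            return ''.join(c if c == symbol else p
                           for c, p in zip(complete_fmla, partial_fmla))

        print generate_next_fmla('abcdeeaa', '--------', 'd')   # ---d----
        print generate_next_fmla('abcdeeaa', '---d----', 'e')   # ---dee--

    If the symbol is absent from complete_fmla, or already present in partial_fmla,
    nothing changes, which matches the first rule above.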

  • wsgi django not working

    - by MaKo
    I'm installing django. The test for wsgi is OK, but when I point my default file to
    the django test, it doesn't work. This is the test that works fine:

    default: /etc/apache2/sites-available/default

        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias example.com
            ServerAdmin [email protected]

            DocumentRoot /var/www

            <Directory /var/www/documents>
                Order allow,deny
                Allow from all
            </Directory>

            WSGIScriptAlias / /home/ubuntu/djangoProj/micopiloto/application.wsgi
            <Directory /home/ubuntu/djangoProj/mysitio/wsgi_handler.py>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    application.wsgi :: ~/djangoProj/micopiloto

        import os
        import sys

        sys.path.append('/srv/www/cucus/application')
        os.environ['PYTHON_EGG_CACHE'] = '/srv/www/cucus/.python-egg'

        def application(environ, start_response):
            status = '200 OK'
            output = 'Hello World!MK SS9 tkt kkk'
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    But if I change the default to point to application_sa.wsgi, the django test, it
    doesn't work :(

    application_sa.wsgi:

        import os, sys

        sys.path.append('/home/ubuntu/djangoProj/micopiloto')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'micopiloto.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    I restart the apache server every time I change the wsgi to test, so what am I
    missing? Thanks a lot!

  • Creating multiple csv files from data within a csv file.

    - by S1syphus
    System: OSX or Linux.

    I'm trying to automate my workflow at work. Each week I receive an Excel file,
    which I convert to a csv. An example is:

        ,,L1,,,L2,,,L3,,,L4,,,L5,,,L6,,,L7,,,L8,,,L9,,,L10,,,L11,
        Title,r/t,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst
        EXAMPLEfoo,60,6,6,6,0,0,0,0,0,0,6,6,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
        EXAMPLEbar,30,6,6,12,6,7,14,6,6,12,6,6,12,6,8,16,6,7,14,6,7.5,15,6,6,12,6,8,16,6,0,0,6,7,14
        EXAMPLE1,60,3,3,3,3,5,5,3,4,4,3,3,3,3,6,6,3,4,4,3,3,3,3,4,4,3,8,8,3,0,0,3,4,4
        EXAMPLE2,120,6,6,3,0,0,0,6,8,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
        EXAMPLE3,60,6,6,6,6,8,8,6,6,6,6,6,6,0,0,0,0,0,0,6,8,8,6,6,6,0,0,0,0,0,0,0,10,10
        EXAMPLE4,30,6,6,12,6,7,14,6,6,12,6,6,12,3,5.5,11,6,7.5,15,6,6,12,6,0,0,6,9,18,6,0,0,6,6.5,13

    (The original post included a screenshot of how this looks in Excel.)

    What I need to do is create multiple csv files, one for each instance in row 1, so
    L1, L2, L3, L4... And each of those csv files needs to contain the title, r/t and
    needed columns. So for L1 an example output would look like:

        EXAMPLEfoo,60,6
        EXAMPLEbar,30,6
        EXAMPLE1,60,3
        EXAMPLE2,120,6
        EXAMPLE3,60,6
        EXAMPLE4,30,6

    And for L2:

        EXAMPLEfoo,60,0
        EXAMPLEbar,30,6
        EXAMPLE1,60,3
        EXAMPLE2,120,0
        EXAMPLE3,60,6
        EXAMPLE4,30,6

    And so on. I have tried playing around with sed and awk and hit Google, but I have
    found nothing that really solves the issue. I'd imagine perl would be particularly
    suited to this, or maybe python, so I would be more than happy to accept
    suggestions from users. So, any suggestions? Thanks in advance.
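
    A minimal sketch of the Python csv route, assuming the layout above holds: each L
    label in row 1 sits directly above its "needed" column, and every block is three
    columns wide (the input file name is a placeholder):

        import csv

        with open('input.csv', 'rb') as f:            # hypothetical input file
            rows = list(csv.reader(f))

        labels_row, header_row, data = rows[0], rows[1], rows[2:]

        # map each L label to the index of its "needed" column
        label_cols = [(label, idx) for idx, label in enumerate(labels_row) if label]

        for label, needed_idx in label_cols:
            with open('%s.csv' % label, 'wb') as out:
                writer = csv.writer(out)
                for row in data:
                    # title, r/t, needed for this L block
                    writer.writerow([row[0], row[1], row[needed_idx]])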

  • Django: DatabaseError column does not exist

    - by Rosarch
    I'm having a problem with Django 1.2.4. Here is a model:

        class Foo(models.Model):
            # ...
            ftw = models.CharField(blank=True)
            bar = models.ForeignKey(Bar)

    Right after flushing the database, I use the shell:

        Python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39)
        [GCC 4.4.5] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        (InteractiveConsole)
        >>> from apps.foo.models import Foo
        >>> Foo.objects.all()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 67, in __repr__
            data = list(self[:REPR_OUTPUT_SIZE + 1])
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 82, in __len__
            self._result_cache.extend(list(self._iter))
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 271, in iterator
            for row in compiler.results_iter():
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 677, in results_iter
            for rows in self.execute_sql(MULTI):
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 732, in execute_sql
            cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 15, in execute
            return self.cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
            return self.cursor.execute(query, args)
        DatabaseError: column foo_foo.bar_id does not exist
        LINE 1: ...t_omg", "foo_foo"."ftw", "foo_foo...

    What am I doing wrong here?

  • How to return a variable for all tests to use in unittest

    - by chrissygormley
    Hello, I have a Python script and I am trying to set a variable so that if the
    first test fails, the rest of them will be set to fail. The script I have so far
    is:

        class Tests():
            def function:
                result function..........

            def errorHandle(self):
                return self.error

            def sudsPass(self):
                try:
                    result = self.client.service.GetStreamUri(self.stream, self.token)
                except suds.WebFault, e:
                    assert False
                except Exception, e:
                    pass
                finally:
                    if 'result' in locals():
                        self.error = True
                        self.errorHandle()
                        assert True
                    else:
                        self.error = False
                        self.errorHandle()
                        assert False

            def sudsFail(self):
                try:
                    result = self.client.service.GetStreamUri(self.stream, self.token)
                except suds.WebFault, e:
                    assert False
                except Exception, e:
                    pass
                finally:
                    if 'result' in locals() or self.error == False:
                        assert False
                    else:
                        assert True

        class GetStreamUri(TestGetStreamUri):
            def runTest(self):
                self.sudsPass()

        class GetStreamUriProtocolFail(TestGetStreamUri):
            def runTest(self):
                self.stream.Transport.Protocol = "NoValue"
                self.errorHandle()
                self.sudsFail()

        if __name__ == '__main__':
            unittest.main()

    I am trying to get self.error to be set to False if the first test fails. I
    understand that it is being set in another test, but I was hoping someone could
    help me find a solution to this problem using some other means. Thanks.

    PS: Please ignore the strange tests; there is a problem with the error handling at
    the moment.
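
    A minimal sketch of one possible pattern, not the original suds calls: record the
    outcome of the first test on the class itself, and have later tests fail (or skip)
    when the flag was never set. This assumes standard unittest, which runs test
    methods in alphabetical order within a TestCase.

        import unittest

        class DependentTests(unittest.TestCase):
            first_passed = False                    # shared across the whole class

            def test_01_first(self):
                # stand-in for the real service call; raise here to see the effect
                result = 1 + 1
                self.assertEqual(result, 2)
                type(self).first_passed = True      # only reached on success

            def test_02_second(self):
                if not self.first_passed:
                    self.fail("failing because the first test did not pass")
                self.assertTrue(True)               # the real second check goes here

        if __name__ == '__main__':
            unittest.main()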

  • Javascript JQUERY AJAX: When Are These Implemented

    - by Michael Moreno
    I'm learning javascript. I've poked around this excellent site to gather intel, and
    I keep coming across questions and answers about javascript, jQuery, jQuery with
    AJAX, javascript with jQuery, and AJAX alone.

    My conclusion: these are all individually powerful and useful. My confusion: how
    does one determine which one, or which combination, to use?

    I've concluded that javascript is readily available on most browsers. For example,
    I can extend a simple HTML page with:

        <html>
        <body>
        <script type="text/javascript">
        document.write("Hello World!");
        </script>
        </body>
        </html>

    However, within the scope of Python/Django, many of these questions are jQuery- and
    AJAX-related. At which point, or under what development circumstances, would I
    conclude that javascript alone isn't going to "cut it", and I need to implement
    jQuery and/or AJAX and/or some other permutation?

  • How to prevent BeautifulSoup from stripping lines

    - by Oli
    I'm trying to translate an online html page into text. I have a problem with this
    structure:

        <div align="justify"><b>Available in
        <a href="http://www.example.com.be/book.php?number=1">
        French</a> and
        <a href="http://www.example.com.be/book.php?number=5">
        English</a>.
        </div>

    Here is its representation as a python string:

        '<div align="justify"><b>Available in \r\n<a href="http://www.example.com.be/book.php?number=1">\r\nFrench</a>; \r\n<a href="http://www.example.com.be/book.php?number=5">\r\nEnglish</a>.\r\n</div>'

    When using:

        html_content = get_html_div_from_above()
        para = BeautifulSoup(html_content)
        txt = para.text

    BeautifulSoup translates it (in the 'txt' variable) as:

        u'Available inFrenchandEnglish.'

    It probably strips each line in the original html string. Do you have a clean
    solution to this problem? Thanks.
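
    A hedged sketch of two ways around the joined words, one keeping the BeautifulSoup
    3 style used above and one assuming bs4 is available (the URLs are shortened here):

        from BeautifulSoup import BeautifulSoup            # BeautifulSoup 3

        html_content = ('<div align="justify"><b>Available in \r\n'
                        '<a href="#">\r\nFrench</a> and \r\n'
                        '<a href="#">\r\nEnglish</a>.\r\n</div>')

        para = BeautifulSoup(html_content)
        # join the text nodes yourself, collapsing the stray newlines into spaces
        txt = ' '.join(s.strip() for s in para.findAll(text=True) if s.strip())
        print txt                          # Available in French and English .

        # with bs4 instead, get_text can insert the separator for you:
        # from bs4 import BeautifulSoup
        # BeautifulSoup(html_content, 'html.parser').get_text(' ', strip=True)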

  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database on which I only have limited access (e.g. I can't
    use pg_dump). I am trying to create a local "mirror" by exporting certain tables
    from the database. I do not have the permissions needed to just dump a table as SQL
    from within psql.

    Right now, I just have a Python script that iterates through my table_names,
    selects all fields and then exports them as a CSV:

        for table_name, file_name in zip(table_names, file_names):
            cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
            os.system(cmd)

    I would like to not use CSV if possible, as I lose the field types and the encoding
    can get messed up. First best would probably be some way of getting the generating
    SQL code for the table using \copy. Next best would be XML, ideally with some way
    of preserving the field types. If that doesn't work, I think the final option might
    be two queries -- one to get the field data types, the other to get the actual
    data.

    Any thoughts or advice would be greatly appreciated - thanks!
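
    A minimal sketch of the "two queries" fallback, assuming psycopg2 is available and
    ordinary read access to information_schema (the DSN and table name are
    placeholders):

        import psycopg2

        conn = psycopg2.connect("dbname=remote_db")      # hypothetical DSN
        cur = conn.cursor()

        # first query: column names and declared types, in table order
        cur.execute("""
            SELECT column_name, data_type
            FROM information_schema.columns
            WHERE table_name = %s
            ORDER BY ordinal_position
        """, ('some_table',))
        schema = cur.fetchall()

        # second query: the actual rows, with types preserved by the driver
        cur.execute("SELECT * FROM some_table")
        rows = cur.fetchall()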

  • Fetching just the Key/id from a ReferenceProperty in App Engine

    - by ozone
    Hi SO, I could use a little help in App Engine land...

    Using the [Python] API I create relationships like this example from the docs:

        class Author(db.Model):
            name = db.StringProperty()

        class Story(db.Model):
            author = db.ReferenceProperty(Author)

        story = db.get(story_key)
        author_name = story.author.name

    As I understand it, that example will make two datastore queries: one to fetch the
    Story and then one to dereference the Author in order to access the name. But I
    want to be able to fetch just the id, so something like:

        story = db.get(story_key)
        author_id = story.author.key().id()

    I want to just get the id from the reference. I do not want to have to dereference
    (and therefore query the datastore for) the ReferenceProperty value. From reading
    the documentation it says that the value of a ReferenceProperty is a Key, which
    leads me to think that I could just call .id() on the reference's value. But it
    also says:

        The ReferenceProperty model provides features for Key property values such as
        automatic dereferencing.

    I can't find anything that explains when this dereferencing takes place. Is it safe
    to call .id() on the ReferenceProperty's value? Can it be assumed that calling
    .id() will not cause a datastore lookup?
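
    For reference, a hedged sketch of the idiom usually quoted for this with the
    classic google.appengine.ext.db API shown above: ask the property class for the raw
    Key it stored, which should not trigger a fetch.

        from google.appengine.ext import db

        story = db.get(story_key)

        # get_value_for_datastore returns the stored Key without dereferencing it
        author_key = Story.author.get_value_for_datastore(story)
        author_id = author_key.id()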

  • Installing django on dreamhost (help a newb out)

    - by augustfirst
    I'm trying to get django running on my Dreamhost account. I've been trying to sort
    of use two tutorials at once: the one on the Dreamhost wiki and the one in the
    Django Book. I installed django using the script on the wiki page, but I ran into
    trouble immediately while trying to work through the Django Book. It says:

        To start the server, change into your project directory (cd mysite), if you
        haven't already, and run this command:

            python manage.py runserver

        This launches the server locally, on port 8000, accessible only to connections
        from your own computer. Now that it's running, visit 127.0.0.1:8000 with your
        Web browser. You'll see a "Welcome to Django" page shaded in a pleasant pastel
        blue. It worked!

    Those instructions seem to assume that you're developing locally, not on a shared
    server. Where the heck am I supposed to look for the "Welcome to Django" page after
    starting the server? In my webroot? No dice.

    Anyway, I tried to blunder ahead through the Django Book to its hello world
    tutorial (chapter 3). But once I've edited the view file and the URLconf, I don't
    get a nice clean "hello world" text. Instead (as you can see) I get an "import
    error". Any help would be greatly appreciated.
