Search Results

Search found 15380 results on 616 pages for 'man with python'.


  • Using Sphinx with a distutils-built C extension

    - by detly
    I have written a Python module including a submodule written in C: the module itself is called foo and the C part is foo._bar. The structure looks like:

        src/
            foo/__init__.py    <- contains the public stuff
            foo/_bar/bar.c     <- the C extension
        doc/                   <- Sphinx configuration
            conf.py
            ...

    foo/__init__.py imports _bar to augment it, and the useful stuff is exposed in the foo module. This works fine when it's built, but obviously won't work in uncompiled form, since _bar doesn't exist until it's built. I'd like to use Sphinx to document the project, and use the autodoc extension on the foo module. This means I need to build the project before I can build the documentation. Since I build with distutils, the built module ends up in some variably named dir build/lib.linux-ARCH-PYVERSION — which means I can't just hard-code the directory into Sphinx's conf.py. So how do I configure my distutils setup.py script to run the Sphinx builder over the built module? For completeness, here's the call to setup (the 'fake' things are custom builders that subclass build and build_ext):

        setup(cmdclass = {'fake': fake, 'build_ext_fake': build_ext_fake},
              package_dir = {'': 'src'},
              packages = ['foo'],
              name = 'foo',
              version = '0.1',
              description = desc,
              ext_modules = [module_real])
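
    A minimal sketch of one possible approach (not necessarily the article's answer): subclass the distutils build command so that, once the extension is compiled, sphinx-build runs with the freshly built build_lib directory on PYTHONPATH. The command name build_docs and the output path are illustrative assumptions.

        import os
        import subprocess
        import sys
        from distutils.command.build import build
        from distutils.core import setup

        class build_docs(build):
            """Compile the package (including foo._bar), then run Sphinx over the built copy."""
            def run(self):
                build.run(self)                                   # fills in self.build_lib
                env = dict(os.environ)
                env['PYTHONPATH'] = (os.path.abspath(self.build_lib) +
                                     os.pathsep + env.get('PYTHONPATH', ''))
                subprocess.check_call(
                    ['sphinx-build', '-b', 'html', 'doc', os.path.join('build', 'html')],
                    env=env)

        # setup(cmdclass={'build_docs': build_docs, ...}, ...)  # added alongside the existing cmdclass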

    Read the article

  • creating matrix with probabilities

    - by John Chan
    Hi all, I want to generate an NxN matrix to test some code that I have, where each row contains floats as the elements and has to add up to 1 (i.e. a row with a set of probabilities). Where it gets tricky is that I want to make sure that, randomly, some of the elements are 0 (in fact most of the elements should be 0 except for some random ones that hold the probabilities). I need the probabilities to be 1/m, where m is the number of elements that are not 0 within a single row. I tried to think of ways to output this, but essentially I would need it stored in a C++ array. So even if I output to a file I would still have the issue of not having it in an array as I need it. At the end of it all I need that array because I want to generate a Matrix Market file. I found an implementation in C++ that takes an array and converts it to a Matrix Market file, so this is what I am basing my findings on. My input for the rest of the code takes in this Matrix Market file, so I need that to be the primary form of output. The language does not matter, I just want to generate the file at the end (I found mmwrite and mmread in Python as well). Please help, I am stuck and not really sure how to implement this.
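
    A minimal sketch of one way to build such a matrix with NumPy and hand it to the Matrix Market writer (scipy.io.mmwrite); the max_nonzero cap and the function name are illustrative assumptions.

        import numpy as np

        def sparse_probability_matrix(n, max_nonzero=5, seed=None):
            rng = np.random.default_rng(seed)
            mat = np.zeros((n, n))
            for row in mat:
                m = rng.integers(1, max_nonzero + 1)            # how many nonzero entries this row gets
                cols = rng.choice(n, size=m, replace=False)     # where they go
                row[cols] = 1.0 / m                             # each surviving probability is 1/m
            return mat

        # Example: write it out in Matrix Market format for the C++ reader
        # from scipy.io import mmwrite
        # mmwrite('probs.mtx', sparse_probability_matrix(10, seed=0))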

    Read the article

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these?

        import numpy as np
        import time

        def reshape_vector(v):
            b = np.empty((3,1))
            for i in range(3):
                b[i][0] = v[i]
            return b

        def unit_vectors(r):
            return r / np.sqrt((r*r).sum(0))

        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            A = 1e-7
            num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
            den = np.sqrt(np.sum(relative*relative, 0))**3
            B = np.sum(num/den, 1)
            return B

        N = 20000                       # number of dipoles
        r_i = np.random.random((3,N))   # positions of dipoles
        mom_i = np.random.random((3,N)) # moments of dipoles
        a = np.random.random((3,3))     # three basis vectors for this crystal
        n = [10,10,10]                  # points at which to evaluate sum
        gamma_mu = 135.5                # a constant

        t_start = time.clock()
        for i in range(n[0]):
            r_frac_x = np.float(i)/np.float(n[0])
            r_test_x = r_frac_x * a[0]
            for j in range(n[1]):
                r_frac_y = np.float(j)/np.float(n[1])
                r_test_y = r_frac_y * a[1]
                for k in range(n[2]):
                    r_frac_z = np.float(k)/np.float(n[2])
                    r_test = r_test_x + r_test_y + r_frac_z * a[2]
                    r_test_fast = reshape_vector(r_test)
                    B = calculate_dipole(r_test_fast, r_i, mom_i)
                    omega = gamma_mu*np.sqrt(np.dot(B,B))
                    # write r_test, B and omega to a file
            frac_done = np.float(i+1)/(n[0]+1)
            t_elapsed = (time.clock()-t_start)
            t_remain = (1-frac_done)*t_elapsed/frac_done
            print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
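
    A sketch of one way to flatten the three grid loops (the inner dipole sum is already vectorised inside calculate_dipole). It assumes the definitions above (calculate_dipole, a, r_i, mom_i, gamma_mu, n) are in scope; broadcasting over all grid points at once is also possible, but would need on the order of 3 x P x N floats of memory, which is why a single cheap loop over grid points is kept here.

        import numpy as np

        grid = np.mgrid[0:n[0], 0:n[1], 0:n[2]].reshape(3, -1).astype(float)  # (3, P) integer indices
        fracs = grid / np.array(n, dtype=float)[:, None]                      # fractional coordinates
        r_test_all = a.T.dot(fracs)                                           # (3, P) Cartesian positions

        num_points = fracs.shape[1]
        B_all = np.empty((3, num_points))
        for p in range(num_points):                   # one flat loop instead of three nested ones
            B_all[:, p] = calculate_dipole(r_test_all[:, p:p+1], r_i, mom_i)

        omega_all = gamma_mu * np.sqrt((B_all * B_all).sum(axis=0))           # |B| at every grid point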

    Read the article

  • Pairs from single list

    - by Apalala
    Often enough, I've found the need to process a list by pairs. I was wondering which would be the pythonic and efficient way to do it, and found this on Google:

        pairs = zip(t[::2], t[1::2])

    I thought that was pythonic enough, but after a recent discussion involving idioms versus efficiency, I decided to do some tests:

        import time
        from itertools import islice, izip

        def pairs_1(t):
            return zip(t[::2], t[1::2])

        def pairs_2(t):
            return izip(t[::2], t[1::2])

        def pairs_3(t):
            return izip(islice(t,None,None,2), islice(t,1,None,2))

        A = range(10000)
        B = xrange(len(A))

        def pairs_4(t):
            # ignore value of t!
            t = B
            return izip(islice(t,None,None,2), islice(t,1,None,2))

        for f in pairs_1, pairs_2, pairs_3, pairs_4:
            # time the pairing
            s = time.time()
            for i in range(1000):
                p = f(A)
            t1 = time.time() - s
            # time using the pairs
            s = time.time()
            for i in range(1000):
                p = f(A)
                for a, b in p:
                    pass
            t2 = time.time() - s
            print t1, t2, t2-t1

    These were the results on my computer:

        1.48668909073     2.63187503815   1.14518594742
        0.105381965637    1.35109519958   1.24571323395
        0.00257992744446  1.46182489395   1.45924496651
        0.00251388549805  1.70076990128   1.69825601578

    If I'm interpreting them correctly, that should mean that the implementation of lists, list indexing, and list slicing in Python is very efficient. It's a result both comforting and unexpected. Is there another, "better" way of traversing a list in pairs? Note that if the list has an odd number of elements then the last one will not be in any of the pairs. Which would be the right way to ensure that all elements are included? I added these two suggestions from the answers to the tests:

        def pairwise(t):
            it = iter(t)
            return izip(it, it)

        def chunkwise(t, size=2):
            it = iter(t)
            return izip(*[it]*size)

    These are the results:

        0.00159502029419  1.25745987892   1.25586485863
        0.00222492218018  1.23795199394   1.23572707176

    Results so far. Most pythonic and very efficient:

        pairs = izip(t[::2], t[1::2])

    Most efficient and very pythonic:

        pairs = izip(*[iter(t)]*2)

    It took me a moment to grok that the first answer uses two iterators while the second uses a single one. To deal with sequences with an odd number of elements, the suggestion has been to augment the original sequence adding one element (None) that gets paired with the previous last element, something that can be achieved with itertools.izip_longest().
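
    A small sketch of that odd-length handling, written with the Python 3 names (izip is now zip, izip_longest is zip_longest); the None fill value follows the suggestion above.

        from itertools import zip_longest

        def pairwise_padded(t, fillvalue=None):
            it = iter(t)                        # a single iterator, consumed two items at a time
            return zip_longest(it, it, fillvalue=fillvalue)

        print(list(pairwise_padded([1, 2, 3, 4, 5])))   # [(1, 2), (3, 4), (5, None)]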

    Read the article

  • Scope of "library" methods

    - by JS
    Hello, I'm apparently laboring under a poor understanding of Python scoping. Perhaps you can help. Background: I'm using the if __name__ == "__main__" construct to perform "self-tests" in my module(s). Each self test makes calls to the various public methods and prints their results for visual checking as I develop the modules. To keep things "purdy" and manageable, I've created a small method to simplify the testing of method calls:

        def pprint_vars(var_in):
            print("%s = '%s'" % (var_in, eval(var_in)))

    Calling pprint_vars with:

        pprint_vars('some_variable_name')

    prints:

        some_variable_name = 'foo'

    All fine and good. Problem statement: Not happy to just KISS, I had the brain-drizzle to move my handy-dandy 'pprint_vars' method into a separate file named 'debug_tools.py' and simply import 'debug_tools' whenever I wanted access to 'pprint_vars'. Here's where things fall apart. I would expect

        import debug_tools
        foo = bar
        debug_tools.pprint_vars('foo')

    to continue working its magic and print:

        foo = 'bar'

    Instead, it greets me with:

        NameError: name 'some_var' is not defined

    Irrational belief: I believed (apparently mistakenly) that import puts imported methods (more or less) "inline" with the code, and thus the variable scoping rules would remain similar to if the method were defined inline. Plea for help: Can someone please correct my (mis)understanding of scoping as regards imports? Thanks, JS
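
    A sketch of one way around this without changing the call sites: have the imported helper look up the name in the caller's own namespace instead of relying on eval() seeing it. sys._getframe is a CPython implementation detail; the file split mirrors the question.

        # debug_tools.py
        import sys

        def pprint_vars(var_in):
            caller = sys._getframe(1)                          # the frame that called pprint_vars
            value = caller.f_locals.get(var_in, caller.f_globals.get(var_in))
            print("%s = '%s'" % (var_in, value))

        # elsewhere:
        #   import debug_tools
        #   foo = 'bar'
        #   debug_tools.pprint_vars('foo')                     # prints: foo = 'bar'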

    Read the article

  • How do I generate a connection reset programmatically?

    - by Brock Adams
    Hi, I'm sure you've seen the "the connection was reset" message displayed when trying to browse web pages. (The text is from Firefox; other browsers differ.) I need to generate that message/error/condition on demand, to test workarounds. So, how do I generate that condition programmatically? (How do I generate a TCP RST from PHP -- or one of the other web-app languages?) Caveats and conditions: It cannot be a general IP block; the test client must still be able to see the test server when not triggering the condition. Ideally, it would be done at the web-application level (Python, PHP, ColdFusion, Javascript, etc.). Access to routers is problematic, and access to Apache config is a pain. Ideally, it would be triggered by fetching a specific web page. Bonus if it works on a standard, commercial web host. Update: Sending RST is not enough to cause this condition. See my partial answer, below. I have a solution that works on a local machine; now I need to get it working on a remote host.
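
    For reference, a sketch of one server-side way (separate from the asker's partial answer) to provoke the browser message with plain Python sockets: send part of a response, then close with SO_LINGER set to zero so the kernel emits a RST instead of a normal FIN. The port number is arbitrary and the behaviour described is as observed on Linux.

        import socket
        import struct

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(('', 8080))
        srv.listen(1)

        conn, addr = srv.accept()
        conn.recv(4096)                                               # read the browser's request
        conn.send(b'HTTP/1.1 200 OK\r\nContent-Length: 100\r\n\r\npartial body')
        # l_onoff=1, l_linger=0: close() aborts the connection with a RST
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
        conn.close()
        srv.close()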

    Read the article

  • SEO google keyword position tools?

    - by Peterl86
    Hi guys, I want to check our Google positions for several keywords every day and make a note in a spreadsheet. At the moment we have a student doing it, but it's a rubbish job and it doesn't seem fair on them! Are there any tools available to automate this process? I have tried rankchecker by seobook.com, but although that should be exactly what I'm looking for, when I set scheduled tasks in it, it doesn't work. Any tips would be appreciated, thanks! Peter EDIT: Liam has suggested a Python script to do this, which unfortunately isn't something I'm very familiar with! If anyone knows of a good tutorial or something to help us with this, that would be brilliant. Update: Found a PHP script at seoscript.net which looks like a step in the right direction. But I can't get it to work! I get this error. Anyone more knowledgeable than me know how to fix that? I have PEAR installed. Thanks again, Peter

    Read the article

  • How to generate increasing numbers in filenames in C?

    - by zaplec
    Hi, I have a little problem. I need to do some little operations on quite many files in one little program. So far I have decided to operate on them in a single loop where I just change the number after the name. The files are all named TFxx.txt where xx is an increasing number from 1 to 80. So how can I open them all in a single loop, one after another? I have tried this:

        for(i=0; i<=80; i++) {
            char name[8] = "TF"+i+".txt";
            FILE = open(name, r);
            /* Do something */
        }

    As you can see, the second line would work in Python but not in C. I have tried to do similar running numbering with C in this program, but I haven't found out yet how to do that. The format doesn't need to be as it is on the second line, but I'd like to have some advice on how I can solve this problem. All I need to do is just be able to open many files and do the same operations to them.

    Read the article

  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database on which I only have limited access (e.g., I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as a CSV:

        for table_name, file_name in zip(table_names, file_names):
            cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
            os.system(cmd)

    I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. First best would probably be some way of getting the generating SQL code for the table using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries---one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks!
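
    A sketch of the "two queries" fallback using psycopg2 (an assumption; the question only shells out to psql): one query for the column types from information_schema, one for the rows themselves.

        import psycopg2

        conn = psycopg2.connect("dbname=remote_db")        # connection details are placeholders
        cur = conn.cursor()

        def dump_table(table_name):
            cur.execute(
                "SELECT column_name, data_type FROM information_schema.columns "
                "WHERE table_name = %s ORDER BY ordinal_position",
                (table_name,))
            schema = cur.fetchall()                        # [(name, type), ...] keeps the field types
            cur.execute("SELECT * FROM " + table_name)     # table_name must come from your own trusted list
            return schema, cur.fetchall()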

    Read the article

  • string manipulations, oops, how do I replace parts of a string

    - by Joe Gibson
    I am very new to Python. Could someone explain how I can manipulate a string like this? This function receives three inputs: complete_fmla has a string with digits and symbols but no hyphens ('-') nor spaces; partial_fmla has a combination of hyphens and possibly some digits or symbols, where the digits and symbols that are in it (other than hyphens) are in the same position as in the complete formula; symbol is one character. The output that should be returned is: if the symbol is not in the complete formula, or if the symbol is already in the partial formula, the function should return the same formula as the input partial_fmla. If the symbol is in the complete_fmla and not in the partial formula, the function should return the partial_fmla with the symbol substituting the hyphens in the positions where the symbol is, in all the occurrences of symbol in the complete_fmla. For example:

        generate_next_fmla('abcdeeaa', '- - - - - - - - ', 'd')  should return  '- - - d - - - -'
        generate_next_fmla('abcdeeaa', '- - - d - - - - ', 'e')  should return  '- - - d e e - -'
        generate_next_fmla('abcdeeaa', '- - - d e e - - ', 'a')  should return  'a - - d e e a a'

    Basically, I'm working with the definition:

        def generate_next_fmla(complete_fmla, partial_fmla, symbol):

    Do I turn them into lists and then append? Also, should I find out the index number for the symbol in the complete_fmla so that I know where to put it in the string with hyphens?
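
    A minimal sketch of the described function, assuming the two formulas are the same length and '-' marks an unknown position (the spaces shown in the examples above are treated as formatting only):

        def generate_next_fmla(complete_fmla, partial_fmla, symbol):
            if symbol not in complete_fmla or symbol in partial_fmla:
                return partial_fmla
            # Strings are immutable, so build a list of characters and join it back.
            result = list(partial_fmla)
            for i, ch in enumerate(complete_fmla):
                if ch == symbol:
                    result[i] = symbol
            return ''.join(result)

        print(generate_next_fmla('abcdeeaa', '---d----', 'e'))   # ---dee--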

    Read the article

  • What category of combinatorial problems appear on the logic games section of the LSAT?

    - by Merjit
    There's a category of logic problem on the LSAT that goes like this:

        Seven consecutive time slots for a broadcast, numbered in chronological order 1 through 7, will be filled by six song tapes - G, H, L, O, P, S - and exactly one news tape. Each tape is to be assigned to a different time slot, and no tape is longer than any other tape. The broadcast is subject to the following restrictions: L must be played immediately before O. The news tape must be played at some time after L. There must be exactly two time slots between G and P, regardless of whether G comes before P or whether G comes after P.

    I'm interested in generating a list of permutations that satisfy the conditions as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized the problem type as follows. Given an n-length array A: How many ways can a set of n unique items be arranged within A? (E.g. how many ways are there to rearrange ABCDEFG?) If the length of the set of unique items is less than the length of A, how many ways can the set be arranged within A if items in the set may occur more than once? (E.g. ABCDEF gives AABCDEF, ABBCDEF, etc.) How many ways can a set of unique items be arranged within A if the items of the set are subject to "blocking conditions"? My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations. Thoughts and suggestions are welcome.
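
    A brute-force sketch for the broadcast example, treating the news tape as a seventh item 'N' and filtering the 7! = 5040 permutations; it only shows the constraint encoding, not the general theory of "blocking conditions".

        from itertools import permutations

        tapes = ['G', 'H', 'L', 'O', 'P', 'S', 'N']

        def valid(order):
            return (order.index('O') == order.index('L') + 1            # L immediately before O
                    and order.index('N') > order.index('L')              # news tape after L
                    and abs(order.index('G') - order.index('P')) == 3)   # exactly two slots between G and P

        solutions = [p for p in permutations(tapes) if valid(p)]
        print(len(solutions))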

    Read the article

  • wsgi django not working

    - by MaKo
    I'm installing Django. The test for WSGI is OK, but when I point my default file to the Django test, it doesn't work. This is the test that works fine, in /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias example.com
            ServerAdmin [email protected]
            DocumentRoot /var/www
            <Directory /var/www/documents>
                Order allow,deny
                Allow from all
            </Directory>
            WSGIScriptAlias / /home/ubuntu/djangoProj/micopiloto/application.wsgi
            <Directory /home/ubuntu/djangoProj/mysitio/wsgi_handler.py>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    application.wsgi (in ~/djangoProj/micopiloto):

        import os
        import sys

        sys.path.append('/srv/www/cucus/application')
        os.environ['PYTHON_EGG_CACHE'] = '/srv/www/cucus/.python-egg'

        def application(environ, start_response):
            status = '200 OK'
            output = 'Hello World!MK SS9 tkt kkk'
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    But if I change the default to point to application_sa.wsgi, the Django test, it doesn't work :(

    application_sa.wsgi:

        import os, sys
        sys.path.append('/home/ubuntu/djangoProj/micopiloto')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'micopiloto.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    I restart the Apache server every time I change the wsgi file to test, so what am I missing? Thanks a lot!
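
    Since no traceback is given, a guess in sketch form: with DJANGO_SETTINGS_MODULE set to 'micopiloto.settings', the directory that contains the micopiloto package (its parent), not only the project directory itself, needs to be on sys.path.

        # application_sa.wsgi (sketch)
        import os
        import sys

        sys.path.append('/home/ubuntu/djangoProj')              # parent dir, so 'micopiloto' is importable
        sys.path.append('/home/ubuntu/djangoProj/micopiloto')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'micopiloto.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()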

    Read the article

  • How to return a variable for all tests to use in unittest

    - by chrissygormley
    Hello, I have a Python script and I am trying to set a variable so that if the first test fails, the rest of them will be set to fail. The script I have so far is:

        class Tests():
            def function:
                result function..........

            def errorHandle(self):
                return self.error

            def sudsPass(self):
                try:
                    result = self.client.service.GetStreamUri(self.stream, self.token)
                except suds.WebFault, e:
                    assert False
                except Exception, e:
                    pass
                finally:
                    if 'result' in locals():
                        self.error = True
                        self.errorHandle()
                        assert True
                    else:
                        self.error = False
                        self.errorHandle()
                        assert False

            def sudsFail(self):
                try:
                    result = self.client.service.GetStreamUri(self.stream, self.token)
                except suds.WebFault, e:
                    assert False
                except Exception, e:
                    pass
                finally:
                    if 'result' in locals() or self.error == False:
                        assert False
                    else:
                        assert True

        class GetStreamUri(TestGetStreamUri):
            def runTest(self):
                self.sudsPass()

        class GetStreamUriProtocolFail(TestGetStreamUri):
            def runTest(self):
                self.stream.Transport.Protocol = "NoValue"
                self.errorHandle()
                self.sudsFail()

        if __name__ == '__main__':
            unittest.main()

    I am trying to get self.error to be set to False if the first test fails. I understand that it is being set in another test, but I was hoping someone could help me find a solution to this problem using some other means. Thanks. PS: Please ignore the strange tests; there is a problem with the error handling at the moment.
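
    A sketch of one way to share a "first test failed" flag between tests: store it on the class and skip dependent tests when it is set. The service call is replaced by a placeholder, and the test names are illustrative.

        import unittest

        class StreamTests(unittest.TestCase):
            first_test_failed = False

            def _get_stream_uri(self):
                return True                  # placeholder for self.client.service.GetStreamUri(...)

            def test_01_get_stream_uri(self):
                try:
                    self.assertTrue(self._get_stream_uri())
                except Exception:
                    type(self).first_test_failed = True     # remembered on the class, not the instance
                    raise

            def test_02_protocol_fail(self):
                if self.first_test_failed:
                    self.skipTest("first test failed, skipping dependent test")
                self.assertTrue(self._get_stream_uri())

        if __name__ == '__main__':
            unittest.main()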

    Read the article

  • Creating multiple csv files from data within a csv file.

    - by S1syphus
    System: OS X or Linux. I'm trying to automate my workflow at work. Each week I receive an Excel file, which I convert to a CSV. An example is:

        ,,L1,,,L2,,,L3,,,L4,,,L5,,,L6,,,L7,,,L8,,,L9,,,L10,,,L11,
        Title,r/t,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst,needed,actual,Inst
        EXAMPLEfoo,60,6,6,6,0,0,0,0,0,0,6,6,6,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
        EXAMPLEbar,30,6,6,12,6,7,14,6,6,12,6,6,12,6,8,16,6,7,14,6,7.5,15,6,6,12,6,8,16,6,0,0,6,7,14
        EXAMPLE1,60,3,3,3,3,5,5,3,4,4,3,3,3,3,6,6,3,4,4,3,3,3,3,4,4,3,8,8,3,0,0,3,4,4
        EXAMPLE2,120,6,6,3,0,0,0,6,8,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
        EXAMPLE3,60,6,6,6,6,8,8,6,6,6,6,6,6,0,0,0,0,0,0,6,8,8,6,6,6,0,0,0,0,0,0,0,10,10
        EXAMPLE4,30,6,6,12,6,7,14,6,6,12,6,6,12,3,5.5,11,6,7.5,15,6,6,12,6,0,0,6,9,18,6,0,0,6,6.5,13

    What I need to do is create multiple CSV files, one for each instance in row 1: L1, L2, L3, L4... Each of those CSV files needs to contain the title, r/t and needed columns. So for L1 an example output would look like:

        EXAMPLEfoo,60,6
        EXAMPLEbar,30,6
        EXAMPLE1,60,3
        EXAMPLE2,120,6
        EXAMPLE3,60,6
        EXAMPLE4,30,6

    And for L2:

        EXAMPLEfoo,60,0
        EXAMPLEbar,30,6
        EXAMPLE1,60,3
        EXAMPLE2,120,0
        EXAMPLE3,60,6
        EXAMPLE4,30,6

    And so on. I have tried playing around with sed and awk and hit Google, but I have found nothing that really solves the issue. I'd imagine Perl would be particularly suited to this, or maybe Python, so I would be more than happy to accept suggestions from users. So, any suggestions? Thanks in advance.
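
    A sketch with Python's csv module, assuming the layout shown above: the L1...L11 labels sit in row 0 every third column starting at index 2, and data rows start at index 2. Input and output file names are illustrative.

        import csv

        with open('weekly.csv', newline='') as f:
            rows = list(csv.reader(f))

        labels = rows[0]
        for col in range(2, len(labels), 3):                 # L1 at column 2, L2 at column 5, ...
            name = labels[col]
            if not name:
                continue                                     # skip any trailing empty label
            with open(name + '.csv', 'w', newline='') as out:
                writer = csv.writer(out)
                for row in rows[2:]:                         # skip the two header rows
                    writer.writerow([row[0], row[1], row[col]])   # title, r/t, needed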

    Read the article

  • UnicodeEncodeError: 'ascii' codec can't encode character [...]

    - by user1461135
    I have read the HOWTO on Unicode from the official docs and a full, very detailed article as well. Still I don't get why it throws me this error. Here is what I attempt: I open an XML file that contains chars outside the ASCII range (but inside the allowed XML range). I do that with

        cfg = codecs.open(filename, encoding='utf-8', mode='r')

    which runs fine. Looking at the string with repr() also shows me a unicode string. Now I go ahead and read that with parseString(cfg.read().encode('utf-8')). Of course, my XML file starts with this:

        <?xml version="1.0" encoding="utf-8"?>

    Although I suppose it is not relevant, I also defined utf-8 for my Python script, but since I am not writing unicode characters directly in it, this should not apply here. Same for the following line, which also is right at the beginning:

        from __future__ import unicode_literals

    Next thing, I pass the generated object to my own class where I read tags into variables like this:

        xmldata.getElementsByTagName(tagName)[0].firstChild.data

    and assign it to a variable in my class. Now what works perfectly are these commands (obj is an instance of the class):

        for element in obj:
            print element

    And this command works as well:

        print obj.__repr__()

    I defined __iter__() to just yield every variable, while __repr__() uses the typical printf stuff: "%s" % self.varname. Both commands print perfectly and can output the unicode character. What does not work is this:

        print obj

    And now I am stuck, because this throws the dreaded

        UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 47

    So what am I missing? What am I doing wrong? I am looking for a general solution; I always want to handle strings as unicode, just to avoid any possible errors and write a compatible program. Edit: I also defined this:

        def __str__(self):
            return self.__repr__()

        def __unicode__(self):
            return self.__repr__()

    From documentation I got that this
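
    A Python 2 sketch of the usual fix: print obj calls __str__, which must return bytes, so encode the unicode representation explicitly instead of letting the default ascii codec attempt it. The class below is only a stand-in for the question's XML wrapper.

        # -*- coding: utf-8 -*-
        class XmlWrapper(object):
            def __init__(self, text):
                self.text = text                             # a unicode string, e.g. u'f\xfcr'

            def __unicode__(self):
                return u"text = '%s'" % self.text

            def __str__(self):                               # what `print obj` uses in Python 2
                return unicode(self).encode('utf-8')         # bytes out, so no implicit ascii encode

        print XmlWrapper(u'f\xfcr')                          # text = 'für' on a UTF-8 terminal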

    Read the article

  • Django install on a shared host, .htaccess help

    - by redconservatory
    I am trying to install Django on a shared host using the following instructions: docs.google.com/View?docid=dhhpr5xs_463522g My problem is with the following line in my root .htaccess:

        RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]

    When I include this line I get a 500 error with almost all of my domains on this account. My cgi-bin directory is home/my-username/public_html/cgi-bin/ and the wcgi.py file contains:

        #!/usr/local/bin/python
        import os, sys

        sys.path.insert(0, "/home/username/django/")
        sys.path.insert(0, "/home/username/django/projects")
        sys.path.insert(0, "/home/username/django/projects/newprojects")

        import django.core.handlers.wsgi

        os.chdir("/home/username/django/projects/newproject")  # optional
        os.environ['DJANGO_SETTINGS_MODULE'] = "newproject.settings"

        def runcgi():
            environ = dict(os.environ.items())
            environ['wsgi.input'] = sys.stdin
            environ['wsgi.errors'] = sys.stderr
            environ['wsgi.version'] = (1,0)
            environ['wsgi.multithread'] = False
            environ['wsgi.multiprocess'] = True
            environ['wsgi.run_once'] = True

            application = django.core.handlers.wsgi.WSGIHandler()

            if environ.get('HTTPS','off') in ('on','1'):
                environ['wsgi.url_scheme'] = 'https'
            else:
                environ['wsgi.url_scheme'] = 'http'

            headers_set = []
            headers_sent = []

            def write(data):
                if not headers_set:
                    raise AssertionError("write() before start_response()")
                elif not headers_sent:
                    # Before the first output, send the stored headers
                    status, response_headers = headers_sent[:] = headers_set
                    sys.stdout.write('Status: %s\r\n' % status)
                    for header in response_headers:
                        sys.stdout.write('%s: %s\r\n' % header)
                    sys.stdout.write('\r\n')
                sys.stdout.write(data)
                sys.stdout.flush()

            def start_response(status, response_headers, exc_info=None):
                if exc_info:
                    try:
                        if headers_sent:
                            # Re-raise original exception if headers sent
                            raise exc_info[0], exc_info[1], exc_info[2]
                    finally:
                        exc_info = None  # avoid dangling circular ref
                elif headers_set:
                    raise AssertionError("Headers already set!")
                headers_set[:] = [status, response_headers]
                return write

            result = application(environ, start_response)
            try:
                for data in result:
                    if data:  # don't send headers until body appears
                        write(data)
                if not headers_sent:
                    write('')  # send headers now if body was empty
            finally:
                if hasattr(result, 'close'):
                    result.close()

        runcgi()

    Only I changed the "username" to my username...

    Read the article

  • How to prevent BeautifulSoup from stripping lines

    - by Oli
    I'm trying to translate an online HTML page into text. I have a problem with this structure:

        <div align="justify"><b>Available in
        <a href="http://www.example.com.be/book.php?number=1">French</a> and
        <a href="http://www.example.com.be/book.php?number=5">English</a>.
        </div>

    Here is its representation as a Python string:

        '<div align="justify"><b>Available in \r\n<a href="http://www.example.com.be/book.php?number=1">\r\nFrench</a>; \r\n<a href="http://www.example.com.be/book.php?number=5">\r\nEnglish</a>.\r\n</div>'

    When using:

        html_content = get_html_div_from_above()
        para = BeautifulSoup(html_content)
        txt = para.text

    BeautifulSoup translates it (in the txt variable) as:

        u'Available inFrenchandEnglish.'

    It probably strips each line in the original HTML string. Do you have a clean solution to this problem? Thanks.
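
    A sketch using BeautifulSoup 4, where get_text() takes a separator so the text nodes keep a space between them (BS4 is an assumption; para.text in the question suggests the older BeautifulSoup 3 API):

        from bs4 import BeautifulSoup

        html_content = ('<div align="justify"><b>Available in \r\n'
                        '<a href="http://www.example.com.be/book.php?number=1">\r\nFrench</a> and \r\n'
                        '<a href="http://www.example.com.be/book.php?number=5">\r\nEnglish</a>.\r\n</div>')

        soup = BeautifulSoup(html_content, 'html.parser')
        txt = ' '.join(soup.get_text(separator=' ').split())   # collapse the leftover whitespace runs
        print(txt)                                              # Available in French and English .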

    Read the article

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains the fingerprint, while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34    "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically the following operations on the database: inserting/updating a record, and retrieving a record according to a match on the fingerprint. The table definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               (","+newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra but have little knowledge of it. Please suggest a better way to tackle this problem.
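
    A sketch of the usual normalisation: one (fp, doc_id) row per pairing with a composite primary key, so inserts are idempotent and lookups hit an index instead of rewriting a growing TEXT blob. Table and column names are illustrative; self.cursor follows the question's snippets.

        self.cursor.execute(
            "CREATE TABLE IF NOT EXISTS `fingerprint_doc` ("
            " fp BIGINT NOT NULL,"
            " doc_id VARCHAR(64) NOT NULL,"
            " PRIMARY KEY (fp, doc_id))")

        # insert/update collapses to one idempotent statement (INSERT IGNORE is MySQL-specific)
        self.cursor.execute(
            "INSERT IGNORE INTO `fingerprint_doc` (fp, doc_id) VALUES (%s, %s)",
            (thisFP, newDocId))

        # retrieval: every document sharing a fingerprint
        self.cursor.execute("SELECT doc_id FROM `fingerprint_doc` WHERE fp = %s", (thisFP,))
        docs = [row[0] for row in self.cursor.fetchall()]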

    Read the article

  • Installing django on dreamhost (help a newb out)

    - by augustfirst
    I'm trying to get Django running on my DreamHost account. I've been trying to sort of use two tutorials at once: the one on the DreamHost wiki and the one in the Django book. I installed Django using the script on the wiki page, but I ran into trouble immediately while trying to work through the Django book. It says:

        To start the server, change into your project directory (cd mysite), if you haven't already, and run this command: python manage.py runserver. This launches the server locally, on port 8000, accessible only to connections from your own computer. Now that it's running, visit 127.0.0.1:8000 with your Web browser. You'll see a "Welcome to Django" page shaded in a pleasant pastel blue. It worked!

    Those instructions seem to assume that you're developing locally, not on a shared server. Where the heck am I supposed to look for the "Welcome to Django" page after starting the server? In my webroot? No dice. Anyway, I tried to blunder ahead through the Django book to its hello world tutorial (chapter 3). But once I've edited the view file and the URLconf, I don't get a nice clean "hello world" text. Instead (as you can see) I get an "import error". Any help would be greatly appreciated.

    Read the article

  • Is www.example.com/post/21/edit a RESTful URI? I think I know the answer, but have another question.

    - by tmadsen
    I'm almost afraid to post this question; there has to be an obvious answer I've overlooked, but here I go. Context: I am creating a blog for educational purposes (I want to learn Python and web.py). I've decided that my blog will have posts, so I've created a Post class. I've also decided that posts can be created, read, updated, or deleted (so CRUD). So in my Post class, I've created methods that respond to the POST, GET, PUT, and DELETE HTTP methods. So far so good. The current problem I'm having is a conceptual one. I know that sending a PUT HTTP message (with an edited post) to, e.g., /post/52 should update post 52 with the body contents of the HTTP message. What I do not know is how to conceptually correctly serve the (HTML) edit page. Will doing it like this: /post/52/edit violate the idea of a URI, as 'edit' is not a resource but an action? On the other side, though, could it be considered a resource since all that URI will respond to is a GET method that will only return an HTML page? So my ultimate question is this: how do I serve an HTML page intended for user editing in a RESTful manner?
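
    A sketch in web.py (the framework named in the question) of one common answer: treat the edit form itself as a GET-able resource at /post/<id>/edit, while the update still goes to PUT /post/<id>. Class names and bodies are placeholders.

        import web

        urls = (
            r'/post/(\d+)', 'Post',               # GET -> read, PUT -> update, DELETE -> delete
            r'/post/(\d+)/edit', 'PostEditForm',  # GET -> the HTML form used to edit
        )

        class Post(object):
            def GET(self, post_id):
                return 'post %s' % post_id            # placeholder: render the post
            def PUT(self, post_id):
                body = web.data()                     # the edited content sent by the client
                return 'updated %s' % post_id         # placeholder: save the edit

        class PostEditForm(object):
            def GET(self, post_id):
                return '<form>...</form>'             # placeholder: the edit form for post_id

        if __name__ == '__main__':
            web.application(urls, globals()).run()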

    Read the article

  • Fetching just the Key/id from a ReferenceProperty in App Engine

    - by ozone
    Hi SO, I could use a little help in App Engine land... Using the [Python] API I create relationships like this example from the docs:

        class Author(db.Model):
            name = db.StringProperty()

        class Story(db.Model):
            author = db.ReferenceProperty(Author)

        story = db.get(story_key)
        author_name = story.author.name

    As I understand it, that example will make two datastore queries: one to fetch the Story and then one to dereference the Author in order to access the name. But I want to be able to fetch the id, so do something like:

        story = db.get(story_key)
        author_id = story.author.key().id()

    I want to just get the id from the reference. I do not want to have to dereference (and therefore query the datastore for) the ReferenceProperty value. From reading the documentation, it says that the value of a ReferenceProperty is a Key, which leads me to think that I could just call .id() on the reference's value. But it also says: "The ReferenceProperty model provides features for Key property values such as automatic dereferencing." I can't find anything that explains when this dereferencing takes place. Is it safe to call .id() on the ReferenceProperty's value? Can it be assumed that calling .id() will not cause a datastore lookup?
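
    For reference, a short sketch of the property-class call that reads the stored Key without dereferencing; get_value_for_datastore is part of the db API, so the .id() call here never triggers a second fetch (model names follow the question):

        story = db.get(story_key)
        author_key = Story.author.get_value_for_datastore(story)   # a db.Key, no datastore round trip
        author_id = author_key.id()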

    Read the article

  • Operating on rows and then on columns of a matrix produces code duplication

    - by Chetan
    I have the following (Python) code to check if there are any rows or columns that contain the same value:

        # Test rows -> Check each row for a win
        for i in range(self.height):                  # For each row ...
            firstValue = None                         # Initialize first value placeholder
            for j in range(self.width):               # For each value in the row
                if (j == 0):                          # If it's the first value ...
                    firstValue = b[i][j]              # Remember it
                else:                                 # Otherwise ...
                    if b[i][j] != firstValue:         # If this is not the same as the first value ...
                        firstValue = None             # Reset first value
                        break                         # Stop checking this row, there's no win here
            if (firstValue != None):                  # If first value has been set
                # First value placeholder now holds the winning player's code
                return firstValue                     # Return it

        # Test columns -> Check each column for a win
        for i in range(self.width):                   # For each column ...
            firstValue = None                         # Initialize first value placeholder
            for j in range(self.height):              # For each value in the column
                if (j == 0):                          # If it's the first value ...
                    firstValue = b[j][i]              # Remember it
                else:                                 # Otherwise ...
                    if b[j][i] != firstValue:         # If this is not the same as the first value ...
                        firstValue = None             # Reset first value
                        break                         # Stop checking this column, there's no win here
            if (firstValue != None):                  # If first value has been set
                # First value placeholder now holds the winning player's code
                return firstValue                     # Return it

    Clearly, there is a lot of code duplication here. How do I refactor this code? Thanks!
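
    A sketch of one refactoring: extract a helper that checks an arbitrary sequence of board coordinates, then feed it rows and then columns. It assumes the board is reachable as self.b (the snippet above uses a bare b).

        def _winner_of(self, cells):
            values = {self.b[r][c] for r, c in cells}
            return values.pop() if len(values) == 1 else None      # one distinct value -> a win

        def check_rows_and_columns(self):
            for i in range(self.height):                           # rows
                winner = self._winner_of((i, j) for j in range(self.width))
                if winner is not None:
                    return winner
            for j in range(self.width):                            # columns
                winner = self._winner_of((i, j) for i in range(self.height))
                if winner is not None:
                    return winner
            return None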

    Read the article

  • Ways of breaking down SQL transactional/call data into reports -- 'square data'?

    - by RizwanK
    I've got a large database of call-traffic information (although the question could be answered with any generic data set). For instance, a row contains: call endpoint server (endpoint_name), call endpoint status (sip_disconnect_reason), call destination (destination), call completed (duration) [duration > 0 is completed], and call account group (account_group). It's pretty easy to run SQL reports against the data, i.e.:

        select count(*), endpoint_name from calls where duration > 0 group by endpoint_name
        select count(*), destination from calls where blah group by destination

    I've been calling these filtering or breakdown reports (I get the number of calls per carrier, etc.). Add another breakdown, and you've got two breakdowns, a la:

        select count(*), endpoint_name, sip_disconnect_reason from calls where duration = 0 group by endpoint_name, sip_disconnect_reason

    Of course, if you keep adding breakdowns, you end up making super-large reports and slicing your data so thin that you can't extract any trends from it. So my question is this: is there a name for this sort of method of report writing? (I've heard words like squares, slicing, and breakdown reports applied to them.) I'm looking for a Python/reporting toolkit that I can use to make these easier to generate for my end users. Aside: are there other ways of representing transactional data that might be useful rather than the above method? Thanks.
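
    One Python toolkit worth naming here is pandas, whose groupby and crosstab cover exactly these one- and two-way breakdowns (often called pivot or cross-tab reports); a sketch with made-up sample rows:

        import pandas as pd

        calls = pd.DataFrame({
            'endpoint_name': ['sip1', 'sip1', 'sip2', 'sip2'],
            'sip_disconnect_reason': ['busy', 'timeout', 'busy', 'ok'],
            'duration': [0, 0, 0, 42],
        })

        # one breakdown: failed calls per endpoint
        print(calls[calls.duration == 0].groupby('endpoint_name').size())

        # two breakdowns shown as a "square" rather than a long grouped list
        print(pd.crosstab(calls.endpoint_name, calls.sip_disconnect_reason))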

    Read the article

  • Django: DatabaseError column does not exist

    - by Rosarch
    I'm having a problem with Django 1.2.4. Here is a model:

        class Foo(models.Model):
            # ...
            ftw = models.CharField(blank=True)
            bar = models.ForeignKey(Bar)

    Right after flushing the database, I use the shell:

        Python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39)
        [GCC 4.4.5] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        (InteractiveConsole)
        >>> from apps.foo.models import Foo
        >>> Foo.objects.all()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 67, in __repr__
            data = list(self[:REPR_OUTPUT_SIZE + 1])
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 82, in __len__
            self._result_cache.extend(list(self._iter))
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 271, in iterator
            for row in compiler.results_iter():
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 677, in results_iter
            for rows in self.execute_sql(MULTI):
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 732, in execute_sql
            cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 15, in execute
            return self.cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
            return self.cursor.execute(query, args)
        DatabaseError: column foo_foo.bar_id does not exist
        LINE 1: ...t_omg", "foo_foo"."ftw", "foo_foo...

    What am I doing wrong here?

    Read the article

  • why does cx_oracle execute() not like my string now?

    - by Frank Stallone
    I downloaded cx_Oracle some time ago and wrote a script to convert data to XML. I've had to reinstall my OS and grabbed the latest version of cx_Oracle (5.0.3), and all of a sudden my code is broken. The first thing was that cx_Oracle.connect wanted unicode rather than str for the username and password; that was very easy to fix. But now it keeps failing on cursor.execute and tells me my string is not a string, even when type() tells me it is a string. Here is a test script I used ages ago that worked fine on my old version but does not work on cx_Oracle now:

        import cx_Oracle

        ip = 'url.to.oracle'
        port = 1521
        SID = 'mysid'
        dsn_tns = cx_Oracle.makedsn(ip, port, SID)

        connection = cx_Oracle.connect(u'name', u'pass', dsn_tns)
        cursor = connection.cursor()
        cursor.arraysize = 50

        sql = "select isbn, title_code from core_isbn where rownum<=20"
        print type(sql)
        cursor.execute(sql)

        for isbn, title_code in cursor.fetchall():
            print "Values from DB:", isbn, title_code

        cursor.close()
        connection.close()

    When I run that I get:

        Traceback (most recent call last):
          File "C:\NetBeansProjects\Python\src\db_temp.py", line 48, in <module>
            cursor.execute(sql)
        TypeError: expecting None or a string

    Does anyone know what I may be doing wrong?
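
    A sketch of the workaround that usually applies when a cx_Oracle 5.x build is in its unicode mode and starts rejecting byte strings: pass the SQL text (and bind values) as unicode, just like the credentials. Whether this particular build is affected is an assumption.

        sql = u"select isbn, title_code from core_isbn where rownum <= 20"
        cursor.execute(sql)

        for isbn, title_code in cursor.fetchall():
            print "Values from DB:", isbn, title_code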

    Read the article
