Search Results

Search found 27655 results on 1107 pages for 'visual python'.


  • How to implement full text search in Django?

    - by Jannis
    I would like to implement a search function in a Django blogging application. The status quo is that I have a list of strings supplied by the user, and the queryset is narrowed down by each string to include only those objects that match the string. See:

        if request.method == "POST":
            form = SearchForm(request.POST)
            if form.is_valid():
                posts = Post.objects.all()
                for string in form.cleaned_data['query'].split():
                    posts = posts.filter(
                        Q(title__icontains=string) |
                        Q(text__icontains=string) |
                        Q(tags__name__exact=string)
                    )
                return archive_index(request, queryset=posts, date_field='date')

    Now, what if I didn't want to combine each word that is searched for with a logical AND but with a logical OR? How would I do that? Is there a way to do that with Django's own QuerySet methods, or does one have to fall back to raw SQL queries? In general, is it a proper solution to do full-text search like this, or would you recommend using a search engine like Solr, Whoosh or Xapian? What are their benefits? Thanks for taking the time.
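
    One way to OR the per-word conditions together is to build a single Q object with reduce and the | operator. A minimal sketch, assuming the same Post model and SearchForm as above, and at least one search word:

        import operator
        from functools import reduce
        from django.db.models import Q

        words = form.cleaned_data['query'].split()
        # one Q object that matches a post containing any of the words in any field
        query = reduce(operator.or_, (
            Q(title__icontains=w) | Q(text__icontains=w) | Q(tags__name__exact=w)
            for w in words
        ))
        posts = Post.objects.filter(query).distinct()

    The .distinct() call is there because the join on tags can otherwise return the same post more than once.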

  • Email Validation from WTForm using Flask

    - by lost9123193
    I'm following a Flask tutorial from http://code.tutsplus.com/tutorials/intro-to-flask-adding-a-contact-page--net-28982 and am currently stuck on the validation step. The old version had the following:

        from flask.ext.wtf import Form, TextField, TextAreaField, SubmitField, validators, ValidationError

        class ContactForm(Form):
            name = TextField("Name", [validators.Required("Please enter your name.")])
            email = TextField("Email", [validators.Required("Please enter your email address."),
                                        validators.Email("Please enter your email address.")])
            submit = SubmitField("Send")

    Reading the comments, I updated it to this (replaced validators.Required with InputRequired, same fields):

        class ContactForm(Form):
            name = TextField("Name", validators=[InputRequired('Please enter your name.')])
            email = EmailField("Email", validators=[InputRequired("Please enter your email address.")]), validators.Email("Please enter your email address.")])
            submit = SubmitField("Send")

    My only issue is I don't know what to do with validators.Email. The error message I get is:

        NameError: name 'validators' is not defined

    I've looked over the documentation; perhaps I didn't delve deep enough, but I can't seem to find an example for email validation.
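
    A minimal sketch of how this is usually resolved, assuming a reasonably recent Flask-WTF/WTForms split where the fields and validators are imported from wtforms rather than flask.ext.wtf:

        from flask_wtf import FlaskForm
        from wtforms import StringField, SubmitField
        from wtforms.validators import InputRequired, Email

        class ContactForm(FlaskForm):
            name = StringField("Name", validators=[InputRequired("Please enter your name.")])
            # once Email is imported by name, no "validators." prefix is needed
            email = StringField("Email", validators=[InputRequired("Please enter your email address."),
                                                     Email("Please enter your email address.")])
            submit = SubmitField("Send")

    The NameError simply means the validators module was never imported in the updated code; importing InputRequired and Email directly (or importing wtforms.validators itself) makes it go away.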

  • Why is my implementation of the Sieve of Atkin overlooking numbers close to the specified limit?

    - by Ross G
    My implementation either overlooks primes near the limit or composites near the limit, while some limits work and others don't. I am completely confused as to what is wrong.

        import math

        def AtkinSieve(limit):
            results = [2, 3, 5]
            sieve = [False] * limit
            factor = int(math.sqrt(limit))
            for i in range(1, factor):
                for j in range(1, factor):
                    n = 4*i**2 + j**2
                    if (n <= limit) and (n % 12 == 1 or n % 12 == 5):
                        sieve[n] = not sieve[n]
                    n = 3*i**2 + j**2
                    if (n <= limit) and (n % 12 == 7):
                        sieve[n] = not sieve[n]
                    if i > j:
                        n = 3*i**2 - j**2
                        if (n <= limit) and (n % 12 == 11):
                            sieve[n] = not sieve[n]
            for index in range(5, factor):
                if sieve[index]:
                    for jndex in range(index**2, limit, index**2):
                        sieve[jndex] = False
            for index in range(7, limit):
                if sieve[index]:
                    results.append(index)
            return results

    For example, when I generate primes up to a limit of 1000, the Atkin sieve misses the prime 997 but includes the composite 965. But if I generate up to a limit of 5000, the list it returns is completely correct.
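
    A likely explanation, offered as a reading of the code above rather than something stated in the post: int(math.sqrt(limit)) truncates, and range(1, factor) stops one short of factor, so candidate pairs where i or j equals floor(sqrt(limit)) are never toggled. For limit = 1000 that bound is 31, and 997 = 4*3**2 + 31**2 is the only representation of 997 by the quadratic forms, so 997 is never marked; likewise 965 = 4*1**2 + 31**2 misses one of its two toggles and is left marked as prime. A sketch of the adjusted bounds (also tightening the guard to n < limit so sieve[n] stays in range):

        factor = int(math.sqrt(limit))
        # include i == factor and j == factor in the search
        for i in range(1, factor + 1):
            for j in range(1, factor + 1):
                n = 4*i**2 + j**2
                if n < limit and (n % 12 == 1 or n % 12 == 5):
                    sieve[n] = not sieve[n]
                # ... the 3*i**2 + j**2 and 3*i**2 - j**2 cases change the same way

        # the square-free elimination pass needs the same inclusive bound
        for index in range(5, factor + 1):
            if sieve[index]:
                for jndex in range(index**2, limit, index**2):
                    sieve[jndex] = False

    Larger limits such as 5000 presumably work because every prime near that limit also happens to have a representation with both i and j strictly below the truncated square root.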

  • vectorizing a for loop in numpy/scipy?

    - by user248237
    I'm trying to vectorize a for loop that I have inside of a class method. The for loop has the following form: it iterates through a bunch of points and, depending on whether a certain variable (called "self.condition_met" below) is true, calls a pair of functions on the point and adds the result to a list. Each point here is an element in a vector of lists, i.e. a data structure that looks like array([[1,2,3], [4,5,6], ...]). Here is the problematic function:

        class myClass:
            def my_inefficient_method(self):
                final_vector = []
                # Assume 'my_vector' and 'my_other_vector' are defined numpy arrays
                for point in all_points:
                    if not self.condition_met:
                        a = self.my_func1(point, my_vector)
                        b = self.my_func2(point, my_other_vector)
                    else:
                        a = self.my_func3(point, my_vector)
                        b = self.my_func4(point, my_other_vector)
                    c = a + b
                    final_vector.append(c)
                # Choose random element from resulting vector 'final_vector'

    self.condition_met is set before my_inefficient_method is called, so it seems unnecessary to check it each time, but I am not sure how to write this better. Since there are no destructive operations here, it seems like I could rewrite this entire thing as a vectorized operation -- is that possible? Any ideas how to do this?
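
    A sketch of one restructuring, assuming the helper functions are the poster's own and can be written to accept the whole (N, 3) array of points at once (which the original snippet does not show):

        import numpy as np

        def my_method(self):
            # self.condition_met does not change during the loop, so pick the pair once
            if not self.condition_met:
                func_a, func_b = self.my_func1, self.my_func2
            else:
                func_a, func_b = self.my_func3, self.my_func4

            # if func_a and func_b operate on whole arrays, the Python-level loop disappears
            final_vector = func_a(all_points, my_vector) + func_b(all_points, my_other_vector)

            # choose a random element from the result
            return final_vector[np.random.randint(len(final_vector))]

    If the helpers cannot be made to accept arrays, hoisting the condition check out of the loop is still worthwhile, but the loop itself remains.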

  • Difference between Setting.settings and web.config?

    - by Muneeb
    This might sound a bit dumb. I always had the impression that web.config should store all settings which are subject to change post-build, and Settings.settings should have the ones which may change pre-build. But I have seen projects which had, for example, the connection string in Settings.settings. Connection strings should always be in web.config, shouldn't they? I am interested in an answer from a design perspective. Just a bit of background: my current scenario is that I am developing a web application with all three tiers abstracted into three separate Visual Studio projects, so every tier has its own .settings and .config file.

  • How do I use m2crypto to validate a X509 certificate chain in a non-SSL setting

    - by Brock Pytlik
    I'm trying to figure out how to, using m2crypto, validate the chain of trust from a public key version of an X509 certificate back to one of a set of known root CAs, when the chain may be arbitrarily long. The SSL.Context module looks promising, except that I'm not doing this in the context of an SSL connection and I can't see how the information passed to load_verify_locations is used. Essentially, I'm looking for the interface that's equivalent to:

        openssl verify pub_key_x509_cert

    Is there something like that in m2crypto? Thanks.
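
    Not an m2crypto-native answer, but as a stopgap one can shell out to the very openssl command mentioned above. A sketch with placeholder file names, treating the "<cert>: OK" output as the success signal (older openssl builds return exit code 0 even on verification failure, so the output is the more reliable check):

        import subprocess

        def verify_cert_chain(cert_path, ca_bundle_path):
            # ca_bundle_path is a PEM file holding the trusted root (and intermediate) CAs
            proc = subprocess.Popen(
                ["openssl", "verify", "-CAfile", ca_bundle_path, cert_path],
                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            out, _ = proc.communicate()
            return ": OK" in out.decode("ascii", "replace")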

  • queries in django

    - by Hulk
    How do I query Employee to get all the addresses related to an employee? Employee.Add.all() does not work.

        class Employee():
            Add = models.ManyToManyField(Address)
            parent = models.ManyToManyField(Parent, blank=True, null=True)

        class Address(models.Model):
            address_emp = models.CharField(max_length=512)
            description = models.TextField()

            def __unicode__(self):
                return self.name()
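
    A sketch of how this usually looks, keeping the field names from the snippet above: the model has to inherit from models.Model, Address has to be defined (or referenced by name) before the ManyToManyField that points at it, and the related manager is accessed on an instance rather than on the class.

        from django.db import models

        class Address(models.Model):
            address_emp = models.CharField(max_length=512)
            description = models.TextField()

        class Employee(models.Model):              # note: inherits from models.Model
            Add = models.ManyToManyField(Address)

        # query through an instance, not through the class
        employee = Employee.objects.get(pk=1)
        addresses = employee.Add.all()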

  • How to optimize my PageRank calculation?

    - by asmaier
    In the book Programming Collective Intelligence I found the following function to compute the PageRank:

        def calculatepagerank(self, iterations=20):
            # clear out the current PageRank tables
            self.con.execute("drop table if exists pagerank")
            self.con.execute("create table pagerank(urlid primary key,score)")
            self.con.execute("create index prankidx on pagerank(urlid)")
            # initialize every url with a PageRank of 1.0
            self.con.execute("insert into pagerank select rowid,1.0 from urllist")
            self.dbcommit()
            for i in range(iterations):
                print "Iteration %d" % i
                for (urlid,) in self.con.execute("select rowid from urllist"):
                    pr = 0.15
                    # Loop through all the pages that link to this one
                    for (linker,) in self.con.execute(
                            "select distinct fromid from link where toid=%d" % urlid):
                        # Get the PageRank of the linker
                        linkingpr = self.con.execute(
                            "select score from pagerank where urlid=%d" % linker).fetchone()[0]
                        # Get the total number of links from the linker
                        linkingcount = self.con.execute(
                            "select count(*) from link where fromid=%d" % linker).fetchone()[0]
                        pr += 0.85 * (linkingpr / linkingcount)
                    self.con.execute("update pagerank set score=%f where urlid=%d" % (pr, urlid))
                self.dbcommit()

    However, this function is very slow, because of all the SQL queries in every iteration:

        >>> import cProfile
        >>> cProfile.run("crawler.calculatepagerank()")
                 2262510 function calls in 136.006 CPU seconds

           Ordered by: standard name

           ncalls  tottime  percall  cumtime  percall filename:lineno(function)
                1    0.000    0.000  136.006  136.006 <string>:1(<module>)
                1   20.826   20.826  136.006  136.006 searchengine.py:179(calculatepagerank)
               21    0.000    0.000    0.528    0.025 searchengine.py:27(dbcommit)
               21    0.528    0.025    0.528    0.025 {method 'commit' of 'sqlite3.Connection' objects}
                1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
          1339864  112.602    0.000  112.602    0.000 {method 'execute' of 'sqlite3.Connection' objects}
           922600    2.050    0.000    2.050    0.000 {method 'fetchone' of 'sqlite3.Cursor' objects}
                1    0.000    0.000    0.000    0.000 {range}

    So I optimized the function and came up with this:

        def calculatepagerank2(self, iterations=20):
            # clear out the current PageRank tables
            self.con.execute("drop table if exists pagerank")
            self.con.execute("create table pagerank(urlid primary key,score)")
            self.con.execute("create index prankidx on pagerank(urlid)")
            # initialize every url with a PageRank of 1.0
            self.con.execute("insert into pagerank select rowid,1.0 from urllist")
            self.dbcommit()

            inlinks = {}
            numoutlinks = {}
            pagerank = {}
            for (urlid,) in self.con.execute("select rowid from urllist"):
                inlinks[urlid] = []
                numoutlinks[urlid] = 0
                # Initialize pagerank vector with 1.0
                pagerank[urlid] = 1.0
                # Loop through all the pages that link to this one
                for (inlink,) in self.con.execute(
                        "select distinct fromid from link where toid=%d" % urlid):
                    inlinks[urlid].append(inlink)
                # get number of outgoing links from a page
                numoutlinks[urlid] = self.con.execute(
                    "select count(*) from link where fromid=%d" % urlid).fetchone()[0]

            for i in range(iterations):
                print "Iteration %d" % i
                for urlid in pagerank:
                    pr = 0.15
                    for link in inlinks[urlid]:
                        linkpr = pagerank[link]
                        linkcount = numoutlinks[link]
                        pr += 0.85 * (linkpr / linkcount)
                    pagerank[urlid] = pr

            for urlid in pagerank:
                self.con.execute("update pagerank set score=%f where urlid=%d" % (pagerank[urlid], urlid))
            self.dbcommit()

    This function is 20 times faster (but uses a lot more memory for all the temporary dictionaries), because it avoids the unnecessary SQL queries in every iteration:

        >>> cProfile.run("crawler.calculatepagerank2()")
                 64802 function calls in 6.950 CPU seconds

           Ordered by: standard name

           ncalls  tottime  percall  cumtime  percall filename:lineno(function)
                1    0.004    0.004    6.950    6.950 <string>:1(<module>)
                1    1.004    1.004    6.946    6.946 searchengine.py:207(calculatepagerank2)
                2    0.000    0.000    0.104    0.052 searchengine.py:27(dbcommit)
            23065    0.012    0.000    0.012    0.000 {method 'append' of 'list' objects}
                2    0.104    0.052    0.104    0.052 {method 'commit' of 'sqlite3.Connection' objects}
                1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
            31298    5.809    0.000    5.809    0.000 {method 'execute' of 'sqlite3.Connection' objects}
            10431    0.018    0.000    0.018    0.000 {method 'fetchone' of 'sqlite3.Cursor' objects}
                1    0.000    0.000    0.000    0.000 {range}

    But is it possible to further reduce the number of SQL queries to speed up the function even more?
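
    One further reduction worth sketching, assuming the same urllist and link tables as above: nearly all of the remaining execute calls come from the two per-URL setup queries, which can be replaced by a single scan of the link table, and the final per-URL updates can become one executemany. This is a sketch, not code from the book:

        def calculatepagerank3(self, iterations=20):
            # table setup unchanged from the versions above
            self.con.execute("drop table if exists pagerank")
            self.con.execute("create table pagerank(urlid primary key,score)")
            self.con.execute("create index prankidx on pagerank(urlid)")
            self.con.execute("insert into pagerank select rowid,1.0 from urllist")
            self.dbcommit()

            inlinks = {}
            numoutlinks = {}
            pagerank = {}
            for (urlid,) in self.con.execute("select rowid from urllist"):
                inlinks[urlid] = []
                numoutlinks[urlid] = 0
                pagerank[urlid] = 1.0

            # one pass over the whole link table instead of two queries per URL
            # (assumes each (fromid, toid) pair is stored only once)
            for (fromid, toid) in self.con.execute("select fromid, toid from link"):
                if toid in inlinks:
                    inlinks[toid].append(fromid)
                if fromid in numoutlinks:
                    numoutlinks[fromid] += 1

            for i in range(iterations):
                print "Iteration %d" % i
                for urlid in pagerank:
                    pr = 0.15
                    for link in inlinks[urlid]:
                        pr += 0.85 * pagerank[link] / numoutlinks[link]
                    pagerank[urlid] = pr

            # one parameterized statement for all rows instead of one update per URL
            self.con.executemany("update pagerank set score=? where urlid=?",
                                 [(score, urlid) for urlid, score in pagerank.items()])
            self.dbcommit()

    With this layout the number of execute calls no longer grows with the number of URLs or iterations; the work moves into a handful of whole-table statements.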

  • Iterating through a JSON object.

    - by user327508
    [ { "title": "Baby (Feat. Ludacris) - Justin Bieber", "description": "Baby (Feat. Ludacris) by Justin Bieber on Grooveshark", "link": "http://listen.grooveshark.com/s/Baby+Feat+Ludacris+/2Bqvdq", "pubDate": "Wed, 28 Apr 2010 02:37:53 -0400", "pubTime": 1272436673, "TinyLink": "http://tinysong.com/d3wI", "SongID": "24447862", "SongName": "Baby (Feat. Ludacris)", "ArtistID": "1118876", "ArtistName": "Justin Bieber", "AlbumID": "4104002", "AlbumName": "My World (Part II);\nhttp://tinysong.com/gQsw", "LongLink": "11578982", "GroovesharkLink": "11578982", "Link": "http://tinysong.com/d3wI" }, { "title": "Feel Good Inc - Gorillaz", "description": "Feel Good Inc by Gorillaz on Grooveshark", "link": "http://listen.grooveshark.com/s/Feel+Good+Inc/1UksmI", "pubDate": "Wed, 28 Apr 2010 02:25:30 -0400", "pubTime": 1272435930 } ] That is the current JSON object I have. I am now trying to iterate through it to get the import stuff like title and link. This is where I am having trouble I cant seem to get to the content that is past the ":" i tried doing dictionary way couldn't get it. def getLastSong(user,limit): base_url = 'http://gsuser.com/lastSong/' user_url = base_url + str(user) + '/' + str(limit) + "/" raw = urllib.urlopen(user_url) json_raw= raw.readlines() json_object = json.loads(json_raw[0]) #filtering and making it look good. gsongs = [] print json_object for song in json_object[0]: print song This code prints all the information before ":" Please help. ignore the Justin Bieber track :)

  • Problem with hash function: hash(1) == hash(1.0)

    - by mtasic
    I have a dict with ints, floats and strings as keys, but the problem is that when a is an int, b is a float, and float(a) == b, then their hash values are the same, and that's what I do NOT want, because I need unique hash values for these cases in order to get the corresponding values. Example:

        d = {1: '1', 1.0: '1.0', '1': 1, '1.0': 1.0}
        d[1]     == '1.0'
        d[1.0]   == '1.0'
        d['1']   == 1
        d['1.0'] == 1.0

    What I need is:

        d = {1: '1', 1.0: '1.0', '1': 1, '1.0': 1.0}
        d[1]     == '1'
        d[1.0]   == '1.0'
        d['1']   == 1
        d['1.0'] == 1.0
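
    A workaround sketch, not taken from the post itself: since 1 == 1.0 in Python, the dict treats them as the same key regardless of hashing, so the key has to carry the type explicitly, for example as a (type, value) tuple.

        d = {}

        def put(d, key, value):
            d[(type(key), key)] = value

        def get(d, key):
            return d[(type(key), key)]

        put(d, 1, '1')
        put(d, 1.0, '1.0')
        print get(d, 1)      # '1'
        print get(d, 1.0)    # '1.0'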

  • Disco/MapReduce: Using results of previous iteration as input to new iteration

    - by muckabout
    I am currently implementing PageRank on Disco. As an iterative algorithm, the results of one iteration are used as input to the next iteration. I have a large file which represents all the links, with each row representing a page and the values in the row representing the pages to which it links. For Disco, I break this file into N chunks, then run MapReduce for one round. As a result, I get a set of (page, rank) tuples. I'd like to feed this rank to the next iteration. However, now my mapper needs two inputs: the graph file and the pageranks. I would like to "zip" together the graph file and the page ranks, such that each line represents a page, its rank, and its out-links. Since the graph file is separated into N chunks, I need to split the pagerank vector into N parallel chunks and zip the regions of the pagerank vector to the graph chunks. This all seems more complicated than necessary, and as a pretty straightforward operation (with the quintessential MapReduce algorithm), it seems I'm missing something about Disco that could really simplify the approach. Any thoughts?

  • Deploying Pylons with Nginx reverse proxy?

    - by resopollution
    Is there a tutorial on how to deploy Pylons with Nginx? I've been able to start nginx and then serve Pylons to :8080 with

        paster serve development.ini

    However, I can't seem to do other stuff, as Pylons locks me into that serve mode. If I try to CTRL+Z out of Pylons serving to do other stuff on my server, Pylons goes down. There must be a different method of deployment. PS - I've done all of this: http://wiki.pylonshq.com/display/pylonscookbook/Running+Pylons+with+NGINX?showComments=true#comments I just have no clue what to do with the Pylons app other than paster serve. Not sure if there is a different method.

  • Django urls on json request

    - by Hulk
    When making a Django request through JSON, as in:

        var info = id + "##" + name + "##"
        $.post("/supervise/activity/" + info, [], function Handler(data, arr) {
        })

    In urls.py:

        (r'^activity/(?P<info>\d+)/$', 'activity'),

    In views:

        def activity(request, info):
            print info

    The request does not go through; info is a string. How can this be resolved? Thanks.
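
    A possible reading of the problem, not stated in the original post: the URL pattern only accepts digits (\d+), while info contains letters and "#" characters, and a literal "#" additionally marks the start of a URL fragment, so it has to be escaped with encodeURIComponent on the JavaScript side before it will ever reach Django. On the Django side, a sketch of a more permissive pattern:

        # urls.py: accept anything except a slash, not just digits
        (r'^activity/(?P<info>[^/]+)/$', 'activity'),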

  • problem installing VS 2010 after uninstalling the RC

    - by rap-uvic
    Hi, I uninstalled my VS 2010 RC to install VS 2010. However, it fails to install with the following error in the log files: "d:\vs_setup.msi could not be opened." I've tried running the Windows clean install, deleting any VS 2010 files, and renaming the Microsoft Visual Studio 10.0 folder in the registry, but I keep getting the same error. I can see vs_setup.msi on the DVD, it just won't allow me to run it directly; I have to run setup.exe. Any ideas?

  • Django QuerySet ordering by number of reverse ForeignKey matches

    - by msanders
    I have the following Django models:

        class Foo(models.Model):
            title = models.CharField(_(u'Title'), max_length=600)

        class Bar(models.Model):
            foo = models.ForeignKey(Foo)
            eg_id = models.PositiveIntegerField(_(u'Example ID'), default=0)

    I wish to return a list of Foo objects which have a reverse relationship with Bar objects that have an eg_id value contained in a list of values. So I have:

        id_list = [7, 8, 9, 10]
        qs = Foo.objects.filter(bar__eg_id__in=id_list)

    How do I order the matching Foo objects according to the number of related Bar objects which have an eg_id value in the id_list?
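
    A sketch of the usual approach, assuming the models above: annotate each Foo with the number of Bar rows that survive the filter, then order by that count. Because the annotation shares the join introduced by the filter, Count('bar') here counts only the matching Bar objects.

        from django.db.models import Count

        id_list = [7, 8, 9, 10]
        qs = (Foo.objects
                  .filter(bar__eg_id__in=id_list)
                  .annotate(num_matches=Count('bar'))
                  .order_by('-num_matches'))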

  • Do Precompiled headers help with rebuilds?

    - by brickner
    I read some of the questions about precompiled headers but couldn't find a direct answer to that. I usually rebuild my entire Visual Studio 2010 solution. One of the projects in my solution is a C++/CLI project. I thought that using precompiled headers in that project will increase the speed of the compilation. After some experiments, it seems that using precompiled headers only slows the rebuild process. Do precompiled headers only help with builds that didn't completely clean the old files?

  • 32bit 64bit referenced library

    - by bleevo
    Hi, I am developing an application that has two DLLs: one is a 32-bit version, the other is a 64-bit version. The client is 32-bit and the server is 64-bit. My question is: is there a way to say "use the 32-bit DLL when doing Debug/Release builds, and use the 64-bit DLL when I perform a publish"? I realize I can solve this problem using NAnt or MSBuild, but I was wondering if I can do any of this in Visual Studio. UPDATE: All my code will run on either 32-bit or 64-bit, but I am using a library that ships as both a 32-bit and a 64-bit build. The 32-bit one won't work on the server, and the 64-bit one won't work on my dev machine.

  • Error using httlib's HTTPSConnection with PKCS#12 certificate

    - by Remi Despres-Smyth
    Hello. I'm trying to use httplib's HTTPSConnection for client validation, using a PKCS#12 certificate. I know the certificate is good, as I can connect to the server using it in MSIE and Firefox. Here's my connect function (the certificate includes the private key). I've pared it down to just the basics:

        def connect(self, cert_file, host, usrname, passwd):
            self.cert_file = cert_file
            self.host = host
            self.conn = httplib.HTTPSConnection(host=self.host, port=self.port,
                                                key_file=cert_file, cert_file=cert_file)
            self.conn.putrequest('GET', 'pathnet/,DanaInfo=200.222.1.1+')
            self.conn.endheaders()
            retCreateCon = self.conn.getresponse()
            if is_verbose:
                print "Create HTTPS connection, " + retCreateCon.read()

    (Note: no comments on the hard-coded path, please - I'm trying to get this to work first; I'll make it pretty afterwards. The hard-coded path is correct, as I connect to it in MSIE and Firefox. I changed the IP address for the post.)

    When I try to run this using a PKCS#12 certificate (a .pfx file), I get back what appears to be an OpenSSL error. Here is the entire error traceback:

        File "Usinghttplib_Test.py", line 175, in
          t.connect(cert_file=opts["-keys"], host=host_name, usrname=opts["-username"], passwd=opts["-password"])
        File "Usinghttplib_Test.py", line 40, in connect
          self.conn.endheaders()
        File "c:\python26\lib\httplib.py", line 904, in endheaders
          self._send_output()
        File "c:\python26\lib\httplib.py", line 776, in _send_output
          self.send(msg)
        File "c:\python26\lib\httplib.py", line 735, in send
          self.connect()
        File "c:\python26\lib\httplib.py", line 1112, in connect
          self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file)
        File "c:\python26\lib\ssl.py", line 350, in wrap_socket
          suppress_ragged_eofs=suppress_ragged_eofs)
        File "c:\python26\lib\ssl.py", line 113, in __init__
          cert_reqs, ssl_version, ca_certs)
        ssl.SSLError: [Errno 336265225] _ssl.c:337: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib

    Notice, the OpenSSL error (the last entry in the list) mentions "PEM lib", which I found odd, since I'm not trying to use a PEM certificate. For kicks, I converted the PKCS#12 cert to a PEM cert and ran the same code using that. In that case, I received no error, I was prompted to enter the PEM pass phrase, and the code did attempt to reach the server. (I received the response "The service is not available. Please try again later.", but I believe that would be because the server does not accept the PEM cert. I can't connect in Firefox to the server using the PEM cert either.) Is httplib's HTTPSConnection supposed to support PKCS#12 certificates? (That is, pfx files.) If so, why does it look like OpenSSL is trying to load it inside the PEM lib? Am I doing this all wrong? Any advice is welcome.

    EDIT: The certificate file contains both the certificate and the private key, which is why I'm providing the same file name for both the HTTPSConnection's key_file and cert_file parameters.
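
    For what it's worth, an observation that is not part of the original post: the ssl module that httplib uses underneath only loads PEM-encoded keys and certificates through its key_file/cert_file parameters, which is why the failure surfaces in OpenSSL's PEM routines. A sketch of converting the .pfx once with the openssl command-line tool and pointing HTTPSConnection at the result (file names are placeholders; -nodes writes the private key unencrypted, so protect the output file):

        import subprocess
        import httplib

        # one-time conversion: bundle cert + key into a single PEM file
        subprocess.call(["openssl", "pkcs12", "-in", "client.pfx",
                         "-out", "client.pem", "-nodes"])

        conn = httplib.HTTPSConnection("example.com", 443,
                                       key_file="client.pem", cert_file="client.pem")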

  • How to digitally sign a message with M2Crypto using the keys within a DER format certificate

    - by Pablo Santos
    Hi everyone. I am working on a project to implement digital signatures of outgoing messages and decided to use M2Crypto for that. I have a certificate (in DER format) from which I extract the keys to sign the message. For some reason I keep getting an ugly segmentation fault error when I call the "sign_update" method. Given the previous examples I have read here, I am clearly missing something. Here is the example I am working on:

        from M2Crypto.X509 import *

        cert = load_cert('certificate.cer', format=0)
        Pub_key = cert.get_pubkey()
        Pub_key.reset_context(md='sha1')
        Pub_key.sign_init()
        Pub_key.sign_update("This should be good.")
        print Pub_key.sign_final()

    Thanks in advance for the help, Pablo
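
    A guess at the missing piece, not confirmed anywhere in the post: a certificate only carries the public key, and signing needs the matching private key, so the context obtained from get_pubkey() has nothing to sign with. A sketch of signing with the private key loaded separately (the key file name is a placeholder):

        from M2Crypto import EVP

        # the private key that matches certificate.cer
        key = EVP.load_key('private_key.pem')
        key.reset_context(md='sha1')
        key.sign_init()
        key.sign_update("This should be good.")
        signature = key.sign_final()

    The public key from the certificate is what the receiving side would use, with verify_init/verify_update/verify_final, to check the signature.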

  • Unable to launch the ASP.NET Development server because port '1900' is in use.

    - by Shaul
    I don't know what has got into my computer today. I was developing just fine in VS 2008 and testing my ASP.NET web site on my development server. Then suddenly, out of the blue, I can't run my web site any more! As soon as I hit F5, the message appears:

        Unable to launch the ASP.NET Development server because port '1900' is in use.

    And it doesn't matter what port I change to, it's always in use! AAARRRGGGHH!!! I have tried:

        - Changing the port number
        - Restarting Visual Studio
        - Rebooting my machine
        - Installing IIS

    Clue: my IIS refuses to start. But I didn't have IIS installed when I was happily working earlier, so that is probably not the issue; it might just be highlighting something else... Thanks in advance...

    Update: after rebooting, IIS does start, but the problem here persists.

  • Which c# project files should I version control?

    - by DTown
    I have a project I'm looking to manually manage via Perforce version control, as I only have the Express edition. What I'm looking for is which files should be excluded from version control, as locking many of the files can result in a problem for Visual Studio compiling and debugging.

    What I have included so far:

        - .cs files (except the Properties folder)
        - .resx files
        - .csproj files

    Excluded:

        - bin folder
        - obj folder
        - Properties folder
        - .user file

    Let me know if there is something more that should be included that I have excluded, or if there is a better way to do this.

  • Adding an attribute to a class by using properties editor

    - by Fred Yang
    Visual Studio allows you to design components visually. For example, when you are designing a Windows Form, you change its properties in the properties editor, and the IDE generates the code in a partial class in the xx.designer.cs file. We can customize this behavior by changing the UITypeEditor for the properties. The question now is: can we extend this code-generation behavior? For example, when we change a setting in the property window, can the IDE add a .NET attribute to the class? Thanks

  • putpixel with pyglet

    - by pts
    I'm new to pyglet. I'd like to change a pixel from black to white at each on_draw iteration. So after 1000 iterations, there should be exactly 1000 white pixels in the window. However, I'd like to avoid calling 1000 draw operations in on_draw for that. So I'd like to create an image, do an RGB putpixel on the image, and blit the image to the screen. How can I do that? The pyglet documentation, the examples and the source code aren't too helpful on this.
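
    A sketch of one way to do this, assuming a plain RGB window and rebuilding an ImageData object from a byte buffer each frame (window size and pixel choice here are made up for illustration):

        import random
        import pyglet

        WIDTH, HEIGHT = 320, 240
        window = pyglet.window.Window(WIDTH, HEIGHT)

        # 3 bytes per pixel (RGB), all black to start
        pixels = bytearray(WIDTH * HEIGHT * 3)

        def putpixel(x, y, r, g, b):
            i = (y * WIDTH + x) * 3
            pixels[i], pixels[i + 1], pixels[i + 2] = r, g, b

        @window.event
        def on_draw():
            # turn one more random pixel white each frame
            putpixel(random.randrange(WIDTH), random.randrange(HEIGHT), 255, 255, 255)
            image = pyglet.image.ImageData(WIDTH, HEIGHT, 'RGB', bytes(pixels))
            window.clear()
            image.blit(0, 0)

        pyglet.app.run()

    Rebuilding the ImageData every frame is the simplest route; keeping a single texture and updating it would be faster, but for one new pixel per frame this is plenty.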
