Search Results

Search found 14201 results on 569 pages for 'python mock'.

  • Right clicking on QHeaderView inside of QTreeView

    - by taynaron
    I've written a descendant of QTreeView with multiple columns. I want to create a popup menu that appears when the user right-clicks over the column headers. I have tried catching signals from QTreeView for this, but QTreeView doesn't seem to emit signals on the headers; QTreeView.header() does. I therefore believe I must do one of the following:

    1. Connect one of QHeaderView's signals to a popup function. I have been unable to find a signal that is triggered on a single right click; I have tried sectionClicked, sectionHandleDoubleClicked, sectionDoubleClicked, and sectionPressed (I'm not surprised the double-click signals didn't catch a single right click, but they do catch a double right click):

        self.header().sectionClicked.connect(self.headerMenu)
        self.header().sectionHandleDoubleClicked.connect(self.headerMenu)
        self.header().sectionDoubleClicked.connect(self.headerMenu)
        self.header().sectionPressed.connect(self.headerMenu)

    2. Write a descendant of QHeaderView with my own mousePressEvent function, and use that for my headers. I have so far been unsuccessful in connecting the new header class to the QTreeView descendant; I keep getting a segmentation fault at runtime, with no further explanation:

        # in DiceView's __init__, where DiceHeaders is the QHeaderView descendant
        self.setHeader(DiceHeaders())

    Any ideas?
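
    A common Qt-level approach (a sketch of my own, not from this thread) is to skip the section signals entirely: give the header a custom context-menu policy so it emits customContextMenuRequested on a right click, and pop the menu from that slot. PyQt4 is assumed, and the menu entries are placeholders:

        from PyQt4 import QtGui
        from PyQt4.QtCore import Qt

        class DiceView(QtGui.QTreeView):
            def __init__(self, parent=None):
                super(DiceView, self).__init__(parent)
                # ask the header to emit customContextMenuRequested on right click
                self.header().setContextMenuPolicy(Qt.CustomContextMenu)
                self.header().customContextMenuRequested.connect(self.headerMenu)

            def headerMenu(self, pos):
                menu = QtGui.QMenu(self)
                menu.addAction('Hide column')  # hypothetical entries
                menu.exec_(self.header().mapToGlobal(pos))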

  • openerp client customization

    - by iamgopal
    The OpenERP client seems nice and works well; I would like to hack it and use it as a front end to my OpenERP solution. But the documentation on client-side design and customization on the OpenERP site is poor. Is there any good reference or documentation available for digging further into OpenERP client-side coding? Alternatively, is there a similar client solution available that can plug into any back-end system (i.e., a rich internet client)?

  • [numpy] storing record arrays in object arrays

    - by Peter Prettenhofer
    I'd like to convert a list of record arrays -- the dtype is (uint32, float32) -- into a numpy array of dtype np.object:

        X = np.array(instances, dtype=np.object)

    where instances is a list of arrays with data type np.dtype([('f0', '<u4'), ('f1', '<f4')]). However, the above statement results in an array whose elements are also of type np.object:

        X[0]
        array([(67111L, 1.0), (104242L, 1.0)], dtype=object)

    Does anybody know why? The following statement should be equivalent to the above, but gives the desired result:

        X = np.empty((len(instances),), dtype=np.object)
        X[:] = instances
        X[0]
        array([(67111L, 1.0), (104242L, 1.0)], dtype=[('f0', '<u4'), ('f1', '<f4')])

    thanks & best regards, peter
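
    The likely explanation (my reading, not from the thread): np.array treats each record array as a nested sequence and recurses into it, boxing every record as a separate Python object, whereas pre-allocating an object array and slice-assigning copies the array references unmodified. A minimal sketch, using the builtin object in place of the now-deprecated np.object alias:

        import numpy as np

        rec_dtype = np.dtype([('f0', '<u4'), ('f1', '<f4')])
        instances = [np.array([(67111, 1.0), (104242, 1.0)], dtype=rec_dtype),
                     np.array([(1, 0.5), (2, 0.25)], dtype=rec_dtype)]

        X_bad = np.array(instances, dtype=object)   # recurses: records become boxed tuples
        X = np.empty(len(instances), dtype=object)  # pre-allocate, then assign references
        X[:] = instances

        print(X_bad[0].dtype)  # object -- the record dtype is lost
        print(X[0].dtype)      # [('f0', '<u4'), ('f1', '<f4')] -- preserved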

  • What algorithms are suitable for this simple machine learning problem?

    - by user213060
    I have what I think is a simple machine learning question. Here is the basic problem: I am repeatedly given a new object and a list of descriptions about the object. For example: new_object: 'bob', new_object_descriptions: ['tall','old','funny']. I then have to use some kind of machine learning to find previously handled objects that have similar descriptions, for example past_similar_objects: ['frank','steve','joe']. Next, I have an algorithm that can directly measure whether these objects are indeed similar to bob, for example correct_objects: ['steve','joe']. The classifier is then given this feedback training of successful matches, and the loop repeats with a new object. Here's the pseudo-code:

        Classifier = new_classifier()
        while True:
            new_object, new_object_descriptions = get_new_object_and_descriptions()
            past_similar_objects = Classifier.classify(new_object, new_object_descriptions)
            correct_objects = calc_successful_matches(new_object, past_similar_objects)
            Classifier.train_successful_matches(new_object, correct_objects)

    But there are some stipulations that may limit which classifiers can be used: There will be millions of objects put into this classifier, so classification and training need to scale well to millions of object types and still be fast. I believe this disqualifies something like a spam classifier that is optimal for just two types: spam or not spam. (Update: I could probably narrow this to thousands of objects instead of millions, if that is a problem.) Again, I prefer speed over accuracy when millions of objects are being classified. What are decent, fast machine learning algorithms for this purpose?

  • Installing PySide - OSX

    - by jeremynealbrown
    Has anyone had success installing and using PySide on OS X? I am following the install instructions on the PySide site, but I'm running into issues building the API Extractor. When I run cmake on the CMakeLists.txt file inside the API Extractor directory, this error is thrown:

        CMake Error at /Applications/CMake 2.8-0.app/Contents/share/cmake-2.8/Modules/FindBoost.cmake:894 (message):
          Unable to find the requested Boost libraries.
          Unable to find the Boost header files. Please set BOOST_ROOT to the root
          directory containing Boost or BOOST_INCLUDEDIR to the directory
          containing Boost's headers.
        Call Stack (most recent call first):
          CMakeLists.txt:5 (find_package)

    I am new to building source with cmake and I'm not even really sure what Boost is. Any light you might shed on the setup process would be great. Thanks

  • Programming an Event listener for files in a directory on Linux

    - by Epitaph
    On Ubuntu Linux, when you watch a Flash video, it gets saved temporarily in /tmp as .flv files while the video buffers. I use VLC to play these files directly. Currently, I have scripted a shortcut that, when clicked, scans /tmp and opens the latest file with VLC. But I want to program a Java application that will continually monitor this /tmp directory for any new .flv files and open them in VLC automatically. I know I can use Runtime.exec() to open the VLC application with the .flv files, but I do NOT want to run a while(true) loop (with sleep) to scan for files. How can I make use of event handling (in Java or any other language) on Linux to accomplish this?
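
    On Linux the kernel-level answer to this is inotify; in Python, the pyinotify package wraps it so the process sleeps until the kernel reports a new file, with no polling loop. A sketch under those assumptions (the vlc invocation and the choice of event mask are mine, not from the question):

        import subprocess
        import pyinotify

        class FlvHandler(pyinotify.ProcessEvent):
            # IN_CREATE fires as soon as the file appears, which suits the
            # use case of playing videos while they are still buffering
            def process_IN_CREATE(self, event):
                if event.pathname.endswith('.flv'):
                    subprocess.Popen(['vlc', event.pathname])

        wm = pyinotify.WatchManager()
        wm.add_watch('/tmp', pyinotify.IN_CREATE)
        pyinotify.Notifier(wm, FlvHandler()).loop()  # blocks, event-driven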

  • How to get HTTP status message in (py)curl?

    - by mykhal
    After spending some time studying the pycurl and libcurl documentation, I still can't find a (simple) way to get the HTTP status message (reason phrase) in pycurl. The status code is easy:

        import pycurl
        import cStringIO

        curl = pycurl.Curl()
        buff = cStringIO.StringIO()
        curl.setopt(pycurl.URL, 'http://example.org')
        curl.setopt(pycurl.WRITEFUNCTION, buff.write)
        curl.perform()
        print "status code: %s" % curl.getinfo(pycurl.HTTP_CODE)  # -> 200
        # print "status message: %s" % ???                        # -> "OK"
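
    As far as I know, libcurl exposes no getinfo option for the reason phrase, so one workaround (a sketch, not an official API) is to capture the raw response headers with HEADERFUNCTION and parse the phrase out of the status line yourself:

        import pycurl
        import cStringIO

        headers = cStringIO.StringIO()
        curl = pycurl.Curl()
        curl.setopt(pycurl.URL, 'http://example.org')
        curl.setopt(pycurl.WRITEFUNCTION, cStringIO.StringIO().write)  # discard body
        curl.setopt(pycurl.HEADERFUNCTION, headers.write)
        curl.perform()

        # the first header line looks like "HTTP/1.1 200 OK"
        status_line = headers.getvalue().splitlines()[0]
        print "status message: %s" % status_line.split(' ', 2)[2]  # -> "OK"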

  • Django Haystack exact filtering

    - by blackrobot
    I have a haystack search with the following SearchIndex:

        class GrantIndex(indexes.SearchIndex):
            """
            This provides the search index for the Grant application.
            """
            text = indexes.CharField(document=True, use_template=True)
            year = indexes.IntegerField(model_attr='year__year')
            date = indexes.DateField(model_attr='date')
            program = indexes.CharField(model_attr='program__area')
            grantee = indexes.CharField(model_attr='grantee')
            amount = indexes.IntegerField(model_attr='amount')

        site.register(Grant, GrantIndex)

    If I want to search, filtering out any programs that are NOT 'Health', I run the following query:

        from haystack.query import SearchQuerySet
        sqs = SearchQuerySet()
        sqs = sqs.filter(program='Health')

    Unfortunately, this also produces objects from the programs 'Health\Other' and 'Health\Cardiovascular'. How do I stop the search from allowing those other programs in? I run Ubuntu 9.10 with Xapian as my search back-end.
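
    A suggestion rather than the thread's answer: on most back-ends Haystack's plain filter behaves like a contains-style match against the tokenized field ('Health\Other' tokenizes to both "health" and "other", so it matches), while an __exact lookup requests exact matching. A sketch, with back-end support being the assumption to verify:

        from haystack.query import SearchQuerySet

        # __exact asks the back-end for an exact-match lookup instead of the
        # default contains-style matching; behavior varies by back-end
        sqs = SearchQuerySet().filter(program__exact='Health')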

  • Using numpy.apply

    - by andylei
    What's wrong with this snippet of code?

        import numpy as np
        from scipy import stats

        d = np.arange(10.0)
        cutoffs = [stats.scoreatpercentile(d, pct) for pct in range(0, 100, 20)]
        f = lambda x: np.sum(x > cutoffs)
        fv = np.vectorize(f)

        # why don't these two lines output the same values?
        [f(x) for x in d]  # => [0, 1, 2, 2, 3, 3, 4, 4, 5, 5]
        fv(d)              # => array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

    Any ideas?
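
    My best guess at the cause (hedged, not from the thread): np.vectorize hands f plain Python floats, and under Python 2 a float > list comparison falls back to a meaningless type-name ordering instead of broadcasting, so it is always False; in the list comprehension, x is a numpy scalar and broadcasts elementwise. Making cutoffs an ndarray should restore elementwise comparison either way:

        import numpy as np
        from scipy import stats

        d = np.arange(10.0)
        # as an ndarray, `x > cutoffs` broadcasts even when x is a plain float
        cutoffs = np.array([stats.scoreatpercentile(d, pct)
                            for pct in range(0, 100, 20)])
        fv = np.vectorize(lambda x: np.sum(x > cutoffs))
        print fv(d)  # expected: [0 1 2 2 3 3 4 4 5 5]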

  • pyODBC and Unicode Problem

    - by Aviv Giladi
    Hey guys, I'm working with pyODBC to communicate with a MS SQL 2005 Express server. The table to which I'm trying to save the data consists of nvarchar columns.

        query = u"INSERT INTO tblPersons (name, birthday, gender) VALUES('"
        query = query + name + u"', '"
        query = query + birthday + u"', '"
        query = query + gender + u"')"
        cur.execute(query)

    The variables name, birthday and gender are read from an Excel file, and they are Unicode strings. When I execute the query and either look at the table with SQL Server Management Studio or execute a query that fetches the data that was just inserted, all the data that was written in a non-English language turns into question marks. The data that was written in English is preserved and appears in the table correctly. I tried adding CHARSET=UTF16 to my connection string, but had no luck with that. I can use UTF-8, which works fine, but as a working convention I need all the data saved in my DB to be UTF-16. Thanks!
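
    A sketch of standard pyodbc practice (not necessarily the thread's accepted answer): pass the values as bound parameters rather than splicing them into the SQL text, so the driver ships them as NVARCHAR and no literal-encoding step can corrupt them; this also closes the SQL-injection hole that string concatenation opens:

        # '?' placeholders let the ODBC driver bind the Unicode values directly
        cur.execute(
            u"INSERT INTO tblPersons (name, birthday, gender) VALUES (?, ?, ?)",
            (name, birthday, gender),
        )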

  • Unexplained file not found for an existing file

    - by knishua
    Following is the error that occurs in this part of the code. Although the path is valid, a RuntimeError occurs, which is strange. What is happening, and how can I get this to work?

        for root, dirs, files in os.walk(self.path):
            for f in files:
                if f.split('.')[1] == "mb":
                    z = utils.executeInMainThreadWithResult(self.contains, (f.split('.')[0]))
                    if not isinstance(z, NoneType):
                        cmds.symbolButton(self.arSubCategory + f.split('.')[0],
                                          image=(z[1].replace("\\", "/")),
                                          width=35, height=70,
                                          c="h.imp_file(" + "\"" + root.replace("\\", "/") + "/" + f + "\"" + ")")

        def contains(self, imageName):
            print 'imageName : ', imageName, '\n'
            for root, dirs, files in os.walk(self.path + "images"):
                for g in files:
                    x = re.search(imageName, g)
                    if not isinstance(x, NoneType):
                        print 'g ', root + "/" + g.replace("\\", "/"), '\n'
                        return (1, (root + "/" + g))

    Error:

        # z is (1, 'T:/Reference_Library/Reference_work/Char_models/Workfiles/images\\rboxdisk1\\female\\highpoly/granny01_highpoly.jpg')
        Error: File not found: T:/Reference_Library/Reference_work/Char_models/Workfiles/images/rboxdisk1/female/highpoly/granny01_highpoly.jpg
        Traceback (most recent call last):
          File "<maya console>", line 115, in <module>
          File "<maya console>", line 65, in showWindowanimLibrary
        RuntimeError: File not found: T:/Reference_Library/Reference_work/Char_models/Workfiles/images/rboxdisk1/female/highpoly/granny01_highpoly.jpg

  • how to load an image to a grid using pygame, instead of just using a fill color?

    - by yao jiang
    I am trying to create a "map of a city" using pygame. I want to be able to put images of buildings at specific grid coordinates rather than just filling the cells in with a color. This is how I am creating the map grid:

        def clear():
            for r in range(rows):
                for c in range(rows):
                    if r % 3 == 1 and c % 3 == 1:
                        color = brown
                        grid[r][c] = 1
                    else:
                        color = white
                        grid[r][c] = 0
                    pygame.draw.rect(screen, color,
                                     [(margin + width) * c + margin,
                                      (margin + height) * r + margin,
                                      width, height])
            pygame.display.flip()

    Now how do I put images of buildings in those brown cells at those specific locations? I've tried some of the samples online but can't seem to get them to work. Any help is appreciated. If anyone has a good source of free sprites that I can use for pygame, please let me know. Thanks!
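
    A sketch of the usual pygame pattern, reusing the question's grid variables (the sprite filename is a placeholder): load the image once, scale it to the cell size, and blit it at the same coordinates the colored rectangle would have used:

        import pygame

        # load once, outside the drawing loop; 'building.png' is hypothetical
        building = pygame.image.load('building.png').convert_alpha()
        building = pygame.transform.scale(building, (width, height))

        def clear():
            for r in range(rows):
                for c in range(rows):
                    x = (margin + width) * c + margin
                    y = (margin + height) * r + margin
                    if r % 3 == 1 and c % 3 == 1:
                        grid[r][c] = 1
                        screen.blit(building, (x, y))  # image instead of a fill color
                    else:
                        grid[r][c] = 0
                        pygame.draw.rect(screen, white, [x, y, width, height])
            pygame.display.flip()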

  • Making HTTP POST request

    - by infrared
    I'm trying to make a POST request to retrieve information about a book. Here is the code, which returns HTTP code 302, Moved:

        import httplib, urllib

        params = urllib.urlencode({
            'isbn': '9780131185838',
            'catalogId': '10001',
            'schoolStoreId': '15828',
            'search': 'Search'
        })
        headers = {"Content-type": "application/x-www-form-urlencoded",
                   "Accept": "text/plain"}
        conn = httplib.HTTPConnection("bkstr.com:80")
        conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch", params, headers)
        response = conn.getresponse()
        print response.status, response.reason
        data = response.read()
        conn.close()

    When I try from a browser, from this page: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828 , it works. What am I missing in my code? Thanks

    EDIT: Here's what I get when I call print response.msg:

        302 Moved
        Date: Tue, 07 Sep 2010 16:54:29 GMT
        Vary: Host,Accept-Encoding,User-Agent
        Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Content-Type: text/plain; charset=utf-8

    It seems that the Location header points to the same URL I'm trying to access in the first place?

    EDIT 2: I've tried using urllib2 as suggested here. Here is the code:

        import urllib, urllib2

        url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
        values = {'isbn': '9780131185838',
                  'catalogId': '10001',
                  'schoolStoreId': '15828',
                  'search': 'Search'}

        data = urllib.urlencode(values)
        req = urllib2.Request(url, data)
        response = urllib2.urlopen(req)
        print response.geturl()
        print response.info()
        the_page = response.read()
        print the_page

    And here is the output:

        http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        Date: Tue, 07 Sep 2010 16:58:35 GMT
        Pragma: No-cache
        Cache-Control: no-cache
        Expires: Thu, 01 Jan 1970 00:00:00 GMT
        Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/
        Vary: Accept-Encoding,User-Agent
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=utf-8
        Content-Language: en-US
        Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
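
    The Set-Cookie headers in that last response are a strong hint: the server may bounce any request that arrives without a session cookie. A hedged sketch of one thing to try, holding a cookie jar across a priming GET of the search page and then the POST:

        import cookielib, urllib, urllib2

        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

        # prime the session: this GET collects JSESSIONID and friends
        opener.open('http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView'
                    '?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828')

        data = urllib.urlencode({'isbn': '9780131185838',
                                 'catalogId': '10001',
                                 'schoolStoreId': '15828',
                                 'search': 'Search'})
        response = opener.open(
            'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch', data)
        print response.read()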

  • pycurl script can't login to website

    - by The Jug
    I'm currently trying to get a grasp on pycurl. I'm attempting to log in to my own website. After logging in, the site should redirect to the main page; however, when I try this script it just gets returned to the login page. What might I be doing wrong?

        import urllib
        import StringIO
        import pycurl

        pf = {'username': 'user', 'password': 'pass'}
        fields = urllib.urlencode(pf)

        pageContents = StringIO.StringIO()

        p = pycurl.Curl()
        p.setopt(pycurl.FOLLOWLOCATION, 1)
        p.setopt(pycurl.COOKIEFILE, './cookie_test.txt')
        p.setopt(pycurl.COOKIEJAR, './cookie_test.txt')
        p.setopt(pycurl.POST, 1)
        p.setopt(pycurl.POSTFIELDS, fields)
        p.setopt(pycurl.WRITEFUNCTION, pageContents.write)
        p.setopt(pycurl.URL, 'http://localhost')
        p.perform()

        pageContents.seek(0)
        print pageContents.readlines()

  • is there a way to generate pdf containing non-ascii symbols with pisa from django template?

    - by mihailt
    Hi. I'm trying to generate a PDF from a template using this snippet:

        def write_pdf(template_src, context_dict):
            template = get_template(template_src)
            context = Context(context_dict)
            html = template.render(context)
            result = StringIO.StringIO()
            pdf = pisa.pisaDocument(StringIO.StringIO(html.encode("UTF-8")), result)
            if not pdf.err:
                return http.HttpResponse(result.getvalue(), mimetype='application/pdf')
            raise Exception('PDF error')

    but all non-Latin symbols are not showing correctly. The template and view are saved using UTF-8 encoding. I've tried saving the view as ANSI and then using unicode(html, "UTF-8"), but it throws a TypeError. I also thought that maybe the default fonts somehow do not support UTF-8, so following the pisa documentation I tried to set a font face in the style section of the template body; that still gave no results. Does anyone have some ideas how to solve this issue?
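
    One angle worth checking (my suggestion, not from the thread): the standard base-14 PDF fonts only cover Latin-1, so non-Latin glyphs need an embedded TrueType font declared via @font-face and applied to the body text. A sketch for the template's head; the font path is an assumption, and any Unicode-capable TTF should do:

        # embedded in the Django template's <head>; DejaVu Sans ships with
        # many Linux distributions and covers a wide range of scripts
        PDF_STYLE = """
        <style>
            @font-face {
                font-family: DejaVu;
                src: url('/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf');
            }
            body { font-family: DejaVu; }
        </style>
        """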

  • Downloading a Directory Tree with FTPLIB

    - by Anthony Lemmer
    I'd like to download a directory and all of its contents to the local HD. Here's the code I have thus far (it crashes if there's a sub-directory; otherwise it grabs all the files):

        import ftplib
        import configparser
        import os

        def runBackups():
            # Load INI
            filename = 'connections.ini'
            config = configparser.SafeConfigParser()
            config.read(filename)
            connections = config.sections()
            i = 0
            while i < len(connections):
                # Load settings
                uri = config.get(connections[i], "uri")
                username = config.get(connections[i], "username")
                password = config.get(connections[i], "password")
                backupPath = config.get(connections[i], "backuppath")
                archiveTo = config.get(connections[i], "archiveto")

                # Start back-ups
                ftp = ftplib.FTP(uri)
                ftp.login(username, password)
                ftp.set_debuglevel(2)
                ftp.cwd(backupPath)

                files = ftp.nlst()
                for filename in files:
                    ftp.retrbinary('RETR %s' % filename,
                                   open(os.path.join(archiveTo, filename), 'wb').write)
                ftp.quit()
                i += 1

            print()
            print("Back-ups complete.")
            print()
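
    The crash comes from issuing RETR against a directory. A hedged sketch of a recursive replacement for the nlst loop: try to cwd into each listed name, and recurse if that succeeds, otherwise treat it as a file (this assumes the server permits cwd as a directory test, which most do):

        import ftplib
        import os

        def download_tree(ftp, remote_dir, local_dir):
            """Recursively mirror remote_dir into local_dir."""
            if not os.path.exists(local_dir):
                os.makedirs(local_dir)
            ftp.cwd(remote_dir)
            for name in ftp.nlst():
                if name in ('.', '..'):        # some servers include these
                    continue
                remote_path = remote_dir.rstrip('/') + '/' + name
                try:
                    ftp.cwd(remote_path)       # succeeds -> it's a directory
                    download_tree(ftp, remote_path, os.path.join(local_dir, name))
                    ftp.cwd(remote_dir)        # restore position after recursing
                except ftplib.error_perm:      # cwd failed -> treat as a file
                    with open(os.path.join(local_dir, name), 'wb') as fh:
                        ftp.retrbinary('RETR ' + name, fh.write)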

  • Is this the correct way to convert a UTC datetime string into localtime?

    - by Steve
    Is this the correct way to convert a UTC string into local time, allowing for daylight saving? It looks OK to me, but you never know :)

        import time

        UTC_STRING = "2010-03-25 02:00:00"

        stamp = time.mktime(time.strptime(UTC_STRING, "%Y-%m-%d %H:%M:%S"))
        stamp -= time.timezone

        now = time.localtime()
        if now[8] == 1:
            stamp += 60*60
        elif now[8] == -1:
            stamp -= 60*60

        print 'UTC:   ', time.gmtime(stamp)
        print 'Local: ', time.localtime(stamp)

    Results from New Zealand (GMT+12, dst=1):

        UTC:    (2010, 3, 25, 2, 0, 0, 3, 84, 0)
        Local:  (2010, 3, 25, 15, 0, 0, 3, 84, 1)
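
    One subtle issue with the hand-rolled version: it applies the DST status of *now* rather than of the target instant. The standard library sidesteps this entirely: calendar.timegm is the UTC inverse of time.mktime, and time.localtime then applies the correct DST for the timestamp itself. A sketch of that route:

        import calendar
        import time

        utc_struct = time.strptime("2010-03-25 02:00:00", "%Y-%m-%d %H:%M:%S")
        stamp = calendar.timegm(utc_struct)  # seconds since epoch, input treated as UTC

        print 'UTC:   ', time.gmtime(stamp)
        print 'Local: ', time.localtime(stamp)  # DST resolved per-timestamp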

  • Bug when drawing a QImage on a widget with PIL and PyQt

    - by oulipo
    I'm trying to write a small graphics application, and I need to construct an image using PIL that I show in a widget. The image is correctly constructed (I can check with im.show()), and I can convert it to a QImage that saves to disk normally (using QImage.save). But if I try to draw it directly on my QWidget, it only shows a white square. In the code below I commented out the part that is not working (converting the Image into a QImage and then a QPixmap results in a white square), and I made a dirty hack that saves the image to a temporary file and loads it directly into a QPixmap, which works but is not what I want to do: https://gist.github.com/f6d479f286ad75bf72b7 Does anyone have an idea? If it helps: when I try to save my QImage as BMP, I can access its content, but if I save it as PNG it is completely white.
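
    A frequent cause of exactly this symptom (a guess, since the gist isn't reproduced here): QImage does not copy the buffer it is constructed from, so if the PIL string data is garbage-collected the widget paints uninitialized (white) pixels. Keeping a reference alive is the usual cure; PyQt4 and a little-endian platform are assumed:

        from PyQt4 import QtGui

        def pil_to_pixmap(im):
            """Sketch: convert a PIL image to a QPixmap, keeping the raw
            buffer alive for as long as the QImage needs it."""
            # 'BGRA' matches Format_ARGB32's memory layout on little-endian
            data = im.convert('RGBA').tostring('raw', 'BGRA')
            qimage = QtGui.QImage(data, im.size[0], im.size[1],
                                  QtGui.QImage.Format_ARGB32)
            qimage._buffer = data  # hypothetical attribute; just anchors a reference
            return QtGui.QPixmap.fromImage(qimage)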

  • Problem trying to achieve a join using the `comments` contrib in Django

    - by NiKo
    Hi, Django rookie here. I have this model; comments are managed with the django_comments contrib:

        class Fortune(models.Model):
            author = models.CharField(max_length=45, blank=False)
            title = models.CharField(max_length=200, blank=False)
            slug = models.SlugField(_('slug'), db_index=True, max_length=255,
                                    unique_for_date='pub_date')
            content = models.TextField(blank=False)
            pub_date = models.DateTimeField(_('published date'), db_index=True,
                                            default=datetime.now())
            votes = models.IntegerField(default=0)
            comments = generic.GenericRelation(
                Comment,
                content_type_field='content_type',
                object_id_field='object_pk'
            )

    I want to retrieve Fortune objects with a supplementary nb_comments value for each, counting their respective number of comments. From the shell:

        >>> from django_fortunes.models import Fortune
        >>> from django.db.models import Count
        >>> Fortune.objects.annotate(nb_comments=Count('comments'))
        [<Fortune: My first fortune, from NiKo>, <Fortune: Another One, from Dude>, <Fortune: A funny one, from NiKo>]
        >>> from django.db import connection
        >>> connection.queries.pop()

    Below is the properly formatted SQL query that this generates:

        SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author",
               "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug",
               "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date",
               "django_fortunes_fortune"."votes",
               COUNT("django_comments"."id") AS "nb_comments"
        FROM "django_fortunes_fortune"
        LEFT OUTER JOIN "django_comments"
            ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk")
        GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author",
                 "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug",
                 "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date",
                 "django_fortunes_fortune"."votes"
        LIMIT 21

    Can you spot the problem? Django won't LEFT JOIN the django_comments table with the content_type data (which contains a reference to the fortune table). This is the kind of query I'd like to be able to generate using the ORM:

        SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author",
               "django_fortunes_fortune"."title",
               COUNT("django_comments"."id") AS "nb_comments"
        FROM "django_fortunes_fortune"
        LEFT OUTER JOIN "django_comments"
            ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk")
        LEFT OUTER JOIN "django_content_type"
            ON ("django_comments"."content_type_id" = "django_content_type"."id")
        GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author",
                 "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug",
                 "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date",
                 "django_fortunes_fortune"."votes"
        LIMIT 21

    But I haven't managed to do it, so help from Django veterans would be much appreciated :) Hint: I'm using Django 1.2-DEV. Thanks in advance for your help.

  • PyML 0.7.2 - How to prevent accuracy from dropping after storing/loading a classifier?

    - by Michael Aaron Safyan
    This is a follow-up to "Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?". The solution to that question was close, but not quite right. (The SparseDataSet is broken, so attempting to save/load with that dataset container type will fail no matter what. Also, PyML is inconsistent about whether labels should be numbers or strings; it turns out the oneAgainstRest function is not good enough, because the labels need to be strings and simultaneously convertible to floats, since in some places a label is assumed to be a string and elsewhere it is converted to float.) After a great deal of hacking I was finally able to save and load my multi-class classifier without it blowing up with an error. However, although it no longer gives an error message, it is still not quite right: the accuracy of the classifier drops significantly when it is saved and then reloaded, so I'm still missing a piece of the puzzle. I am currently using the following custom multi-class classifier for training, saving, and loading:

        class SVM(object):
            def __init__(self, features_or_filename, labels=None, kernel=None):
                if isinstance(features_or_filename, str):
                    filename = features_or_filename
                    if labels != None:
                        raise ValueError, "Labels must be None if loading from a file."
                    with open(os.path.join(filename, "uniquelabels.list"), "rb") as uniquelabelsfile:
                        self.uniquelabels = sorted(list(set(pickle.load(uniquelabelsfile))))
                    self.labeltoindex = {}
                    for idx, label in enumerate(self.uniquelabels):
                        self.labeltoindex[label] = idx
                    self.classifiers = []
                    for classidx, classname in enumerate(self.uniquelabels):
                        self.classifiers.append(PyML.classifiers.svm.loadSVM(
                            os.path.join(filename, str(classname) + ".pyml.svm"),
                            datasetClass=PyML.VectorDataSet))
                else:
                    features = features_or_filename
                    if labels == None:
                        raise ValueError, "Labels must not be None when training."
                    self.uniquelabels = sorted(list(set(labels)))
                    self.labeltoindex = {}
                    for idx, label in enumerate(self.uniquelabels):
                        self.labeltoindex[label] = idx
                    points = [[float(xij) for xij in xi] for xi in features]
                    self.classifiers = [PyML.SVM(kernel) for label in self.uniquelabels]
                    for i in xrange(len(self.uniquelabels)):
                        currentlabel = self.uniquelabels[i]
                        currentlabels = ['+1' if k == currentlabel else '-1' for k in labels]
                        currentdataset = PyML.VectorDataSet(points, L=currentlabels, positiveClass='+1')
                        self.classifiers[i].train(currentdataset, saveSpace=False)

            def accuracy(self, pts, labels):
                logger = logging.getLogger("ml")
                correct = 0
                total = 0
                classindexes = [self.labeltoindex[label] for label in labels]
                h = self.hypotheses(pts)
                for idx in xrange(len(pts)):
                    if h[idx] == classindexes[idx]:
                        logger.info("RIGHT: Actual \"%s\" == Predicted \"%s\"" % (
                            self.uniquelabels[classindexes[idx]], self.uniquelabels[h[idx]]))
                        correct += 1
                    else:
                        logger.info("WRONG: Actual \"%s\" != Predicted \"%s\"" % (
                            self.uniquelabels[classindexes[idx]], self.uniquelabels[h[idx]]))
                    total += 1
                return float(correct) / float(total)

            def prediction(self, pt):
                h = self.hypothesis(pt)
                if h != None:
                    return self.uniquelabels[h]
                return h

            def predictions(self, pts):
                h = self.hypotheses(self, pts)
                return [self.uniquelabels[x] if x != None else None for x in h]

            def hypothesis(self, pt):
                bestvalue = None
                bestclass = None
                dataset = PyML.VectorDataSet([pt])
                for classidx, classifier in enumerate(self.classifiers):
                    val = classifier.decisionFunc(dataset, 0)
                    if (bestvalue == None) or (val > bestvalue):
                        bestvalue = val
                        bestclass = classidx
                return bestclass

            def hypotheses(self, pts):
                bestvalues = [None for pt in pts]
                bestclasses = [None for pt in pts]
                dataset = PyML.VectorDataSet(pts)
                for classidx, classifier in enumerate(self.classifiers):
                    for ptidx in xrange(len(pts)):
                        val = classifier.decisionFunc(dataset, ptidx)
                        if (bestvalues[ptidx] == None) or (val > bestvalues[ptidx]):
                            bestvalues[ptidx] = val
                            bestclasses[ptidx] = classidx
                return bestclasses

            def save(self, filename):
                if not os.path.exists(filename):
                    os.makedirs(filename)
                with open(os.path.join(filename, "uniquelabels.list"), "wb") as uniquelabelsfile:
                    pickle.dump(self.uniquelabels, uniquelabelsfile, pickle.HIGHEST_PROTOCOL)
                for classidx, classname in enumerate(self.uniquelabels):
                    self.classifiers[classidx].save(os.path.join(filename, str(classname) + ".pyml.svm"))

    I am using the latest version of PyML (0.7.2, although PyML.__version__ reports 0.7.0). When I construct the classifier with a training dataset, the reported accuracy is ~0.87. When I then save it and reload it, the accuracy is less than 0.001, so there is something here that I am clearly not persisting correctly, although what that may be is completely non-obvious to me. Would you happen to know what it is?

  • Django TestCase testing order

    - by ziang
    If there are several test methods in a test class, I have found that they execute in alphabetical order. But I want to customize the order of execution; for example, testTestA should be loaded before testTestB. How can I define the execution order?

        class Test(TestCase):
            def setUp(self):
                ...

            def testTestB(self):
                # test code

            def testTestA(self):
                # test code
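
    Test methods should ideally be order-independent, but when an explicit order is genuinely needed, one option (a sketch using plain unittest machinery, which Django's runner can pick up from the tests module) is to build the suite by hand:

        import unittest

        # a suite() hook lists the methods in exactly the order they should run
        def suite():
            s = unittest.TestSuite()
            s.addTest(Test('testTestA'))
            s.addTest(Test('testTestB'))
            return s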

  • Creating a QuerySet based on a ManyToManyField in Django

    - by River Tam
    So I've got two classes, Picture and Tag, that are as follows:

        class Tag(models.Model):
            pics = models.ManyToManyField('Picture', blank=True)
            name = models.CharField(max_length=30)
            # stuff omitted

        class Picture(models.Model):
            name = models.CharField(max_length=100)
            pub_date = models.DateTimeField('date published')
            tags = models.ManyToManyField('Tag', blank=True)
            content = models.ImageField(upload_to='instaton')
            # stuff omitted

    What I'd like to do is get a queryset (for a ListView) that, given a tag name, contains the X most recent Pictures tagged as such. I've looked up very similar problems, but none of the responses make any sense to me at all. How would I go about creating this queryset?
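
    A sketch of the standard ORM spelling (names reuse the question's models; the limit is an assumption): follow the many-to-many relation with a tags__name lookup, order by publication date, and slice. In a ListView this would typically be returned from get_queryset().

        # the X most recent Pictures carrying the given tag
        def recent_pictures(tag_name, limit=10):
            return (Picture.objects
                    .filter(tags__name=tag_name)
                    .order_by('-pub_date')[:limit])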
