Search Results

Search found 23 results on 1 page for 'classmethod'.

Page 1/1

  • testing existing attribute of a @classmethod function, yields AttributeError

    - by alex
    I have a function which is a class method, and I want to test an attribute of the class which may or may not be None, but will always exist. class classA(): def __init__(self, var1, var2 = None): self.attribute1 = var1 self.attribute2 = var2 @classmethod def func(self,x): if self.attribute2 is None: do something I get the error AttributeError: class classA has no attribute 'attributeB' when I access the attribute as shown, but on the command line I can see that it works: x = classA() x.attributeB is None True so the test works. If I remove the @classmethod decorator from func, the problem disappears. If I leave the @classmethod decorator, it only seems to affect variables which are supplied default values in the superclass's constructor. What's going on in the above code?
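
    A minimal sketch of why the decorator changes things: with @classmethod the first argument is bound to the class itself, not to an instance, so attributes assigned in __init__ are simply not there (names below are illustrative):

        class ClassA(object):
            def __init__(self, var1, var2=None):
                self.attribute1 = var1
                self.attribute2 = var2          # set on each instance, not on the class

            def func_instance(self, x):
                # plain method: 'self' is an instance, so attribute2 exists
                return self.attribute2 is None

            @classmethod
            def func_class(cls, x):
                # classmethod: 'cls' is ClassA itself, which has no attribute2,
                # hence the AttributeError unless a class-level default is defined
                return getattr(cls, 'attribute2', None) is None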

    Read the article

  • Invoking a superclass's class methods in Python

    - by LeafStorm
    I am working on a Flask extension that adds CouchDB support to Flask. To make it easier, I have subclassed couchdb.mapping.Document so the store and load methods can use the current thread-local database. Right now, my code looks like this: class Document(mapping.Document): # rest of the methods omitted for brevity @classmethod def load(cls, id, db=None): return mapping.Document.load(cls, db or g.couch, id) I left out some for brevity, but that's the important part. However, due to the way classmethod works, when I try to call this method, I receive the error message File "flaskext/couchdb.py", line 187, in load return mapping.Document.load(cls, db or g.couch, id) TypeError: load() takes exactly 3 arguments (4 given) I tested replacing the call with mapping.Document.load.im_func(cls, db or g.couch, id), and it works, but I'm not particularly happy about accessing the internal im_ attributes (even though they are documented). Does anyone have a more elegant way to handle this?
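
    A rough, self-contained sketch of re-binding the inherited classmethod via super() instead of reaching for im_func (the string 'thread-local-db' is a placeholder standing in for g.couch):

        class Document(object):
            @classmethod
            def load(cls, db, id):
                return '%s loaded %s from %s' % (cls.__name__, id, db)

        class FlaskDocument(Document):
            @classmethod
            def load(cls, id, db=None):
                # Document.load is already bound to Document when accessed on the
                # class, so passing cls explicitly adds an extra argument; super()
                # re-binds the classmethod to the subclass instead.
                return super(FlaskDocument, cls).load(db or 'thread-local-db', id)

        print(FlaskDocument.load('some-id'))  # FlaskDocument loaded some-id from thread-local-db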

    Read the article

  • Ruby metaclass madness

    - by t6d
    I'm stuck. I'm trying to dynamically define a class method and I can't wrap my head around the Ruby metaclass model. Consider the following class: class Example def self.meta; (class << self; self; end); end def self.class_instance; self; end end Example.class_instance.class # => Class Example.meta.class # => Class Example.class_instance == Example # => true Example.class_instance == Example.meta # => false Obviously both methods return an instance of Class. But these two instances are not the same. They also have different ancestors: Example.meta.ancestors # => [Class, Module, Object, Kernel] Example.class_instance.ancestors # => [Example, Object, Kernel] What's the point of distinguishing between the metaclass and the class instance? I figured out that I can send :define_method to the metaclass to dynamically define a method, but if I try to send it to the class instance it won't work. At least I could solve my problem, but I still want to understand why it works this way.

    Read the article

  • Nested entities in Google App Engine. Do I do it right?

    - by Aleksandr Makov
    I'm trying to make the most of the GAE Datastore entity concept, but some doubts are nagging me. Say I have the model: class User(ndb.Model): email = ndb.StringProperty(indexed=True) password = ndb.StringProperty(indexed=False) first_name = ndb.StringProperty(indexed=False) last_name = ndb.StringProperty(indexed=False) created_at = ndb.DateTimeProperty(auto_now_add=True) @classmethod def key(cls, email): return ndb.Key(User, email) @classmethod def Add(cls, email, password, first_name, last_name): user = User(parent=cls.key(email), email=email, password=password, first_name=first_name, last_name=last_name) user.put() UserLogin.Record(email) class UserLogin(ndb.Model): time = ndb.DateTimeProperty(auto_now_add=True) @classmethod def Record(cls, user_email): login = UserLogin(parent=User.key(user_email)) login.put() I need to keep track of successful login operations. Each time a user logs in, the UserLogin.Record() method is executed. Now the question: am I doing this right? Thanks. EDIT 2 OK, I used keyword arguments, but then it raised this: Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'[email protected]', first_name=u'First', last_name=u'Last', password=u'password'). That much is clear, but I don't see why the docs are misleading. They implicitly propose to use: # Set Employee as Address entity's parent directly... address = Address(parent=employee) But Model expects a key. What's worse, parent=user.key() complains that key() isn't callable, while user.key turns out to work. EDIT 1 After reading the example from the docs and trying to replicate it, I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used: user = User('[email protected]', 'password', 'First', 'Last') user.put() stamp = UserLogin(parent=user) stamp.put() I understand that Model was given the wrong argument, but why is it in the docs?
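
    For what it's worth, a minimal ndb sketch of the parent/key relationship being asked about (values are placeholders): ndb model constructors take keyword arguments only, and .key is a property rather than a method as in the older db API.

        from google.appengine.ext import ndb

        class User(ndb.Model):
            email = ndb.StringProperty()

        class UserLogin(ndb.Model):
            time = ndb.DateTimeProperty(auto_now_add=True)

        user = User(email='user@example.com')   # keyword arguments, no positionals
        user.put()
        login = UserLogin(parent=user.key)      # parent must be a Key; user.key, not user.key()
        login.put()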

    Read the article

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views. The application allows users to register a "domain", which gives them a unique URL containing the domain name that gives them access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, and there are some administrative reports that include all domains, etc.). The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound property approach I'm taking here. Basically, I have something like this in mind: Before in models.py class User(Document): domain = StringProperty() class Group(Document): domain = StringProperty() name = StringProperty() user_ids = StringListProperty() # method that returns related document set def users(self): return [User.get(id) for id in self.user_ids] # method that queries a couch view optimized for a specific lookup @classmethod def by_name(cls, domain, name): # the view method is provided by couchdbkit and handles # wrapping json CouchDB results as Python objects, and # can take various parameters modifying behavior return cls.view('groups/by_name', key=[domain, name]) # method that creates a related document def get_new_user(self): user = User(domain=self.domain) user.save() self.user_ids.append(user._id) return user in views.py: from models import User, Group # there are tons of views like this, (request, domain, ...) def create_new_user_in_group(request, domain, group_name): group = Group.by_name(domain, group_name)[0] user = User(domain=domain) user.save() group.user_ids.append(user._id) group.save() in group/by_name/map.js: function (doc) { if (doc.doc_type == "Group") { emit([doc.domain, doc.name], null); } } After models.py class DomainDocument(Document): domain = StringProperty() @classmethod def domain_view(cls, *args, **kwargs): kwargs['key'] = [cls.domain.default] + kwargs['key'] return super(DomainDocument, cls).view(*args, **kwargs) @classmethod def get(cls, *args, **kwargs, validate_domain=True): ret = super(DomainDocument, cls).get(*args, **kwargs) if validate_domain and ret.domain != cls.domain.default: raise Exception() return ret def models(self): # a mapping of all models in the application. 
accessing one returns the equivalent of class BoundUser(User): domain = StringProperty(default=self.domain) class User(DomainDocument): pass class Group(DomainDocument): name = StringProperty() user_ids = StringListProperty() def users(self): return [self.models.User.get(id) for id in self.user_ids] @classmethod def by_name(cls, name): return cls.domain_view('groups/by_name', key=[name]) def get_new_user(self): user = self.models.User() user.save() views.py @domain_view # decorator that sets request.models to the same sort of object that is returned by DomainDocument.models and removes the domain argument from the URL router def create_new_user_in_group(request, group_name): group = request.models.Group.by_name(group_name) user = request.models.User() user.save() group.user_ids.append(user._id) group.save() (Might be better to leave the abstraction leaky here in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key or some other similar solution.) function (doc) { if (doc.doc_type == "Group") { emit([doc.name], null); } } Pros and Cons So what are the pros and cons of this? Pros: DRYer prevents you from creating related documents but forgetting to set the domain. prevents you from accidentally writing a django view - couch view execution path that leads to a security breach doesn't prevent you from accessing underlying self.domain and normal Document.view() method potentially gets rid of the need for a lot of sanity checks verifying whether two documents whose domains we expect to be equal are. Cons: adds some complexity hides what's really happening requires no model modules to have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique. removes explicit dependency documentation (from group.models import Group)
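
    As a framework-free illustration of the "bound model" idea sketched above (self.models.User returning a domain-bound subclass), one possibility is building the subclass dynamically with type(); with couchdbkit the bound attribute would be a StringProperty default rather than the plain class attribute used here:

        class User(object):
            domain = None                   # stand-in for StringProperty()

        def bind_domain(model_cls, domain):
            # build a subclass whose instances default to the given domain
            return type(model_cls.__name__, (model_cls,), {'domain': domain})

        BoundUser = bind_domain(User, 'example-domain')
        print(BoundUser().domain)           # 'example-domain'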

    Read the article

  • Simple Observation in Django: How Can I Correctly Modify The `attrs` sent to __new__ of a Django Mod

    - by DGGenuine
    Hello, I'm a strong proponent of the observer pattern, and this is what I'd like to be able to do in my Django models.py: class AModel(Model): __metaclass__ = SomethingMagical @post_save(AnotherModel) @classmethod def observe_another_model_saved(klass, sender, instance, created, **kwargs): pass @pre_init('YetAnotherModel') @classmethod def observe_yet_another_model_initializing(klass, sender, *args, **kwargs): pass @post_delete('DifferentApp.SomeModel') @classmethod def observe_some_model_deleted(klass, sender, **kwargs): pass This would connect a signal with sender = the decorator's argument and receiver = the decorated method. Right now my signal connection code all exists in __init__.py which is okay, but a little unmaintainable. I want this code all in one place, the models.py file. Thanks to helpful feedback from the community I'm very close (I think.) (I'm using a metaclass solution instead of the class decorator solution in the previous question/answer because you can't set attributes on classmethods, which I need.) I am having a strange error I don't understand. At the end of my post are the contents of a models.py that you can pop into a fresh project/application to see the error. Set your database to sqlite and add the application to installed apps. This is the error: Validating models... Unhandled exception in thread started by Traceback (most recent call last): File "/Library/Python/2.6/site-packages//lib/python2.6/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 253, in validate raise CommandError("One or more models did not validate:\n%s" % error_text) django.core.management.base.CommandError: One or more models did not validate: local.myothermodel: 'my_model' has a relation with model MyModel, which has either not been installed or is abstract. I've indicated a few different things you can comment in/out to fix the error. First, if you don't modify the attrs sent to the metaclass's __new__, then the error does not arise. (Note even if you copy the dictionary element by element into a new dictionary, it still fails; only using the exact attrs dictionary works.) Second, if you reference the first model by class rather than by string, the error also doesn't arise regardless of what you do in __new__. I appreciate your help. I'll be githubbing the solution if and when it works. Maybe other people would enjoy a simplified way to use Django signals to observe application happenings. #models.py from django.db import models from django.db.models.base import ModelBase from django.db.models import signals import pdb class UnconnectedMethodWrapper(object): sender = None method = None signal = None def __init__(self, signal, sender, method): self.signal = signal self.sender = sender self.method = method def post_save(sender): return _make_decorator(signals.post_save, sender) def _make_decorator(signal, sender): def decorator(view): return UnconnectedMethodWrapper(signal, sender, view) return decorator class ConnectableModel(ModelBase): """ A meta class for any class that will have static or class methods that need to be connected to signals. """ def __new__(cls, name, bases, attrs): unconnecteds = {} ## NO WORK newattrs = {} for name, attr in attrs.iteritems(): if isinstance(attr, UnconnectedMethodWrapper): unconnecteds[name] = attr newattrs[name] = attr.method #replace the UnconnectedMethodWrapper with the method it wrapped. 
else: newattrs[name] = attr ## NO WORK # newattrs = {} # for name, attr in attrs.iteritems(): # newattrs[name] = attr ## WORKS # newattrs = attrs new = super(ConnectableModel, cls).__new__(cls, name, bases, newattrs) for name, unconnected in unconnecteds.iteritems(): _connect_signal(unconnected.signal, unconnected.sender, getattr(new, name), new._meta.app_label) return new def _connect_signal(signal, sender, receiver, default_app_label): # full implementation also accepts basestring as sender and will look up model accordingly signal.connect(sender=sender, receiver=receiver) class MyModel(models.Model): __metaclass__ = ConnectableModel @post_save('In my application this string matters') @classmethod def observe_it(klass, sender, instance, created, **kwargs): pass @classmethod def normal_class_method(klass): pass class MyOtherModel(models.Model): ## WORKS # my_model = models.ForeignKey(MyModel) ## NO WORK my_model = models.ForeignKey('MyModel')
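
    A stripped-down, framework-free sketch of the attrs-rewriting metaclass pattern used above (Python 2 metaclass syntax, matching the question). Note that the loop variable deliberately does not reuse the name parameter of __new__; in the code above the loop does reuse it, which hands the last attribute name, rather than the class name, to the parent __new__ and so registers the model under the wrong name.

        registry = []

        class UnconnectedWrapper(object):
            def __init__(self, func):
                self.func = func

        def connect_later(func):
            return UnconnectedWrapper(func)

        class RegisteringMeta(type):
            def __new__(mcs, cls_name, bases, attrs):
                wrapped_names = []
                new_attrs = {}
                for attr_name, attr in attrs.items():
                    if isinstance(attr, UnconnectedWrapper):
                        wrapped_names.append(attr_name)
                        new_attrs[attr_name] = classmethod(attr.func)
                    else:
                        new_attrs[attr_name] = attr
                new_cls = super(RegisteringMeta, mcs).__new__(mcs, cls_name, bases, new_attrs)
                for attr_name in wrapped_names:
                    registry.append(getattr(new_cls, attr_name))  # bound classmethods
                return new_cls

        class Example(object):
            __metaclass__ = RegisteringMeta

            @connect_later
            def observe(cls, sender, **kwargs):
                return cls

        print(registry)   # one bound classmethod, collected after class creation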

    Read the article

  • in python: can I pass a class method as a default argument to another class method

    - by alex
    I want to pass a class method as a default argument to another class method, so that I can re-use the method as a @classmethod: @classmethod class foo: def func1(self,x): do something; def func2(self, aFunc = self.func1): # make a call to afunc afunc(4) This way, when func2 is called within the class, aFunc defaults to self.func1, but I can call this same function from outside of the class and pass it a different function as input. I get NameError: name 'self' is not defined
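
    Default argument values are evaluated once, at function definition time, when no self exists yet; the usual workaround is a None sentinel resolved inside the body. A small sketch:

        class Foo(object):
            def func1(self, x):
                print('func1 called with %r' % x)

            def func2(self, afunc=None):
                # 'self' is unavailable when defaults are evaluated, so resolve
                # the default here instead
                if afunc is None:
                    afunc = self.func1
                afunc(4)

        Foo().func2()                   # uses func1 by default
        Foo().func2(afunc=lambda x: x)  # caller supplies a different callable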

    Read the article

  • Python overriding class (not instance) special methods

    - by André
    How do I override a class special method? I want to be able to call the __str__() method of the class without creating an instance. Example: class Foo: def __str__(self): return 'Bar' class StaticFoo: @staticmethod def __str__(): return 'StaticBar' class ClassFoo: @classmethod def __str__(cls): return 'ClassBar' if __name__ == '__main__': print(Foo) print(Foo()) print(StaticFoo) print(StaticFoo()) print(ClassFoo) print(ClassFoo()) produces: <class '__main__.Foo'> Bar <class '__main__.StaticFoo'> StaticBar <class '__main__.ClassFoo'> ClassBar should be: Bar Bar StaticBar StaticBar ClassBar ClassBar Even if I use the @staticmethod or @classmethod the __str__ is still using the built in python definition for __str__. It's only working when it's Foo().__str__() instead of Foo.__str__().
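
    Because str(SomeClass) looks up __str__ on type(SomeClass), putting the method on a metaclass is one way to get the desired output; a Python 3 sketch (the question's output suggests Python 3):

        class Meta(type):
            def __str__(cls):
                return 'ClassBar'

        class ClassFoo(metaclass=Meta):
            def __str__(self):
                return 'Bar'

        print(ClassFoo)    # ClassBar  (looked up on type(ClassFoo), i.e. Meta)
        print(ClassFoo())  # Bar       (looked up on ClassFoo)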

    Read the article

  • Building Dynamic Classes in Objective C

    - by kjell_
    I'm a somewhat competent ruby programmer. Yesterday I decided to finally try my hand with Apple's Cocoa frameworks. Help me see things the ObjC way? I'm trying to get my head around objc_allocateClassPair and objc_registerClassPair. My goal is to dynamically generate a few classes and then be able to use them as I would any other class. Does this work in Obj C? Having allocated and registered class A, I get a compile error when calling [[A alloc] init]; (it says 'A' Undeclared). I can only instantiate A using runtime's objc_getClass method. Is there any way to tell the compiler about A and pass it messages like I would NSString? A compiler flag or something? I have 10 or so other classes (B, C, …), all with the same superclass. I want to message them directly in code ([A classMethod], [B classMethod], …) without needing objc_getClass. Am I trying to be too dynamic here or just botching my implementation? It looks something like this… NSString *name = @"test"; Class newClass = objc_allocateClassPair([NSString class], [name UTF8String], 0); objc_registerClassPair(newClass); id a = [[objc_getClass("A") alloc] init]; NSLog(@"new class: %@ superclass: %@", a, [a superclass]); //[[A alloc] init]; blows up.

    Read the article

  • methods of metaclasses on class instances.

    - by Stefano Borini
    I was wondering what happens to methods declared on a metaclass. I expected that if you declare a method on a metaclass, it will end up being a classmethod; however, the behavior is different. Example >>> class A(object): ... @classmethod ... def foo(cls): ... print "foo" ... >>> a=A() >>> a.foo() foo >>> A.foo() foo However, if I try to define a metaclass and give it a method foo, it works when called on the class but not on the instance. >>> class Meta(type): ... def foo(self): ... print "foo" ... >>> class A(object): ... __metaclass__=Meta ... def __init__(self): ... print "hello" ... >>> >>> a=A() hello >>> A.foo() foo >>> a.foo() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'A' object has no attribute 'foo' What's going on here exactly? edit: bumping the question
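
    Roughly, attribute lookup on an instance consults type(a) and its bases but never type(type(a)), so metaclass methods are reachable from the class only; a condensed sketch (Python 2, as in the question):

        class Meta(type):
            def foo(cls):
                print('foo from the metaclass')

        class A(object):
            __metaclass__ = Meta

        A.foo()     # works: A is an instance of Meta, so foo is found on type(A)
        a = A()
        # a.foo()   # AttributeError: lookup stops at type(a) == A; Meta is never consulted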

    Read the article

  • Unique Constraint At Data Level in GAE

    - by Ngu Soon Hui
    It seems that a unique constraint is not natively supported in GAE, although one can enforce a unique check before putting an object to the store. But that was in January 2009; what about now? Can I specify a unique constraint on a column during schema creation? i.e. class Account(db.Model): name = db.StringProperty() email = db.StringProperty() as unique # something like this @classmethod def create(cls, name, email): a = Account(name=name, email=email) a.put() return a
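
    There is still no declarative unique constraint, but one common workaround is to make the unique value the entity's key_name, since keys are unique by construction; a hedged sketch with the old db API:

        from google.appengine.ext import db

        class Account(db.Model):
            name = db.StringProperty()
            email = db.StringProperty()

            @classmethod
            def create(cls, name, email):
                # get_or_insert runs in a transaction: if an Account keyed by this
                # email already exists it is returned unchanged, so no duplicate
                # is ever written
                return cls.get_or_insert(key_name=email, name=name, email=email)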

    Read the article

  • @staticmethod vs module-level function

    - by darkfeline
    This is not about @staticmethod and @classmethod! I know how staticmethod works. What I want to know is the proper use cases for @staticmethod vs. a module-level function. I've googled this question, and there seems to be some general agreement that module-level functions are preferred over static methods because they're more Pythonic. Static methods have the advantage of being bound to their class, which may make sense if only that class uses them. However, in Python functionality is usually organized by module, not class, so making it a module function usually makes sense too. Static methods can also be overridden by subclasses, which is an advantage or disadvantage depending on how you look at it. That said, static methods are usually "functionally pure", so overriding them may not be smart, but it may be convenient sometimes (though this may be one of those "convenient, but NEVER DO IT" kind of things only experience can teach you). Are there any general rules of thumb for using either staticmethod or module-level functions? What concrete advantages or disadvantages do they have (e.g. future extension, external extension, readability)? If possible, also provide a case example.
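
    One concrete difference worth weighing: a @staticmethod dispatched through cls (or self) can be overridden by a subclass, while a module-level function cannot. A small illustration (names are illustrative):

        def normalize(text):                  # module-level: simple, but fixed
            return text.strip().lower()

        class Slug(object):
            def __init__(self, value):
                self.value = value

            @staticmethod
            def normalize(text):              # same logic, namespaced under the class
                return text.strip().lower()

            @classmethod
            def from_title(cls, title):
                return cls(cls.normalize(title))   # subclasses may override normalize()

        print(Slug.from_title('  Hello World  ').value)   # hello world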

    Read the article

  • class __init__ (not instance __init__)

    - by wallacoloo
    Here's a very simple example of what I'm trying to get around: class Test(object): some_dict = {Test: True} The problem is that I cannot refer to Test while it's still being defined. Normally, I'd just do this: class Test(object): def __init__(self): self.__class__.some_dict = {Test: True} But I never create an instance of this class. It's really just a container to hold a group of related functions and data (I have several of these classes, and I pass around references to them, so it is necessary for Test to be its own class). So my question is: how can I refer to Test while it's being defined, or is there something similar to __init__ that gets called as soon as the class is defined? If possible, I want self.some_dict = {Test: True} to remain inside the class definition. This is the only way I know how to do this so far: class Test(object): @classmethod def class_init(cls): cls.some_dict = {Test: True} Test.class_init()
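
    Since Python 2.6, a class decorator runs immediately after the class body is evaluated, which is about as close to a "class __init__" as it gets; a minimal sketch:

        def class_init(cls):
            cls.some_dict = {cls: True}   # the class object exists here, so it can be referenced
            return cls

        @class_init
        class Test(object):
            pass

        print(Test.some_dict)             # {<class '__main__.Test'>: True}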

    Read the article

  • Problem about python import with error

    - by xiao
    Hello, I have written a small Python module with one class and two functions. The skeleton of the module is as follows: #file name: test_module.py class TestClass: @classmethod def method1(cls, param1): #to do something pass def __init__(self, param1): #to do something ... def fun1(*params): #to do something ... def fun2(*params): #to do something ... Another py file is a small script which imports the functions and the class from the module, as follows: import sys from test_module import TestClass, fun1, fun2 def main(sys_argv): li = range(5) inst1 = TestClass(li) fun1(inst1) fun2(inst1) return if __name__ == "__main__": main(sys.argv) But when I execute the script, it breaks with the following message: ./script.py: line 4: syntax error near unexpected token `(' ./script.py: line 4: `def main(sys_argv):' I am not sure what the problem is. Is it a problem with the import? But when I try to import the module in IPython, everything is just fine.
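
    The quoted messages look like shell errors rather than Python tracebacks, which usually means ./script.py was handed to the shell directly, without a shebang line; a hedged sketch of the same script with one added:

        #!/usr/bin/env python
        # script.py -- run as "python script.py", or make it executable and keep the
        # shebang above so "./script.py" is interpreted by Python, not by the shell
        import sys
        from test_module import TestClass, fun1, fun2

        def main(sys_argv):
            li = range(5)
            inst1 = TestClass(li)
            fun1(inst1)
            fun2(inst1)

        if __name__ == "__main__":
            main(sys.argv)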

    Read the article

  • Creating a method that is simultaneously an instance and class method

    - by Mike Axiak
    In Python, I'd like to be able to create a function that behaves both as a class function and an instance method, but with the ability to change behaviors. The use case for this is for a set of serializable objects and types. As an example: >>> class Thing(object): #... >>> Thing.to_json() 'A' >>> Thing().to_json() 'B' I know that given the definition of classmethod() in funcobject.c in the Python source, this looks like it'd be simple with a C module. Is there a way to do this from within python? Thanks!
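
    No C module is needed; a small descriptor can dispatch on whether the attribute was looked up on the class or on an instance. A hedged pure-Python sketch:

        class hybridmethod(object):
            """Bind to the class when accessed on the class, to the instance otherwise."""
            def __init__(self, func):
                self.func = func

            def __get__(self, obj, objtype=None):
                target = objtype if obj is None else obj
                def bound(*args, **kwargs):
                    return self.func(target, *args, **kwargs)
                return bound

        class Thing(object):
            @hybridmethod
            def to_json(self_or_cls):
                return 'A' if isinstance(self_or_cls, type) else 'B'

        print(Thing.to_json())    # A
        print(Thing().to_json())  # B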

    Read the article

  • Exposing a "dumbed-down", read-only instance of a Model in GAE

    - by Blixt
    Does anyone know a clever way, in Google App Engine, to return a wrapped Model instance that only exposes a few of the original properties, and does not allow saving the instance back to the datastore? I'm not looking for ways of actually enforcing these rules, obviously it'll still be possible to change the instance by digging through its __dict__ etc. I just want a way to avoid accidental exposure/changing of data. My initial thought was to do this (I want to do this for a public version of a User model): class ReadOnlyUser(db.Model): display_name = db.StringProperty() @classmethod def kind(cls): return 'User' def put(self): raise SomeError() Unfortunately, GAE maps the kind to a class early on, so if I do ReadOnlyUser.get_by_id(1) I will actually get a User instance back, not a ReadOnlyUser instance.
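
    One way to sidestep the kind-to-class registry entirely is a plain wrapper object that exposes a whitelist of properties and refuses writes; a rough sketch (not a db.Model subclass):

        class ReadOnlyUser(object):
            _exposed = ('display_name',)

            def __init__(self, entity):
                self._entity = entity

            def __getattr__(self, name):
                # only called when normal lookup fails, i.e. for non-whitelisted names too
                if name in self._exposed:
                    return getattr(self._entity, name)
                raise AttributeError(name)

            def put(self, *args, **kwargs):
                raise TypeError('read-only view of User; put() is not allowed')

        # public_user = ReadOnlyUser(User.get_by_id(1))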

    Read the article

  • Python metaclass to run a class method automatically on derived class

    - by Barry Steyn
    I want to automatically run a class method defined in a base class on any derived class during the creation of the class. For instance: class Base(object): @classmethod def runme(cls): print "I am being run" def __metaclass__(cls,parents,attributes): clsObj = type(cls,parents,attributes) clsObj.runme() return clsObj class Derived(Base): pass What happens here is that when Base is created, runme() will fire. But nothing happens when Derived is created. The question is: how can I make runme() also fire when creating Derived? This is what I have thought so far: if I explicitly set Derived's metaclass to Base's, it will work. But I don't want that to happen. I basically want Derived to use Base's metaclass without me having to set it explicitly.
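
    A compact sketch of the usual shape: give Base a real, named metaclass whose __init__ calls the hook, and every subclass inherits that metaclass automatically (Python 2 syntax, as in the question):

        class RunsOnCreate(type):
            def __init__(cls, name, bases, attrs):
                super(RunsOnCreate, cls).__init__(name, bases, attrs)
                cls.runme()                      # fires for Base and for every subclass

        class Base(object):
            __metaclass__ = RunsOnCreate

            @classmethod
            def runme(cls):
                print('running for %s' % cls.__name__)

        class Derived(Base):                     # prints "running for Derived"
            pass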

    Read the article

  • What's the best way to create a static utility class in python? Is using metaclasses code smell?

    - by rsimp
    OK, so I need to create a bunch of utility classes in Python. Normally I would just use a simple module for this, but I need to be able to inherit in order to share common code between them. The common code needs to reference the state of the module using it, so simple imports wouldn't work well. I don't like singletons, and classes that use the classmethod decorator do not have proper support for Python properties. One pattern I see used a lot is creating an internal Python class prefixed with an underscore and creating a single instance which is then explicitly imported or set as the module itself. This is also used by fabric to create a common environment object (fabric.api.env). I've realized another way to accomplish this would be with metaclasses. For example: #util.py class MetaFooBase(type): @property def file_path(cls): raise NotImplementedError def inherited_method(cls): print cls.file_path #foo.py from util import * import env class MetaFoo(MetaFooBase): @property def file_path(cls): return env.base_path + "relative/path" def another_class_method(cls): pass class Foo(object): __metaclass__ = MetaFoo #client.py from foo import Foo file_path = Foo.file_path I like this approach better than the first pattern for a few reasons: First, instantiating Foo would be meaningless as it has no attributes or methods, which ensures this class acts like a true single interface utility, unlike the first pattern which relies on the underscore convention to dissuade client code from creating more instances of the internal class. Second, sub-classing MetaFoo in a different module wouldn't be as awkward because I wouldn't be importing a class with an underscore, which inherently goes against its private naming convention. Third, this seems to be the closest approximation to a static class that exists in Python, as all the meta code applies only to the class and not to its instances. This is shown by the common convention of using cls instead of self in the class methods. As well, the base class inherits from type instead of object, which would prevent users from trying to use it as a base for other non-static classes. Its implementation as a static class is also apparent when using it by the naming convention Foo, as opposed to foo, which denotes that a static class method is being used. As much as I think this is a good fit, I feel that others might feel it's not Pythonic because it's not a sanctioned use for metaclasses, which should be avoided 99% of the time. I also find most Python devs tend to shy away from metaclasses, which might affect code reuse/maintainability. Is this code considered code smell in the Python community? I ask because I'm creating a PyPI package, and would like to do everything I can to increase adoption.
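
    For comparison, the first pattern mentioned above (an underscore-prefixed class with a single module-level instance) in sketch form; it keeps plain properties working without any metaclass (base_path value is illustrative):

        # util.py
        class _Util(object):
            base_path = '/srv/app/'

            @property
            def file_path(self):                 # ordinary properties work on instances
                return self.base_path + 'relative/path'

            def inherited_method(self):
                print(self.file_path)

        util = _Util()                           # clients do: from util import util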

    Read the article

  • How to access a superclass's class attributes in Python?

    - by Brecht Machiels
    Have a look at the following code: class A(object): defaults = {'a': 1} def __getattr__(self, name): print('A.__getattr__') return self.get_default(name) @classmethod def get_default(cls, name): # some debug output print('A.get_default({}) - {}'.format(name, cls)) try: print(super(cls, cls).defaults) # as expected except AttributeError: #except for the base object class, of course pass # the actual function body try: return cls.defaults[name] except KeyError: return super(cls, cls).get_default(name) # infinite recursion #return cls.__mro__[1].get_default(name) # this works, though class B(A): defaults = {'b': 2} class C(B): defaults = {'c': 3} c = C() print('c.a =', c.a) I have a hierarchy of classes each with its own dictionary containing some default values. If an instance of a class doesn't have a particular attribute, a default value for it should be returned instead. If no default value for the attribute is contained in the current class's defaults dictionary, the superclass's defaults dictionary should be searched. I'm trying to implement this using the recursive class method get_default. The program gets stuck in an infinite recursion, unfortunately. My understanding of super() is obviously lacking. By accessing __mro__, I can get it to work properly though, but I'm not sure this is a proper solution. I have the feeling the answer is somewhere in this article, but I haven't been able to find it yet. Perhaps I need to resort to using a metaclass?
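
    One way to avoid the recursive super() call altogether is to walk type(self).__mro__ explicitly, checking each class's own defaults dict; a condensed sketch:

        class A(object):
            defaults = {'a': 1}

            def __getattr__(self, name):
                for klass in type(self).__mro__:                 # most derived first
                    if name in klass.__dict__.get('defaults', {}):
                        return klass.__dict__['defaults'][name]
                raise AttributeError(name)

        class B(A):
            defaults = {'b': 2}

        class C(B):
            defaults = {'c': 3}

        c = C()
        print(c.a, c.b, c.c)   # 1 2 3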

    Read the article

  • How to create instances of a class from a static method?

    - by Pierre
    Hello. Here is my problem. I have created a pretty heavy readonly class making many database calls with a static "factory" method. The goal of this method is to avoid killing the database by looking in a pool of already-created objects if an identical instance of the same object (same type, same init parameters) already exists. If something was found, the method will just return it. No problem. But if not, how may I create an instance of the object, in a way that works with inheritance? >>> class A(Object): >>> @classmethod >>> def get_cached_obj(self, some_identifier): >>> # Should do something like `return A(idenfier)`, but in a way that works >>> class B(A): >>> pass >>> A.get_cached_obj('foo') # Should do the same as A('foo') >>> A().get_cached_obj('foo') # Should do the same as A('foo') >>> B.get_cached_obj('bar') # Should do the same as B('bar') >>> B().get_cached_obj('bar') # Should do the same as B('bar') Thanks.
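
    Since a classmethod already receives the class it was invoked through, cls(...) creates the right (sub)class without any special casing; a minimal sketch of the cache:

        class A(object):
            _cache = {}

            def __init__(self, identifier):
                self.identifier = identifier

            @classmethod
            def get_cached_obj(cls, identifier):
                key = (cls, identifier)
                if key not in cls._cache:
                    cls._cache[key] = cls(identifier)   # cls is B when called as B.get_cached_obj
                return cls._cache[key]

        class B(A):
            pass

        assert isinstance(B.get_cached_obj('bar'), B)
        assert A.get_cached_obj('foo') is A.get_cached_obj('foo')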

    Read the article

  • how to fetch more than 1000 entities NON keybased?

    - by user291071
    If I should be approaching this problem through a different method, please suggest so. I am creating an item-based collaborative filter. I populate the db with the LinkRating2 class, and for each link there are more than 1000 users whose ratings I need to fetch and use in calculations, which I then use to create another table. So I need to fetch more than 1000 entities for a given link. For instance, let's say over 1000 users rated 'link1'; there will then be over 1000 instances of this class for that link property that I need to fetch. How would I complete this example? class LinkRating2(db.Model): user = db.StringProperty() link = db.StringProperty() rating2 = db.FloatProperty() query = LinkRating2.all() link1 = 'link string name' a = query.filter('link = ', link1) aa = a.fetch(1000) ## how would I get more than 1000 for a given link1 as shown? ## the key-based example below (from another post) gets past 1000, but I need a method for a filtered subset, not the whole kind class MyModel(db.Expando): @classmethod def count_all(cls): """ Count *all* of the rows (without maxing out at 1000) """ count = 0 query = cls.all().order('__key__') while count % 1000 == 0: current_count = query.count() if current_count == 0: break count += current_count if current_count == 1000: last_key = query.fetch(1, 999)[0].key() query = query.filter('__key__ > ', last_key) return count
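
    A hedged adaptation of the key-ordering trick above to a filtered subset: keep the equality filter on link and page with a __key__ inequality (query cursors are the other common approach; a composite index may be required):

        def all_ratings_for_link(link_name):
            """Fetch every LinkRating2 for one link, 1000 at a time."""
            results = []
            query = LinkRating2.all().filter('link = ', link_name).order('__key__')
            batch = query.fetch(1000)
            while batch:
                results.extend(batch)
                last_key = batch[-1].key()
                query = (LinkRating2.all()
                         .filter('link = ', link_name)
                         .filter('__key__ > ', last_key)
                         .order('__key__'))
                batch = query.fetch(1000)
            return results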

    Read the article
