Search Results

Search found 17845 results on 714 pages for 'python social auth'.


  • Writing a post search algorithm.

    - by MdaG
    I'm trying to write a free-text search algorithm for finding specific posts on a wall (a similar kind of wall to the one Facebook uses). A user is supposed to be able to type some words into a search field and get hits on posts that contain the words, with the best match on top and the other posts in decreasing order of match score. I'm using the edit distance (Levenshtein) e(x, y) = e to calculate the score of each post, comparing each query word x against each post word y:

        score(x, y) = 2^(2 - e) * (1 - min(e, |x|) / |x|)

    Each word in a post contributes to the total score for that post. This approach seems to work well when the posts are of roughly the same size, but sometimes certain large posts manage to rack up a score solely by having a lot of words in them, while in practice not being relevant to the query. Am I approaching this problem in the wrong way, or is there some way to normalize the score that I haven't thought of?
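
    A minimal sketch of the scoring described above, plus one common normalization for the long-post bias (averaging per post word is my assumption, not something from the post); edit_distance stands in for whatever Levenshtein implementation is in use:

        def word_score(e, x):
            # score(x, y) = 2^(2 - e) * (1 - min(e, |x|) / |x|), as given above
            return 2 ** (2 - e) * (1 - min(e, len(x)) / float(len(x)))

        def post_score(query_words, post_words, edit_distance):
            total = 0.0
            for x in query_words:
                for y in post_words:
                    total += word_score(edit_distance(x, y), x)
            # dividing by the post length turns the sum into a per-word average,
            # so a long post can no longer win on sheer word count
            return total / len(post_words)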

    Read the article

  • How do I relate two models/tables in Django based on non primary non unique keys?

    - by wizard
    I've got two tables that I need to relate on a single field, po_num. The data is imported from another source, so while I have a little bit of control over what the tables look like, I can't change them too much. What I want to do is relate these two models so I can look up one from the other based on the po_num fields. What I really need is to join the two tables so I can filter on a count of the related table: I would like to filter for all Order objects that have 0 related Edi856 objects. I tried adding a foreign key to the Order model with db_column and to_field set to po_num, but Django didn't like the fact that Edi856.po_num isn't unique. Here are the important fields of my current models, which let me display, but not filter for, the data that I want:

        class Edi856(models.Model):
            po_num = models.CharField(max_length=90, db_index=True)

        class Order(models.Model):
            po_num = models.CharField(max_length=90, db_index=True)

            def in_edi(self):
                '''Has the EDI been processed?'''
                return Edi856.objects.filter(po_num=self.po_num).count()

    Thanks for taking the time to read about my problem. I'm not sure where to go from here.
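
    For what it's worth, the zero-related-rows filter can be expressed without any ForeignKey; a sketch using a subquery on the models above (whether it is efficient enough depends on the database):

        # Orders whose po_num never appears in Edi856, i.e. 0 related EDI rows
        orders_without_edi = Order.objects.exclude(
            po_num__in=Edi856.objects.values_list('po_num', flat=True))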

    Read the article

  • Reading path in templates

    - by DJPython
    Hello, is there any way to read the path of the current page? For example, if I am at www.example.com/foo/bar/, I want to read '/foo/bar/'. But it all has to be done in the template file, without modifying the views; I have too many view files to edit each one. Sorry for my English, hope everyone understands. Cheers.
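
    One route that needs no per-view changes (a sketch; it assumes the views already render with a RequestContext, which Django's generic views do): enable the request context processor in settings.py.

        # settings.py -- keep the defaults and add the request processor
        TEMPLATE_CONTEXT_PROCESSORS = (
            'django.core.context_processors.auth',
            'django.core.context_processors.debug',
            'django.core.context_processors.i18n',
            'django.core.context_processors.media',
            'django.core.context_processors.request',
        )

    Any template rendered with a RequestContext can then print {{ request.path }} directly.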

    Read the article

  • Reverse mapping from a table to a model in SQLAlchemy

    - by Jace
    To provide an activity log in my SQLAlchemy-based app, I have a model like this:

        class ActivityLog(Base):
            __tablename__ = 'activitylog'
            id = Column(Integer, primary_key=True)
            activity_by_id = Column(Integer, ForeignKey('users.id'), nullable=False)
            activity_by = relation(User, primaryjoin=activity_by_id == User.id)
            activity_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            activity_type = Column(SmallInteger, nullable=False)
            target_table = Column(Unicode(20), nullable=False)
            target_id = Column(Integer, nullable=False)
            target_title = Column(Unicode(255), nullable=False)

    The log contains entries for multiple tables, so I can't use ForeignKey relations. Log entries are made like this:

        doc = Document(name=u'mydoc', title=u'My Test Document',
                       created_by=user, edited_by=user)
        session.add(doc)
        session.flush()  # See note below
        log = ActivityLog(activity_by=user, activity_type=ACTIVITY_ADD,
                          target_table=Document.__table__.name,
                          target_id=doc.id, target_title=doc.title)
        session.add(log)

    This leaves me with three problems:

    1. I have to flush the session before my doc object gets an id. If I had used a ForeignKey column and a relation mapper, I could have simply called ActivityLog(target=doc) and let SQLAlchemy do the work. Is there any way to work around needing to flush by hand?

    2. The target_table parameter is too verbose. I suppose I could solve this with a target property setter in ActivityLog that automatically retrieves the table name and id from a given instance.

    3. Biggest of all, I'm not sure how to retrieve a model instance from the database. Given an ActivityLog instance log, calling self.session.query(log.target_table).get(log.target_id) does not work, as query() expects a model as its parameter.

    One workaround appears to be to use polymorphism and derive all my models from a base model which ActivityLog recognises. Something like this:

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            title = Column(Unicode(255), nullable=False)
            edited_at = Column(DateTime, onupdate=datetime.utcnow, nullable=False)
            entity_type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': entity_type}

        class Document(Entity):
            __tablename__ = 'documents'
            __mapper_args__ = {'polymorphic_identity': 'document'}
            body = Column(UnicodeText, nullable=False)

        class ActivityLog(Base):
            __tablename__ = 'activitylog'
            id = Column(Integer, primary_key=True)
            ...
            target_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
            target = relation(Entity)

    If I do this, ActivityLog(...).target will give me a Document instance when it refers to a Document, but I'm not sure it's worth the overhead of having two tables for everything. Should I go ahead and do it this way?
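
    On problem 3, one non-polymorphic sketch (my own workaround, not canonical SQLAlchemy): keep a one-off mapping from table names to mapped classes and dispatch through it.

        # hypothetical registry; list every loggable model here once
        TARGETS = dict((cls.__table__.name, cls) for cls in (Document,))

        def resolve_target(session, log):
            """Turn an ActivityLog row back into the model instance it points at."""
            return session.query(TARGETS[log.target_table]).get(log.target_id)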

    Read the article

  • Django: Applying Calculations To A Query Set

    - by TheLizardKing
    I have a QuerySet that I wish to pass to a generic view for pagination:

        links = Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300]

    This is my "hot" page, which lists my 300 latest submissions (10 pages of 30 links each). I now want to sort this QuerySet by the algorithm HackerNews uses:

        (p - 1) / (t + 2)^1.5

    where p is the number of votes minus the submitter's initial vote and t is the age of the submission in hours. Because applying this algorithm over the entire database would be pretty costly, I am content with just the last 300 submissions. My site is unlikely to be the next digg/reddit, so while scalability is a plus, it isn't required. My question is: how do I iterate over my QuerySet and sort it by the above algorithm? For reference, here are my applicable models:

        class Link(models.Model):
            category = models.ForeignKey(Category, blank=False, default=1)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)
            modified = models.DateTimeField(auto_now=True)
            url = models.URLField(max_length=1024, unique=True, verify_exists=True)
            name = models.CharField(max_length=512)

            def __unicode__(self):
                return u'%s (%s)' % (self.name, self.url)

        class Vote(models.Model):
            link = models.ForeignKey(Link)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)

            def __unicode__(self):
                return u'%s vote for %s' % (self.user, self.link)

    Note: I don't have "downvotes", so the mere presence of a Vote row indicates a vote on a particular link by a particular user.
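
    Since 300 rows fit comfortably in memory, the simplest route is probably to let Python do the ordering once the slice has been evaluated; a sketch (total_seconds needs Python 2.7+, and the names come from the models above):

        from datetime import datetime

        def hn_score(link):
            p = link.votes - 1  # votes minus the submitter's initial vote
            t = (datetime.now() - link.created).total_seconds() / 3600.0  # age in hours
            return p / (t + 2) ** 1.5

        hot_links = sorted(links, key=hn_score, reverse=True)  # links: the QuerySet above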

    Read the article

  • How to use R-Tree for plotting large number of map markers on google maps

    - by Eeyore
    After searching SO and multiple articles I haven't found a solution to my problem. What I am trying to achieve is to load 20,000 markers onto Google Maps. An R-tree seems like a good approach, but it's only helpful when searching for points within the visible part of the map. When the map is zoomed out, it will return all of the points and... crash the browser. There is also the problem of re-running the query at the end of every map drag. I would like to know how I can use an R-tree and still achieve all of the above.
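
    One way to keep the R-tree and survive zoomed-out views (a sketch using the Python rtree package; the grid clustering is my assumption, not a library feature): query only the viewport, and collapse the hits into grid cells whenever there are too many to draw.

        from rtree import index

        idx = index.Index()
        for mid, (lng, lat) in enumerate(markers):   # markers: hypothetical (lng, lat) list
            idx.insert(mid, (lng, lat, lng, lat))    # points stored as zero-area boxes

        def markers_for_viewport(bbox, cells=16, limit=500):
            minx, miny, maxx, maxy = bbox
            hits = list(idx.intersection(bbox))
            if len(hits) <= limit:
                return hits                          # few enough: draw them all
            grid = {}                                # otherwise: one marker per grid cell
            for mid in hits:
                lng, lat = markers[mid]
                cx = int((lng - minx) / (maxx - minx) * (cells - 1))
                cy = int((lat - miny) / (maxy - miny) * (cells - 1))
                grid.setdefault((cx, cy), []).append(mid)
            return grid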

    Read the article

  • converting a treebank of vertical trees to s-expressions

    - by Andreas
    I need to preprocess a treebank corpus of sentences with parse trees. The input format is a vertical representation of trees, like so:

        S
        =NP
        ==(DT +def) the
        == (N +ani) man
        =VP
        ==V walks

    ...and I need it like:

        (S (NP (DT the) (N man)) (VP (V walks)))

    I have code that almost does it, but not quite: there's always a missing paren somewhere. Should I use a proper parser, maybe a CFG? The current code is at http://github.com/andreasvc/eodop/blob/master/arbobanko.py and also contains real examples from the treebank.
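
    For reference, a sketch of the conversion with the paren bookkeeping done by tracking depth changes (the feature-stripping regex is an assumption based on the sample above):

        import re

        def to_sexpr(block):
            """Vertical tree (one node per line, depth = leading '='s) -> s-expression."""
            parts, prev = [], -1
            for line in block.strip().splitlines():
                depth, rest = re.match(r'(=*)\s*(.+)', line).groups()
                depth = len(depth)
                feat = re.match(r'\((\S+)[^)]*\)\s*(.*)', rest)  # e.g. '(DT +def) the'
                if feat:
                    label, word = feat.groups()                  # drop '+def' style features
                else:
                    bits = rest.split()
                    label, word = bits[0], ' '.join(bits[1:])
                if depth <= prev:
                    parts.append(')' * (prev - depth + 1))       # close finished subtrees
                parts.append((' (' if parts else '(') + label)
                if word:
                    parts.append(' ' + word)
                prev = depth
            return ''.join(parts) + ')' * (prev + 1)

        print to_sexpr("S\n=NP\n==(DT +def) the\n== (N +ani) man\n=VP\n==V walks")
        # -> (S (NP (DT the) (N man)) (VP (V walks)))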

    Read the article

  • Accessing CSR extension stack in M2Crypto

    - by Charles Duffy
    Howdy! I have a certificate signing request with an extension stack added. When building a certificate based on this request, I would like to be able to access that stack to use in creating the final certificate. However, while M2Crypto.X509.X509 has a number of helpers for accessing extensions (get_ext, get_ext_at and the like), M2Crypto.X509.Request appears to provide only a member for adding extensions, but no way to inspect the extensions already associated with a given object. Am I missing something here?
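
    Not an M2Crypto answer, but if switching libraries is an option: the pyca/cryptography package (which postdates this question) exposes CSR extensions directly; a sketch assuming a PEM-encoded request on disk:

        from cryptography import x509
        from cryptography.hazmat.backends import default_backend

        with open('request.pem', 'rb') as f:
            csr = x509.load_pem_x509_csr(f.read(), default_backend())
        for ext in csr.extensions:  # the full extension stack from the CSR
            print(ext.oid, ext.critical, ext.value)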

    Read the article

  • Inlines in Django Admin

    - by Oli
    I have two models, Order and UserProfile. Each Order has a ForeignKey to UserProfile, associating it with that user. On the Django admin page for each Order, I'd like to display the UserProfile associated with it, for easy processing of information. I have tried inlines:

        class UserInline(admin.TabularInline):
            model = UserProfile

        class ValuationRequestAdmin(admin.ModelAdmin):
            list_display = ('address1', 'address2', 'town', 'date_added')
            list_filter = ('town', 'date_added')
            ordering = ('-date_updated',)
            inlines = [UserInline]

    But it complains that UserProfile "has no ForeignKey to" Order, which it doesn't; it's the other way around. Is there a way to do what I want?
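
    An inline always lives on the model that holds the ForeignKey, so one fix (a sketch with the model names above) is to invert the relationship and show Orders inside the UserProfile admin:

        class OrderInline(admin.TabularInline):
            model = Order  # Order holds the FK, so it can be inlined under UserProfile

        class UserProfileAdmin(admin.ModelAdmin):
            inlines = [OrderInline]

        admin.site.register(UserProfile, UserProfileAdmin)

    If the profile really has to appear on the Order page instead, the usual fallback is a read-only method in list_display or readonly_fields that returns the profile's fields via whatever the FK attribute is named.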

    Read the article

  • how to model a follower stream in appengine?

    - by molicule
    I am trying to design tables to build out a follower relationship. Say I have a stream of 140-char records that have a user, a hashtag and other text. Users follow other users, and can also follow hashtags. I outline my design below, but it has two limitations, and I was wondering if others had smarter ways to accomplish the same goal. The issues are:

    1. The list of followers is copied in for each record.
    2. If a new follower is added or one is removed, all the records have to be updated.

    The code:

        class HashtagFollowers(db.Model):
            """This table contains the followers for each hashtag"""
            hashtag = db.StringProperty()
            followers = db.StringListProperty()

        class UserFollowers(db.Model):
            """This table contains the followers for each user"""
            username = db.StringProperty()
            followers = db.StringListProperty()

        class stream(db.Model):
            """This table contains the data stream"""
            username = db.StringProperty()
            hashtag = db.StringProperty()
            text = db.TextProperty()

            def save(self):
                """On each save, all the followers for each hashtag and user
                are added into another table with this record as the parent"""
                super(stream, self).save()
                hfs = HashtagFollowers.all().filter("hashtag =", self.hashtag).fetch(10)
                for hf in hfs:
                    sh = streamHashtags(parent=self, followers=hf.followers)
                    sh.save()
                ufs = UserFollowers.all().filter("username =", self.username).fetch(10)
                for uf in ufs:
                    uh = streamUsers(parent=self, followers=uf.followers)
                    uh.save()

        class streamHashtags(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

        class streamUsers(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

    Now, to get the stream of followed hashtags:

        indexes = db.GqlQuery("""SELECT __key__ from streamHashtags
                                 where followers = 'myusername'""")
        keys = [k.parent() for k in indexes[offset:numresults]]
        return db.get(keys)

    Is there a smarter way to do this?

    Read the article

  • Which is faster?

    - by kaki
    Is it faster to open one large file and read it completely into a list, or to open several smaller files whose total size equals the large file and load them into lists one by one? Which is faster, and is the difference in time large enough to impact my program? A total time difference of less than 30 seconds is negligible for me.
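
    The honest answer is to measure it on the machine in question; a throwaway benchmark sketch (filenames hypothetical):

        import time

        def read_one_big(path):
            with open(path) as f:
                return f.readlines()

        def read_many_small(paths):
            rows = []
            for p in paths:
                with open(p) as f:
                    rows.extend(f.readlines())
            return rows

        start = time.time(); read_one_big('big.txt')
        print 'one big file: %.2fs' % (time.time() - start)
        start = time.time(); read_many_small(['part1.txt', 'part2.txt', 'part3.txt'])
        print 'many small files: %.2fs' % (time.time() - start)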

    Read the article

  • How to create an HTML world map with GeoDjango?

    - by pierre-guillaume-degans
    The GeoDjango tutorial explains how to insert world borders into a spatial database. I would like to create a world map in HTML from these data, with both map and area tags. Something like that. I just don't know how to retrieve the coordinates of each country (required for the area's coords attribute).

        from world.models import WorldBorders

        for country in WorldBorders.objects.all():
            print u'<area shape="poly" title="%s" alt="%s" coords="%s" />' % (
                country.name, country.name, "???")

    Thanks!
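
    A sketch of one way to fill in the coords (it assumes the tutorial's mpoly MultiPolygonField, takes only the largest polygon's exterior ring, and maps lon/lat onto a hypothetical 800x400 equirectangular image):

        def area_coords(country, width=800, height=400):
            poly = max(country.mpoly, key=lambda p: p.area)   # biggest polygon only
            pairs = []
            for lon, lat in poly.exterior_ring.coords:
                x = (lon + 180.0) / 360.0 * width             # project lon/lat onto
                y = (90.0 - lat) / 180.0 * height             # the image's pixel grid
                pairs.append('%d,%d' % (x, y))
            return ','.join(pairs)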

    Read the article

  • Assigning a material in Blender with a script

    - by Narcolapser
    Question: how do you assign a material to an object in Blender with a script?

    Info: I have a script to import a proprietary model type of mine, basically a star map whose objects consist of a single vertex. In order to make them look like stars and be visible, they are all going to have a halo material assigned to them. I've figured out how to create this material and set its values just fine, but I can't seem to get it assigned. I tried the most obvious thing:

        objectName.setMaterial(materialName)

    but that did nothing. And when I take an object that has a material and call the getMaterial function on it, it returns nothing. There is something I'm missing here; can someone shed some light on it? Thanks. ~TA
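
    For the Blender 2.4x Python API, a sketch of the usual assignment route (names like 'StarHalo' and 'MyStar' are placeholders; the colbits flag tells Blender to use object-level rather than mesh-level materials):

        import Blender
        from Blender import Material

        mat = Material.New('StarHalo')     # the halo material built elsewhere
        ob = Blender.Object.Get('MyStar')  # one of the single-vertex objects
        ob.setMaterials([mat])             # note: takes a *list*, and the name is plural
        ob.colbits = 1                     # bit 0 set: slot 1 comes from the object
        Blender.Redraw()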

    Read the article

  • Images to video - converting to IplImage makes video blue

    - by user891908
    I want to create a video from images using OpenCV. The strange problem is that if I write an image (bmp) to disk and then load it (cv.LoadImage), it renders fine, but when I try to load the image from a StringIO and convert it to an IplImage, the video turns blue. Here's the code:

        import StringIO
        import cv
        from PIL import Image, ImageDraw, ImageFont

        output = StringIO.StringIO()
        FOREGROUND = (0, 0, 0)
        TEXT = 'MY TEXT'
        font_path = 'arial.ttf'
        font = ImageFont.truetype(font_path, 18, encoding='unic')
        text = TEXT.decode('utf-8')
        (width, height) = font.getsize(text)

        # Create a background with a place for text
        w, h = (600, 600)
        contentimage = Image.open('0.jpg')
        background = Image.open('background.bmp')
        x, y = contentimage.size

        # put content onto background
        background.paste(contentimage, ((w - x) / 2, 0))
        draw = ImageDraw.Draw(background)
        draw.text((0, 0), text, font=font, fill=FOREGROUND)
        pi = background
        pi.save(output, "bmp")
        #pi.show()  # shows image in full color
        output.seek(0)
        pi = Image.open(output)
        print pi, pi.format, "%dx%d" % pi.size, pi.mode

        cv_im = cv.CreateImageHeader(pi.size, cv.IPL_DEPTH_8U, 3)
        cv.SetData(cv_im, pi.tostring())
        print pi.size, cv.GetSize(cv_im)

        w = cv.CreateVideoWriter("2.avi", cv.CV_FOURCC('M','J','P','G'), 1,
                                 (cv.GetSize(cv_im)[0], cv.GetSize(cv_im)[1]),
                                 is_color=1)
        for i in range(1, 5):
            cv.WriteFrame(w, cv_im)
        del w
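
    A blue-tinted video is the classic symptom of a channel-order mismatch: PIL hands over RGB bytes while OpenCV assumes BGR, so the red and blue planes swap. A sketch of the one-line fix on the old cv API, placed right after cv.SetData:

        # swap R and B in place so OpenCV's BGR assumption holds
        cv.CvtColor(cv_im, cv_im, cv.CV_RGB2BGR)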

    Read the article

  • How to speed up this code?

    - by kaushik
    I have a very big program, about 600 lines plus, so I can't post the whole thing here, but one particular snippet is taking so much time that it is causing problems. I post that part of the code below; please tell me how to speed up the processing, which parts are likely the reason, and measures to improve them, if this small piece is understandable on its own:

        using_data = {}

        def join_cost(a, b):
            global using_data
            #print a
            #print b
            save_a = []
            save_b = []
            print 1
            #for i in range(len(m)):
            #    if str(m[i][0]) == str(a):
            save_a = database_index[a]
            #for i in range(len(m)):
            #    if str(m[i][0]) == str(b):
            #print 'save_a', save_a
            #print 'save_b', save_b
            print 2
            save_b = database_index[b]
            using_data[save_a[0]] = save_a
            s = str(save_a[1]).replace('phone', 'text')
            s = str(s) + '.pm'
            p = os.path.join("c:/begpython/wavnk/", s)
            x = open(p, 'r')
            print 3
            for i in range(6):
                x.readline()
            k2 = 'a'
            j = 0
            o = []
            while k2 is not '':
                k2 = x.readline()
                k2 = k2.rstrip('\n')
                oj = k2.split(' ')
                o = o + [oj]
                #print o[j]
                j = j + 1
            #print j
            #print o[2][0]
            temp = long(1232332)
            end_time = save_a[4]
            #print end_time
            k = (j - 1)
            for i in range(k):
                diff = float(o[i][0]) - float(end_time)
                if diff < 0:
                    diff = diff * (-1)
                if temp > diff:
                    temp = diff
                    pm_row = i
            #print pm_row
            #print temp
            #print o[pm_row]
            #pm_row = 3
            q = []
            print 4
            l = str(p).replace('.pm', '.mcep')
            z = open(l, 'r')
            for i in range(pm_row):
                z.readline()
            k3 = z.readline()
            k3 = k3.rstrip('\n')
            q = k3.split(' ')
            #print q
            print 5
            s = str(save_b[1]).replace('phone', 'text')
            s = str(s) + '.pm'
            p = os.path.join("c:/begpython/wavnk/", s)
            x = open(p, 'r')
            for i in range(6):
                x.readline()
            k2 = 'a'
            j = 0
            o = []
            while k2 is not '':
                k2 = x.readline()
                k2 = k2.rstrip('\n')
                oj = k2.split(' ')
                o = o + [oj]
                #print o[j]
                j = j + 1
            #print j
            #print o[2][0]
            temp = long(1232332)
            strt_time = save_b[3]
            #print strt_time
            k = (j - 1)
            for i in range(k):
                diff = float(o[i][0]) - float(strt_time)
                if diff < 0:
                    diff = diff * (-1)
                if temp > diff:
                    temp = diff
                    pm_row = i
            #print pm_row
            #print temp
            #print o[pm_row]
            #pm_row = 3
            w = []
            l = str(p).replace('.pm', '.mcep')
            z = open(l, 'r')
            for i in range(pm_row):
                z.readline()
            k3 = z.readline()
            k3 = k3.rstrip('\n')
            w = k3.split(' ')
            #print w
            cost = 0
            for i in range(12):
                #print q[i]
                #print w[i]
                h = float(q[i]) - float(w[i])
                cost = cost + math.pow(h, 2)
            j_cost = math.sqrt(cost)
            #print cost
            return j_cost

        def target_cost(a, b):
            a = (b + 1) * 3
            b = (a + 1) * 2
            t_cost = (a + b) * 5 / 2
            return t_cost

        r1 = 'shht:ra_77'
        r2 = 'grx_18'
        g = []
        nodes = []
        nodes = nodes + [[r1]]
        for i in range(len(y_in_db_format)):
            g = y_in_db_format[i]
            #print g
            #print g[0]
            g.remove(str(g[0]))
            nodes = nodes + [g]
        nodes = nodes + [[r2]]
        print nodes
        print "length of nodes", len(nodes)
        lists = []
        #lists = lists + [r1]
        for i in range(len(nodes)):
            for j in range(len(nodes[i])):
                lists = lists + [nodes[i][j]]
        #lists = lists + [r2]
        print lists
        distance = {}
        for i in range(len(lists)):
            if i == 0:
                distance[str(lists[i])] = 0
            else:
                distance[str(lists[i])] = long(123231223)
        #print distance
        group_dist = []
        infinity = long(123232323)
        for i in range(len(nodes)):
            distances = []
            for j in range(len(nodes[i])):
                #distances = []
                if i == 0:
                    distances = distances + [[nodes[i][j], 0]]
                else:
                    distances = distances + [[nodes[i][j], infinity]]
            group_dist = group_dist + [distances]
            #print distances
        print "group_distances", group_dist
        #print "check", group_dist[0][0][1]
        #costs = {}
        #for i in range(len(lists)):
        #    if i == 0:
        #        costs[str(lists[i])] = 1
        #    else:
        #        costs[str(lists[i])] = get_selfcost(lists[i])
        path = []
        for i in range(len(nodes)):
            mini = []
            if i != (len(nodes) - 1):
                #temp = long(123234324)
                # Now calculate the cost between the current node and each of its neighbours
                for k in range(len(nodes[(i + 1)])):
                    for j in range(len(nodes[i])):
                        current = nodes[i][j]
                        #print "current_node", current
                        j_distance = join_cost(current, nodes[i + 1][k])
                        #t_distance = target_cost(current, nodes[i + 1][k])
                        t_distance = 34
                        #print distance
                        #print "distance between current and neighbours", distance
                        total_distance = (.5 * (float(group_dist[i][j][1]) + float(j_distance)) + .5 * (float(t_distance)))
                        #print "total distance between the initial nodes and current neighbour", total_distance
                        if int(group_dist[i + 1][k][1]) > int(total_distance):
                            group_dist[i + 1][k][1] = total_distance
                            #print "updated distance", group_dist[i+1][k][1]
                            a = current
                            #print "the neighbour", nodes[i+1][k], "updated the value", a
                            mini = mini + [[str(nodes[i + 1][k]), a]]
            print mini

    Read the article

  • How to speed up this code?

    - by kaushik
    In my program I have a method that needs about 4 files to be open each time it is called, as it takes some data from them; all the data from the files I have been storing in lists for manipulation. I need to call this method approximately 10,000 times, which is making my program very slow. Is there a better way of handling these files? And is storing the whole data in lists time-consuming; what are better alternatives to a list? I could give some code, but my previous question was closed because the snippet only confused everyone: it is part of a big program and would need to be explained completely to be understood. So I am not giving any code here; please suggest approaches, treating this as a general question. Thanks in advance.
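
    Without code, only the general cure can be sketched: read each file once and cache the parsed rows, so 10,000 calls cost 4 file reads instead of 40,000. A minimal memoization sketch:

        _file_cache = {}

        def load_rows(path):
            """Parse `path` once; every later call is a dictionary lookup."""
            if path not in _file_cache:
                with open(path) as f:
                    _file_cache[path] = [line.rstrip('\n').split(' ') for line in f]
            return _file_cache[path]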

    Read the article

  • remove border of a gtk.button

    - by spanctus
    Hi, I want to remove the border of a gtk.Button, but I don't know how to do it. I tried:

        button = gtk.Button()
        button.set_style("inner-border", 0)

    but I get an error: the property doesn't exist. I also tried to create a new gtk.Style and use it for the button, but got the same error. Anyone have an idea? Thanks.
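
    One common way to get a borderless-looking button (a sketch, not from the thread; set_style expects a gtk.Style object, which is why the string call above fails):

        import gtk

        button = gtk.Button('x')
        button.set_relief(gtk.RELIEF_NONE)  # drops the bevelled border drawing
        button.set_focus_on_click(False)    # optional: no focus outline on click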

    Read the article

  • build an API service in Django

    - by Peter
    Hi all, I want to build an API service using Django. A basic workflow goes like this:

    1. An HTTP request goes to http://mycompany.com/create.py?id=001&callback=http://callback.com, asking the server to create a folder named 001 (if the folder does not already exist, it will be created).

    2. The client gets a response immediately, in XML format. It looks like:

        <?xml version="1.0" encoding="UTF-8"?>
        <response>
          <status>
            <statusCode>0</statusCode>
            <message>Success</message>
          </status>
          <group id="001"/>
        </response>

    3. Finally, the server does its job (i.e. creating the folder). After it is done, the server makes a callback to the URL provided.

    Currently I use

        return render_to_response('create.xml',
                                  {'statusCode': statusCode,
                                   'statusMessage': statusMessage,
                                   'groupId': groupId},
                                  mimetype='text/xml')

    to send the XML response back, with an XML template that has statusCode, statusMessage and groupId placeholders:

        <?xml version="1.0" encoding="UTF-8"?>
        <response>
          <status>
            <statusCode>{{ statusCode }}</statusCode>
            <message>{{ statusMessage }}</message>
          </status>
          {% if not statusCode %}
          <group id="{{ groupId }}"/>
          {% endif %}
        </response>

    But this way I have to put step 3 before step 2, because step 3 would never be executed after the return statement. Can somebody give me some suggestions on how to do this? Thanks.
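
    One way to keep the steps in order (a sketch; make_group_folder is a hypothetical helper, and a real task queue would be more robust than a bare thread): return the response first and push the slow work plus the callback into a background thread.

        import threading
        import urllib2
        from django.shortcuts import render_to_response

        def work_then_callback(group_id, callback_url):
            make_group_folder(group_id)   # hypothetical helper that does step 3
            urllib2.urlopen(callback_url) # final callback once the work is done

        def create(request):
            group_id = request.GET['id']
            callback_url = request.GET['callback']
            response = render_to_response('create.xml',
                                          {'statusCode': 0,
                                           'statusMessage': 'Success',
                                           'groupId': group_id},
                                          mimetype='text/xml')
            threading.Thread(target=work_then_callback,
                             args=(group_id, callback_url)).start()
            return response               # step 2 returns while step 3 runs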

    Read the article

  • methods of metaclasses on class instances.

    - by Stefano Borini
    I was wondering what happens to methods declared on a metaclass. I expected that if you declare a method on a metaclass, it would end up being a classmethod; however, the behavior is different. Example:

        >>> class A(object):
        ...     @classmethod
        ...     def foo(cls):
        ...         print "foo"
        ...
        >>> a = A()
        >>> a.foo()
        foo
        >>> A.foo()
        foo

    However, if I define a metaclass and give it a method foo, it seems to work for the class, but not for the instance:

        >>> class Meta(type):
        ...     def foo(self):
        ...         print "foo"
        ...
        >>> class A(object):
        ...     __metaclass__ = Meta
        ...     def __init__(self):
        ...         print "hello"
        ...
        >>> a = A()
        hello
        >>> A.foo()
        foo
        >>> a.foo()
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        AttributeError: 'A' object has no attribute 'foo'

    What's going on here, exactly?

    edit: bumping the question

    Read the article

  • Does this introduce security vulnerabilities?

    - by mcmt
    I don't think I'm missing anything. Then again, I'm kind of a newbie.

        import urllib
        from os import path

        def GET(self, filename):
            name = urllib.unquote(filename)
            full = path.abspath(path.join(STATIC_PATH, filename))
            # Make sure the request is not tricksy and tries to get out of
            # the directory, e.g. filename = "../.ssh/id_rsa". GET OUTTA HERE
            assert full[:len(STATIC_PATH)] == STATIC_PATH, "bad path"
            return open(full).read()
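
    Two hardening details worth knowing about prefix checks like this (a sketch, not a claim that the code above is exploitable): realpath defeats symlink games, and appending os.sep stops a sibling like '/static-evil' from passing a '/static' prefix test.

        import os

        def is_inside(full, root):
            full, root = os.path.realpath(full), os.path.realpath(root)
            return full == root or full.startswith(root + os.sep)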

    Read the article

  • How can I load a SQL "dump" file into SQLAlchemy

    - by JudoWill
    I have a large SQL dump file with multiple CREATE TABLE and INSERT INTO statements. Is there any way to load these all into an SQLAlchemy sqlite database at once? I plan to use the introspected ORM from sqlsoup after I've created the tables. However, when I use the engine.execute() method it complains:

        sqlite3.Warning: You can only execute one statement at a time.

    Is there a way to work around this issue? Perhaps splitting the file with a regexp or some kind of parser, but I don't know enough SQL to cover all of the cases in the regexp. Any help would be greatly appreciated. Will

    EDIT: Since this seems important... the dump file was created with a MySQL database, so it has quite a few commands/syntax that sqlite3 does not understand correctly.
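
    For the sqlite side specifically, the stdlib already has a multi-statement entry point; a sketch that bypasses engine.execute() for the initial load (the MySQL-isms in the dump still have to be translated by hand first):

        import sqlite3

        conn = sqlite3.connect('my_database.db')  # same file SQLAlchemy will point at
        with open('dump.sql') as f:
            conn.executescript(f.read())          # unlike execute(), allows many statements
        conn.commit()
        conn.close()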

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with). The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake?

        class Dataset:
            individuals = []  # Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys

            def field_set(self):
                # Returns a list of the fields in individuals[] that can be used to
                # split the data (i.e. have more than one value amongst the individuals)

            def classified(self, predicted_value):
                # Returns True if all the individuals have the same value for predicted_value

            def fields_exhausted(self, predicted_value):
                # Returns True if all the individuals are identical except for predicted_value

            def lowest_entropy_value(self, predicted_value):
                # Returns the field that will reduce entropy the most
                # (http://en.wikipedia.org/wiki/Entropy_%28information_theory%29)

            def __init__(self, individuals=[]):

    and

        class Node:
            ds = Dataset()    # The data that is associated with this Node
            links = []        # List of Nodes, the offspring Nodes of this node
            level = 0         # Tree depth of this Node
            split_value = ''  # Field used to split out this Node from the parent node
            node_value = ''   # Value used to split out this Node from the parent Node

            def split_dataset(self, split_value):
                fields = []    # List of options for split_value amongst the individuals
                datasets = {}  # Dictionary of Datasets, each one with a value from fields[] as its key
                for field in self.ds.field_set()[split_value]:  # Populates the keys of fields[]
                    fields.append(field)
                    datasets[field] = Dataset()
                for i in self.ds.individuals:  # Adds individuals to the datasets.dataset that matches their result for split_value
                    datasets[i[split_value]].individuals.append(i)  # <--- Causes an infinite loop on the second hit
                for field in fields:  # Creates subnodes from each of the datasets.Dataset options
                    self.add_subnode(datasets[field], split_value, field)

            def add_subnode(self, dataset, split_value='', node_value=''):

            def __init__(self, level, dataset=Dataset()):

    My initialisation code is currently:

        if __name__ == '__main__':
            filename = sys.argv[1]       # Takes in a CSV file
            predicted_value = "# class"  # Identifies the field from the CSV file that should be predicted
            base_dataset = parse_csv(filename)                # Turns the CSV file into a list of lists
            parsed_dataset = individual_list(base_dataset)    # Turns the list of lists into a list of dictionaries
            root = Node(0, Dataset(parsed_dataset))           # Creates a root node, passing it the full dataset
            root.split_dataset(root.ds.lowest_entropy_value(predicted_value))  # Performs the first split, creating multiple subnodes
            n = root.links[0]
            n.split_dataset(n.ds.lowest_entropy_value(predicted_value))        # Attempts to split the first subnode
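
    One likely culprit, given the class bodies above (an inference, not a diagnosis from running the code): individuals = [] and links = [] are class attributes, so every Dataset shares one list, and appending to datasets[...].individuals also grows self.ds.individuals mid-iteration. A minimal demonstration of the sharing:

        class Dataset:
            individuals = []          # class attribute: one list shared by all instances

        a, b = Dataset(), Dataset()
        a.individuals.append('row')
        print(b.individuals)          # ['row'] -- b sees a's append

    If that is the cause, assigning self.individuals = list(individuals) inside __init__ (and likewise for links) gives each instance its own list; note that the mutable default argument individuals=[] has the same sharing problem.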

    Read the article
