Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • How should I pitch moving to an agile/iterative development cycle with mandated 3-week deployments?

    - by Wayne M
    I'm part of a small team of four, and I'm the unofficial team lead (lead in all but title, basically). We've largely been a "cowboy" environment, with no architecture or structure and everyone doing their own thing. Previously, our production deployments happened every few months with no set schedule, as things were added to or removed from each developer's task list. Recently, our CIO (semi-technical, but not really a programmer) decided we will deploy every three weeks. Because of this, I immediately thought that adopting an iterative development process (not necessarily full-blown Agile/XP, which would be a huge thing to convince everyone else of) would go a long way towards managing expectations properly, so there isn't this far-fetched idea that any new feature will be done in three weeks.

    IMO the biggest hurdle is that we don't have ANY kind of development approach in place right now (and no CI or automated tests whatsoever, among other things). We don't even use waterfall; we use "tell developer X to do a task, and expect him to do everything and get it done". Are there any pointers that would help me start easing us towards an iterative approach, and (a) get the other developers on board with it, and (b) get management to understand how iterative development works?

    So far my idea is to set up a CI server and automate our build process (it currently takes 10-20 minutes simply to build the application and put it on our development server), since pushing tests and/or TDD would be met with a LOT of resistance at this point, and to consistently break larger projects into smaller chunks that can be done iteratively within a three-week cycle. My only concern is that, unless I'm misunderstanding, an agile/iterative process may or may not release the software each cycle (depending on the project scope you might have "working" software after three weeks, but not enough of it works for users to make use of it), while the expectation here from management seems to be that there will always be something "ready to go" in three weeks, and that disconnect could cause problems.

    On that note, is there any literature that explains the agile/iterative approach from a business standpoint? Everything I've seen focuses on the developers and how to do it, but nothing seems to describe it from the perspective of actually getting buy-in from the businesspeople.

    Read the article

  • Are More Comments Better in High-Turnover Environments?

    - by joshin4colours
    I was talking with a colleague today. We work on code for two different projects. In my case, I'm the only person working on my code; in her case, multiple people work on the same codebase, including co-op students who come and go fairly regularly (every 8-12 months). She said that she is liberal with her comments, putting them all over the place. Her reasoning is that the comments help her remember where things are and what things do, since much of the code wasn't written by her and could be changed by someone other than her. Meanwhile, I try to minimize the comments in my code, adding them only where there is an unobvious workaround or bug. However, I have a better understanding of my code overall, and more direct control over it. My opinion is that comments should be minimal and the code should tell most of the story, but her reasoning makes sense too. Are there any flaws in her reasoning? Liberal commenting may clutter the code, but it could ultimately be quite helpful when many people work on it over the short to medium term.

    Read the article

  • Successful technical communities except for open-source?

    - by Joshua Fox
    Have you ever seen a successful technical community -- e.g. a user group or industry organization? I am asking about a group of software engineers who get together F2F (or maybe online) and discuss technical and industry issues with deep zeal and interest -- a place where meaningful connections are made. Here are the only examples I have ever seen:

    - Open source
    - Maybe the Silicon Valley Java Users' Group
    - The Homebrew Computer Club in the '70s

    This sort of thing does exist in academia. Of course, there are lots of conferences and attempts at user groups. However, almost all committed, serious software engineers, when asked about this, say "I don't have the time", which means that the organizations are not worthwhile to the best in our profession. Has anyone seen an organization with an ongoing spirit of enthusiasm from top software engineers?

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally -- changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python.

    My question is: what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing a similar data structure for each version/template -- potentially hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as small as moving a single line of data one row down, or splitting what used to be one data field into two.

    A slightly more elegant solution I have in mind would be to write schemas for all the variants I can find of pre-defined sections in the source files, and then write a function to match a particular range of files with the set of variants that fits those files. This is because, more often than not, most of a file will remain consistent over a long period, marred only by one or two errant sections -- but which section is inconsistent varies within the period. For example, say a file has four sections with three data fields each, represented by four Python dictionaries with three keys each:

    - Files 7000-7250: sections 1-3 are consistent, but section 4 is shifted one row down.
    - Files 7251-7500: sections 1-3 are consistent, section 4 is one row down, but a section 5 appears.
    - Files 7501-7635: sections 1 and 3 are consistent, but section 2 has five data fields instead of three, section 5 disappears, and section 4 is still shifted down one row.
    - Files 7636-7800: section 1 is consistent, section 4 shifts back up, section 2 returns to three fields, but section 3 is removed entirely.
    - Files 7800-8000: everything is in order.

    The proposed function (sketched below) would take the file number and match it to a dictionary representing the data mappings for the different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three-field and five-field members; and so on. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database.

    Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for to see what other solutions might exist, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
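
    A minimal sketch of that range-to-variant matching in Python. Every name here (SECTION_VARIANTS, RANGES, the field names and row positions) is illustrative rather than taken from the actual workbooks:

        # Per-section layout variants; names and positions are made up.
        SECTION_VARIANTS = {
            "section_4": {
                "normal":  {"row": 10, "fields": ["volts", "amps", "ohms"]},
                "shifted": {"row": 11, "fields": ["volts", "amps", "ohms"]},
            },
        }

        # (first_file, last_file, {section: variant}); last_file is inclusive.
        RANGES = [
            (7000, 7500, {"section_4": "shifted"}),
            (7501, 7635, {"section_4": "shifted"}),
            (7636, 8000, {}),  # everything back to normal
        ]

        def variant_for(file_number, section):
            for first, last, overrides in RANGES:
                if first <= file_number <= last:
                    return overrides.get(section, "normal")
            return "normal"

        def extract_section(rows, file_number, section):
            layout = SECTION_VARIANTS[section][variant_for(file_number, section)]
            return dict(zip(layout["fields"], rows[layout["row"]]))

        # rows: one spreadsheet parsed into a list of lists (e.g. via openpyxl).
        rows = [[None] * 3 for _ in range(12)]
        rows[11] = [3.3, 0.5, 6.6]
        print(extract_section(rows, 7100, "section_4"))
        # {'volts': 3.3, 'amps': 0.5, 'ohms': 6.6}

    The point of the shape above is that adding a newly discovered variant costs one dictionary entry and one range line, rather than a whole new schema.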

    Read the article

  • Server Side Developer Prerequisites

    - by Jking
    I am new to server-side development and am currently learning node.js. What sort of networking information should I be familiar with to allow for a smooth learning curve with server-side development? Could anyone provide resources pertaining to the information required to get into server programming? To give you a better idea of my standpoint:

    - I do not know how a server interacts with a database. [Q: How does a NoSQL database, or a database in general, communicate with a server?]
    - I am unsure of how a web stack works. [Q: I have heard of LAMP but do not know how Apache, MySQL, and PHP interact -- hopefully this applies to other stacks as well. How do the components of a stack work together? Also, is a MEAN stack an alternative, or is it completely irrelevant to this?]
    - I have trivial (and extremely patchy) knowledge of internet protocols. [Q: What resources are beneficial when learning about networking, and how much/what knowledge should I acquire to program on the server side?]
    - I am unsure of what I am unsure of concerning the networking knowledge necessary to start development.

    Information on how the client-server model works would be greatly appreciated (a minimal sketch follows below).
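
    On that last point, the whole client-server model fits in a few lines of standard-library code: one side binds to an address and waits, the other connects and sends a request. This is only a sketch (Python rather than node.js for brevity, and the port number is arbitrary), but it is the same request/response shape that HTTP servers and database drivers build on:

        import socket
        import threading
        import time

        def server():
            # The "server" side: bind to an address, wait, answer.
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.bind(("127.0.0.1", 5050))   # port number is arbitrary
                srv.listen(1)
                conn, _addr = srv.accept()      # block until a client connects
                with conn:
                    data = conn.recv(1024)            # read the request bytes
                    conn.sendall(b"echo: " + data)    # write the response

        threading.Thread(target=server, daemon=True).start()
        time.sleep(0.2)  # crude: give the server a moment to start listening

        # The "client" side: connect, send a request, read the reply.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect(("127.0.0.1", 5050))
            cli.sendall(b"hello")
            print(cli.recv(1024))  # b'echo: hello'

    A database driver is a client in exactly this sense: it opens a socket to the database server and speaks that database's wire protocol over it.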

    Read the article

  • Why was Objective-C popularity so sudden on TIOBE index?

    - by l46kok
    I'd like to ask a question that is pretty similar to the one asked here, but for Objective-C. According to TIOBE rankings, the rise in popularity of Objective-C is unprecedented. This is obviously tied to the popularity of Apple products, but I feel that might be a hasty conclusion, since it doesn't really explain the stagnant growth of Java (1. there are far more Android devices distributed worldwide, 2. Java is used on virtually every platform one can imagine). Now, I haven't programmed in Objective-C at all, but I'd like to ask if there are any unique features or advantages of the language itself compared to other prevalent languages such as C++, Java, C#, or Python. What are some other factors that contributed to the rise of Objective-C in this short span of time?

    Read the article

  • Environment naming standards in software development?

    - by Marcus_33
    My project is currently suffering from environment naming issues. Different people have different assumptions as to what environments should be named or what the names designate, and it's causing confusion when discussing them. I've done a bit of research and I haven't found any standards out there. The terms include "Local", "Sand", "Dev", "Test", "User", "QA", "Staging" and "Prod" (plus a few more that different people have asked about). I'm not looking for just opinions, though if there's one out there that "everyone" has I'll take it -- I'm trying to find definitions advanced by some sort of authority, even if it's unofficial. Here are the environments we currently use:

    - The environment on each developer's PC
    - A shared environment where developers directly upload code to self-test
    - A shared environment where standards and functionality are tested by QA people
    - A shared environment where completed and QA-checked code is approved by project requesters
    - An environment that mirrors the final environment, as a final check and to prepare for deployment
    - The final environment where the code is in use

    I know what I'd call them, but is there some sort of standard on this? Thanks in advance.

    Read the article

  • How can we distribute a client app to other non-US businesses?

    - by Simon
    I'm working on an app which acts as a client for our web service. We sell this service to businesses, and we want to distribute the app to their employees for free. The app will be customised for each client. If we were in the US, my understanding is that we'd ask them to enrol in the volume purchasing program, and submit a version of our app for each business, for enterprise distribution at the free price point. However, the businesses aren't all in the US, so they can't enrol in the VPP. They have thousands of employees, so promo codes won't be sufficient. What are our alternatives?

    Read the article

  • Is there a best practice / standard approach to a free trial for a web app

    - by wobbily_col
    I have an idea for a web app, and would be interested in implementing it and offering a free trial of, say, 5 uses before asking people to sign up. I can think of numerous ways of doing this (off the top of my head: using cookies, logging IP addresses, limiting functionality). Is there a standard approach to this? Are there best practices? Are there any good tutorials on this? (I would prefer not to go the limited-functionality route, as it would not show what the app is capable of.)
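
    For what it's worth, a minimal server-side sketch of the use-counting variant. The hard part, assumed here, is the identifier you key on (an account id, a signed cookie value, etc.); IP addresses alone are easy to rotate and are shared behind NATs:

        TRIAL_LIMIT = 5
        _uses = {}  # identifier -> trial uses so far; use a real datastore in practice

        def consume_trial_use(identifier):
            """Return True if this use is allowed, False once the trial is spent."""
            used = _uses.get(identifier, 0)
            if used >= TRIAL_LIMIT:
                return False
            _uses[identifier] = used + 1
            return True

        for attempt in range(7):
            print(attempt + 1, consume_trial_use("visitor-123"))  # True x5, then False

    Counting on the server rather than in the cookie itself matters: a client-side counter can simply be deleted to reset the trial.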

    Read the article

  • Difference between a pseudo code and algorithm?

    - by Vamsi Emani
    Technically, is there a difference between these two words, or can we use them interchangeably? Both of them more or less describe the logical sequence of steps followed in solving a problem, don't they? So why do we actually use two such words if they are meant to mean the same thing? Or, in case they aren't synonymous, what is it that differentiates them? In what contexts are we supposed to use the word "pseudocode" vs the word "algorithm"? Thanks.
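
    One way to see the distinction, sketched with Euclid's method: the algorithm is the underlying idea, pseudocode is an informal notation for stating it, and code is one executable rendering of it (Python here, purely for illustration):

        # Pseudocode (an informal notation for the idea):
        #
        #     while b is not zero:
        #         replace (a, b) with (b, a mod b)
        #     the answer is a
        #
        # Code (one concrete, executable rendering of the same idea):
        def gcd(a, b):
            while b != 0:
                a, b = b, a % b
            return a

        print(gcd(48, 18))  # 6

    The same algorithm could be written in many pseudocode styles and many languages; the idea stays constant while the notation varies.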

    Read the article

  • When are Getters and Setters Justified

    - by Winston Ewert
    Getters and setters are often criticized as being not proper OO. On the other hand, most OO code I've seen has extensive getters and setters. When are getters and setters justified? Do you try to avoid using them? Are they overused in general? If your favorite language has properties (mine does), then such things are also considered getters and setters for this question. They are the same thing from an OO methodology perspective; they just have nicer syntax.

    Sources for getter/setter criticism (some taken from comments to give them better visibility):

    http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html
    http://typicalprogrammer.com/?p=23
    http://c2.com/cgi/wiki?AccessorsAreEvil
    http://www.darronschall.com/weblog/2005/03/no-brain-getter-and-setters.cfm
    http://www.adam-bien.com/roller/abien/entry/encapsulation_violation_with_getters_and

    To state the criticism simply: getters and setters allow you to manipulate the internal state of objects from outside the object. This violates encapsulation; only the object itself should care about its internal state.

    Procedural version of the code:

        struct Fridge {
            int cheese;
        }

        void go_shopping(Fridge fridge) {
            fridge.cheese += 5;
        }

    Mutator version of the code:

        class Fridge {
            int cheese;
            void set_cheese(int _cheese) { cheese = _cheese; }
            int get_cheese() { return cheese; }
        }

        void go_shopping(Fridge fridge) {
            fridge.set_cheese(fridge.get_cheese() + 5);
        }

    The getters and setters made the code much more complicated without affording proper encapsulation. Because the internal state is still accessible to other objects, we don't gain a whole lot by adding them.

    The question has been previously discussed on Stack Overflow:

    http://stackoverflow.com/questions/565095/java-are-getters-and-setters-evil
    http://stackoverflow.com/questions/996179

    Read the article

  • Is shipping a Clojure desktop app realistic?

    - by Cedric Martin
    I'm currently shipping a desktop Java application. It is a plain old Java 5 Java/Swing app, and so far everything has worked nicely. Java 5 was targeted because some users were on OS X versions/computers that will never have Java 6 (we may lift this limitation soon, switch to a newer Java, and simply abandon my users stuck with Java 5). I'm quickly getting up to speed with Clojure, but I haven't really done lots of Clojure-to-Java or Java-to-Clojure yet, and I was wondering if it is realistic to ship a Clojure desktop application instead of a Java application? The application I'm shipping is currently about 12 MB with all the .jars, so adding Clojure doesn't seem to be too much of an issue. My plan would be to have Clojure call Java APIs: my application is already divided into several independent jars. If I understand correctly, calling Clojure from Java is harder than calling Java code from Clojure, which is why I'd basically rewrite all the UI (part of the UI, mixing Swing components and self-made BufferedImages, needs to be rewritten anyway due to the rise of retina displays) and do all the 'wiring' from Clojure. So that's the problem I'm facing: is it realistic to ship a Clojure desktop app? (It certainly doesn't seem to be very widespread, but then shipping plain Java desktop apps isn't that common either, and I'm doing it anyway.) Technically, what would need to be done, compared to shipping a Java app?

    Read the article

  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain the data history, I always read the values with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 when the data is edited (so that I keep the previous data). But the problem is: once I update it to 0, I can't insert a row with the same value in primaryKeyColumn, since the column is validated for primary-key uniqueness. How can I manage this kind of validation? What mistake did I make in this design?

        primaryKeyColumn  ValueColumn  StatusColumn
        ----------------  -----------  ------------
        2                 Name1        1
        3                 Name2        1
        4                 Name3        0
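
    One common fix, sketched rather than prescribed: give the table a surrogate primary key for row identity, demote the business key to an ordinary column, and enforce uniqueness only among active rows. Below is a minimal illustration using SQLite's partial unique index (table and column names are made up); other databases offer filtered indexes or a composite key on (businessKey, version) to the same effect:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE items (
                rowId       INTEGER PRIMARY KEY AUTOINCREMENT, -- surrogate key
                businessKey INTEGER NOT NULL,                  -- old "primaryKeyColumn"
                value       TEXT NOT NULL,
                status      INTEGER NOT NULL DEFAULT 1         -- 1 = current, 0 = history
            );
            -- Uniqueness is enforced only among current rows:
            CREATE UNIQUE INDEX activeKey ON items(businessKey) WHERE status = 1;
        """)

        def update_value(business_key, new_value):
            # Retire the current row, then insert the replacement under the same key.
            db.execute("UPDATE items SET status = 0 WHERE businessKey = ? AND status = 1",
                       (business_key,))
            db.execute("INSERT INTO items (businessKey, value) VALUES (?, ?)",
                       (business_key, new_value))

        db.execute("INSERT INTO items (businessKey, value) VALUES (2, 'Name1')")
        update_value(2, "Name1-edited")
        print(db.execute("SELECT businessKey, value, status FROM items").fetchall())
        # [(2, 'Name1', 0), (2, 'Name1-edited', 1)]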

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    This might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief explanation: My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient. (Sort of like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.

    Long explanation: Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will get boolean flags for each of those tags (depending on whether the user tagged the image while it was open in the viewer). However, prior to being viewed in the viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically mark all images as "dirty" with respect to the new tag).

    This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000).

    I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs, and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices:

    Choice 1: Individual meta-data vs. centralized meta-data

    - Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images.
    - Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals, as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient
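
    For what it's worth, an embedded database gives you a third option combining the two: one centralized store with cheap, durable per-change writes. A minimal sketch with SQLite (all names illustrative), where "dirty" is simply the absence of an (image, tag) row, so introducing a new tag automatically leaves every image dirty for it with no extra writes:

        import sqlite3

        db = sqlite3.connect("library.db")
        db.executescript("""
            CREATE TABLE IF NOT EXISTS images (id INTEGER PRIMARY KEY, path TEXT UNIQUE);
            CREATE TABLE IF NOT EXISTS tags   (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE IF NOT EXISTS image_tags (
                image_id INTEGER REFERENCES images(id),
                tag_id   INTEGER REFERENCES tags(id),
                present  INTEGER NOT NULL,        -- 1 = has the tag, 0 = does not
                PRIMARY KEY (image_id, tag_id)
            );
        """)

        def set_tag(image_id, tag_id, present):
            # One small transaction per tagging action: a crash loses at most
            # the action in flight, not the whole session.
            with db:
                db.execute("INSERT OR REPLACE INTO image_tags VALUES (?, ?, ?)",
                           (image_id, tag_id, int(present)))

        def dirty_images(tag_id):
            # "Dirty" = no decision recorded yet for this tag.
            return db.execute("""
                SELECT i.id FROM images i
                WHERE NOT EXISTS (SELECT 1 FROM image_tags t
                                  WHERE t.image_id = i.id AND t.tag_id = ?)
            """, (tag_id,)).fetchall()

    At 10,000 images these indexed lookups stay fast, and the transactional writes cover the crash-safety requirement without interval-based bulk saves.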

    Read the article

  • Should one generally develop a client library for REST services to help prevent API breakages?

    - by BestPractices
    We have a project where the UI code will be developed by the same team, but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer lives in a different repository, and each can follow its own release cycle. I'm trying to come up with a process that will prevent/reduce breaking changes in the services layer from the perspective of the UI layer. I've thought of writing integration tests at the UI layer level that we'll run whenever we build the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos); if there are failures, then something in the services layer broke and the commit is not accepted. Would it also be a good idea (is it a best practice?) to have the developer of the services layer create and maintain a client library for the REST service that lives in the UI layer, and that they update whenever there is a breaking change in their service API? Conceivably, we would then have the advantage of a statically-typed API that the UI code builds against. If the client library API changes, then the UI code won't compile (so we'll know sooner that there was a breaking change). I'd also still run the integration tests upon building the UI or services layer to further validate that the integration between the UI and the service(s) still works.
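
    A sketch of what such a client library might look like on the Python side; the endpoint, field names, and the use of the third-party requests package are all assumptions. Since Python is dynamically typed you don't get the compile-time break described above, but you do get a single point of change, and type hints plus a checker such as mypy can approximate the static check:

        from dataclasses import dataclass
        import requests  # third-party HTTP client, assumed available

        BASE_URL = "https://services.example.com/api/v1"  # hypothetical

        @dataclass
        class User:
            id: int
            name: str

        def get_user(user_id: int) -> User:
            resp = requests.get(f"{BASE_URL}/users/{user_id}", timeout=10)
            resp.raise_for_status()
            body = resp.json()
            # The JSON-to-object mapping lives in exactly one place, so a
            # renamed field breaks here rather than in scattered UI call sites.
            return User(id=body["id"], name=body["name"])

    UI code then calls get_user() instead of hand-building URLs, and the service team's integration tests exercise this module first.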

    Read the article

  • How can we make agile enjoyable for developers that like to personally, independently own large chunks from start to finish

    - by Kris
    We’re roughly midway through our transition from waterfall to agile using scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level.

    The basic issue is this: some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction, shared responsibility for work, and delivery of working functionality within shorter intervals, the team evolves such that the entire team knocks that difficult problem over. Many people find this to be a positive change, but someone who loves to take a problem and own it independently from start to finish loses the opportunity for work like that.

    This is not an issue with people being open to change. Certainly we’ve seen a few people that don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change; they make an effort, they see how the rest of the team is changing, and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in work like they used to.

    I’m sure we can’t be the only place that has bumped up against this. How have others approached it? If you’re a developer who is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?

    Read the article

  • C# Interview Preparation - References?

    - by Kanini
    This is a specific question relating to C#; however, it can be extrapolated to other languages too. While preparing for an interview as a C# developer (ASP.NET or WinForms or ...), what would be the typical reference material to look at? Are there any good books/interview question collections one should read to be better prepared? This is just to know the different scenarios. For example, I might write SQL stored procedures and queries every day, but still stumble when suddenly asked: "Given an Employee table with the following columns -- EmployeeId, EmployeeName, ManagerId -- write a SQL query which will get me the name of each employee and their manager's name." NOTE: I am not asking for a question bank so that I can learn by rote what the questions are and reproduce them (which, obviously, will NOT work!)
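
    For the sample question above, one standard answer is a self-join (a LEFT JOIN, so employees without a manager still appear). Here it is run against a throwaway SQLite table from Python, just to show it working; the sample rows are made up:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE Employee (EmployeeId   INTEGER PRIMARY KEY,
                                   EmployeeName TEXT,
                                   ManagerId    INTEGER);
            INSERT INTO Employee VALUES (1, 'Alice', NULL),
                                        (2, 'Bob',   1),
                                        (3, 'Carol', 1);
        """)

        rows = db.execute("""
            SELECT e.EmployeeName, m.EmployeeName AS ManagerName
            FROM Employee e
            LEFT JOIN Employee m ON e.ManagerId = m.EmployeeId
        """).fetchall()
        print(rows)  # [('Alice', None), ('Bob', 'Alice'), ('Carol', 'Alice')]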

    Read the article

  • What Are Some Advantages/Disadvantages of Using C over Assembly?

    - by Daniel
    I'm currently studying engineering in telecommunications and electronics, and we have migrated from assembler to C in microprocessor programming. I have doubts that this is a good idea. What are some advantages and disadvantages of C compared to assembly? The advantages/disadvantages I see are:

    Advantages:
    - C syntax is a lot easier to learn than assembler syntax.
    - C is easier to use for writing more complex programs.
    - Learning C is somehow more productive than learning assembler, because there is more development material and tooling around C than around assembler.

    Disadvantages:
    - Assembler is a lower-level programming language than C, which makes it good for programming directly against the hardware.
    - It is a lot more flexible, allowing you to work with memory, interrupts, micro-registers, etc.

    Read the article

  • How can I become a better Javascript programmer?

    - by Elliot Bonneville
    I've been programming with JavaScript for a few years now, and I guess I'm okay at it. I can solve pretty much any problem I come across, and while my solutions may not be that great, they work. However, I want to become a better JavaScript programmer. I'd like to learn the best practices, the tricks of the trade, the things to avoid, and anything else I should know, so that my code will be 100% optimized and as readable as possible. How do I do that? I realize this question has been loosely asked here before, but the OP was something of a beginner. I want to get into the more advanced side of JavaScript programming with this question. Is this possible, or am I just being way too impatient? Do I just need to spend loads of time programming?

    Read the article

  • How is time calculation performed by a computer?

    - by Jorge Mendoza
    I need to add a certain feature to a module in a given project regarding time calculation. For this specific case I'm using Java and reading through the documentation of the Date class I found out the time is calculated in milliseconds starting from January 1, 1970, 00:00:00 GMT. I think it's safe to assume there is a similar "starting date" in other languages so I guess the specific implementation in Java doesn't matter. How is the time calculation performed by the computer? How does it know exactly how many milliseconds have passed from that given "starting date and time" to the current date and time?
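
    In other languages the convention is indeed the same Unix epoch. Under the hood, the operating system keeps the count: on a typical PC it is seeded from the battery-backed real-time clock at boot, advanced by hardware timer interrupts, and usually corrected over the network via NTP; a language's "current time" call just reads that counter. A quick illustration in Python (chosen purely for brevity):

        import time
        from datetime import datetime, timezone

        millis = int(time.time() * 1000)  # milliseconds since the epoch
        print(millis)
        print(datetime.fromtimestamp(millis / 1000, tz=timezone.utc))  # now, in UTC
        print(datetime(1970, 1, 1, tzinfo=timezone.utc))  # the "starting date" itself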

    Read the article

  • Should I use Ruby version 1.8.7 or 1.9.2 to start developing Rails apps?

    - by BeachRunnerJoe
    Hello. I'm diving into RoR and I see that the current version of Rails (3.0.5) works with both 1.8.7 and 1.9.2. Currently, I have both versions of Ruby installed using RVM, but I'm wondering which version I should be using as I dive into Rails and start developing apps. I suppose I'd prefer to use the newest version (1.9.2), but I don't know the technologies well enough to know pros/cons of using either. Thanks so much!

    Read the article

  • Returning null vs Throwing exceptions

    - by Svish
    I'm in a bit of a disagreement with a more experienced developer on this issue, and was wondering what you guys here think about it. The environment is Java, EJB 3, services, etc. The code I wrote calls a service to get things and to create things. The problem was that I got null pointer exceptions in places that didn't make sense. For example, when I asked the service to create an object, I got null back. And when I tried to look up an object with an id I knew existed, I still got null back. It was like it was ignoring me. I spent some time trying to figure out what was wrong in my code (since I'm less experienced, I usually assume I have messed up). It turns out the reason was security: if the user principal using my service didn't have the right permissions to use the service I called from my service, then that service simply returned null. The services that are already here are usually not documented either, so this is just something you have to know... somehow...

    So here is the thing: I find this rather confusing as a developer interacting with the service. To me it would make much more sense if the service threw an exception telling me that hey, you don't have the proper permissions to get info about this thing or to create this new thing. I would then immediately know why my service wasn't working as expected. However, he argued that asking is not wrong. Exceptions should only be thrown when there is an error, and asking for a thing is not an error -- even if you don't have permission to "see" the thing you asked for. The things are often looked up in a GUI by users, and for users without the right permissions, these things simply "do not exist". So, in short: asking is not wrong, hence no exception. Get methods return null because to those users the things "don't exist". Create methods return null because nothing was created, since the user wasn't allowed to create anything.

    So, what do you guys think? Is this normal and/or good practice? I prefer exceptions, as I find it much easier to know what's going on when exceptions are thrown and caught. I would, for example, also prefer to throw a NotFoundException if you asked for an id which didn't exist, rather than returning null. Anyway, just curious what others think about this, as I'm not the most experienced developer yet.
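
    The two contracts side by side, sketched in Python rather than Java/EJB purely for compactness (names are made up). Raising makes the security failure loud at the call site; returning null makes it indistinguishable from "not found" and pushes the check onto every caller:

        class NotPermittedError(Exception):
            pass

        _things = {42: "widget"}

        def find_thing_null_style(thing_id, user_can_read):
            if not user_can_read:
                return None  # silent: caller can't tell "forbidden" from "missing"
            return _things.get(thing_id)

        def find_thing_throwing(thing_id, user_can_read):
            if not user_can_read:
                raise NotPermittedError(f"no read permission for {thing_id}")
            if thing_id not in _things:
                raise LookupError(thing_id)
            return _things[thing_id]

        print(find_thing_null_style(42, user_can_read=False))  # None -- but why?
        try:
            find_thing_throwing(42, user_can_read=False)
        except NotPermittedError as err:
            print("denied:", err)  # the reason travels with the failure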

    Read the article

  • Is it bad to have an "Obsessive Refactoring Disorder"?

    - by Rachel
    I was reading this question and realized that it could almost be me. I am fairly OCD about refactoring someone else's code when I see that I can improve it. For example, if the code contains duplicate methods that do the same thing with nothing more than a single parameter changing, I feel I have to remove all the copy/paste methods and replace them with one generic one. Is this bad? Should I try to stop? I try not to refactor unless I can actually improve the code's performance or readability, or unless the person who wrote the code isn't following our standard naming conventions (I hate expecting a variable to be local because of the naming standard, only to discover it is a global variable which has been incorrectly named).
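
    The kind of consolidation described above, sketched in Python with made-up names, just to pin down what "one generic one" means here:

        # Before: duplicated bodies differing only in a single literal.
        def monthly_report(rows):
            return [r for r in rows if r["period"] == "month"]

        def yearly_report(rows):
            return [r for r in rows if r["period"] == "year"]

        # After: one generic function; the old names can remain as thin
        # wrappers while callers migrate.
        def report(rows, period):
            return [r for r in rows if r["period"] == period]

        rows = [{"period": "month", "total": 10}, {"period": "year", "total": 120}]
        print(report(rows, "month"))  # [{'period': 'month', 'total': 10}]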

    Read the article

  • Backward compatibility with event-sourcing

    - by Tomas Jansson
    How do you stay backward compatible with event sourcing? Say you release a version that has one kind of event; let's call it X. You know how to handle that event in all the systems that extract events from the event store. In a later release you make a change to event X, or delete it -- how do you stay backward compatible with that? To have a fully functional system, you need to be able to handle the old event at the same time as the updated version. And if you delete that event type, you are stuck with code that exists only to handle legacy events, which in my head can get a little messy in the long run.
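
    One common answer is "upcasting": store a schema version on every event, keep small pure functions that lift version n to n+1, and upgrade events on read, so handlers only ever see the latest shape and old events stay untouched in the store. A minimal sketch in Python; the event and field names are illustrative:

        def upcast_v1_to_v2(event):
            # v2 split "name" into first/last; derive the split for old events.
            first, _, last = event["name"].partition(" ")
            return {"version": 2, "first_name": first, "last_name": last}

        UPCASTERS = {("CustomerRegistered", 1): upcast_v1_to_v2}

        def upcast(event_type, event):
            # Apply upcasters until the event reaches the newest version.
            while (event_type, event.get("version", 1)) in UPCASTERS:
                event = UPCASTERS[(event_type, event.get("version", 1))](event)
            return event

        old = {"version": 1, "name": "Ada Lovelace"}
        print(upcast("CustomerRegistered", old))
        # {'version': 2, 'first_name': 'Ada', 'last_name': 'Lovelace'}

    Chaining upcasters keeps the legacy handling in one place per version step, which addresses the "messy in the long run" worry: deleting an event type becomes one final upcaster that maps it to whatever replaced it.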

    Read the article

  • Purchasing Visual Studio 2010 Ultimate and Professional version

    - by Don
    We are a small team with 5-7 developers. We are planning to purchase Visual Studio 2010, ideally with one or two Ultimate licences and Professional for the others. The suggestion from Microsoft is to buy retail. We find we can get them from http://msdn.microsoft.com/en-us/subscriptions/buy.aspx or http://www.amazon.com/Visual-Studio-2010-Ultimate-MSDN/dp/B0038KNER0/ref=sr_1_fkmr3_2?ie=UTF8&qid=1296675635&sr=8-2-fkmr3. From Amazon the cost is lower. We wonder whether buying from Microsoft directly gets us additional benefits, like support that other retailers cannot provide. Does anyone have any ideas? What is the most cost-efficient way? Thanks,

    Read the article
