Search Results

Search found 4918 results on 197 pages for 'architecture'.

Page 32/197 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • when to introduce an application services tier in an n-tier application

    - by user20358
    I am developing a web-based application whose primary purpose is to fetch data from the database, display it on the UI, take in user input and write it back to the database. The application will not be doing any industrial-strength algorithm crunching, but it will receive a very high number of hits at peak times (described below), and that load will change through the day. The layers are the typical Presentation, Business and Data layers. The data layer is taken care of by the database server. The business layer will contain the DAL component that accesses the database server over TCP. The choices I have for separating these layers into tiers are: (1) keep the presentation and business layers on the same tier, or (2) put the presentation layer on a tier by itself and the business layer on a separate tier of its own. In the case of choice 2, the business layer would be accessed by the presentation layer via a WCF service, over either HTTP or TCP. I don't see any heavy processing being done in the business layer, so I am leaning towards option 1. For the same reason, I feel that adding a new tier will only introduce network latency. However, in terms of scalability, if I need to scale up or scale out, which is the better way to go? This application will need to support up to 6 million users an hour. There will be a reasonable amount of data in each user session, storing users' preferences and other details. I will be using page-level caching as well.
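    If the layers were split as in option 2, the presentation tier would typically talk to the business tier through a small WCF contract. Below is a minimal sketch of what that boundary might look like; the service and data-contract names are illustrative, not taken from the question.

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Hypothetical contract the presentation tier would call if the business
    // layer were hosted on its own tier behind WCF (option 2).
    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        CustomerDto GetCustomer(int customerId);

        [OperationContract]
        void SaveCustomer(CustomerDto customer);
    }

    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    With option 1 the same interface can simply be implemented by an in-process class, which keeps the door open to move it behind a WCF host later without changing the presentation code.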

    Read the article

  • Rails/Node.js interaction

    - by lpvn
    My co-worker and I are developing a web application with Rails and Node.js, and we can't reach a consensus on a particular architectural decision. Our setup is basically a Rails server working with Node.js and Redis: when a client makes an HTTP request to our Rails API, in some cases our Rails application posts the response to a Redis database and Node.js then transmits it to the client via WebSocket. Our disagreement is over the following point: my co-worker thinks that using Node.js to send data to clients is somewhat business logic and should live inside the model, so in the first code he wrote he put broadcast calls in callbacks and other places in the model; he is convinced that the models are the best place for the interaction between Rails and Node. I, on the other hand, think that using Node.js belongs to the runtime realm; my take is that the broadcast calls and other Node.js interactions should be in the controller, and should only be used in a model if passed through a well-defined interface, just like the situation where a model needs access to the current user of a session. At this point we're tired of arguing over the same thing, and our discussion consists of us repeating the same opinions over and over. Could anyone, preferably with experience with the same setup, give us an unambiguous answer saying which solution is more appropriate, and why?

    Read the article

  • Tellago keeps hiring

    - by gsusx
    Tellago keeps growing and hiring very aggressively. We recently received the American Business Award for the best company in the United States, under 100 people, in the computer services industry (more details about that in a future post). We are currently looking for architects to join our SOA and SharePoint practices. If you are a brilliant developer or architect with expertise in technologies such as WCF, WF or BizTalk Server, you are passionate about technology and crazy enough to...(read more)

    Read the article

  • Easy remote communication without WCF

    - by Ralf Westphal
    If you've read my previous posts about why I deem WCF more of a problem than a solution and how I think we should switch to asynchronous-only communication in distributed applications, you might be wondering how this could be done in an easy way. Since a truly simple example to get started with WCF still draws quite some traffic to this blog, let me pick up on that and show you how to accomplish the same thing, but much more easily, with an async communication API. For simplicity's sake let me put all...(read more)

    Read the article

  • Should database-models (conceptual or physical) be reviewed by DBAs?

    - by user61852
    Where I work, new applications that are being developed and will use their own relational database must have their database models (conceptual, then physical) reviewed and approved by DBAs. The things looked after are normalization, antipatterns, table and column naming standards, etc. Is this really a DBA's responsibility? Or should it be, to a greater extent, the responsibility of the app designers and architects?

    Read the article

  • Global keyboard states

    - by Petr Abdulin
    I have the following idea about processing keyboard input. We capture input in the "main" Game class like this:

    protected override void Update(GameTime gameTime)
    {
        this.CurrentKeyboardState = Keyboard.GetState();

        // main Game class logic here

        base.Update(gameTime);
        this.PreviousKeyboardState = this.CurrentKeyboardState;
    }

    Then we reuse the keyboard states (which have internal scope) in all other game components. The reasons behind this are 1) to minimize the keyboard processing load, and 2) to reduce "pollution" of all the other classes with similar keyboard state variables. Since I'm quite a noob in both game and XNA development, I would like to know if all of this sounds reasonable.
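    One common way to expose those two states to other components is a small helper with edge-detection queries, so consumers never touch the raw states. A minimal sketch, assuming the two properties above and XNA's Keyboard/Keys types (the helper name and methods are illustrative):

    using Microsoft.Xna.Framework.Input;

    public static class Input
    {
        public static KeyboardState CurrentKeyboardState { get; set; }
        public static KeyboardState PreviousKeyboardState { get; set; }

        // Key is held down this frame.
        public static bool IsKeyDown(Keys key)
        {
            return CurrentKeyboardState.IsKeyDown(key);
        }

        // Key went down this frame (was up last frame) - useful for menus, firing once, etc.
        public static bool WasKeyPressed(Keys key)
        {
            return CurrentKeyboardState.IsKeyDown(key)
                && PreviousKeyboardState.IsKeyUp(key);
        }
    }

    Game components can then call Input.WasKeyPressed(Keys.Space) without keeping their own copies of the state, which is exactly the "pollution" the question wants to avoid.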

    Read the article

  • Software Design for Product Verticals and Service Verticals

    - by Rachel
    In every industry there are two verticals, a product vertical and a service vertical, so my question is: how does the design approach change when designing software for a product vertical as compared to software for a service vertical? What are the pros and cons of each case? Also, in the case of a product vertical, how do you go about designing a product or its features, and what steps are involved? Lastly, I was reading the How Facebook Ships Code article, and it appears that product managers have very little influence on how a product is developed, with responsibility lying mainly with the developer of the feature. Is this good practice, and why would one go for this approach? What would be your comments on this kind of approach?

    Read the article

  • Even EA's Have Bad Days - it's Time to Reset

    - by Pat Shepherd
    I saw this article and thought I'd share it because even we EAs have bad days, and the 7 points listed are a great way for you to hit the "reset" button. From Geoffrey James on INC.COM, here are 7 ways to change your view of things when, say, you are hitting a frustration point coordinating stakeholders to agree on an approach (never happens, right?): Positive Thinking: 7 Easy Ways to Improve a Bad Day http://www.inc.com/geoffrey-james/positive-thinking-7-easy-ways-to-improve-a-bad-day.html To paraphrase:
      • You can decide (in an instant) to change patterns of the past
      • Believe in (or even visualize) good things happening, and they will
      • Keep a healthy perspective on the work-life / life-life continuum (what things REALLY matter in the big scheme of things)
      • Focus on the good (the laws of positive attraction apply)

    Read the article

  • Why don't more games use vector art?

    - by Parris
    It would seem to me that vector art is more efficient in terms of resources and scalability; however, in most cases I have seen artists using bitmap/rasterized art. Is this a limitation put on the artists by the game programmers/designers? As a programmer I think vector art would be preferable, since it allows resolution to scale up without having to recreate the art, create really large images, or have the graphics become blurry. The questions: why aren't more people using SVG/AI to create 2D game art? Would it actually be preferred (and who prefers it)? Are bitmap graphics a standard or a limitation (or maybe neither)? Background: I am working on an engine, and I had some kinda cool ideas for vector-based graphics; however, I don't want to piss off artists in the future. I guess this is more a question centered on pragmatism and game development.

    Read the article

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements: I need to be able to send a notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail). Sometimes the notification may be the same for all recipients on a given platform, but sometimes it may be one notification per recipient (or several) per platform. Each notification can contain a platform-specific payload (for example, an MMS can contain a sound or an image). The system needs to be scalable; I need to be able to send a very large number of notifications without crashing either the application or the server. It is a two-step process: first a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real time or later; then the system needs to send the notification to the platform provider. For now I have ended up with some thoughts, but I don't know how scalable the design will be or whether it is a good design at all. I've thought of the following objects (in a pseudo-language): a generic Notification object: class Notification { String $message; Payload $payload; Collection<Recipient> $recipients; } The problem with this is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it will take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batches, meaning I need to define one Notification with several Recipients. Each created notification could be stored in persistent storage like a DB or Redis. Would it be a good idea to aggregate them later to make sure the system stays scalable? In the second step I need to process the notification, but how do I route each notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification, or something like Notification.setType('MMS')? To process a lot of notifications at the same time, I think a message queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above? Then I imagine a NotificationProcessor object to which I could add NotificationHandlers; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I could also use an EventManager to allow pluggable behavior. Any feedback or ideas? Thanks for giving your time. Note: I'm used to working in PHP and it is likely to be the language of my choice.
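    To make the routing and batching questions concrete, one common shape is to key handlers by notification type and let each handler declare its own batch size, so the 1,000,000-recipient case becomes many small provider calls pulled off the queue. A rough sketch of that idea in C# (the question's pseudo-code is language-neutral; all type names here are illustrative, not an established API):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Notification
    {
        public string Type { get; set; }          // e.g. "SMS", "MMS", "EMAIL"
        public string Message { get; set; }
        public List<string> Recipients { get; set; }

        public Notification() { Recipients = new List<string>(); }
    }

    public interface INotificationHandler
    {
        int MaxBatchSize { get; }                 // provider-imposed batch limit
        void Send(Notification notification, IList<string> recipientBatch);
    }

    public class NotificationProcessor
    {
        private readonly Dictionary<string, INotificationHandler> handlers =
            new Dictionary<string, INotificationHandler>();

        public void Register(string type, INotificationHandler handler)
        {
            handlers[type] = handler;
        }

        // A queue worker would call this for each notification it pops.
        public void Process(Notification notification)
        {
            INotificationHandler handler;
            if (!handlers.TryGetValue(notification.Type, out handler))
                throw new InvalidOperationException("No handler for " + notification.Type);

            // Split recipients into provider-sized batches instead of holding
            // one giant in-memory notification per recipient.
            for (int i = 0; i < notification.Recipients.Count; i += handler.MaxBatchSize)
            {
                var batch = notification.Recipients
                    .Skip(i)
                    .Take(handler.MaxBatchSize)
                    .ToList();
                handler.Send(notification, batch);
            }
        }
    }

    Whether the type lives in a subclass (MMSNotification) or in a Type field matters less than keeping the provider-specific code behind one handler interface; either way the queue workers stay generic.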

    Read the article

  • Code from my DevConnections Talks and Workshop

    - by dwahlin
    Thanks to everyone who attended my sessions at DevConnections Las Vegas. I had a great time meeting new people, discussing business problems and solutions, and interacting. Here’s the code and slides for the sessions. For those who came to the full-day Silverlight workshop I’ve included the slides that didn’t get printed, plus a ton of code to help you get started with various Silverlight topics.
      • Get Started Building Silverlight Applications
      • Building Architecturally Sound Silverlight Applications
      • Using WCF RIA Services in Silverlight Applications (will post soon)
      • Silverlight Data Integration Options and Usage Scenarios
      • Silverlight Workshop Code

    Read the article

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and talks straight to these services via SOAP, getting DataTables back. Now I am designing a new web application in a modern style: an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated, it's going to take some time to rewrite and replace it. Now to the problem: I'm trying to make it easy to replace the WCF services in the near future with some other data storage, e.g. another endpoint, a database or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer talks directly to the data storage (WCF service, database, or whatever) and the service layer uses the repository (via dependency injection) to get the data; it doesn't care where the data comes from. Later on I can be in control, shape the data returned from the data storage (DataTable to POCO), and test the logic in the service layer with a mock repository (using dependency injection). Below is some code to explain where I'm going with this. But my question is: does this all make sense? Am I making this overly complicated, and could it be simplified in any way? Or does it make the application too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on, e.g. an ORM, and to be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated.

    Update 20/08/14: I created a repository factory, so services all share repositories. Now it's easy to mock a repository, add it to the factory and create a provider using that factory. Any advice is much appreciated. I want to know if I'm making things more complicated than they should be. So it looks like this:

    1. Repository Factory

    public class RepositoryFactory
    {
        private Dictionary<Type, IServiceRepository> repositories;

        public RepositoryFactory()
        {
            this.repositories = new Dictionary<Type, IServiceRepository>();
        }

        public void AddRepository<T>(IServiceRepository repo) where T : class
        {
            if (this.repositories.ContainsKey(typeof(T)))
            {
                this.repositories.Remove(typeof(T));
            }
            this.repositories.Add(typeof(T), repo);
        }

        public dynamic GetRepository<T>()
        {
            if (this.repositories.ContainsKey(typeof(T)))
            {
                return this.repositories[typeof(T)];
            }
            throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
        }
    }

    I'm not very fond of dynamic, but I don't know how else to retrieve that repository.

    2. Repository and service

    // Service repository interface
    // All repository interfaces extend this
    public interface IServiceRepository { }

    // Invoice repository interface
    // Makes it easy to mock the repository later on
    public interface IInvoiceServiceRepository : IServiceRepository
    {
        List<Invoice> GetInvoices();
    }

    // Invoice repository
    // Connects to some data storage to retrieve invoices
    public class InvoiceServiceRepository : IInvoiceServiceRepository
    {
        public List<Invoice> GetInvoices()
        {
            // Get the invoices from somewhere
            // This could be a WCF, a database, or whatever
            using (InvoiceServiceClient proxy = new InvoiceServiceClient())
            {
                return proxy.GetInvoices();
            }
        }
    }

    // Invoice service
    // Service that handles talking to a real or a mock repository
    public class InvoiceService
    {
        // Repository factory
        RepositoryFactory repoFactory;

        // Default constructor
        // By default connects to the real repository
        public InvoiceService(RepositoryFactory repo)
        {
            repoFactory = repo;
        }

        // Service function that gets all invoices from some repository (mock or real)
        public List<Invoice> GetInvoices()
        {
            // Query the repository
            return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices();
        }
    }
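    Since the stated goal is testability, it may help to see how the factory lets a test swap in a fake. A minimal sketch built on the code above (the fake class and test method are invented for illustration; no mocking framework is assumed, and the Invoice shape is left as the question defines it):

    using System.Collections.Generic;

    // A hand-rolled fake standing in for the WCF-backed repository.
    public class FakeInvoiceRepository : IInvoiceServiceRepository
    {
        public List<Invoice> GetInvoices()
        {
            // Canned data; the shape depends on your Invoice type.
            return new List<Invoice> { new Invoice(), new Invoice() };
        }
    }

    [TestMethod]
    public void GetInvoices_returns_data_from_the_registered_repository()
    {
        var factory = new RepositoryFactory();
        factory.AddRepository<IInvoiceServiceRepository>(new FakeInvoiceRepository());

        var service = new InvoiceService(factory);
        List<Invoice> invoices = service.GetInvoices();

        // Assert on invoices without ever touching the WCF service.
    }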

    Read the article

  • Scene Graph Traversing Techniques

    - by Bunkai.Satori
    A scene graph seems to be the most effective way of representing the game world. The game world usually tends to be as large as the memory and the device can handle. In contrast, the screen of the device captures only a fraction of the game world / scene graph. Ideally, I wish to process (update and render) only the visible game objects/nodes on a per-frame basis. My question therefore is: how do I traverse the scene graph so that I focus only on the game nodes that are in the camera frustum? How do I organize the data so that I can easily focus only on the scene graph nodes visible to me? What are the techniques for minimizing scene graph traversal time? Is there a more effective way to traverse, or do I have to traverse the whole scene graph on a per-frame basis?
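    The usual answer is to attach a bounding volume to every node that encloses the node and its whole subtree, then test that volume against the frustum during traversal and skip the subtree when it lies completely outside. A minimal sketch of the idea, written here with XNA's bounding types purely for concreteness (the SceneNode shape is made up, not from the question):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;   // BoundingFrustum, BoundingSphere, ContainmentType, GameTime

    public class SceneNode
    {
        // Sphere that encloses this node AND its whole subtree.
        public BoundingSphere Bounds;
        public List<SceneNode> Children = new List<SceneNode>();

        public virtual void UpdateAndRender(GameTime gameTime) { /* per-node work */ }
    }

    public static class SceneGraphTraversal
    {
        public static void Traverse(SceneNode node, BoundingFrustum frustum, GameTime gameTime)
        {
            // Prune: if the subtree's bounds are completely outside the frustum,
            // nothing below this node can be visible, so skip it entirely.
            if (frustum.Contains(node.Bounds) == ContainmentType.Disjoint)
                return;

            node.UpdateAndRender(gameTime);

            foreach (SceneNode child in node.Children)
                Traverse(child, frustum, gameTime);
        }
    }

    Spatial partitioning (a quadtree or octree forming the upper levels of the graph) makes this same pruning cut away large regions in a single test, which is where most of the traversal-time savings come from.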

    Read the article

  • March 2010 Meeting of Israel Dot Net Developers User Group (IDNDUG)

    - by Jackie Goldstein
    Note the special date of this meeting: Wednesday, March 24, 2010. For our March 2010 meeting of the Israel Dot Net Developers User Group we have the opportunity for a special meeting with Brad Abrams from Microsoft Corp, who will be in Israel for the Developer Academy 4 event. Our user group meeting will be held on Wednesday, March 24, 2010. This meeting will focus on building line-of-business applications with Silverlight 4, RIA Services and VS2010. Abstract: Building Business Applications...(read more)

    Read the article

  • Using idle time in turn-based (RPG) games for updating

    - by The Communist Duck
    If you take any turn-based RPG, there will be large periods of time when nothing is happening because the game is looping over 'wait_for_player_input'. Naturally it seems sensible to use this time to update things. However, this immediately seems to suggest that it would need to be threaded. Is this sort of design possible in a single thread?

    loop:
        if not check_something_pressed:
            update_a_very_small_amount
        else:
            keep going

    But if 'a_very_small_amount' means updating only a single object each time around the loop, updating is going to be very slow. How would you go about this, preferably in a single thread? EDIT: I've tagged this language-agnostic as that seems the sensible thing, though anything more specific to Python would be great. ;-)
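    A common single-threaded answer is to give the idle work a time budget per pass through the loop: keep pulling small tasks from a queue until either input arrives or the budget runs out, so many objects get updated each frame instead of one. A rough sketch of the idea in C# (the asker mentions Python, so read this as an illustration of the pattern rather than a prescribed language; all names are invented):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    public class IdleUpdater
    {
        private readonly Queue<Action> pendingWork = new Queue<Action>();

        public void Enqueue(Action task) { pendingWork.Enqueue(task); }

        // Called once per pass through the main loop while waiting for input.
        // Runs as many small tasks as fit into the given time budget.
        public void RunFor(TimeSpan budget)
        {
            var timer = Stopwatch.StartNew();
            while (pendingWork.Count > 0 && timer.Elapsed < budget)
            {
                Action task = pendingWork.Dequeue();
                task();
            }
        }
    }

    // Usage inside the turn loop:
    //   while (!InputReceived())
    //   {
    //       idleUpdater.RunFor(TimeSpan.FromMilliseconds(5));  // leave headroom for rendering
    //       Render();
    //   }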

    Read the article

  • What are the current biggest challenges faced by e-commerce applications?

    - by Rachel
    I am about to start product development for the e-commerce and online retail domain, but before starting I would like to know: what are the biggest challenges faced by current state-of-the-art e-commerce applications? Also, I have no experience building e-commerce products, so what should I keep in mind before developing one? Are there books, articles or blogs out there that I should read to gain some knowledge before starting out? Update: What are your thoughts on recommendation engines for e-commerce applications? What challenges do we have with the current state of recommendation engines for e-commerce web applications, and how can we overcome them?

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun up a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

    static void Main()
    {
        // Load parameters from app.config.
        // Get documents from queue.
        var files = someInterface.GetFiles(someFilterOrRegexPattern);

        foreach (var file in files)
        {
            // Extract metadata from the file.
            // Validate some attributes of the file; add any validation errors to an in-memory
            // structure (e.g. List<ValidationErrors>).
            if (isValid)
            {
                var fileData = File.ReadAllBytes(file);
                // Upload using some wrapper for an ORM or DAL
                someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
            }
            else
            {
                // Bounce the file
            }
        }

        // Report any validation errors (via message bus or SMTP or some such).
    }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
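    The "Worker" wrapper mentioned above is itself most of the answer to the uneasiness: if the file source, metadata extraction and upload live behind interfaces handed in through the constructor, Main() shrinks to composition and each piece can be tested in isolation. A minimal sketch of that shape (the interface and type names are invented for illustration, not from the question):

    using System.Collections.Generic;

    public class DocumentMetadata { /* whatever gets extracted */ }

    public interface IDocumentQueue
    {
        IEnumerable<string> GetFiles(string pattern);
    }

    public interface IMetadataExtractor
    {
        DocumentMetadata Extract(string path);
    }

    public interface IContentUploader
    {
        void Upload(byte[] fileData, DocumentMetadata meta);
    }

    public class EtlWorker
    {
        private readonly IDocumentQueue queue;
        private readonly IMetadataExtractor extractor;
        private readonly IContentUploader uploader;

        public EtlWorker(IDocumentQueue queue, IMetadataExtractor extractor, IContentUploader uploader)
        {
            this.queue = queue;
            this.extractor = extractor;
            this.uploader = uploader;
        }

        public void Run(string pattern)
        {
            foreach (var file in queue.GetFiles(pattern))
            {
                var meta = extractor.Extract(file);
                var bytes = System.IO.File.ReadAllBytes(file);
                uploader.Upload(bytes, meta);
            }
        }
    }

    Main() then just constructs the concrete implementations (DbAmp-backed uploader, folder-based queue) and calls Run, which keeps the console-app shell thin.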

    Read the article

  • Data transfer between "main" site and secured virtual subsite

    - by Emma Burrows
    I am currently working on a C# ASP.NET 3.5 website I wrote some years ago, which consists of a "main" public site and a sub-site which is our customer management application, using forms-based authentication. The sub-site is set up as a virtual folder in IIS, and though it's a subfolder of "main", it functions as a separate web app which handles CRUD access to our customer database and is only accessible by our staff. The main site currently includes a form for new leads to fill in, which generates an email to our sales staff so they can contact them and convince them to become customers. If that process is successful, the staff manually enter the information from the email into the database. Not surprisingly, I now have a new requirement to feed the data from the new-lead form directly into the database, so staff can, for instance, just check a box to turn the lead into a customer. My question therefore is how to go about doing this. Possible options I've thought of: (1) move the new-lead form into the customer database subsite (with authentication turned off); (2) add database handling code to the main site (no, not seriously considering this duplication of effort! :); (3) design some mechanism (via REST?) so a webpage outside the customer database subsite can feed data into the customer database. I'd welcome some suggestions on how to organise the code for this situation, preferably with extensibility in mind, and particularly if there are any options I haven't thought of. Thanks in advance.
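    For option 3, on ASP.NET 3.5 the lightest-weight endpoint is a plain IHttpHandler in the subsite that accepts a POST from the main site and writes the lead, keeping all database code in one place without pulling in a full service stack. A rough sketch (the URL, field names and shared-secret check are all assumptions for illustration):

    using System.Web;

    // Registered in the subsite's web.config (httpHandlers), e.g. at ~/NewLead.ashx,
    // with anonymous access allowed for just this path.
    public class NewLeadHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Very simple shared-secret check so only the main site can post here.
            if (context.Request.Form["apiKey"] != "secret-configured-in-both-sites")
            {
                context.Response.StatusCode = 403;
                return;
            }

            string name = context.Request.Form["name"];
            string email = context.Request.Form["email"];

            // Reuse the subsite's existing data-access layer to store the lead
            // (hypothetical call; the real DAL lives in the customer app).
            // LeadRepository.SaveNewLead(name, email);

            context.Response.StatusCode = 200;
        }
    }

    The main site's lead form then posts to this endpoint server-side, and the lead shows up in the customer app ready for the "turn into customer" checkbox.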

    Read the article

  • IValidatableObject vs Single Responsibility

    - by Boris Yankov
    I like this extensibility point of MVC: allowing view models to implement IValidatableObject and add custom validation. I try to keep my controllers lean, with this code being the only validation logic:

    if (!ModelState.IsValid)
        return View(loginViewModel);

    For example, a login view model implements IValidatableObject and gets an ILoginValidator object via constructor injection:

    public interface ILoginValidator
    {
        bool UserExists(string email);
        bool IsLoginValid(string userName, string password);
    }

    It seems that using Ninject to inject instances into view models isn't really common practice; maybe it's even an anti-pattern? Is this a good approach? Is there a better one?
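    For context, this is roughly what the approach being asked about looks like on the view model side: IValidatableObject.Validate delegates to the injected validator. A sketch under the question's assumptions (property names and messages are made up; how the container reaches the model binder to call this constructor is exactly the contentious part):

    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    public class LoginViewModel : IValidatableObject
    {
        private readonly ILoginValidator validator;

        public LoginViewModel(ILoginValidator validator)
        {
            this.validator = validator;
        }

        public string Email { get; set; }
        public string Password { get; set; }

        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            if (!validator.UserExists(Email))
                yield return new ValidationResult("Unknown user.", new[] { "Email" });
            else if (!validator.IsLoginValid(Email, Password))
                yield return new ValidationResult("Invalid credentials.", new[] { "Password" });
        }
    }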

    Read the article

  • DDD: Service or Repository

    - by tikhop
    I am developing an app in a DDD manner, and I have a little problem with it. I have a Fare (airline fare) and a FareRepository object. At some point I need to load additional fare information and set it on an existing Fare. I guess that I need to create an application service (FareAdditionalInformationService) that will deal with obtaining the data from the server and then updating the existing Fare. However, some people have told me that FareRepository should be used for this. I don't know which is the better place for this problem: the service or the repository.
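    For comparison, this is roughly what the application-service option would look like: the repository stays responsible only for loading and saving Fares, and the service coordinates fetching the extra data and applying it to the aggregate. A sketch using the question's Fare and FareRepository plus assumed members (the gateway interface, GetById, Save and EnrichWith are invented for illustration):

    public interface IFareAdditionalInfoGateway
    {
        FareAdditionalInfo Fetch(string fareId);   // remote call to the server
    }

    public class FareAdditionalInformationService
    {
        private readonly FareRepository fares;
        private readonly IFareAdditionalInfoGateway gateway;

        public FareAdditionalInformationService(FareRepository fares, IFareAdditionalInfoGateway gateway)
        {
            this.fares = fares;
            this.gateway = gateway;
        }

        public void Enrich(string fareId)
        {
            Fare fare = fares.GetById(fareId);               // repository: persistence only
            FareAdditionalInfo info = gateway.Fetch(fareId); // gateway: remote data source
            fare.EnrichWith(info);                           // domain behavior stays on the entity
            fares.Save(fare);
        }
    }

    The design choice this illustrates: the repository keeps a single responsibility (persistence of Fares), while the cross-source coordination lives in the application service.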

    Read the article

  • Is dynamic casting of entities a good design?

    - by Milo
    For my game, everything inherits from Entity, and other things like Player, PhysicsObject, etc. inherit from Entity. The physics engine sends collision callbacks, which give A an Entity* to the B that A collided with. Then, let's say A is a Bullet: A tries to cast the entity to a Player, and if that succeeds, it reduces the player's health. Is this a good design? The problem I have with a message system is that I'd need messages for everything, like: entity.sendMessage(SET_PLAYER_HEALTH, 16); So that's why I think casting is cleaner.
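    For contrast with the message call above, the casting approach in the collision callback looks like this. The question uses C++-style pointers, but the idea is sketched here in C# for consistency with the rest of this listing (Bullet, Player, Health and Damage are placeholders, not from the question's code):

    public class Bullet : Entity
    {
        public int Damage = 16;

        // Called by the physics engine when this bullet (A) hits another entity (B).
        public void OnCollision(Entity other)
        {
            // Downcast-and-check: only react if the thing we hit is a Player.
            Player player = other as Player;
            if (player != null)
            {
                player.Health -= Damage;
                Destroy();
            }
        }

        private void Destroy() { /* remove bullet from the world */ }
    }

    The trade-off the question is circling: casts couple Bullet to every concrete type it cares about, whereas messages keep it decoupled at the cost of defining a message type per interaction.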

    Read the article

  • Why did Alan Kay say, "The Internet was so well done, but the web was by amateurs"?

    - by kalaracey
    OK, so I paraphrased. The full quote: The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs. -- Alan Kay. I am trying to understand the history of the Internet and the web, and this statement is hard to understand. I have read elsewhere that the Internet is now used for very different things than it was designed for, and so perhaps that factors in. What makes the Internet so well done, and what makes the web so amateurish? (Of course, Alan Kay is fallible, and no one here is Alan Kay, so we can't know precisely why he said that, but what are some possible explanations?) *See also the original interview*.

    Read the article

  • How to ...set up new Java environment - largely interfaces...

    - by Chris Kimpton
    Hi, it looks like I need to set up a new Java environment for some interfaces we need to build. Say our system is X and we need interfaces to systems A, B and C. Then we will be writing interfaces X-A, X-B, X-C. Our system has a bus within it, so publishing on our side will be to the bus, and the interface processes will take from the bus and map to the destination system. It's a vendor-based system, so most of the core code we can't touch. Currently I'm thinking we will have several processes, one per interface. The question is how to structure things. Several of the APIs we need to work with are Java-based. We could go EJB, but I'd prefer to keep it simple: one process per interface, so that we can restart them individually. Similarly, SOA seems overkill, although I am probably mixing up my thoughts about implementations of it with the concepts behind it... Currently I'm thinking that something Spring-based is the way to go. In true "leverage a new tech if possible" style, I am thinking maybe we can shoehorn some JRuby into this, perhaps to make the APIs more readable, perhaps event-machine-like, and to make the interface code more business-friendly, perhaps even storing the mapping code in the DB as Ruby snippets that get mixed in... but that's an aside... So, any comments/thoughts on the Spring approach, or anything more up-to-date/relevant these days? EDIT: Looking at JRuby further, I am tempted to write it fully in JRuby... in which case do we need any frameworks at all, perhaps just some gems to make things clearer... Thanks in advance, Chris

    Read the article

  • ASP.NET MVC: Moving code from controller action to service layer

    - by DigiMortal
    I fixed one controller action in my application that didn't seem good enough to me. It wasn't a big move, but it is worth showing beginners how clean your code can be when you use correct layering in your application. As an example I use code from my posting ASP.NET MVC: How to implement invitation codes support.

    Problematic controller action

    Although my controller action works well, I don't like how it looks. It is too much for a controller action, in my opinion.

    [HttpPost]
    public ActionResult GetAccess(string accessCode)
    {
        if (string.IsNullOrEmpty(accessCode.Trim()))
        {
            ModelState.AddModelError("accessCode", "Insert invitation code!");
            return View();
        }

        Guid accessGuid;

        try
        {
            accessGuid = Guid.Parse(accessCode);
        }
        catch
        {
            ModelState.AddModelError("accessCode", "Incorrect format of invitation code!");
            return View();
        }

        using (var ctx = new EventsEntities())
        {
            var user = ctx.GetNewUserByAccessCode(accessGuid);
            if (user == null)
            {
                ModelState.AddModelError("accessCode", "Cannot find account with given invitation code!");
                return View();
            }

            user.UserToken = User.Identity.GetUserToken();
            ctx.SaveChanges();
        }

        Session["UserId"] = accessGuid;

        return Redirect("~/admin");
    }

    Looking at this code, my first thought is that all this access code stuff must be located somewhere else. We have working functionality in the wrong place and we should do something about it.

    Service layer

    I add layers to my application very carefully, because I don't like to use a hand grenade to kill a fly. When I see a real need for some layer, and it doesn't add too much complexity, I will add the new layer. Right now is a good time to add a service layer to my small application. After that it is time to move the code to the service layer and inject the service class into the controller.

    public interface IUserService
    {
        bool ClaimAccessCode(string accessCode, string userToken,
                             out string errorMessage);

        // Other methods of user service
    }

    I need this interface when writing unit tests, because I need a fake service that doesn't communicate with the database and other external sources.

    public class UserService : IUserService
    {
        private readonly IDataContext _context;

        public UserService(IDataContext context)
        {
            _context = context;
        }

        public bool ClaimAccessCode(string accessCode, string userToken, out string errorMessage)
        {
            if (string.IsNullOrEmpty(accessCode.Trim()))
            {
                errorMessage = "Insert invitation code!";
                return false;
            }

            Guid accessGuid;
            if (!Guid.TryParse(accessCode, out accessGuid))
            {
                errorMessage = "Incorrect format of invitation code!";
                return false;
            }

            var user = _context.GetNewUserByAccessCode(accessGuid);
            if (user == null)
            {
                errorMessage = "Cannot find account with given invitation code!";
                return false;
            }

            user.UserToken = userToken;
            _context.SaveChanges();

            errorMessage = string.Empty;
            return true;
        }
    }

    For now I use a simple solution for errors and made the access code claiming method follow the usual TrySomething() method pattern. This way I can keep error messages and their retrieval away from the controller; in the controller I just pass the error message from the service on to the view.

    Controller

    Now all the code has been moved to the service layer, and we also need some modifications to the controller code so that it makes use of the user service. I don't show the DI/IoC details of how the service instance is given to the controller here. The GetAccess() action of the controller looks like this right now.

    [HttpPost]
    public ActionResult GetAccess(string accessCode)
    {
        var userToken = User.Identity.GetUserToken();
        string errorMessage;

        if (!_userService.ClaimAccessCode(accessCode, userToken,
                                          out errorMessage))
        {
            ModelState.AddModelError("accessCode", errorMessage);
            return View();
        }

        Session["UserId"] = Guid.Parse(accessCode);
        return Redirect("~/admin");
    }

    It's short and nice now, and it deals only with the web site part of access code claiming. In the case of an error, the user is shown the access code claiming view with the error message that the ClaimAccessCode() method returns as an output parameter. If everything goes fine, the access code is reserved for the current user and the user is authenticated.

    Conclusion

    When a controller action grows big, you have to move code to the layer it actually belongs to. In this posting I showed you how I moved access code claiming functionality from a controller action to a user service class that belongs to the service layer of my application. As a result I have a controller action that coordinates the user interaction when going through the access code claiming process. The controller communicates with the service layer and gets back information about whether access code claiming succeeded.
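    As a closing illustration of why the IUserService and IDataContext seams matter, here is roughly what a unit test for ClaimAccessCode could look like with a hand-written fake context. This is a sketch under assumptions: it supposes IDataContext exposes just the two members used above, and the User type's shape is inferred from the post; no mocking library is implied.

    public class FakeDataContext : IDataContext
    {
        public User KnownUser;           // the one "new user" our fake knows about
        public bool SaveChangesCalled;

        public User GetNewUserByAccessCode(Guid accessCode)
        {
            return KnownUser;            // returning null simulates an unknown invitation code
        }

        public void SaveChanges()
        {
            SaveChangesCalled = true;
        }
    }

    [TestMethod]
    public void ClaimAccessCode_claims_account_for_valid_code()
    {
        var user = new User();
        var context = new FakeDataContext { KnownUser = user };
        var service = new UserService(context);

        string error;
        bool result = service.ClaimAccessCode(Guid.NewGuid().ToString(), "token-123", out error);

        // Expect: result == true, error == "", user.UserToken == "token-123",
        // and context.SaveChangesCalled == true.
    }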

    Read the article
