Search Results

Search found 31483 results on 1260 pages for 'database migration'.


  • Should an event-sourced aggregate root have query access to the event sourcing repository?

    - by JD Courtoy
    I'm working on an event-sourced CQRS implementation, using DDD in the application / domain layer. I have an object model that looks like this:

        public class Person : AggregateRootBase
        {
            private Guid? _bookingId;

            public Person(Identification identification)
            {
                Apply(new PersonCreatedEvent(identification));
            }

            public Booking CreateBooking()
            {
                // Enforce Person invariants
                var booking = new Booking();
                Apply(new PersonBookedEvent(booking.Id));
                return booking;
            }

            public void Release()
            {
                // Enforce Person invariants
                // Should we load the booking here from the aggregate repository?
                // We need to ensure that booking is released as well.
                var booking = BookingRepository.Load(_bookingId);
                booking.Release();
                Apply(new PersonReleasedEvent(_bookingId));
            }

            [EventHandler]
            public void Handle(PersonBookedEvent @event) { _bookingId = @event.BookingId; }

            [EventHandler]
            public void Handle(PersonReleasedEvent @event) { _bookingId = null; }
        }

        public class Booking : AggregateRootBase
        {
            private DateTime _bookingDate;
            private DateTime? _releaseDate;

            public Booking()
            {
                // Enforce invariants
                Apply(new BookingCreatedEvent());
            }

            public void Release()
            {
                // Enforce invariants
                Apply(new BookingReleasedEvent());
            }

            [EventHandler]
            public void Handle(BookingCreatedEvent @event) { _bookingDate = SystemTime.Now(); }

            [EventHandler]
            public void Handle(BookingReleasedEvent @event) { _releaseDate = SystemTime.Now(); }

            // Some other business activities unrelated to a person
        }

    With my understanding of DDD so far, Person and Booking are separate aggregate roots for two reasons: business components sometimes pull Booking objects from the database on their own (e.g. a person who has been released has a previous booking modified due to incorrect information), and there should be no locking contention between Person and Booking whenever a Booking needs to be updated. One other business requirement is that a Booking can never occur for a Person more than once at a time. Because of this, I'm wary of querying the read-side database, since it is only eventually consistent under CQRS. Should the aggregate roots be allowed to query the event-sourced backing store for objects (lazy-loading them as needed)? Are there any other avenues of implementation that would make more sense?
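
    One alternative often suggested for this situation is to let the application-layer command handler load both aggregates and hand the Booking to the Person, so the aggregate never touches a repository. A minimal sketch of that idea in Python (all names are illustrative assumptions, not from the post):

        # Hypothetical command handler: the application layer loads both aggregates
        # from the event store, so the aggregate itself never queries a repository.
        class ReleasePersonHandler:
            def __init__(self, event_store):
                self.event_store = event_store      # assumed event-sourced repository

            def handle(self, command):
                person = self.event_store.load(Person, command.person_id)
                booking = self.event_store.load(Booking, person.booking_id)
                person.release(booking)             # Person.release() can call booking.release() itself
                self.event_store.save(person)
                self.event_store.save(booking)

    The "one active booking per person at a time" rule can then be enforced inside Person, since Person already tracks its own booking id from past events.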

    Read the article

  • Host Silverlight / WCF

    - by martin
    Hi all, I have created my first Silverlight app :-) It has a basic page and connects to a database to populate a list. The connection is done using WCF, so my Silverlight app talks to a service reference that does the work. This all works fine when I run it from Visual Studio. My problem is that I am not sure how to host this app. I created an account on AspSpider and uploaded my default.html, my xap file (which I renamed to .zip), and the database. This works fine until it needs to connect to the db. What do I need to upload to get the database connection working? Thanks :-) Martin

    Read the article

  • Managing a difficult manager

    - by griegs
    I have a situation here at work. We are redeveloping our basic architecture across the entire company. Currently we have the following hierarchy:

        SQL Database <= stored procs not allowed
        nHibernate
        Classes to convert nHibernate entities into our own objects
        Web Service <= for all external and [internal] calls
        Class to take objects from the Web Service back into our own objects, and then...
        Normal n-tier application architecture such as Data Transformation Layer, Business Layer, etc.

    Within the database, when we are writing a hierarchy of objects (say, an Order containing Person, Details, Address, Product, Other), we need to serialise the object and save it, in its entirety, to an image field in a table. No attempt has been made to store the objects in their own tables so that we can do useful stuff like report on it. This is an architecture that was implemented [way] before I started and, as you can probably appreciate, is a complete nightmare, not to mention slow as a wet weekend. We're not even allowed to have stored procs within SQL Server because in my boss's last job they had a hundred or so and he had a problem identifying them all, so therefore all stored procs are the devil. Now the same person that developed the above architecture has developed the new one. It came as no surprise that he's essentially used the same framework, only now it's using .NET 3.5 with interfaces and generics. We still have to go through web services, still need to serialise (everything), still not allowed to use stored procs, etc. In fact, we're only barely able to bang two rocks together here. He says the framework is open for discussion, but when you discuss it, unless you approve of his design, you are told flatly "No". He simply won't listen to any other suggestions. Even when you show him demo applications of his proposed architecture vs. yours and he can see the speed difference, he still won't take that on board. So I guess my question is, and I know others have experienced the same things out there, how do I get through to someone like this? How do you convince someone to ditch web services for internal calls and applications? How do you demonstrate, and make it stick, that stored procs are a better way to go than ad-hoc SQL statements? This is killing me. I don't want to repeat the mistakes of the past and I certainly don't want to write code that I know is going to be slow and cumbersome. Help!

    Read the article

  • MySQL auto increment

    - by mouthpiec
    Hi, I have a table with an auto-increment field, and I need to transfer the table to another table on another database. Will the value of the field stay the same for each row (row 1 keeps id 1, row 2 keeps id 2, etc.)? Also, in case the database gets corrupted and I need to restore the data, will the auto-increment affect anything? Will the values change? (E.g. if the first row has id (auto-inc) = 1, name = john, country = UK ... will the id field remain 1?) I am asking because other tables refer to this value, and all data will get out of sync if this field changes.
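
    For what it's worth, AUTO_INCREMENT only generates a value when the id column is omitted or NULL; if the copy lists the id explicitly, the old values are kept. A small sketch of that, assuming mysql-connector-python and made-up table/column names:

        import mysql.connector

        src = mysql.connector.connect(host="localhost", user="u", password="p", database="olddb")
        dst = mysql.connector.connect(host="localhost", user="u", password="p", database="newdb")

        read, write = src.cursor(), dst.cursor()
        read.execute("SELECT id, name, country FROM users")
        for row in read.fetchall():
            # id is listed explicitly, so MySQL stores the old value as-is instead of
            # generating a new one; references from other tables stay valid.
            write.execute("INSERT INTO users (id, name, country) VALUES (%s, %s, %s)", row)
        dst.commit()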

    Read the article

  • Proper way to set class variables

    - by ensnare
    I'm writing a class to insert users into a database, and before I get too far in, I just want to make sure that my OO approach is clean:

        class User(object):
            def setName(self, name):
                # Do sanity checks on name
                self._name = name

            def setPassword(self, password):
                # Check password length > 6 characters
                # Encrypt to md5
                self._password = password

            def commit(self):
                # Commit to database
                pass

        >>> u = User()
        >>> u.setName('Jason Martinez')
        >>> u.setPassword('linebreak')
        >>> u.commit()

    Is this the right approach? Should I declare class variables up top? Should I use a _ in front of all the class variables to make them private? Thanks for helping out.
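
    A pattern that often comes up as an answer to this kind of question: plain attributes plus properties, doing the checks in the setters, with the leading underscore marking the attribute as internal rather than truly private. A sketch (the validation rules and the md5 choice are only taken from the comments above, not a recommendation):

        import hashlib

        class User:
            def __init__(self, name, password):
                self.name = name              # routed through the property setters below
                self.password = password

            @property
            def name(self):
                return self._name

            @name.setter
            def name(self, value):
                if not value.strip():
                    raise ValueError("name must not be empty")
                self._name = value

            @property
            def password(self):
                return self._password

            @password.setter
            def password(self, value):
                if len(value) <= 6:
                    raise ValueError("password must be longer than 6 characters")
                # md5 kept only because the original comment mentions it
                self._password = hashlib.md5(value.encode()).hexdigest()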

    Read the article

  • Sonar default, meet "container state was: CONSTRUCTED"

    - by larry cai
    Environment: Hudson / Sonar / Maven 2 on Ubuntu, installed locally with default parameters. I got the log below from Hudson and I can't figure out where the problem is.

        [INFO] Sonar host: http://localhost:9000
        [INFO] Sonar version: 2.0.1
        [INFO] [sonar-core:internal {execution: default-internal}]
        [INFO] Database dialect class org.sonar.api.database.dialect.Derby
        [INFO] ------------- Analyzing Game of Life business logic module
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Can not execute Sonar
        Embedded error: Can not analyze the project
        Cannot stop. Current container state was: CONSTRUCTED
        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.lifecycle.LifecycleExecutionException: Can not execute Sonar

    I notice it also has the same problem when I run it from the command line without Hudson: mvn sonar:sonar

    Read the article

  • asp.net mvc textbox date format

    - by maxxxee
    My form contains a textbox for date input. When submitted, the data is used to add a row to the table. The view is strongly typed to the table. The problem is that the database server is configured with the US date format, but the users need to enter UK-format dates in the textbox. When users enter a UK-format date, an error is thrown. The database server configuration cannot be changed. So what can be done so that users can enter dates in UK format?
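
    The usual way out, independent of the stack, is to parse the posted text with an explicit day-first format on the server and hand the database a typed date value rather than a string (in ASP.NET MVC that is typically done with a culture setting or a custom model binder). Sketched in Python only to show the idea; the values are made up:

        from datetime import datetime

        posted_value = "25/03/2010"                           # UK-style text from the form
        order_date = datetime.strptime(posted_value, "%d/%m/%Y").date()

        # Hand the typed date (not the raw string) to the data layer, so the database
        # server's regional setting never has to interpret "dd/mm/yyyy" text.
        # cursor.execute("INSERT INTO orders (order_date) VALUES (%s)", (order_date,))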

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Words matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of SELECT id, * FROM textTable JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) ON id=textTableId LIMIT 50 This runs very fast. Using an IN would probably work just as well, i.e. SELECT * FROM textTable WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) LIMIT 50 The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what is the usual attack? Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
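
    For reference, the prefix-range trick the question describes can be sketched in a few lines with Python's sqlite3 module. The table and column names mirror the ones in the question, and the upper-bound calculation is the usual way to turn a prefix into a range:

        import sqlite3

        def prefix_range(prefix):
            # every word starting with 'foo' satisfies  word >= 'foo' AND word < 'fop'
            return prefix, prefix[:-1] + chr(ord(prefix[-1]) + 1)

        conn = sqlite3.connect("reference.db")
        lo, hi = prefix_range("foo")
        rows = conn.execute(
            """SELECT t.* FROM textTable t
               JOIN (SELECT DISTINCT textTableId FROM words
                      WHERE word >= ? AND word < ?) m
                 ON t.id = m.textTableId
               LIMIT 50""",
            (lo, hi),
        ).fetchall()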

    Read the article

  • Preserving text formatting for dynamic website

    - by Mohit
    Hello folks, I am facing a problem while creating a dynamic website. I am building it for a pharma company which has many products. The problem is that every product has different sections of description, and each has to be formatted differently. I want to store all the product descriptions in the database, but at the same time preserve the formatting of each description. I also plan to provide an admin interface where they can edit the product information themselves. I could use Joomla or any other CMS for that purpose, but I wanted to know, if I build such a system of my own, how I could format the text in an editor, save it into the database, and get the same formatting back when I retrieve it. How could I do this? I also want to do this in PHP. Thanks -- Mohit
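
    The common pattern is to use a browser rich-text editor (TinyMCE, CKEditor and similar) that submits HTML, whitelist the tags you trust, store that HTML in a TEXT column, and output it without re-escaping. A sketch of the save side, in Python rather than PHP and assuming the third-party bleach package; all names are illustrative:

        import bleach

        ALLOWED_TAGS = ["p", "br", "strong", "em", "ul", "ol", "li", "h2", "h3"]

        def save_description(db, product_id, submitted_html):
            # keep only whitelisted tags so stored markup stays safe to echo back
            clean_html = bleach.clean(submitted_html, tags=ALLOWED_TAGS, strip=True)
            db.execute(
                "UPDATE products SET description = %s WHERE id = %s",
                (clean_html, product_id),
            )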

    Read the article

  • mysql good way of programming

    - by Syom
    I have a video gallery in my database with over 200,000 videos. On the home page of the site I show just a few videos, which must satisfy some criteria. So here is the question: is it better to sort the videos every time the home page is opened, or should I save the sort results somewhere in the database and refresh them only when something changes? I think that could save me a lot of time. What do you think about it? Thanks in advance.
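
    The second option is essentially a cache. One simple shape of it, sketched in Python with a made-up query and lifetime: keep the selected list in memory (or in a summary table) and rebuild it on a timer or whenever a video changes, instead of re-sorting 200,000 rows on every page view.

        import time

        _cache = {"rows": None, "built_at": 0.0}
        CACHE_TTL = 600  # rebuild at most every 10 minutes

        def front_page_videos(conn):                 # conn: a sqlite3-style connection
            now = time.time()
            if _cache["rows"] is None or now - _cache["built_at"] > CACHE_TTL:
                _cache["rows"] = conn.execute(
                    "SELECT id, title FROM videos WHERE featured = 1 "
                    "ORDER BY rating DESC LIMIT 20"
                ).fetchall()
                _cache["built_at"] = now
            return _cache["rows"]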

    Read the article

  • Secure ajax form POST

    - by user194630
    I was wondering how to develop a secure form post through AJAX. For example, I have: my HTML form; my JavaScript handling the submit; the submit URL is "post_data.php"; the posted data is id=8&name=Denis. The PHP verifies whether the variables id and name are POSTed and checks their data types. If this is OK, it proceeds to do some work on a database. My question is: how can I prevent someone from creating his own HTML form, outside my web site or wherever, and posting false data to my PHP script? Imagine that the data really exists in my database; this could be bad. Thanks
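
    The usual answer is twofold: never trust the POST (re-validate everything server-side, which you already do) and tie the form to the visitor's session with a CSRF token that an outside form cannot know. A small sketch of the token part in Python (PHP has the same shape using $_SESSION); the helper names are made up:

        import hmac
        import secrets

        def issue_token(session):
            # call when rendering your own form; put the value in a hidden input
            session["csrf_token"] = secrets.token_hex(32)
            return session["csrf_token"]

        def is_valid_post(session, form):
            sent = form.get("csrf_token", "")
            token_ok = hmac.compare_digest(sent, session.get("csrf_token", ""))
            # still re-validate the actual fields server-side
            return token_ok and form.get("id", "").isdigit() and bool(form.get("name"))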

    Read the article

  • SSRS 2008 - Textbox RTF

    - by Iceman
    Hello, we have RTF text stored in our database that looks like the following:

        {\rtf1\ansi\deff0{\fonttbl{\f0\fnil\fcharset0 MS Sans Serif;}{\f1\fnil MS Sans Serif;}} {\colortbl ;\red0\green0\blue0;} \viewkind4\uc1\pard\cf1\lang1033\f0\fs22 Some Text Here \f1 \par }

    It is my understanding that SSRS 2008 should be able to properly display this on a report. Has anyone been able to get this to display correctly? Robert Bruckner's Advanced Reporting Services blog describes a new feature of SSRS: leverage the enhanced textbox (aka "RichText") to define mixed formatting within the same textbox; in addition, HTML strings of text can be imported into the report from a database or other source. Any help is greatly appreciated.

    Read the article

  • C# Change dynamically NavigateUrl HyperLinkField

    - by Martijn
    Hi, in my code I create a HyperLinkField object. Depending on a database field value, I want to set the NavigateUrl property. That is my problem: I don't know how. With objHF.DataNavigateUrlFields = new[] { "id", "Stype" }; I get my database fields. Now I want to check the Stype value, and depending on this value I want to set the page to navigate to. How can I do this? At the end I set my datasource on the GridView and after that I call the bind() method. I hope someone can help me out.

    Read the article

  • Specify which row to return on SQLite Group By

    - by lozzar
    I'm faced with a bit of a difficult problem. I store all the versions of all documents in a single table. Each document has a unique id, and the version is stored as an integer which is incremented every time there is a new version. I need a query that will select only the latest version of each document from the database. While using GROUP BY works, it appears that it will break if the versions are not inserted into the database in version order (i.e. it takes the maximum ROWID, which will not always be the latest version). Note that the latest version of each document will most likely be a different number (i.e. document A is at version 3, and document B is at version 6). I'm at my wits' end. Does anybody know how to do this (select all the documents, but return only a single record for each document_id, and make sure the record returned has the highest version number)?
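
    The usual shape of the answer is to join the table against the per-document maximum version rather than relying on which row GROUP BY happens to keep. A sketch with Python's sqlite3 (table and column names are assumptions):

        import sqlite3

        conn = sqlite3.connect("docs.db")
        latest = conn.execute(
            """SELECT d.* FROM documents d
               JOIN (SELECT document_id, MAX(version) AS max_version
                       FROM documents GROUP BY document_id) m
                 ON d.document_id = m.document_id AND d.version = m.max_version"""
        ).fetchall()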

    Read the article

  • Updating Checked Checkboxes using CodeIgniter + MySQL

    - by Tim
    Hello, I have about 8 checkboxes that are being generated dynamically from my database. This is the code in my controller:

        // Start Get Processes Query
        $this->db->select('*');
        $this->db->from('projects_processes');
        $this->db->where('process_enabled', '1');
        $data['getprocesses'] = $this->db->get();
        // End Get Processes Query

        // Start Get Checked Processes Query
        $this->db->select('*');
        $this->db->from('projects_processes_reg');
        $this->db->where('project_id', $project_id);
        $data['getchecked'] = $this->db->get();
        // End Get Processes Query

    This is the code in my view:

        <?php if($getprocesses->result_array()) { ?>
        <?php foreach($getprocesses->result_array() as $getprocessrow): ?>
        <tr>
          <td><input <?php if($getchecked->result_array()) {
                foreach($getchecked->result_array() as $getcheckedrow):
                  if($getprocessrow['process_id'] == $getcheckedrow['process_id']) { echo 'checked'; }
                endforeach;
              } ?> type="checkbox" name="progresscheck[]" value="<?php echo $getprocessrow['process_id']; ?>"><?php echo $getprocessrow['process_name']; ?><br>
          </td>
        </tr>
        <?php endforeach; ?>

    This generates the checkboxes in the form and also checks the appropriate ones as specified by the database. The problem is updating them. What I have been doing so far is simply deleting all checkbox entries for the project and then re-inserting all the values into the database. This is bad because (1) it's slow and horrible, and (2) I lose all my metadata about when the checkboxes were checked. So I guess my question is: how do I update only the checkboxes that have been changed? Thanks, Tim
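
    One common answer: on save, compare the submitted ids against what is already stored and touch only the difference. A sketch of the idea in Python (db is assumed to behave like a sqlite3 connection; the CodeIgniter version would run the same two queries through $this->db):

        def sync_checkboxes(db, project_id, submitted_ids):
            submitted = set(map(int, submitted_ids))
            stored = {row[0] for row in db.execute(
                "SELECT process_id FROM projects_processes_reg WHERE project_id = ?",
                (project_id,))}

            for pid in stored - submitted:      # unchecked since the last save
                db.execute("DELETE FROM projects_processes_reg "
                           "WHERE project_id = ? AND process_id = ?", (project_id, pid))
            for pid in submitted - stored:      # newly checked; untouched rows keep their metadata
                db.execute("INSERT INTO projects_processes_reg (project_id, process_id) "
                           "VALUES (?, ?)", (project_id, pid))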

    Read the article

  • jqgrid ASP.NET MVC -- getting data right for the grid.

    - by SamM09
    Here is my dilemma: I have not been able to manipulate my data into a form that fits jqGrid's standards. This is my first time using jqGrid and I've spent a lot of time reading up on it. My JS code is as follows:

        jQuery("#list").jqGrid({
            url: '/Home/ListContacts/',
            dataType: "json",
            contentType: "application/json; charset=utf-8",
            mtype: 'POST',
            colNames: ['First Name', 'MI', 'Last Name'],
            colModel: [
                { name: 'First Name', index: 'FName', width: 40, align: 'left' },
                { name: 'MI', index: 'MInitial', width: 40, align: 'left' },
                { name: 'Last Name', index: 'LName', width: 400, align: 'left' }],
            pager: jQuery('#pager'),
            rowNum: 10,
            rowList: [5, 10, 20, 50],
            sortname: 'Id',
            sortorder: "desc",
            repeatitems: false,
            viewrecords: true,
            imgpath: '/scripts/themes/basic/images',
            caption: 'My first grid'
        });

    What I'm getting from the database:

        [["4","Jenna","Mccarthy"],["56","wer","weoiru"]]

    Now correct me if I am wrong, but the index: in my colModel refers to the column names in my database, right? Could someone point me to a straightforward reference, or just start me off with this? I would be most grateful.
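
    For what it's worth, jqGrid's default JSON reader does not want a bare array of arrays; it wants an object with paging fields and a rows list. A sketch of the target shape (built in Python purely for illustration; the MI cell is left blank because the query above does not return it):

        import json

        db_rows = [["4", "Jenna", "Mccarthy"], ["56", "wer", "weoiru"]]   # id, first, last

        response = {
            "page": 1,                       # current page
            "total": 1,                      # total number of pages
            "records": len(db_rows),         # total number of rows
            "rows": [
                {"id": r[0], "cell": [r[1], "", r[2]]}   # cells in colModel order
                for r in db_rows
            ],
        }
        print(json.dumps(response))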

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would try a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I get errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with this error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and redoing "depmod -a". I now don't get the dependencies error, but I get the following instead and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that the CentOS template has no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.

    Read the article

  • ORM in the real world

    - by josh
    I am beginning a new project that I think will last for some years. I am at the point of deciding which ORM framework to use (or whether to use one at all). Can anyone with experience tell me whether ORM frameworks are used in real-world applications? The problem I have in mind is this: the ORM tool will generate tables and columns for me as I create and modify my entities. However, after the project has gone live and is in production, certain database changes will not be possible. Can this hinder the advancement of the project? If I had used a framework like iBATIS, for example, I know I would only need to adjust the SQL statements based on the database changes. Can someone tell me whether ORM tools have survived the live environment? At my office, we use a Java-based ERP that was done long ago and it was never done using any ORM framework. Regards, Josh

    Read the article

  • How to restore broken Ethernet functionality on Mac G5 running Mac OS 10.4.11 (Tiger)

    - by willc2
    I had a disk error that rendered my Mac unbootable. I repaired it with Tech Tool 4, but now networking does not work. Network Preferences reports that my Ethernet cable is unplugged. I know this is bogus because when I boot from an emergency partition, networking works correctly. Furthermore, wireless networking is also broken, which I tested with a known-good Wi-Fi dongle. Whenever I try to change Network Port Configurations by creating a New or Renaming an existing one, example: I get this message in the console: Error - PortScanner - setDevice, device == nil! Error - PortScanner - setDevice, device == nil! In sets of two as shown. When I try to invoke the Network Diagnostics app, it immediately crashes. My first thought is to reinstall Tiger with the Archive and Install method so I don't have to reinstall all my applications but I have lost my Tiger installer disk. My next thought is to buy Leopard for $107 on Amazon. If there is any way I can just repair my Tiger install I would be happy to save that money, though. This is not my main machine and I am loathe to put more money into it. How can I recover my network functionality? UPDATE: I found my Tiger install disk and tried an Archive and Install. It failed with an unhelpful error message along the lines of "Can't install, try again". I tried again but had the same error. My guess is, some corrupt or missing file in my User folder is preventing migration. I have a backup created with Super Duper that is a bit out of date but will startup the machine (with functional networking). I would love to just copy over the file(s) that got messed up but I don't even know where to look. What is the likely location of the System files that would cause the aforementioned symptoms?

    Read the article

  • how to connect java and mysql using mysql connector java 5.1.12

    - by user225269
    I'm still a beginner in Java. I don't have any idea how to import the files that I have downloaded into my Java class. They are in this path: E:\Users\user\Downloads\mysql-connector-java-5.1.12. I don't know what to do with the files I extracted from the download on the MySQL site in order to connect my Java application to a MySQL database. I'm using NetBeans 6.8 and have also installed WampServer. I've already checked out this: http://stackoverflow.com/questions/2118369/java-trouble-connecting-to-mysql and this: http://stackoverflow.com/questions/1640910/connecting-to-a-mysql-database but they don't seem to have answers on how to make use of the MySQL Connector/J file from the MySQL site. Please help, thanks.

    Read the article

  • A cycle in object graph detected in JPA.

    - by Nitesh Panchal
    Hello, I am trying to figure out this error. I've been at it for 5 hours without any success, so I finally thought of posting here. Please help, I am really in big trouble; I am stuck on this and see no way of solving it. This is my database structure:

        tblBlogRegion:  BlogRegionId (primary key), BlogRegionName
        tblGadget:      GadgetId (primary key), GadgetName
        tblBlogs:       BlogId (primary key), Blogname, BlogTypeId (reference key from tblSiteTerms)
        tblSiteTerms:   SiteTermsId (primary key), SiteTermsName
        tblBlogGadgets: BlogGadgetsId (primary key), BlogRegionId (foreign key from tblBlogRegion), BlogId (foreign key from tblBlogs), GadgetId (foreign key from tblGadget)

    Is this not a normal database structure? Do you see anything that is cyclic? When I try to fetch a list from tblGadgets I get this error:

        [com.sun.istack.SAXException2: A cycle is detected in the object graph. This will cause infinitely deep XML: entity.BlogGadgets[blogGadgetsId=1] -> entity.Blogs[blogId=2] -> entity.BlogGadgets[blogGadgetsId=1]]

    I am trying to get the list from a web service using JAX-WS.

    Read the article

  • Memory-efficient import of many data files into a pandas DataFrame in Python

    - by richardh
    I import a directory of |-delimited .dat files into a pandas DataFrame. The following code works, but I eventually run out of RAM with a MemoryError:

        import pandas as pd
        import glob

        temp = []
        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            print dataFile
            temp.append(pd.read_table(dataFile, delimiter='|', header=0))

        masterAll = pd.concat(temp)

    Is there a more memory-efficient approach? Or should I go whole hog to a database? (I will move to a database eventually, but I am baby-stepping my move to pandas.) Thanks! FWIW, here is the head of an example .dat file:

        cik|cname|ftype|date|fileloc
        1000032|BINCH JAMES G|4|2011-03-08|edgar/data/1000032/0001181431-11-016512.txt
        1000045|NICHOLAS FINANCIAL INC|10-Q|2011-02-11|edgar/data/1000045/0001193125-11-031933.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-11|edgar/data/1000045/0001193125-11-005531.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-27|edgar/data/1000045/0001193125-11-015631.txt
        1000045|NICHOLAS FINANCIAL INC|SC 13G/A|2011-02-14|edgar/data/1000045/0000929638-11-00151.txt
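
    One lower-memory direction, sketched here (untested against the original data): stream each file in chunks straight into SQLite with to_sql, so the concatenated frame never has to sit in RAM, then read back only the columns and rows a given analysis needs.

        import glob
        import sqlite3
        import pandas as pd

        conn = sqlite3.connect("edgar.db")
        data_dir = "C:/users/richard/research/data/edgar/masterfiles"

        for data_file in glob.glob(data_dir + "/master_*.dat"):
            # chunksize makes read_table yield DataFrames of 100000 rows at a time
            for chunk in pd.read_table(data_file, delimiter="|", header=0, chunksize=100000):
                chunk.to_sql("master", conn, if_exists="append", index=False)

        # Later, pull back only what is needed for a given analysis.
        tenqs = pd.read_sql_query("SELECT cik, cname, date FROM master WHERE ftype = '10-Q'", conn)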

    Read the article

  • MySQL "OR MATCH" hangs (long pause with no answer) on multiple tables

    - by Kerry
    After learning how to do MySQL Full-Text search, the recommended solution for multiple tables was OR MATCH and then do the other database call. You can see that in my query below. When I do this, it just gets stuck in a "busy" state, and I can't access the MySQL database.

        SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`,
               c.`image`, c.`swatch`, e.`name` AS industry,
               MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance
        FROM `products` AS a
        LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`)
        LEFT JOIN ( SELECT `product_id`, `image`, `swatch` FROM `product_images`
                    WHERE `sequence` = 0) AS c ON (a.`product_id` = c.`product_id`)
        LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`)
        INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`)
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
          OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE )
        GROUP BY a.`product_id`
        ORDER BY relevance DESC
        LIMIT 0, 9

    Any help would be greatly appreciated.
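
    Not from the original post, but one thing worth checking in a WHERE clause that mixes AND and OR: in SQL, AND binds tighter than OR, so without parentheses the final OR MATCH applies almost unfiltered to the whole join. A sketch of the grouped version (shown as a Python query string, placeholders as in the post):

        where_clause = """
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND ( MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
             OR MATCH( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE ) )
        """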

    Read the article

  • How to keep the view count of a question in memory?

    - by Freewind
    My website is like Stack Overflow; there are many questions. I want to record how many times a question has been visited. I have a column called "view_count" in the question table to store it. When a user visits a question many times, view_count should be increased only by 1, so I have to record which user has visited which question. I think it is too expensive to save those records in the database, because there would be a huge number of them. So I want to keep them in memory and persist the count to the database every 10 minutes. I have searched for information about the Rails "cache", but I haven't found an example. I need a simple example of how to do this. Thanks for the help!
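
    In Rails this kind of thing is usually built on Rails.cache (memcached or similar) plus a background job. As a language-neutral illustration, here is the same idea in plain Python, with a periodic flush writing the accumulated counts back; all names are made up:

        import threading
        from collections import defaultdict

        seen = set()                      # (user_id, question_id) pairs already counted
        pending = defaultdict(int)        # question_id -> views not yet written to the DB
        lock = threading.Lock()

        def record_view(user_id, question_id):
            with lock:
                if (user_id, question_id) not in seen:
                    seen.add((user_id, question_id))
                    pending[question_id] += 1

        def flush(db):                    # run this from a background job every 10 minutes
            with lock:
                counts = dict(pending)
                pending.clear()
            for qid, n in counts.items():
                db.execute("UPDATE questions SET view_count = view_count + ? WHERE id = ?",
                           (n, qid))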

    Read the article
