Search Results

Search found 7470 results on 299 pages for 'storage engines'.

Page 5/299 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad-core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (LeftHand speak for network mirroring, think of it as RAID 1 across nodes or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server on our main site. All connections will be GbE with two dedicated to storage. Management and Data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, DHCP, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated Data/Mgmt interfaces?

    Read the article

  • Sizing Switches for Storage and Production

    - by Untalented
    A couple of questions. Should you always completely separate the storage network switches from the production switches, or are VLANs fine to segment this traffic? Is there a golden rule here? How do you properly size a switch for your environment based on the specifications the manufacturer provides (throughput, forwarding throughput, stacking throughput, max MAC addresses)? If you have two switch options and one has a maximum MAC table of 8,000 vs. another with 16,000, what does this really mean to me? How do I make sure one vs. the other is sized properly for me? Besides VLAN and jumbo frame support, are there any other "must haves" for a virtual environment's production or storage networks? There is a wealth of knowledge on sizing SANs and such, but this seems equally important and it's quite challenging to find as much information. -- Just to add some tidbits of information about the environment: this setup refers to the data center, which supports two different locations with about 100 users in total. The storage traffic will be iSCSI, with 3 ESXi hosts and one SAN housing about 2.7TB of data. Since there is currently no storage network in place (no SAN), I'm having a hard time with #2, really determining what backplane throughput and switch specifications will be sufficient.

    Read the article

  • Unlimited and multi-computer online storage solutions with automatic backup

    - by JRL
    As the title says, what are the existing online storage solutions that provide: unlimited storage automatic backup and allow for an unlimited number of computers (use not tied to a single computer)? There are several existing questions on this site related to online storage solutions, but none that is specifically targeted to what I want, so I thought I'd ask the question. This wikipedia article lists some of them, are there others? How do they compare in terms of price, feature set and ease of use? Update: Kinda disappointed no one has any answers to this so far. JungleDisk looks promising, anyone have experience with it? Update 2: To answer the comments, what I'm looking for definitely DOES exist. These solutions all seem to fit the bill: BackMii CrashPlan DataPreserve Humyo JungleDisk KeepVault SpiderOak And some of them are quite cheap (CrashPlan is $100 a year). For unlimited space and computers, I'd say that's pretty good. Does anyone have experience with CrashPlan or any other of the above solutions?

    Read the article

  • Can storage-spaces drives be moved to a replacement server when there is a failure

    - by Joe C
    I have tried to search here and on Google, but cannot find a case explaining this. Storage Spaces is similar to software RAID. If the server fails due to the motherboard or some other issue, can the drives that comprise that Storage Spaces config be moved to another Win2k12 server without restoring from backup? This can be done with Linux software RAID. If so, does the Storage Spaces config have to be re-created prior to the move, or do the drives hold the config so they are essentially plug and play? Thanks.

    Read the article

  • Storage replication/mirror over WAN

    - by galitz
    Hello, we are looking at storage replication between two data centers (600 km apart) to support an active-passive cluster design for disaster recovery. The OS layer will be mostly Windows Server 2003/2008, with some OpenSUSE Linux used for performance monitoring, on VMware or possibly XenServer. The primary application service to replicate is Nvision. Datacenter 1 will have two storage systems for local active-passive or perhaps active-active replication, with Datacenter 2 used as a last-resort disaster recovery site. We have a handle on most aspects, but I am looking for specific recommendations on storage platforms that can handle remote replication cleanly. Thanks.

    Read the article

  • Does Citrix XenServer have storage Migration

    - by Entity_Razer
    I'm trying to find out, but I can't seem to find a definitive yes/no answer, so I thought I'd ask the ServerFault community this simple question: does XenServer (in any version) support storage migration, such as VMware's Storage vMotion capability or Hyper-V's storage migration? I'm trying to do a comparative study of all platforms, but I can't find a website (preferably Citrix-supported or another "legit" source) where it says a definitive yes or no. Anyone able to answer this one for me? Cheers!

    Read the article

  • Can you add preexisting storage pools in server 2012

    - by Justin
    I have been looking at Windows Server 2012's storage pools, and it looks like an ideal solution for my home media center. One thing I couldn't find information on is adding a preexisting pool to a fresh server install. I ask this given the following situation: you install Windows Server 2012 and set up your storage pools; you add disks to your pool over time; a year later the drive with the operating system fails; you replace the bad drive and reinstall Server 2012. Now how do you add this preexisting storage pool, full of data, to your fresh install?

    Read the article

  • Will search engines discover that our old pages have been 301 redirected if there are no more links to them in the old site?

    - by Obay
    We've moved our website to a new domain. Thousands of our pages come from one PHP file in the old site (e.g. oldsite.com/news.php?id=<id>). So we added some code in the news.php file to do a 301 redirect to the specific corresponding news article in the new website (newsite.com/news/<id>). We have not yet done a 301 redirect for the root of the old site (so we could display a notice to our users that we've moved), but all links inside it are already 301 redirected. My concern is that, when Google crawls our old website, it will no longer be able to find the old news articles and discover that they have been 301 redirected -- is this correct? If so, does that mean our PageRank won't be carried over to the new site? I've also read that we would need to create a sitemap for the new site. Is it possible to indicate in the sitemap the old and new locations of specific pages? Because if not, how will Google know? (I'm not sure a change of address in Webmaster Tools would be specific enough).
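    As a side note on checking this kind of migration: below is a minimal sketch (Python, using the requests library, with the URL patterns from the question assumed) that confirms each old article URL answers with a 301 and the expected Location header, which is what a crawler will see when it revisits URLs already in its index, even if nothing links to them any more.

    ```python
    # Minimal sketch: verify that old article URLs return a 301 pointing at the
    # new location. URL patterns are taken from the question; sample IDs are made up.
    import requests

    OLD_URL = "http://oldsite.com/news.php?id={id}"
    NEW_URL = "http://newsite.com/news/{id}"

    def check_redirect(article_id: int) -> bool:
        old = OLD_URL.format(id=article_id)
        expected = NEW_URL.format(id=article_id)
        # Do not follow the redirect automatically; we want to inspect it.
        resp = requests.get(old, allow_redirects=False, timeout=10)
        return resp.status_code == 301 and resp.headers.get("Location") == expected

    if __name__ == "__main__":
        for article_id in (1, 2, 3):  # replace with real article IDs
            print(article_id, "OK" if check_redirect(article_id) else "BROKEN")
    ```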

    Read the article

  • Which is better for search engines, repeated phrases or different phrases with the same meaning?

    - by George Botros
    When I'm designing an ads website I have two options: let the advertiser choose from some predefined lists to create the new ad, for example a product list (T-Shirt, Shorts, Suit, ...) and a color list (Black, Red, ...); or let the advertiser write his own descriptive content for the product, for example "Amazing suit with a good price". I like the first scenario, but which is better for search engine optimization (SEO): repeated phrases or different phrases with the same meaning? Note: assume each page will contain one or more ads.

    Read the article

  • 404 code/header for search engines, on removed user content?

    - by mowgli
    I just got an email from a former user of my website. He was complaining that Google still shows the contact page he created on my site, even though he deleted it a month ago. This is the first time in many years anyone has requested this. I told him that it's almost entirely up to Google what content it wants to keep/show and for how long; if it's deleted on the site, I can't do much other than request a re-visit from Googlebot. The user page now says something like "Not found. The user has removed the content". TL;DR: the question is: should I generally add a 404 header (or another status) for dynamic user content that has been removed from the site? Or could this hurt the site (SEO)?
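    For what it's worth, the usual pattern is to send a real 404 or 410 status code along with the "removed" page rather than a 200. A minimal sketch follows (Python/Flask, with hypothetical route and user names, not necessarily the asker's stack): the page body can stay friendly while the status line tells crawlers to drop the URL.

    ```python
    # Minimal sketch (Flask, hypothetical names): return a real 410 Gone status
    # for user pages that were deliberately deleted, instead of a 200 "not found"
    # page, so search engines can drop them from the index.
    from flask import Flask, abort

    app = Flask(__name__)

    # Stand-ins for the real lookup; in practice this would query the site's database.
    ACTIVE_USERS = {"alice", "bob"}
    DELETED_USERS = {"carol"}

    @app.route("/users/<username>")
    def user_page(username):
        if username in ACTIVE_USERS:
            return f"Profile page for {username}"
        if username in DELETED_USERS:
            # 410 tells crawlers the page is gone on purpose; 404 also works,
            # but is treated more like "might come back".
            abort(410)
        abort(404)

    if __name__ == "__main__":
        app.run()
    ```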

    Read the article

  • Ideas for multiplatform encrypted java mobile storage system

    - by Fernando Miguélez
    Objective I am currently designing the API for a multiplatform storage system that would offer the same interface and capabilities across the following supported mobile Java platforms: J2ME. Minimum configuration/profile CLDC 1.1/MIDP 2.0 with support for some necessary JSRs (JSR-75 for file storage). Android. No minimum platform version decided yet, but it could likely be API level 7. BlackBerry. It would use the same code base as J2ME but take advantage of some advanced capabilities of the platform. No minimum configuration decided yet (maybe 4.6 because of the 64 KB limitation for RMS on 4.5). Basically the API would offer three kinds of stores: Files. These would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.). Preferences. A special store that handles properties accessed through keys (similar to a plain old Java properties file but supporting some improvements, such as the different value data types of SharedPreferences on the Android platform). Local Message Queues. This store would offer basic message queue functionality. Considerations: Inspired by JSR-75, all types of stores would be accessed in a uniform way by means of a URL following RFC 1738 conventions, but with custom-defined prefixes (i.e. "file://" for files, "prefs://" for preferences or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform implementation. Only files would allow hierarchical storage (folders) and access to external storage memory cards (by means of a unit name, the same way as in JSR-75, but one that would not change regardless of the underlying platform). The other types would only support flat storage. The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object type. Implementation issues: For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms: Files. These are available on all platforms (J2ME only with JSR-75, but it is mandatory for our needs). The abstract File to actual File mapping is straightforward except for addressing issues. RMS. This type of store, available on J2ME (and BlackBerry) platforms, is convenient for Preferences and maybe Message Queues (though depending on performance or size requirements these could be implemented by means of normal files). SharedPreferences. This type of storage, only available on Android, would match the Preferences needs. SQLite databases. These could be used for message queues on Android (and maybe BlackBerry). When it comes to encryption, some requirements should be met: To ease the implementation, it will be carried out on a per-read/write-operation basis on streams (for files), RMS records, SharedPreferences key-value pairs and SQLite database columns. Every underlying storage object should use the same encryption key. Handling of encrypted stores should be the same as their unencrypted counterparts; the only difference (from the user's point of view) when accessing an encrypted store would be the addressing. The user PIN provides access to any secure storage object, but changing it would not require decrypting/re-encrypting all the encrypted data. The cryptographic capabilities of the underlying platform should be used whenever possible, so we would use: J2ME: SATSA-CRYPTO if it is available (it is not mandatory) or the lightweight BouncyCastle cryptographic framework for J2ME. BlackBerry: RIM Cryptographic API or BouncyCastle. Android: JCE with the integrated cryptographic provider (BouncyCastle?). Doubts: Having reached this point, I was struck by some doubts about which solution would be more convenient, taking into account the limitations of the platforms. These are some of my doubts: Encryption algorithm for data. Would AES-128 be strong and fast enough? What alternatives would you suggest for such a scenario? Encryption mode. I have read about the weakness of ECB encryption versus CBC, but in this case the former would have the advantage of random access to blocks, which is interesting for seek functionality on files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case? Key generation. There could be one key generated for each storage object (file, RMS RecordStore, etc.), or just one for all the objects of the same type. The first seems "safer", though it would require some extra space on the device. In your opinion, what would be the trade-offs of each? Key storage. For this case, a standard JKS (or PKCS#12) KeyStore file could be suited to storing encryption keys, but I could also define a smaller structure (encryption transformation / key data / checksum) that could be attached to each store (i.e. using additional files with the same name and a special extension for plain files, or embedded inside other types of objects such as RMS record stores). Which approach would you prefer? And when it comes to using a standard KeyStore with multiple-key generation (given this is your preference), would it be better to use a key store per storage object or just one global KeyStore keeping all keys (i.e. using the URL identifier of the abstract storage object as the alias)? Master key. The use of a master key seems obvious. This key should be protected by the user PIN (introduced only once) and would allow access to the rest of the encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require re-encrypting this key and not all the encrypted data. Where would you keep it, taking into account that if it got lost all data would no longer be accessible? What further considerations should I take into account? Platform cryptography support. Do SATSA-CRYPTO-enabled J2ME phones really take advantage of some dedicated hardware acceleration (or another advantage I have not foreseen), and would this approach be preferred (whenever possible) over a plain BouncyCastle implementation? For the same reason, is the RIM Cryptographic API worth the license cost over BouncyCastle? Any comments, critiques, further considerations or different approaches are welcome.
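    Since the post is really describing a key hierarchy, here is a compact sketch of the master-key scheme it outlines: a random master key wrapped by a PIN-derived key, and one random key per storage object wrapped by the master key, so a PIN change re-wraps only the master key and never touches bulk data. It is written in Python with the cryptography package purely as an illustration (the real targets are J2ME/Android/BlackBerry with SATSA/JCE/BouncyCastle), and it uses AES-GCM rather than the CBC/ECB modes debated above simply because it is the easiest authenticated mode to show; it does not settle the random-access question.

    ```python
    # Illustrative sketch of the PIN -> master key -> per-object key hierarchy.
    # All names and parameters here are assumptions, not the post's actual API.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def pin_key(pin: str, salt: bytes) -> bytes:
        """Key derived from the user PIN; only ever used to wrap the master key."""
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=200_000)
        return kdf.derive(pin.encode())

    def wrap(wrapping_key: bytes, secret: bytes, label) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(wrapping_key).encrypt(nonce, secret, label)

    def unwrap(wrapping_key: bytes, blob: bytes, label) -> bytes:
        return AESGCM(wrapping_key).decrypt(blob[:12], blob[12:], label)

    # Setup: one random master key wrapped by the PIN key, one random key per
    # storage object wrapped by the master key.
    salt = os.urandom(16)
    master_key = AESGCM.generate_key(bit_length=128)      # AES-128 as discussed above
    wrapped_master = wrap(pin_key("1234", salt), master_key, b"master")
    object_key = AESGCM.generate_key(bit_length=128)
    wrapped_object = wrap(master_key, object_key, b"object")

    # Data encryption uses only the per-object key.
    ciphertext = wrap(object_key, b"record contents", None)
    assert unwrap(object_key, ciphertext, None) == b"record contents"

    # PIN change: re-wrap ONLY the master key; object keys and data stay untouched.
    wrapped_master = wrap(pin_key("9876", salt), master_key, b"master")
    ```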

    Read the article

  • Dirty Cache Dell Equallogic Storage Array

    - by Jermal Smith
    Has anyone ever run into a dirty cache issue with an EqualLogic SAN? Even after replacement of the controller cards, the EqualLogic storage array fails offline with a dirty cache. I have listed steps on my blog to bring the SAN online again; however, this is not the best solution, as it continues to fail. http://jermsmit.com/dirty-cache-dell-equallogic-storage-array/ If you have any info on this, please share. Thanks, Jermal

    Read the article

  • Critique My Backup and Storage Plan

    - by MetaHyperBolic
    My current storage (RAID-1 off of a hardware RAID card) and backup (a spare drive) solutions for my home network are inadequate. I have too much data scattered on various one-off drives. It is time to evolve. Backups seem simple enough, at least: lots of big drives. However, I am bewildered by the number of choices for small home storage. The Drobo S looks appealing. So does the ReadyNAS. I am not looking for bunches of shiny features, I'm mostly interested in reliability. I am not interested in building Yet Another PC to create a file server or doing something in the cloud, or whatever. I'm stupid, so I am keeping it simple. Requirements for Main Volume: Starting working space roughly 2TB, with options for growth up to 5TB. RAID or something RAID-like with at least one parity drive. eSATA II for speed during backups. Ability to shut down gracefully when alerted of low power by a UPS. Optional but Desirable: Will take 2TB drives now with options for the larger 3TB drives coming in 2010-2011. Optional but Desirable: RAID-6 or something similar, with two parity drives. Optional but Desirable: Hot spare. Ethernet connection not required, as the volume will be shared via the same machine which runs my home print server. Backups: Backup performed via ROBOCOPY in mirror mode to an external hard drive via an eSATA II connection. Start with rotating between two external 2TB hard drives, will go up to six external 2TB drives. Start with a weekly backup, move to a bi-weekly backup as more drives are added. Move to 3TB drives as the size of my main volume increases. Backup drives will be stored at an off-site location. Hard drives: I plan on buying all of the same model, but different batches from different vendors. I found a "burn-in" utility with which I can pound away on the drives for a couple of weeks before adding them to the backup pool or the main volume. I estimate that I am looking at roughly $1,500 to start, once I start throwing in 2TB drives for backup and four for storage. So, are there any obvious flaws in my plan? What have I overlooked? Any suggestions for the storage device for my main volume that fits my requirements? Or do I just keep it simple, 2 drives in RAID-1, then perform due diligence with my backups, accepting that I will have to buy a whole new unit when my data grows past 2TB?
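    For the rotation itself, a small script is usually enough. The sketch below (Python on Windows; drive letters and log path are made up) picks whichever of the two external disks is currently attached and mirrors the main volume to it with robocopy /MIR, matching the mirror-mode plan above.

    ```python
    # Rough sketch of the backup rotation: mirror the main volume to whichever
    # external backup drive is plugged in this week, via robocopy /MIR.
    import datetime
    import pathlib
    import subprocess

    SOURCE = "D:\\"                   # main RAID volume (hypothetical letter)
    BACKUP_DRIVES = ["E:\\", "F:\\"]  # the two rotating eSATA disks (hypothetical)

    def find_backup_target() -> str:
        for drive in BACKUP_DRIVES:
            if pathlib.Path(drive).exists():
                return drive
        raise RuntimeError("No backup drive attached")

    def run_backup() -> None:
        target = find_backup_target()
        log = f"backup-{datetime.date.today():%Y%m%d}.log"
        # /MIR mirrors the tree (including deletions). robocopy uses non-zero
        # exit codes even on success, so don't treat them as errors here.
        subprocess.run(["robocopy", SOURCE, target, "/MIR", f"/LOG:{log}"], check=False)

    if __name__ == "__main__":
        run_backup()
    ```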

    Read the article

  • Good practices while working with multiple game engines, porting a game to a new engine

    - by Mahbubur R Aaman
    I have to work with multiple game engines, like Cocos2d, Unity3D and Galaxy. While working with multiple game engines, what practices should I follow? EDIT: Are there any guidelines to follow that would help anyone working with multiple game engines? EDIT: When a game made with Cocos2d has done well on the App Store and our target is to port it to other platforms, we use Unity3D. What should we do here?

    Read the article

  • Why is C++ used for game engines? How about its future in game engines?

    - by kasperov
    C++, as I have seen, is heavily used in 3D video game engines. Is it because of performance, legacy code, or libraries such as DirectX? If performance, libraries and code infrastructure are the reasons, doesn't that make C++ indispensable, at least for game engines (i.e., we have no other option even in the very distant future)? I ask this because I'd like to know the upcoming trends in game engines.

    Read the article

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB and 1000GB (five storage devices in one system). My application will store any kind of file to the storage system. Question: how can I build distributed storage with data redundancy and failover to store documents, videos and any type of file, while ensuring that, should any one of the storage devices fail, there is another copy of those files on another storage device? The concern, however, is that 50GB of storage can only hold so many files compared to 70GB, 150GB, etc. Treating the five storage devices as one cloud-like storage pool, is there any logical way for my application to distribute or store the files? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage pool with multiple different storage sizes? I open this topic with the objective of discussing the best way to implement this idea; assuming simplicity, what are the issues of this implementation, the performance measurements, and the limitations?
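    To make the redundancy idea concrete, here is a toy sketch (Python; the device sizes come from the question, everything else is assumed): keep N copies of each file, always on distinct devices, and bias placement toward the devices with the most free space relative to their capacity, so the 50GB box fills at roughly the same relative rate as the 1000GB one.

    ```python
    # Toy capacity-weighted replica placement. Sizes in GB; names are made up.
    from dataclasses import dataclass, field

    @dataclass
    class Device:
        name: str
        capacity: float
        used: float = 0.0
        files: set = field(default_factory=set)

        @property
        def free_fraction(self) -> float:
            return (self.capacity - self.used) / self.capacity

    def place(devices, file_id: str, size: float, copies: int = 2):
        # Candidates that can still hold the file, never reusing a device for
        # the same file, best relative free space first.
        candidates = sorted(
            (d for d in devices if d.capacity - d.used >= size and file_id not in d.files),
            key=lambda d: d.free_fraction,
            reverse=True,
        )
        if len(candidates) < copies:
            raise RuntimeError("Not enough devices for the requested redundancy")
        chosen = candidates[:copies]
        for d in chosen:
            d.used += size
            d.files.add(file_id)
        return [d.name for d in chosen]

    devices = [Device("a", 50), Device("b", 70), Device("c", 150),
               Device("d", 250), Device("e", 1000)]
    print(place(devices, "video-001", size=4.7))  # two distinct device names
    ```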

    Read the article

  • When to Use workflow engines?

    - by A01_
    I'm totally new to this concept from a design perspective. I've worked on some workflow engines in the past as a programmer, but never had clarity on why we chose workflow engines in the first place. And as a programmer I know that there are at least 100 ways to do anything when you are writing code, but only a few of them are the best! I still don't understand which use cases are better solved by workflow engines (or rather their concept) than by designing a good DI-enabled application. I'm looking for general characteristics of domain-neutral use cases where workflow engines are one of the best options. So my question is: what are the general characteristics of a requirement which can be taken as a signal for opting for a good workflow engine and coding around it? Cheers!

    Read the article

  • How to fake Azure Table Storage in .NET for Unit Testing?

    - by Erick T
    I am working on a system that uses Azure Table Storage. In other systems (e.g., SQL, file-based, etc.), I can write a fake that allows me to test my data persistence logic. However, I can't see an easy way to create a fake for the Azure Table Service. I could create a new IIS project that behaves the same way, but that isn't a good way to write a unit test; it is more of an integration test. Any thoughts on how to unit-test data access code that uses the Azure Table Storage client? Thanks, Erick
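    The usual answer is to hide the table client behind your own small interface and unit-test against an in-memory fake keyed the same way the table is (PartitionKey + RowKey). The question is about .NET, but the shape is the same in any language; here it is as a Python sketch with hypothetical names:

    ```python
    # Language-agnostic illustration of faking table storage for unit tests:
    # production code depends on a small interface, tests inject an in-memory fake.
    from abc import ABC, abstractmethod
    from typing import Optional

    class TableStore(ABC):
        @abstractmethod
        def upsert(self, partition_key: str, row_key: str, entity: dict) -> None:
            ...

        @abstractmethod
        def get(self, partition_key: str, row_key: str) -> Optional[dict]:
            ...

    class InMemoryTableStore(TableStore):
        """Fake used by unit tests; no network, no storage emulator."""
        def __init__(self):
            self._rows = {}

        def upsert(self, partition_key, row_key, entity):
            self._rows[(partition_key, row_key)] = dict(entity)

        def get(self, partition_key, row_key):
            return self._rows.get((partition_key, row_key))

    class CustomerRepository:
        """Production code depends on the interface, not on the storage SDK."""
        def __init__(self, store: TableStore):
            self._store = store

        def save(self, customer_id: str, data: dict) -> None:
            self._store.upsert("customers", customer_id, data)

        def load(self, customer_id: str) -> Optional[dict]:
            return self._store.get("customers", customer_id)

    def test_save_then_load():
        repo = CustomerRepository(InMemoryTableStore())
        repo.save("42", {"name": "Erick"})
        assert repo.load("42") == {"name": "Erick"}
    ```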

    Read the article

  • Cannot remove storage account because of lease, but I already deleted the server [closed]

    - by djechelon
    I recently created a temporary virtual server on Azure, then deleted it. I wanted to delete the storage account associated with it because I didn't need it any more. The problem is that the VHD file is still associated with a non-existent virtual machine! If I try to delete the VHD from Virtual Machines\Disks, the Delete button is greyed out and the table tells me it's still associated with the old VM. If I go to storage administration and try to delete the blob from the vhds/ directory, I'm told there is an active lease. I've read on the Azure forums that, in these cases, one should try to force-release the lease from the blob. I followed their instructions and downloaded their script, but running it failed. The script detected that the disk is associated with a virtual machine and can't be deleted. The problem is that I'm 1000000% sure that I already deleted the VM. In fact, I currently only have a single VM, which has its own HD and is up and running fine! What can I do to delete that storage account that is probably sucking money from my pocket?

    Read the article

  • What is the best private cloud storage setup

    - by vdrmrt
    I need to create a private cloud and I'm searching for the best setup. These are my 2 most important requirements: 1. disk and system redundancy; 2. price/GB as low as possible. The system is going to be used as a backup setup which will receive data 24/7 over SFTP and rsync. High throughput is not that important. I'm planning to use GlusterFS and consumer-grade 4TB hard drives. I have worked out 3 possible setups. (1) 3 servers with 11 4TB HDDs: set up a replica-3 GlusterFS volume and set up each hard drive as a separate ext4 brick. Total capacity: 44TB; HDD/TB ratio of 0.75 (33 HDDs / 44TB). (2) 2 servers with 11 4TB HDDs: the 11 hard drives are combined into a RAID-Z3 ZFS storage pool, with a replica-2 Gluster setup. Total capacity: 32TB (+ ZFS compression); HDD/TB ratio of 0.68 (22 HDDs / 32TB). (3) 3 servers with 11 4TB consumer hard drives: set up a replica-3 GlusterFS volume, set up each hard drive as a separate ZFS storage pool and export each pool as a brick. Total capacity: 32TB (+ ZFS compression); HDD/TB ratio of 0.68 (22 HDDs / 32TB). (Cheapest.) My remarks and concerns: If a hard drive fails, which setup will recover the quickest? In my opinion, setups 1 and 3, because there only the contents of one hard drive need to be copied over the network, whereas in setup 2 the hard drive needs to be reconstructed by reading the parity of all the other hard drives in the system. Will a ZFS pool on one hard drive give me extra protection against, for example, bit rot? With setups 1 and 3 I can lose 2 systems and still be up and running; with setup 2 I can only lose 1 system. When I use ZFS I can enable compression, which will give me some extra storage.

    Read the article

  • New whitepaper, “Why Oracle Sun ZFS Storage Appliance for Oracle Databases?” now available.

    - by Cinzia Mascanzoni
    Databases are the backbone of today's modern business, providing transaction integrity for key business systems such as payment engines, or providing the core of analytical data for decision-making. These diverse use cases require a flexible, high-performance and highly available storage platform. The ZFS Storage Appliance is ideally suited, with an architecture flexible enough to meet the ever-changing availability, capacity and performance requirements of the business. In this just-published white paper, the authors provide both business and technical evidence of the suitability of the Oracle ZFS Storage Appliance as primary storage for Oracle Database 11gR2 environments. Click here to download the whitepaper.

    Read the article

  • Recommended storage scheme for home server? (LVM/JBOD/RAID 5...)

    - by j-g-faustus
    Are there any guidelines for which storage scheme(s) makes most sense for a multiple-disk home server? I am assuming a separate boot/OS disk (so bootability is not a concern, this is for data storage only) and 4-6 storage disks of 1-2 TB each, for a total storage capacity in the range 4-12 TB. The file system is ext4, I expect there will be only one big partition spanning all disks. As far as I can tell, the alternatives are individual disks pros: works with any combination of disk sizes; losing a disk loses only the data on that disk; no need for volume management. cons: data management is clumsy when logical units (like a "movies" folder) are larger than the capacity of any single drive. JBOD span pros: can merge disks of any size. cons: losing a disk loses all data on all disks LVM pros: can merge disks of any size; relatively simple to add and remove disks. cons: losing a disk loses all data on all disks RAID 0 pros: speed cons: losing one drive loses all data; disks must be same size RAID 5 pros: data survives losing one disk cons: gives up one disk worth of capacity; disks must be same size RAID 6 pros: data survives losing two disks cons: gives up two disks worth of capacity; disks must be same size I'm primarily considering either LVM or JBOD span simply because it will let me reuse older, smaller-capacity disks when I upgrade the system. The runner-up is RAID 0 for speed. I'm planning on having full backups to a separate system, so I expect the extra redundancy from RAID levels 5 or 6 won't be important. Is this a fair representation of the alternatives? Are there other considerations or alternatives I have missed? And what would you recommend?
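    As a quick sanity check on the capacity trade-offs listed above, a few lines of arithmetic cover it (a throwaway Python helper; adjust disk counts and sizes to taste): individual disks, JBOD span, LVM and RAID 0 use every byte, RAID 5 gives up one disk, RAID 6 gives up two.

    ```python
    # Usable capacity of N equal-size disks under each scheme.
    def usable_tb(n_disks: int, disk_tb: float) -> dict:
        return {
            "individual/JBOD/LVM/RAID0": n_disks * disk_tb,
            "RAID5": (n_disks - 1) * disk_tb,
            "RAID6": (n_disks - 2) * disk_tb,
        }

    # Example: six 2TB drives.
    print(usable_tb(6, 2.0))  # {'individual/JBOD/LVM/RAID0': 12.0, 'RAID5': 10.0, 'RAID6': 8.0}
    ```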

    Read the article

  • How to get an email notification when a USB storage device is inserted?

    - by karthick87
    We are running more than 600 Ubuntu systems in our company. It is a data centre, so we have certain policies. We have disabled the use of storage devices on all the Ubuntu systems. However, we would like to configure email alerts: if someone inserts a storage device, we should get an email alert with a subject as below. Email Alert: STORAGE DEVICE FOUND on IP: 172.29.35.18 Note: for Windows systems we have certain policies applied in our DC, so there is no problem with Windows systems. We need to receive alerts for Ubuntu systems also. Any way to accomplish the above task would be great.
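    One common approach (a sketch, not a tested policy): add a udev rule that runs a small script whenever a block device appears over USB, and have that script send the mail. The rule syntax, mail server and addresses below are assumptions to adapt; the script itself is plain Python using smtplib.

    ```python
    #!/usr/bin/env python3
    # Sketch of the alert script a udev rule could call. A matching rule (an
    # assumption to adapt) might live in /etc/udev/rules.d/99-usb-alert.rules:
    #   ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb", RUN+="/usr/local/bin/usb_alert.py"
    # Mail server, sender and recipient below are placeholders.
    import os
    import smtplib
    import socket
    from email.message import EmailMessage

    def main() -> None:
        host = socket.gethostname()
        ip = socket.gethostbyname(host)                       # may need adjusting per /etc/hosts
        device = os.environ.get("DEVNAME", "unknown device")  # set by udev for the new device

        msg = EmailMessage()
        msg["Subject"] = f"STORAGE DEVICE FOUND on IP: {ip}"
        msg["From"] = "alerts@example.com"
        msg["To"] = "security@example.com"
        msg.set_content(f"USB storage device {device} was inserted on host {host}.")

        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)

    if __name__ == "__main__":
        main()
    ```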

    Read the article

  • 19" Rackmountable fast storage arrays

    - by Eruditass
    I'm in need of a fast (15+ GB/s write) and somewhat large (50 TB+) storage subsystem with a standard storage interface. I'd prefer something other than Fibre Channel, which admittedly appears to be the most standard interface. RAID 5 is sufficient. What are the cheapest/smallest products from vendors to look at for building this subsystem? I've looked at DDN and TMS so far. Small: under 20U. Cheap: under a million. Edit: I'd really like to cut back costs as much as possible, even at the expense of capacity. How cheaply can these bandwidth requirements be met?

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >