Search Results

Search found 321 results on 13 pages for 'iscsi'.


  • Can I use ext3 as a shared fs if I add a lock manager?

    - by edomaur
    I need a cluster filesystem for an iSCSI device. The problem is that the servers it is connected to generate data files which must be readable by every other server. Except for the writing and deleting of such files, I do not need a full locking scheme like in OCFS2 or GFS2. So, can I use a distributed lock manager (DLM) on top of an ext3 filesystem, or must I use a specialized filesystem?

    Read the article

  • Best filesystem choices for NFS storing VMware disk images

    - by mlambie
    Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. Which Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload?

    Read the article

  • Can someone explain iostat output?

    - by user37197
    I have an IBM server running Red Hat 5 ELsmp, connected to IBM storage over iSCSI (as sdb). Can someone explain this output from the iostat command?

        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                  12.79   0.01     4.53    72.22    0.00  10.45

        Device:     tps  Blk_read/s  Blk_wrtn/s    Blk_read    Blk_wrtn
        sda       95.63       48.88      240.95   485589164  2393706728
        sdb       29.20      350.49      402.08  3481983365  3994494696

    Moving large files to sdb is very slow. Is that normal?
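
    The 72% iowait is the red flag here: the CPUs are mostly sitting idle waiting on storage. A hedged next step, assuming the standard sysstat iostat, is to look at per-device latency and saturation rather than cumulative block counts:

        # extended per-device statistics, refreshed every 5 seconds
        iostat -x 5
        # watch await (average ms per I/O, queue + service time) and %util
        # (device saturation); a high await on sdb with modest tps points
        # at the iSCSI path rather than at the workload itself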

    Read the article

  • SQL Server in VMware

    - by UndertheFold
    Please provide your tips and best practices for virtualizing SQL Server in VMware ESX. I am interested in advanced configurations and settings; please provide the reasoning behind your recommendations. Edit: Just to clarify, I already have over 70 virtual SQL Servers in separate clusters using an iSCSI EqualLogic SAN. What I am really looking for are the advanced configurations, like: how you configured your disks/RDMs, and whether you make use of settings like Mem.ShareScanGHz - http://communities.vmware.com/thread/143828 - that are not well documented.

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100 GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space; in those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different filesystem for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
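
    One hedged mitigation, assuming Apache's stock htcacheclean (path and 15 GB limit taken from the question): run it permanently as a daemon, so deletions trickle out continuously instead of arriving as a multi-hour batch scan:

        # -d runs as a daemon (interval in minutes), -n yields to other
        # processes and I/O, -i cleans only when the cache was modified,
        # -t removes now-empty directories along the way
        htcacheclean -d60 -n -i -t -p/htcache -l15G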

    Read the article

  • Is it possible to use Linux as a Fibre Channel RAID disk box?

    - by SvenW
    You probably all know the relatively simple RAID boxes that export a bunch of SATA disks as one big drive via FC, SAS or iSCSI, like the HP StorageWorks MSA2000, the Infortrend EonStor series, or many other models from different manufacturers. Is it possible to create such a device with Linux, a few disks and an FC controller, using the controller in the reverse of the usual direction (as a target rather than an initiator)? This would come in handy to test some ideas and concepts in an emerging SAN environment.
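
    In principle this is what the Linux SCSI target frameworks do; whether it works depends on the HBA. A rough sketch, assuming the LIO/targetcli stack (mainlined after this question was asked) and a QLogic HBA whose tcm_qla2xxx driver supports target mode - placeholder names throughout, not a tested recipe:

        # build the backing store: an md RAID5 over the SATA disks (placeholders)
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
        # export it over FC with targetcli; the naa. WWN must match the HBA port
        targetcli /backstores/block create name=array0 dev=/dev/md0
        targetcli /qla2xxx create naa.21000024ff31a0e2
        targetcli /qla2xxx/naa.21000024ff31a0e2/luns create /backstores/block/array0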

    Read the article

  • Article about Sun ZFS Storage Appliances

    - by Owen Allen
    Sun ZFS Storage Appliances are versatile storage systems. Discovering and managing them in Ops Center, though, makes them even more versatile. If you discover a Sun ZFS Storage Appliance in Ops Center 12c, you can create iSCSI and Fibre Channel LUNs, and make the LUNs available to server pools and virtualization hosts as a storage library. Barbara Higgins has written an excellent article that walks you through the process of setting up a Sun ZFS Storage Appliance and discovering and managing it in Ops Center. If you're looking into ways to make a Sun ZFS Storage Appliance work for you, it's worth a look.

    Read the article

  • SQL Server 2012 AlwaysOn Groups and FCIs Part 4

    This is Part 4 of a series on AlwaysOn and FCI integration in SQL Server. In this article we will learn how to add the iSCSI disk storage to our SQL Server nodes and build the cluster.

    Read the article

  • You Couldn't Write it - Houston we have a problem!

    - by GrumpyOldDBA
    Note: identities changed to protect the innocent (sic). In a datacentre I have an iSCSI SAN which provides storage for a SQL cluster. It developed a fault and required replacement of a few parts, all hot-swappable. Although we had support/warranty, this did not include onsite service, so we arranged to have the parts delivered. The datacentre did not want to carry out the work, so we had to arrange for the manufacturer to send an engineer. Times were arranged and interested/concerned parties put on standby...(read more)

    Read the article

  • Documentation Changes in Solaris 11.1

    - by alanc
    One of the first places you can see Solaris 11.1 changes is in the docs, which have now been posted in the Solaris 11.1 Library on docs.oracle.com. I spent a good deal of time reviewing documentation for this release and thought some of it would be interesting to blog about, but I didn't review all the changes (not by a long shot) and am not going to cover all of them here, so there's plenty left for you to discover on your own. Just comparing the Solaris 11.1 Library list of docs against the Solaris 11 list will show a lot of reorganization and refactoring of the doc set, especially in the system administration guides. Hopefully the new breakdown will make it easier to get straight to the sections you need when a task is at hand.

    Packaging System

    Unfortunately, the excellent in-depth guide for how to build packages for the new Image Packaging System (IPS) in Solaris 11 wasn't done in time to make the initial Solaris 11 doc set. An interim version was published shortly after release, in PDF form on the OTN IPS page. For Solaris 11.1 it was included in the doc set, as Packaging and Delivering Software With the Image Packaging System in Oracle Solaris 11.1, so it should be easier to find, and easier to share links to specific pages in the HTML version. Beyond just how to build a package, it includes details on how Solaris is packaged and how package updates work, which may be useful to all system administrators who deal with Solaris 11 upgrades & installations. The Adding and Updating Oracle Solaris 11.1 Software Packages guide was also extended, including new sections on Relaxing Version Constraints Specified by Incorporations and Locking Packages to a Specified Version that may be of interest to those who want to keep the Solaris 11 versions of certain packages when they upgrade, such as the couple of packages that had functionality removed by an (unusual for an update release) End of Feature process in the 11.1 release. Also added in this release is a document containing the lists of all the packages in each of the major package groups in Solaris 11.1 (solaris-desktop, solaris-large-server, and solaris-small-server). While you can simply get the contents of those groups from the package repository, either via the web interface or the pkg command line, the documentation puts them in handy tables for easier side-by-side comparison, or for viewing the lists before you've installed the system to pick which one you want to install initially.

    X Window System

    We've not had good X11 coverage in the online Solaris docs in a while, mostly relying on the man pages and upstream X.Org docs. In this release, we've integrated some X coverage into the Solaris 11.1 Desktop Administrator's Guide, including sections on installing fonts for fontconfig or legacy X11 clients, X server configuration, and setting up remote access via X11 or VNC. Of course we continue to work on improving the docs, including a lot of contributions to the upstream docs all OSes share (more about that another time).

    Security

    One of the things Oracle likes to do for its products is publish security guides for administrators & developers, so they know how to build systems that meet their security needs. For Solaris, we started this with Solaris 11, providing a guide for sysadmins to find where the security-relevant configuration options were documented. The Solaris 11.1 Security Guidelines extend this to cover new security features, such as Address Space Layout Randomization (ASLR) and Read-Only Zones, as well as adding guidelines for existing features, such as how to limit the size of tmpfs filesystems to avoid users driving the system into swap-thrashing situations. For developers, the corresponding document is the Developer's Guide to Oracle Solaris 11 Security, which has for years been the source of documentation for security-relevant Solaris APIs such as PAM, GSS-API, and the Solaris Cryptographic Framework. For Solaris 11.1, a new appendix was added to start providing Secure Coding Guidelines for Developers, leveraging the CERT Secure Coding Standards and OWASP guidelines to provide the base recommendations for common programming languages and their standard APIs. Solaris-specific secure programming guidance was added via links to other documentation in the product doc set. In parallel, we updated the Solaris C Library Functions security considerations list with details of Solaris 11 enhancements such as FD_CLOEXEC flags, additional *at() functions, and new stdio functions such as asprintf() and getline(). A number of code examples throughout the Solaris 11.1 doc set were updated to follow these recommendations, changing unbounded strcpy() calls to strlcpy(), sprintf() to snprintf(), etc., so that developers following our examples start out with safer code. The Writing Device Drivers guide even had its appendix updated to list which of these utility functions, like snprintf() and strlcpy(), are now available via the Kernel DDI.

    Little Things

    Of course all the big new features got documented, and some major efforts were put into refactoring and renovation, but a lot of smaller things also got fixed in the nearly a year between the Solaris 11 and 11.1 doc releases - again too many to list here, but a random sampling of the ones I know about & found interesting or useful:

    - The Privileges section of the DTrace Guide now gives users a pointer to find out how to set up DTrace privileges for non-global zones and what limitations are in place there.
    - A new section on Recommended iSCSI Configuration Practices was added to the iSCSI configuration section when it moved into the SAN Configuration and Multipathing administration guide.
    - The Managing System Power Services section contains an expanded explanation of the various tunables for power management in Solaris 11.1.
    - The sample dcmd sources in /usr/demo/mdb were updated to include ::help output, so that developers like myself who follow the examples don't forget to include it (until a helpful code reviewer pointed it out while reviewing the mdb module changes for Xorg 1.12).
    - The README file in that directory was updated to show the correct paths for installing both kernel & userspace modules, including the 64-bit variants.

    Read the article

  • A complete virtualization landscape on your own laptop – here's how

    - by Manuel Hossfeld
    If you want to get better acquainted with the virtualization product Oracle VM in its current version 3.x, the obvious approach is to install your own environment for learning and testing. But that is easier said than done: a closer look at the architecture quickly shows that several machines are needed just to cover all the components. First, the OVM Server (or servers) must be installed. That is done quickly and easily, but since Oracle VM is a "type 1 hypervisor" - installed directly on the machine ("bare metal") - your own work PC or laptop is rather unsuitable for it. (A dual-boot setup would be conceivable, but fairly impractical.) Second, a machine is needed on which to install the OVM Manager. Unlike the OVM Server, it is not installed "bare metal" but on an existing Oracle Linux. But what do you do if you have no Linux server at hand and don't want to sacrifice extra hardware for one? If you want to try out all of Oracle VM's features, you should also have shared storage available, attached either via NFS or via a SAN (iSCSI or Fibre Channel). You don't strictly need "real" storage hardware for testing, but even "simulating" these components requires additional hardware with enough free disk space. (Alternatively, ready-made "software storage appliances" such as OpenFiler or FreeNAS can be used.)

    Assuming no "real" server and storage hardware is actually available, the three points above call for three or four machines (PCs, laptops...), depending on whether you want to start one or two OVM Servers. Fortunately, it can be done with far less effort: as briefly described in the blog post about the last OVM release, 3.1.1, the current version is able to run entirely inside VirtualBox as a guest. If this "double virtualization" reminds you of Russian matryoshka dolls, you are exactly right. Oracle VM VirtualBox forms the outer shell, so to speak - and since VirtualBox, unlike Oracle VM Server, is a "type 2 hypervisor", this approach also works on a "normal" work PC or laptop without completely overwriting its operating system. Best of all, you don't have to install the individual VirtualBox VMs yourself: both the OVM Manager and the OVM Server are available as prebuilt "VirtualBox appliances" for download from the Oracle Technology Network, and essentially only need to be imported and configured. The following diagram illustrates the principle: the dark green areas are instances of the VirtualBox appliances for OVM Server and OVM Manager just mentioned. (Two OVM Servers are shown here; a single one would of course suffice as a minimum, but then many features such as OVM HA cannot be tried out.)

    As a clever trick to save yet another VM for storage purposes, Wim Coekaerts (Senior Vice President of Linux and Virtualization Engineering at Oracle), the builder of the VirtualBox appliances, prepared the OVM Manager appliance so that it can simultaneously serve as an NFS share (or even as an iSCSI target). He describes this briefly on his blog as well. The light green ovals are the VMs that can then run inside one of the virtualized OVM Servers. Because this "double virtualization" loses the ability to do hardware virtualization, these "payload VMs" can only be paravirtualized (PVM). The network interfaces drawn in blue are virtual interfaces, which can be set up freely within VirtualBox; anyone who wants to explore the various network roles within Oracle VM in detail can of course configure more than two of them.

    The advantages of this solution for test and demo purposes are obvious: with just a single PC or laptop running VirtualBox, all of the components mentioned above can be installed and used - given enough RAM. 8 GB should be considered the minimum; if you want to keep working on the "host environment" (the PC running VirtualBox) and/or run several "payload VMs" in this simulated OVM server environment, 16 GB or more is recommended. Since the steps for installing and initially configuring the environment are described in detail in a corresponding paper, I'd like to use the rest of this article for some additional tips and details that can make life a little easier:

    - To approach the configuration of the environment in a relaxed way, with an extra "safety net", make extensive use of VirtualBox's built-in VM snapshot feature (see the command-line sketch below). It not only lets you roll back if something goes wrong, it also lets you repeat steps you've already completed (for example, to try out a different idea or variant of the environment).
    - Use meaningful names, both for the snapshots just mentioned and for the VMs themselves. That way you won't get confused, and even after a few weeks you'll still know which environment you're looking at. Include the exact version and build number of the OVM release in question. (See the following screenshot.) Further information on the current state and purpose of each VM can be kept in the often-overlooked description field.
    - BEFORE installing, prepare a note (or a text file) with the planned IP addresses and names for the VMs. (Don't forget: the server pool needs its own IP, too.) Also double-check and write down the actual networks of the VirtualBox interfaces you intend to use.
    - Careful: during installation there are some passwords that the user can set - and some that are fixed initially. The latter include the passwords for the ovs-agent and for the root user on the OVM Servers, both of which default to "ovsroot". (All other password information can be found in the "Read me first" document on the desktop of the OVM Manager VM.)
    - Watch out during the initial "interview phase" that the VirtualBox VMs go through after their first boot: at that point the US keyboard layout is still active, so it's better not to use a "y" or "z" in a password you choose yourself.
    - Because, as mentioned above, the OVM Manager also provides the shared storage, make sure its VM is started before the OVM Server VMs. (Otherwise, the cluster underlying the OVM server pool won't "find" its so-called "server pool file system".)
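
    As a hedged illustration of the snapshot tip above, the same workflow from the VirtualBox command line (the VM name is a placeholder, not necessarily the appliance's actual name):

        # take a named snapshot before a risky configuration step
        VBoxManage snapshot "OVM-Manager" take "before-serverpool-setup"
        # roll back to it later (with the VM powered off) and start again
        VBoxManage snapshot "OVM-Manager" restore "before-serverpool-setup"
        VBoxManage startvm "OVM-Manager"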

    Read the article

  • SQL Cluster on a Hyper-V Failover Cluster

    - by Chris W
    We have a VM running SQL Server on a six-node cluster of blades. The VM's data files are stored on a SAN, attached using a direct iSCSI connection. As this SQL Server will be running a number of important databases, we're debating whether we should cluster the SQL Server itself, or whether the fact that the VM is running in the cluster is sufficient to give us high availability. I'm used to running SQL clusters when dealing with physical servers, but I'm a bit sketchy on what best practice is when all the servers are just VMs sat on Hyper-V. If a blade running the VM fails, I presume the VM will be started up on another blade. I'm guessing the only benefit that adding a SQL cluster to the setup will give us is that the recovery time after a failure will be a little quicker? Are there any other benefits?

    Read the article

  • Hardware firewall vs VMWare firewall appliance

    - by Luke
    We have a debate going on in our office about whether it's necessary to get a hardware firewall or to set up a virtual one on our VMware cluster. Our environment consists of 3 server nodes (16 cores w/ 64 GB RAM each) over 2x 1 Gbit switches w/ an iSCSI shared storage array. Assuming that we would be dedicating resources to the VMware appliances, would we gain any benefit from choosing a hardware firewall over a virtual one? If we choose a hardware firewall, how would a dedicated server firewall w/ something like ClearOS compare to a Cisco firewall?

    Read the article

  • ZFS on Linux for RHEL/OEL NFS Sharing

    - by BBK
    I'm trying ZFS on Linux on Oracle Linux (OEL) 6.1 (a Red Hat RHEL 6.1 compatible clone). I successfully compiled and installed spl and zfs on it for the Oracle Unbreakable Kernel. ZFS is working, and I created a mirror with: zpool create -f -o ashift=12 tank mirror sdb sdc. Now I'm trying to share my ZFS dataset called "tank/nfs" as described on the zfsonlinux site: zfs set sharenfs=on tank/nfs. So I created tank/nfs and set sharenfs to on. Now I'm trying to mount the NFS share on the local host to test it: mount -t nfs4 127.0.0.1:/tank/nfs /mnt. But I get: mount.nfs4: mount system call failed. So the question is: how do I correctly share an NFS folder (or iSCSI volume) on OEL via ZFS on Linux, and mount it from a Linux client?
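
    A hedged checklist for the mount failure, assuming the stock RHEL/OEL 6 NFS stack: sharenfs=on only registers the export; the kernel NFS services still have to be running before any client (even 127.0.0.1) can mount it.

        service rpcbind start       # RPC portmapper, required by NFS
        service nfs start           # kernel nfsd + mountd
        zfs share tank/nfs          # (re)publish the ZFS export
        showmount -e 127.0.0.1      # verify the export is actually visible
        mount -t nfs4 127.0.0.1:/tank/nfs /mnt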

    Read the article

  • RAIDZ vs RAID1+0

    - by Hiro2k
    Hi guys, I just got four SSDs for my FreeNAS box. This server is only used to serve a single iSCSI extent to my Citrix XenServer pool, and I was wondering if I should set them up in a RAIDZ or a RAID 1+0 configuration. This isn't used for anything in production, just for my test lab, so I'm not sure which one is going to be better in this scenario. Will I see a major difference in speed or reliability? Currently the server has three 500GB Western Digital Blue drives and it's dog slow when I deploy a new version of our software on it, hence the upgrade.
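
    For reference, the two candidate layouts as zpool commands - a hedged sketch with placeholder pool and device names (FreeNAS would normally do this through its GUI):

        # striped mirrors ("RAID 1+0"): two top-level mirror vdevs; half the
        # raw capacity, but the best random-I/O profile for an iSCSI extent
        zpool create tank mirror da0 da1 mirror da2 da3

        # single-parity RAIDZ: more usable space, but the vdev as a whole
        # delivers roughly single-disk random IOPS
        zpool create tank raidz da0 da1 da2 da3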

    Read the article

  • NetApp and SQL Server?

    - by Edinor
    Do you have any good or bad experiences to share running SQL Server OLTP systems on NetApp appliances? I have been working with a small, relatively low-volume cluster with a lower-end NetApp device, and I have found the environment to be generally unstable, at least compared to my experiences with other SANs, iSCSI arrays, and DAS setups. I struggle to believe that RAID DP and WAFL are more than fairy-dust technologies. A solution has been proposed to me that I just need a bigger, better NetApp, with PAM cards and other cool technology I've not heard of, and I feel like I would be better off spending a quarter of that on good direct-attached drives and a beefy server. At the same time, I feel that an Enterprise-class SAN should be something I can count on to be consistently a more stable, better performer than the less expensive solution I might propose. Are you a SQL Server DBA in an OLTP environment and love your NetApp? If you don't like them, why not?

    Read the article

  • Can ISC dhcpd operate as a proxy dhcp server for PXE boot?

    - by Matt
    I have an existing LAN with a DHCP server already dishing out IP addresses. For various reasons I cannot replace that server, so it will still need to dish out IP addresses. I've been experimenting with dnsmasq in proxy mode to provide PXE boot filenames. I now have dnsmasq chainloading iPXE OK, but I found that the problem with dnsmasq is that in proxy mode it won't send DHCP options down, so I can't seem to send option 17 (root-path) to boot from an iSCSI SAN. I read somewhere that it's not enabled in the source code. Oh well, so I thought perhaps I should try ISC dhcpd (the default version 4 with Ubuntu). But I can't find any configuration examples that work as a proxy. Does ISC dhcpd even work in proxy mode? Examples on the web imply patching the source. What other options do I have?
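
    One hedged way around the missing option, given that iPXE is already being chainloaded: have an embedded iPXE script attach the SAN target itself, so no proxy-DHCP option 17 is needed at all. The portal address and IQN below are placeholders:

        #!ipxe
        dhcp
        # sanboot takes an iSCSI URI: iscsi:<portal>:<protocol>:<port>:<lun>:<iqn>
        sanboot iscsi:192.168.1.10::::iqn.2010-04.org.example:storage.lun1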

    Read the article

  • CORAID using only 1 of the 2 available NICs for AoE traffic

    - by Peter Carrero
    We have six CORAID shelves in my workplace. On two of them I see AoE traffic on only one of the two NICs attached to the SAN switch. We have jumbo frames enabled on all devices, and both NICs show up when I issue the aoe-interfaces command. This wouldn't bother me too much if the throughput observed on the "bad" shelves using bonnie++ weren't half that of the "good" shelves. The "good" shelves are the older SR1521 model and have ReiserFS on their LUNs - not that I think it makes a difference - and the "bad" shelves are the newer SR2421 model and have JFS. Any help as to what is going on and how to rectify this would be greatly appreciated. BTW: even the lower-performing shelves outperform another iSCSI device we have, but that is another story... Thanks.
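
    A hedged first diagnostic with the standard aoetools (interface names are examples): pin the driver to both SAN-facing NICs, force a rediscovery, and then check which interface each shelf is actually being reached over:

        aoe-interfaces eth2 eth3   # restrict AoE traffic to (and confirm) these NICs
        aoe-discover               # broadcast a fresh AoE discovery on all of them
        aoe-stat                   # per-target state, including the interface in use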

    Read the article

  • On-Server Disk Storage vs. SAN Storage

    - by Justin
    Hello, I am looking at buying three servers, and trying to figure out which storage solution makes the most sense in terms of performance and cost. Total budget is around $10,000.

    Option 1: Dell servers with RAID 10 (4 drives) each, 7200RPM SAS 500GB, for a total capacity of 1TB. Each server is approx. $3000. Total storage across all three servers is then 3TB.

    Option 2: The same Dell servers with a cheap single drive, no RAID, for $2000, and a centralized SAN solution. The biggest problem is that I haven't been able to find a SAN solution at a reasonable price; Dell entry-level storage servers are like $15,000. I am thinking just iSCSI, not fibre (too expensive).

    What do you guys recommend?

    Read the article

  • Home ZFS based NAS...What processor/chipset to use?

    - by MrBlargityBlarg
    So, I'm building a home/personal NAS. My plan is both to expose SMB file shares for sharing files/media between hosts, and to carve an iSCSI target LUN out of it for use by VMware as a datastore. I want to use ZFS (software RAID), so that means I'll be using FreeNAS, Solaris Express, or OpenIndiana. My question is basically: how much horsepower do I need? Obviously I/O is going to be my bottleneck, but I want to be sure that I am not limiting my I/O with a slow processor or chipset. So far the hardware plan is to use an Intel i3 and a motherboard with one of the H87, Q87, or Z87 chipsets, plus a SAS controller (JBOD, no RAID); if budget allows, I'm also hoping to get an SSD for the ZFS L2ARC and ZIL. Does anyone think I could get away with an Intel Atom or a cheaper/less-capable processor/chipset than the i3 and [HQZ]87 listed above?
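
    If the SSD budget works out, attaching it later is a one-liner per role - a hedged sketch with placeholder pool and device names (and a mirrored SLOG, since the ZIL holds not-yet-committed writes):

        zpool add tank log mirror ssd0 ssd1   # dedicated ZIL (SLOG)
        zpool add tank cache ssd2             # L2ARC read cache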

    Read the article

  • Setting up a Windows Server 2008 R2 DC + Fileserver : native or virtual?

    - by user126890
    I want to deploy a new DC + fileserver using Windows Server 2008 R2 SP1 Standard Edition on a Dell PowerEdge R410 with iSCSI storage, for a small business (~30 people). Should I install the system natively on the server or use a virtualization layer? I don't have a budget for virtualization, so I've got to go with something free... What's the better working routine: taking snapshots of VMs, or taking backups (Acronis/CloneZilla) of systems? If I use a virtualization system, I need a GUI so that some people in the business can reset the system to an earlier state in emergency situations. I wanted to install phpVirtualBox once but never finished; is it suitable in a production environment?

    Server specs:
    - Intel Xeon E5620 CPU (2.40GHz, 4C, 12MB cache)
    - 8GB RAM, dual-rank LV RDIMMs, 1333MHz
    - 2x 1TB SATA 7.2K 3.5", RAID 1

    Read the article

  • Snap Server 18000 connection help!

    - by sicko666
    I wonder if anyone here can help me. I have a home server setup made up of old secondhand computers: two servers running Windows Server 2003, one workstation running Windows 7, a 16-port switch and an ADSL Ethernet modem. These all connect and talk to each other fine, but then I got a "Snap Server 18000" and a "Snap Disk 30SA" SATA array. When I turn the Snap on, it boots past the BIOS, runs a kernel, then displays:

        This device cannot be managed via the video/kbd/mouse interface.
        The video is now disabled. You may access the management functions
        from your web browser.

    Only, none of the other PCs detect it, so no browser can find it! I have checked all cables, and all LEDs indicate there's a connection. I have installed the Windows iSCSI initiator and the Adaptec "Snap Server Manager" on all PCs, but it's still not detected. I don't know what else to do; please advise!

    Read the article

  • VMware networking - PortChannel or not?

    - by dunxd
    My ESX hosts each have 8 NICs. I have set up two NICs for our iSCSI SAN - each is connected to a different SAN switch. Two NICs are set up for vMotion and the Service Console - these are each connected to a different core switch (the ports are trunked, with VLANs dedicated to vMotion and management). I now have four ports left over. Currently we have these set up each going into our default VLAN: two NICs are connected to one core switch and two are connected to the other. We decided to aggregate the connections to each switch - so they are teamed at the vSwitch end and port-channelled at the physical switch end. I am now reading that port-channelling these connections is not particularly useful, and perhaps even overcomplicates things. Is there a particular problem with using port channels for VMware? What method provides the best balance between redundancy and performance?

    Read the article

  • Enabling 8021q on a NIC

    - by Chris Phillips
    Hi, I'm trying to get a VLAN interface on a bonded NIC (CentOS 5.5), and whilst the interface has been created quite happily with vconfig, I'm seeing no traffic on it at all. Running tcpdump and tshark on the underlying eth0, I see no sign at all of VLAN tags in the traffic, and I'm wondering if there's something I'm missing on the server side, as the network dept say they are sending me the tagged data. I've got the 8021q module loaded; however, under lsmod it shows it's only being used by the cxgb3 module, for an unused onboard iSCSI card, whereas my NICs (on an HP DL380 G7) are driven by the bnx2 and e1000e modules. Should these modules list 8021q as a used module? Should I have something concrete in /etc/modprobe.conf? Thanks, Chris
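
    For reference, a minimal sketch of the distro-native way to do this on CentOS 5 instead of raw vconfig; bond0 and VLAN ID 123 are assumptions for illustration. Note also that some NICs strip VLAN tags in hardware before tcpdump ever sees them, so an empty capture does not prove the tags aren't arriving.

        # /etc/sysconfig/network-scripts/ifcfg-bond0.123
        DEVICE=bond0.123
        VLAN=yes                # initscripts create the 8021q sub-interface
        BOOTPROTO=static
        IPADDR=10.0.123.10
        NETMASK=255.255.255.0
        ONBOOT=yes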

    Read the article
