Search Results

Search found 6317 results on 253 pages for 'persistent storage'.

Page 169 of 253

  • Why is there so much variance in prices for a 2-bay NAS?

    - by jcollum
    I'm considering buying a 2-bay NAS for media storage. I'm perplexed by the variety of prices: they go from about $115 to $1200. The only thing I could see that differentiated the high-end model was encryption and dual gigabit ethernet ports. I don't understand how that can add up to $800+. Clearly I should know why there's this price variance before considering buying a 2-bay NAS. Newegg link to 2-bay NAS. Should I move this question to Server Fault?

    Read the article

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6/i integrated card. We're using Nagios at present and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of /dev/megaraid_sas_ioctl_node. This device node doesn't exist by default; you have to create it by hand using something like:

        mknod /dev/megaraid_sas_ioctl_node c 253 0

    Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local.rules that contained:

        DRIVER=="megasas" NAME="megaraid_sas_ioctl_node" MODE="0600"

    But this doesn't work - no device node after a reboot. dmesg output indicates the megasas driver is loaded and functional:

        megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008
        megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0
        megasas: FW now in Ready state

    Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work. I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be a) cleaner and b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local... So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver? Cheers, Steve
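
    One hedged way to attack this (a sketch, not tested against this exact setup; the rule filename, the SUBSYSTEM=="module" match and the reuse of major/minor 253:0 are all assumptions): since the driver never registers a device for udev to name, let udev react to the megasas module-load event and run mknod itself.

        # Sketch: create the ioctl node whenever the megasas module appears.
        # Verify the major/minor numbers on your own system before relying on this.
        printf '%s\n' 'ACTION=="add", SUBSYSTEM=="module", KERNEL=="megasas", RUN+="/bin/mknod -m 0600 /dev/megaraid_sas_ioctl_node c 253 0"' \
            > /etc/udev/rules.d/90-megaraid-ioctl.rules
        udevadm control --reload-rules    # older udev versions use: udevcontrol reload_rules

    If the module never emits an add event on your kernel, the /etc/rc.local mknod fallback mentioned above remains the simple answer.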

    Read the article

  • Choosing Truecrypt volume names and keyfile names

    - by Howiecamp
    Any recommendations on what to name TrueCrypt volumes (container files) and where to locate them? Certainly a name like "this is a truecrypt volume.tc" isn't a good idea. Any recommended storage locations? Same question for keyfiles that are generated with TrueCrypt. Finally, let's say you choose an existing file, ymca.mp3, as your keyfile. Given that the file looks innocuous and normal, isn't it easy to forget that it's your keyfile, so that when you get sick of the Village People and delete the song, you're hosed?

    Read the article

  • Resilient Linux Mail Server Setup

    - by Coops
    How would people design a resilient mail server setup with Linux? On an application level, the system needs to provide both an incoming and outgoing mail service (i.e. SMTP & IMAP), along with filtering and archive storage (the archive part isn't critical yet, so we'll probably look at this later). What is required on top of this is a resilient system, i.e. one which will handle individual server failures without interrupting service. As such I would term this a high-availability mail system. This is in contrast to a high-performance mail setup: in our case the volume of mail being handled isn't the important factor, it's simply that it stays online. Having not approached this problem before, the first thing I thought of was a clustered file system (GFS/Gluster/etc.), combined with Heartbeat to fail over a floating IP to another box in the case of a server failure. Combined with Postfix & Dovecot, does this sound feasible to people?
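
    For the floating-IP part, a minimal sketch using Pacemaker's pcs tool (assumed here as a stand-in for plain Heartbeat; the address, netmask and resource names are placeholders):

        # Sketch: one virtual IP plus the mail services, grouped so they fail over together.
        pcs resource create MailIP ocf:heartbeat:IPaddr2 ip=192.0.2.25 cidr_netmask=24 op monitor interval=30s
        pcs resource create Postfix systemd:postfix op monitor interval=60s
        pcs resource create Dovecot systemd:dovecot op monitor interval=60s
        pcs resource group add MailGroup MailIP Postfix Dovecot

    The mail store itself (GlusterFS or another clustered filesystem, as suggested) would still have to be available on whichever node currently holds the group.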

    Read the article

  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files (lawsuits, legislation, letters, etc.) in Windows 7 is proving difficult. In my last job I was a database architect, and I helped build Linux-based servers to track files across an entire department; however, there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS, instead of storing the files in folders that only allow for flat and fixed storage? For example, if I have a file that deals with the ELID organization, an appeals court, and John Smith, it really is inconvenient to have to decide which one of these tags to turn into a folder and place the file into it, when it falls under all the categories. Even if I could place tags on files the way you can on Stack Exchange questions, it would solve a lot of heartache.
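
    As a hedged illustration of the RDBMS idea (table and column names invented for the example; any database, not just SQLite, would do), a many-to-many tag schema lets one document carry all of those labels at once:

        # Sketch: a tiny tagging schema and a sample query.
        sqlite3 files.db "
          CREATE TABLE files (id INTEGER PRIMARY KEY, path TEXT UNIQUE, note TEXT);
          CREATE TABLE tags  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
          CREATE TABLE file_tags (
            file_id INTEGER REFERENCES files(id),
            tag_id  INTEGER REFERENCES tags(id),
            PRIMARY KEY (file_id, tag_id));"
        # Everything tagged 'Appeals court':
        sqlite3 files.db "SELECT f.path FROM files f
          JOIN file_tags ft ON ft.file_id = f.id
          JOIN tags t ON t.id = ft.tag_id
          WHERE t.name = 'Appeals court';"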

    Read the article

  • File store: CouchDB vs SQL Server + file system

    - by Andrey
    I'm exploring different ways of storing user-uploaded files (all are MS Office documents or similar) on our high-load web site. It's currently designed to store documents as files and have a SQL database hold all the metadata for those files. I'm concerned about outgrowing the storage server, and about SQL Server performance, when the number of documents reaches hundreds of millions. I've been reading a lot of good information about CouchDB, including its built-in scalability and performance, but I'm not sure how storing files as attachments in CouchDB would compare to storing files on a file system in terms of performance. Has anybody used CouchDB clusters for storing LARGE amounts of documents in a high-load environment?
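
    For reference, a hedged sketch of what storing a file as a CouchDB attachment looks like over its plain HTTP API (host, database name and revision value are placeholders):

        # Create a metadata document, then attach the uploaded file to it.
        curl -X PUT http://localhost:5984/files/doc-123 -d '{"owner":"andrey","type":"docx"}'
        # The PUT above returns a revision (e.g. 1-abc...), which must accompany the attachment:
        curl -X PUT "http://localhost:5984/files/doc-123/report.docx?rev=1-abc" \
             -H "Content-Type: application/octet-stream" \
             --data-binary @report.docx

    Attachments replicate together with their documents, which is the main operational difference from a metadata-in-SQL, files-on-disk split; whether that holds up at hundreds of millions of objects is exactly the open question.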

    Read the article

  • How to set default permissions for automounted FAT drives in Ubuntu

    - by piman
    I've got many FAT32 drives that I'd like to mount in Ubuntu such that directories get permission mode 700 and all other files get 600. By default, they get 755 for all files, which is not particularly useful since almost no non-directories should be executable, and it screws up version control repos hosted on the drives. "Back in the day" I would have had the drives listed in /etc/fstab with the umask/dmask I want, and there was no such thing as a default. These days, drives automount under their volume names, which is great, except now I have no idea how to set the default. I have tried changing the /system/storage/default_options/vfat/mount_options gconf key with no apparent effect. It was 077 initially but the mounted drive reflected a default of 022; changing it and re-inserting the drives resulted in the files still having permission bits of 755.
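
    If falling back to /etc/fstab is acceptable, a hedged sketch of an entry that yields 700 directories and 600 files (the UUID and mount point are placeholders; find the real UUID with blkid):

        # fmask/dmask clear bits out of 777: fmask=0177 gives 600 for files, dmask=0077 gives 700 for dirs.
        echo 'UUID=XXXX-XXXX  /media/fatdrive  vfat  uid=1000,gid=1000,fmask=0177,dmask=0077  0  0' >> /etc/fstab
        mkdir -p /media/fatdrive && mount /media/fatdrive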

    Read the article

  • Unable to Uninstall Exchange 2010 ("Internet Newsgroups" public folder)

    - by helplessITguy
    I am trying to uninstall Exchange 2010 before installing a new instance of Exchange 2010 SP1 on a different server. (Our production Exchange server is 2003.) We have met all of the Mailbox uninstall prerequisites except for the following:

        Error: Uninstall cannot continue. Database 'Public Folder Database 1579722947': The public folder database "Public Folder Database 1579722947" contains folder replicas. Before deleting the public folder database, remove the folders or move the replicas to another public folder database. For detailed instructions about how to remove a public folder database, see http://go.microsoft.com/fwlink/?linkid=81409&clcid=0x409.
        Recommended Action:

    We have been able to delete all public folders in the 2010 storage group except for one (previously replicated) folder, "Internet Newsgroups". How can I delete this folder without impacting public folders on the production Exchange 2003 server? We have:

    - verified permissions to the public folder
    - removed replication for the folder (on the Exch 2010 server)
    - tried PowerShell scripts: RemoveReplicaFromPFRecursive, and:
        Get-PublicFolder -Server "\" -Recurse -ResultSize:Unlimited | Remove-PublicFolder -Server -Recurse -ErrorAction:SilentlyContinue

    Read the article

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have a /dev/md127 RAID5 array that consisted of four drives. I managed to hot-remove them from the array, and currently /dev/md127 does not have any drives:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdd1[0] sda1[1]
              304052032 blocks super 1.2 [2/2] [UU]
        md1 : active raid0 sda5[1] sdd5[0]
              16770048 blocks super 1.2 512k chunks
        md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]
        unused devices: <none>

    and:

        mdadm --detail /dev/md127
        /dev/md127:
                Version : 1.2
          Creation Time : Thu Sep 6 10:39:57 2012
             Raid Level : raid5
             Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
          Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
           Raid Devices : 4
          Total Devices : 0
            Persistence : Superblock is persistent
            Update Time : Fri Sep 7 17:19:47 2012
                  State : clean, FAILED
         Active Devices : 0
        Working Devices : 0
         Failed Devices : 0
          Spare Devices : 0
                 Layout : left-symmetric
             Chunk Size : 512K

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       0        0        1      removed
               2       0        0        2      removed
               3       0        0        3      removed

    I've tried to stop the array, but:

        mdadm --stop /dev/md127
        mdadm: Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?

    I made sure that it's unmounted (umount -l /dev/md127) and confirmed that it indeed is unmounted:

        umount /dev/md127
        umount: /dev/md127: not mounted

    I've tried to zero the superblock of each drive and I get (for each drive):

        mdadm --zero-superblock /dev/sde1
        mdadm: Unrecognised md component device - /dev/sde1

    Here's the output of lsof | grep md127:

        md127_rai 276 root cwd DIR 9,0 4096 2 /
        md127_rai 276 root rtd DIR 9,0 4096 2 /
        md127_rai 276 root txt unknown /proc/276/exe

    What else can I do? LVM is not even installed, so it can't be a factor.
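
    A few hedged things worth checking before giving up (device names are simply the ones above; none of this is specific to that box):

        # See whether anything still claims the array: holders, device-mapper, swap.
        ls -l /sys/block/md127/holders
        dmsetup table 2>/dev/null | grep -i md127
        swapon -s | grep md127
        # If nothing shows up, let mdadm find and stop whatever arrays it can:
        mdadm --stop --scan

    The md127_rai entries in the lsof output look like the array's own kernel thread rather than a user-space process holding it open, so they are probably not the culprit.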

    Read the article

  • Why did my hi-tech chemical compound lose its full effectiveness?

    - by Boris_yo
    I bought this thing a month ago. It is by far the best dust cleaner I have ever used, and all was well until I left it out of its package for half a day. Then it became stiff and easy to tear apart, and less sticky and flexible, which I videoed here. Can this item be restored to the fully functioning state it was in before? Maybe it just became dry and I should put it in a place with moisture? Since everything on the packaging is in Chinese, I also do not know the storage conditions that must be met.

    Read the article

  • How to setup firewall to allow internet connection sharing via Wifi USB stick?

    - by hannanaha
    I have a Windows 8 computer linked to the internet via an ethernet cable (the "Ethernet" network connection). I have attached a D-Link Wifi USB stick to it, and I'm trying to share the main PC's internet connection with my Android phone via a local wifi network. I am using the following batch file to set up this network:

        netsh wlan set hostednetwork mode=allow ssid=MyWifiName key=password keyUsage=persistent
        netsh wlan start hostednetwork

    After I run this script, I can see a new network connection appear in "Control Panel\Network and Internet\Network Connections" named "Local Area Connection *12", and I can see "MyWifiName" on the Android phone. The device name for this connection on the PC is "Microsoft Hosted Network Virtual Adapter". I also set up the "Ethernet" connection to share Internet with "Local Area Connection *12". However, the Android phone usually doesn't manage to obtain an IP address from the wireless network, and when it does, there still seems to be no connectivity to the internet. When I turn off the Windows Firewall completely, or even just for "Local Area Connection *12", the Android connection is perfect. My questions are:

    1. How should I set up the Windows Firewall to allow the phone to connect properly? Is there a specific rule I need to add to the Windows Firewall advanced settings? (Note: the above method worked great in Windows 7, without any specific tinkering with the firewall.)
    2. Is it safe to turn off the firewall for just "Local Area Connection *12" (the wifi connection) if the main Ethernet connection is still protected by the firewall?

    Thanks in advance.
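
    A hedged guess at the sort of rules that usually matter here: ICS clients have to reach the host for DHCP and DNS, so explicitly allowing those (rule names below are made up; ideally scope them to the hosted-network adapter) is worth trying before disabling the firewall:

        rem Sketch only - allow DHCP and DNS requests from hosted-network clients.
        netsh advfirewall firewall add rule name="ICS DHCP" dir=in action=allow protocol=UDP localport=67
        netsh advfirewall firewall add rule name="ICS DNS (UDP)" dir=in action=allow protocol=UDP localport=53
        netsh advfirewall firewall add rule name="ICS DNS (TCP)" dir=in action=allow protocol=TCP localport=53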

    Read the article

  • VMWare ESXi 5 - Expanded RAID 5 array - cannot access datastore

    - by Dayton Brown
    I'm using VMware ESXi 5 and had a 2 TB RAID 5 setup on an HP DL360 with a P400i RAID card. I added two more 1 TB drives and, using the SmartStart ACU, added the drives and expanded the logical disk. Now, after booting back into ESXi, the server boots but lists no available persistent storage. I've rescanned multiple times to no avail: the datastore doesn't show up. I booted to GParted and the 1.8 TB partition shows up, but it shows as unknown. Anyone have any good ideas?

    EDIT: Final solution. So after much gnashing of teeth, it was fairly simple to solve. I purchased a 2 TB external eSATA drive and a PCI eSATA card for my server. I then used Clonezilla to image the current partitions to my new external drive. You have to check "don't check drive sizes" in advanced mode, otherwise it will yell at you for having a smaller drive. For some reason my PCI card wouldn't boot on my HP server, so I hooked the drive up to another desktop I had, booted to VMware, and copied the VMDKs to another drive. I'm going to blow out the RAID config and then create 1.5 TB logical drives.

    Read the article

  • For Windows XP: How to uninstall Broadcom Bluetooth software that won't disappear

    - by T. Webster
    Hello, my situation is similar to this question for Windows 7, except my OS is Windows XP SP3. I recently realized I made the mistake of buying a Bluetooth adapter and installing the Broadcom/Widcomm Bluetooth stack driver software. Now that I know that the software is no good, I want to uninstall it (so I can install the Toshiba stack for the Cirago adapter). I've attempted to uninstall all Bluetooth driver software in Device Manager, and I don't see any remnants of any Bluetooth drivers there. I would include pictures here, but I don't have 10 reputation yet, so I'll just use links instead: picasaweb.google.com/lh/photo/2NAos6BP9UigPtJKaYRGyQ?feat=directlink But I do see that the little Bluetooth icon still persists: picasaweb.google.com/lh/photo/kbxgPP8ZLxhjX4ob2YjSew?feat=directlink I don't see anything about Bluetooth, Broadcom, or Widcomm in Add/Remove Programs. I don't see any folders named Broadcom or Widcomm in the Program Files folder, either. But I do see that Broadcom does show up in the registry with respect to Bluetooth, as shown here: picasaweb.google.com/lh/photo/hWDo-OzB8ApibzeHA_saxQ?feat=directlink What should I do now to completely wipe this persistent Broadcom Bluetooth software off my computer?

    Read the article

  • RAID-0 with two SSDs only: should I use on-board/software RAID or a RAID card/controller?

    - by Wes
    I am looking at going with a two-drive-only SSD RAID-0 configuration, and was wondering if I would get better performance/speed from a RAID controller card versus just using the software RAID on my motherboard. I have heard conflicting reports. Again, I only plan on running two SSD drives in a RAID-0 config. I have no problem spending the extra money for a good controller, but only if I am going to benefit performance-wise; otherwise, if there is no notable gain, I will just use the software RAID that my HP-180-T came with (Intel 3.33 GHz, 6-core, 12 GB of DDR3). I have a huge external drive for all storage and am not concerned about data loss; I'm just looking for pure speed. And if a controller will benefit my performance, what type of card would one suggest?

    Read the article

  • Any way I can trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up. I just want the external drive to be extra disk space. From their FAQ:

        The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.

    Read the article

  • Completely replacing (upgrading) a RAID 5 array of disks on an ESXi server

    - by jshin47
    I have a development server that runs several VMs on ESXi 5. It has an array of disks in a RAID 5 configuration where all of the disks are currently the same size. I would like to greatly expand the storage on this box, but I am not sure what the smartest way to go about this would be. My current plan is to:

    1. Turn off all VMs
    2. Copy the VM folders from the server to another location
    3. Verify that I can mount all the VMs in the new location (i.e. that the copy went OK)
    4. Replace all the disks with new, bigger ones
    5. Reinstall ESXi 5
    6. Copy the VMs back over

    This seems like it might take a while to accomplish and is not terribly slick, especially since I will have to reconfigure ESXi 5, but is there a smarter alternative?
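
    For step 2, a hedged sketch of how the copy is often done from the ESXi shell (datastore and VM names are placeholders; vmkfstools keeps thin-provisioned disks thin, which a plain cp of the .vmdk would not):

        # Run from the ESXi shell (SSH enabled) with the VM powered off.
        mkdir -p /vmfs/volumes/backup_datastore/myvm
        cp /vmfs/volumes/old_datastore/myvm/*.vmx /vmfs/volumes/backup_datastore/myvm/
        vmkfstools -i /vmfs/volumes/old_datastore/myvm/myvm.vmdk \
                      /vmfs/volumes/backup_datastore/myvm/myvm.vmdk -d thin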

    Read the article

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app on an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Should I just start with one server and worry about scaling horizontally later, by delegating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSes?

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 KB, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty evenly across the dataset, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems:

    - Using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
    - Updating the dataset is very time-consuming and requires remounting between set A and set B.
    - The only reasonable handling is operating on the block device for backup, copying, etc.

    What I would like is a daemon that:

    - speaks HTTP (GET, PUT, DELETE and perhaps UPDATE)
    - stores data in an efficient structure; the index should remain in memory, and considering the number of objects, the overhead must be small
    - can handle massive numbers of connections with slow (if any) ramp-up time
    - reads its index into memory at startup
    - statistics would be nice, but not mandatory

    I have experimented a bit with Riak, Redis, MongoDB, Kyoto and Varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article

  • Is this a valid backup strategy for MongoDB?

    - by James Simpson
    I've got a single dedicated server with a MongoDB database of around 10 GB. I need to do daily backups, but I can't have downtime on the database. Is it possible to use a replica set on a single disk (with two instances of mongod running on different ports), and simply take the secondary offline and back up its data files to offsite storage such as S3 (journaling is turned on)? Or would using master/slave be better than a replica set? Is this viable, and if so, what potential problems could I have? If not, how do I conceptualize this to work?
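
    A hedged sketch of the secondary-copy idea (port, paths and bucket name are placeholders; s3cmd is assumed to be installed):

        # Flush and block writes on the secondary, copy its data files, then unlock.
        mongo --port 27018 --eval 'db.fsyncLock()'      # older shells: db.runCommand({fsync:1, lock:1})
        tar czf /tmp/mongo-backup-$(date +%F).tar.gz /data/secondary
        mongo --port 27018 --eval 'db.fsyncUnlock()'
        s3cmd put /tmp/mongo-backup-$(date +%F).tar.gz s3://my-backup-bucket/

    Running mongodump against the secondary is the other common route and avoids the file-level lock.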

    Read the article

  • How can I VPN into a domain and act as if it is local?

    - by Duall
    I've looked and looked and the internet seems to have finally failed me; there's no information on my specific issue. I have two locations and one server (SBS 2008); I need them to act like one physical LAN. I know it's possible, I just don't know how to set it up, or anything about VPNs, etc. So, just to clarify a bit more:

    - 1 Small Business Server 2008
    - ~30 computers with varying systems from Windows XP to 7
    - 2 physical locations

    I use redirected folders, but since that would consume a lot of bandwidth over VPN, I have a machine over there that can be the workhorse for storage. Users need to access e-mail and shared databases from this server (or possibly some form of DFS with the aforementioned workhorse) from the other building. Is there a nice tutorial (or a nice person who wants to write one) online that can explain how to set up computers on an external network to act like they were on the LAN?

    Read the article

  • What is the specific advantage of a blade server for virtualisation?

    - by ChrisZZ
    We are planning to implement a VDI solution. We had some discussions about blade vs. rack. As we are only planning to implement 75-100 clients, we calculated that we would need 2 servers with dual 8-core processors, plus a shared storage server. This calculation is based on a paper by Oracle that says 12 active virtual machines per core. Now, for buying two servers, a blade does not scale financially. But the blade has some other advantages: a) the interconnectivity between the blades is super-fast; b) I/O virtualisation. Are there other advantages that we should consider that would make up for the price, and are these advantages so important that we should think about investing in the blade?

    Read the article

  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports filesystem compression, and it also caches frequently or recently accessed data. If a system has lots of CPU but the underlying data storage system is slow, it is possible that ZFS would perform better with compression turned on. This can be easily tested when writing files by measuring CPU and disk usage and throughput (of course latency may exist, but this would not be an issue for large files). But what about the cache? If data has to be decompressed every time it is read, then this is probably less of a good idea. Is the cached data compressed? Does anybody have some information on this?
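
    A hedged sketch of the write-side test described above (pool and dataset names are placeholders; note it says nothing about whether the ARC holds compressed or uncompressed blocks, which is the actual question):

        # Create a compressed dataset, copy representative data in, then compare.
        zfs create -o compression=gzip tank/testdata
        cp -r /some/representative/files /tank/testdata/
        zfs get compressratio tank/testdata
        zpool iostat tank 5     # watch disk throughput while the copy runs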

    Read the article

  • How to use symbolic links in Windows Server 2008 R2 across the network (mklink)

    - by server info
    I have one server (Srv1) which holds data with file shares, and its storage is full. Now I have a second server (Srv2) which has a lot more space. I would like to transfer all the data from Srv1 to Srv2 and have links to the new destination. I found mklink very useful here, but unfortunately it does not work over the network, which the documentation also points out. People rely heavily on the existing paths, so it would be helpful if someone has a pointer for me on how to handle symbolic links across the network with Windows servers. I am running Windows Server 2008. Thanks for any help.
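
    For what it's worth, a hedged sketch of the usual ingredients (paths are placeholders): mklink itself accepts a UNC target; what often gets in the way is the remote-to-remote symlink evaluation policy, which is controlled with fsutil.

        rem On Srv1, after moving the data, point a directory symlink at the new share:
        mklink /D D:\Shares\ProjectData \\Srv2\Shares\ProjectData
        rem Clients may also need remote symlink evaluation enabled (check the current state first):
        fsutil behavior query SymlinkEvaluation
        fsutil behavior set SymlinkEvaluation R2R:1 R2L:1

    A DFS namespace is the other common way to keep old paths working and is worth comparing before committing to symlinks.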

    Read the article

  • Running SQL 2008 on a VM

    - by chris.w.mclean
    We are pondering setting up a SQL 2008 instance inside a VM for a production environment. All our SQL instances use iSCSI over gigabit ethernet to talk to a NAS, as would this new instance. Any reason this is a bad idea, or any considerations to make this work well? The VM would be running on Xen 5.5, or we could set it up in Hyper-V if there's a compelling case for that. And the VM's VHD would be stored on a different NAS than the one the SQL storage is on.

    Read the article

  • GlusterFS to replicate files to other servers

    - by sbrattla
    I've got multiple servers which all need to have the same content in /home. In other words, if the file /home/user1/test.txt is updated on server A, this needs to be replicated to all other servers in the cluster. Is it possible to use GlusterFS for this purpose? That is, let each server have a full copy of all data locally (which that server will be working on) and solely use GlusterFS to take care of replicating this data to the other servers? I'm not interested in combined storage, but rather want all data on all machines, and only need GlusterFS to replicate it to the other machines.
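
    A hedged sketch of what a fully replicated volume across, say, three servers looks like (hostnames, brick paths and mount point are placeholders; with the replica count equal to the number of bricks, every server carries a complete copy):

        # On one node, after peering the others:
        gluster peer probe serverB
        gluster peer probe serverC
        gluster volume create homevol replica 3 serverA:/bricks/home serverB:/bricks/home serverC:/bricks/home
        gluster volume start homevol
        # Each server mounts the volume itself; reads are served from the local brick when possible:
        mount -t glusterfs localhost:/homevol /home

    One caveat: writing straight to the brick path instead of the mounted volume bypasses replication, so /home has to be the mount point, not the brick.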

    Read the article
