Search Results

Search found 5933 results on 238 pages for 'mass storage'.

Page 89 of 238

  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup solutions in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but I am having great difficulty extending it to a remote data center. We currently do a full backup on Friday and differentials Mon-Thu, then rotate offsite on Friday morning ...rinse and repeat week after week. BTW, we use disks and have been very happy with this approach. We could buy a large storage server and back everything up to it, but this solution doesn't give us offsite protection. We could encrypt and upload to Amazon or some other online storage, but that would take a large amount of time given the data and would be rather expensive, paying for the bandwidth leaving the data center and arriving at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now, but that just seems old-fashioned. What am I missing? Are there better options?

    Read the article

  • vsftpd with pam_winbind.so

    - by David
    I'm trying to set up vsftpd to use logins from our domain. I want the FTP users to be able to log in using their Active Directory username/password and have full access to /media/storage/ftp/username. I set up PPTP using winbind and it is working fine, so I believe the issue is with vsftpd and PAM. The FTP server runs but gives a 530 for the login. I turned on debug for the PAM module, but I see nothing in the syslog; vsftpd only logs a wrong login in its own log.

    /etc/pam.d/vsftpd:

        auth required pam_winbind.so debug

    /etc/vsftpd.conf:

        listen=YES
        listen_ipv6=NO
        connect_from_port_20=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        xferlog_enable=YES
        idle_session_timeout=600
        data_connection_timeout=120
        nopriv_user=ftp
        ftpd_banner=Welcome to Scantiva! Authorized access only!
        local_umask=022
        local_root=/media/storage/ftp/$USER
        user_sub_token=$USER
        chroot_local_user=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        guest_enable=YES
        guest_username=ftp
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_data_ssl=NO
        force_local_logins_ssl=NO
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
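
    A guess worth testing, given that nothing reaches syslog: vsftpd exercises the PAM account facility as well as auth, and a service file with only an auth line can be rejected with a 530 before pam_winbind is ever called. A fuller /etc/pam.d/vsftpd sketch:

        auth     required   pam_winbind.so debug
        account  required   pam_winbind.so debug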

    Read the article

  • Need a hard disk recommendation for a Linux home server

    - by neotracker
    Hello, I'm planning to build a little Linux home server. It will mainly be used for storage and maybe as a media PC. I plan to build a software RAID5 with four 1.5 TB or 2 TB hard drives. I had already decided on the Western Digital Caviar Green 1.5 TB drive, but then I read about some problems with the WD Green series: many drives failing, and that they are not recommended for RAID anyway. Of course, I couldn't find many facts on the issues, so I thought I'd just ask here ;-) What hard drives would you recommend for a software RAID5 setup? As I only need it for storage, the whole thing doesn't have to be too fast, so I prefer a cheap price and silence over great performance.

    Read the article

  • Start a ZFS RAIDZ zpool with two disks, then add a third?

    - by Doug S.
    Let's say I have two 2 TB HDDs and I want to start my first ZFS zpool. Is it possible to create a RAIDZ with just those two disks, giving me 2 TB of usable storage (if I understand it right), and then later add another 2 TB HDD, bringing the total to 4 TB of usable storage? Am I correct, or does there need to be three HDDs to start with? The reason I ask is that I already have one 2 TB drive in use that's full of files. I want to transition to a zpool, but I'd rather only buy two more 2 TB drives if I can. From what I understand, RAIDZ behaves similarly to RAID5 (with some major differences, I know, but in terms of capacity). However, RAID5 requires 3+ drives, and I was wondering if RAIDZ has the same requirement. If I have to, I can buy the three drives and just start there, later adding the fourth, but if I could start with two and move to three, that would save me $80.
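
    For what it's worth, zpool does accept a two-device raidz vdev; a quick sketch of the syntax, with placeholder device names:

        # illustrative only -- device names are assumptions
        zpool create tank raidz /dev/sdb /dev/sdc
        zpool status tank

    Note, though, that ZFS does not grow an existing raidz vdev by adding a disk later; a subsequent zpool add puts the third drive in as a separate, non-redundant top-level vdev rather than widening the raidz.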

    Read the article

  • SQL server environment

    - by Olegas D
    Hello. I'm considering some changes to our current sales environment and trying to weigh all the pros and cons. Current situation: an SQL server (a quite decent HP server, server1) plus a backup server (a smaller Dell server, server2). All SQL files and the SQL server itself are on server1; if something goes wrong with server1, I will have to manually move to server2. Connecting to the SQL server: 1 HQ (where the server is located) + 4 sites through VPN. Now I'm considering two scenarios:

    1. Buy a storage system + update the existing servers (add RAM, upgrade processors) and go for VMware ESXi.
    2. Rent a server at a datacenter + rent a virtual server in case the real server goes down. Also rent some space at a data storage provider to keep the SQL files there.

    Has anyone considered these things and maybe come up with a good pros/cons list? ;) Thanks

    Read the article

  • iMac boot from linux partition on external drive

    - by user74757
    I have the following setup:

        iMac (no internal drive/dead)
          |
          +-- (FireWire) --> [MAC OS X]
          |
          +-- (USB) -------> [MISC STORAGE PARTITION | MISC STORAGE PARTITION | EXT2 UBUNTU PARTITION]

    I routinely use the FireWire drive to boot Mac OS X. However, I would like to boot from the Linux partition of the USB drive. This Linux partition had Linux installed on it from a live CD, and during that process I told the installer to install GRUB on the USB drive (which happened to be /dev/sdd). My question is: how do I get this disk to show up during the iMac option-boot? Currently, only the FireWire Mac OS X option shows up. I have read about rEFIt, but that appears to install to the Mac OS X disk (would that still work?). Also mentioned was installing rEFIt to the internal EFI system partition, but I don't know if that is wise.

    Read the article

  • Many-to-many mapping with LINQ

    - by Alexander
    I would like to perform LINQ to SQL mapping in C#, in a many-to-many relationship, but where the link data is not mandatory. To be clear: I have a news site/blog, and there's a table called Posts. A post can relate to many categories at once, so there is a table called CategoriesPosts that links with foreign keys to the Posts table and the Categories table. I've made each table with an identity primary key, an id field in each one, if it matters in this case. In C# I defined a class for each table, defining each field as explicitly as possible. The Post class, as well as the Category class, has an EntitySet to link to CategoryPost objects, and the CategoryPost class has two EntityRef members to link to objects of each of the other types. The problem is that a Post may or may not relate to any category, just as a category may or may not have posts in it. I didn't find a way to make an EntitySet<CategoryPost?> or something like that. So when I added the first post, all went well, with not a single SQL statement; also, this post was present in the output. When I tried to add the second post I got an exception, "Object reference not set to an instance of an object", regarding the CategoryPost member.

    Post:

        [Table(Name="tm_posts")]
        public class Post : IDataErrorInfo
        {
            public Post()
            {
                // Initialization of NOT NULL fields with their default values
            }

            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            private EntitySet<CategoryPost> _categoryRef = new EntitySet<CategoryPost>();
            [Association(Name = "tm_rel_categories_posts_fk2", IsForeignKey = true, Storage = "_categoryRef", ThisKey = "ID", OtherKey = "PostID")]
            public EntitySet<CategoryPost> CategoryRef
            {
                get { return _categoryRef; }
                set { _categoryRef.Assign(value); }
            }
        }

    CategoryPost:

        [Table(Name = "tm_rel_categories_posts")]
        public class CategoryPost
        {
            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            [Column(Name = "fk_post", DbType = "int", CanBeNull = false)]
            public int PostID { get; set; }

            [Column(Name = "fk_category", DbType = "int", CanBeNull = false)]
            public int CategoryID { get; set; }

            private EntityRef<Post> _post = new EntityRef<Post>();
            [Association(Name = "tm_rel_categories_posts_fk2", IsForeignKey = true, Storage = "_post", ThisKey = "PostID", OtherKey = "ID")]
            public Post Post
            {
                get { return _post.Entity; }
                set { _post.Entity = value; }
            }

            private EntityRef<Category> _category = new EntityRef<Category>();
            [Association(Name = "tm_rel_categories_posts_fk", IsForeignKey = true, Storage = "_category", ThisKey = "CategoryID", OtherKey = "ID")]
            public Category Category
            {
                get { return _category.Entity; }
                set { _category.Entity = value; }
            }
        }

    Category:

        [Table(Name="tm_categories")]
        public class Category
        {
            [Column(Name = "id", DbType = "int", CanBeNull = false, IsDbGenerated = true, IsPrimaryKey = true)]
            public int ID { get; set; }

            [Column(Name = "fk_parent", DbType = "int", CanBeNull = true)]
            public int ParentID { get; set; }

            private EntityRef<Category> _parent = new EntityRef<Category>();
            [Association(Name = "tm_posts_fk2", IsForeignKey = true, Storage = "_parent", ThisKey = "ParentID", OtherKey = "ID")]
            public Category Parent
            {
                get { return _parent.Entity; }
                set { _parent.Entity = value; }
            }

            [Column(Name = "name", DbType = "varchar(100)", CanBeNull = false)]
            public string Name { get; set; }
        }

    So what am I doing wrong? How do I make it possible to insert a post that doesn't belong to any category, and how do I insert categories with no posts?

    Read the article

  • Restore files from certain increments using Duplicity

    - by luckytaxi
    Given the following backup sets...

        Found primary backup chain with matching signature chain:
        -------------------------
        Chain start time: Tue Jun 21 11:27:26 2011
        Chain end time: Tue Jun 21 11:27:59 2011
        Number of contained backup sets: 2
        Total number of contained volumes: 2
         Type of backup set:                          Time:      Num volumes:
                        Full       Tue Jun 21 11:27:26 2011                 1
                 Incremental       Tue Jun 21 11:27:59 2011                 1

    If I run the following command, it works (1308655646 was converted from Tue Jun 21 11:27:26 2011):

        duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \
        file:///storage/test/ restored-file.txt

    However, if I run the following command, it restores from the latest set:

        duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \
        ORIG_FILE file:///storage/test/ restored-file.txt

    What am I doing wrong with the time? I prefer the second option only because I don't want to have to do the conversion manually.
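
    One way to avoid the manual conversion, assuming GNU date is available, is to let date produce the epoch seconds that the first (working) form expects; note also that duplicity's documented w3 datetime format carries an explicit timezone offset (e.g. 2011-06-21T11:27:26-04:00), so the bare T form may not resolve to the moment intended:

        # convert the human-readable time to epoch seconds on the fly
        duplicity --no-encryption \
          --restore-time "$(date -d '2011-06-21 11:27:26' +%s)" \
          --file-to-restore ORIG_FILE file:///storage/test/ restored-file.txt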

    Read the article

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running phpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.x) on my Amazon EC2 instance, connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a MySQL error message: whenever I perform any kind of action that returns a MySQL error, phpMyAdmin hangs with the yellow "Loading" box forever without displaying anything. For example, if I perform the following command in the MySQL CLI:

        select * from 123;

    it instantly returns the following error:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1

    which is completely normal, because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in phpMyAdmin, after I click "Go" it displays "Loading" and stops there forever. Has anyone ever encountered this kind of issue with phpMyAdmin? Is this a bug, or do I have something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my Apache error logs:

        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open
        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open
        /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open

    Below are my config.inc.php settings:

        <?php
        /* vim: set expandtab sw=4 ts=4 sts=4: */
        /**
         * phpMyAdmin sample configuration, you can use it as base for
         * manual configuration. For easier setup you can use setup/
         *
         * All directives are explained in documentation in the doc/ folder
         * or at <http://docs.phpmyadmin.net/>.
         *
         * @package PhpMyAdmin
         */

        /*
         * This is needed for cookie based authentication to encrypt password in
         * cookie
         */
        $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */

        /*
         * Servers configuration
         */
        $i = 0;

        /*
         * First server
         */
        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = true;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysqli';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;
        $cfg['LoginCookieValidity'] = '3600';

        /*
         * phpMyAdmin configuration storage settings.
         */
        /* User used to manipulate with storage */
        $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com';
        $cfg['Servers'][$i]['controluser'] = 'pma';
        $cfg['Servers'][$i]['controlpass'] = 'password';
        /* Storage database and tables */
        $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin';
        $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark';
        $cfg['Servers'][$i]['relation'] = 'pma__relation';
        $cfg['Servers'][$i]['table_info'] = 'pma__table_info';
        $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
        $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages';
        $cfg['Servers'][$i]['column_info'] = 'pma__column_info';
        $cfg['Servers'][$i]['history'] = 'pma__history';
        $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs';
        $cfg['Servers'][$i]['tracking'] = 'pma__tracking';
        $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords';
        $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig';
        $cfg['Servers'][$i]['recent'] = 'pma__recent';
        /* Contrib / Swekey authentication */
        // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf';

        /*
         * End of servers configuration
         */

        /*
         * Directories for saving/loading files from server
         */
        $cfg['UploadDir'] = '';
        $cfg['SaveDir'] = '';

        /**
         * Defines whether a user should be displayed a "show all (records)"
         * button in browse mode or not.
         * default = false
         */
        //$cfg['ShowAll'] = true;

        /**
         * Number of rows displayed when browsing a result set. If the result
         * set contains more rows, "Previous" and "Next".
         * default = 30
         */
        $cfg['MaxRows'] = 50;

        /**
         * disallow editing of binary fields
         * valid values are:
         *   false    allow editing
         *   'blob'   allow editing except for BLOB fields
         *   'noblob' disallow editing except for BLOB fields
         *   'all'    disallow editing
         * default = blob
         */
        //$cfg['ProtectBinary'] = 'false';

        /**
         * Default language to use, if not browser-defined or user-defined
         * (you find all languages in the locale folder)
         * uncomment the desired line:
         * default = 'en'
         */
        //$cfg['DefaultLang'] = 'en';
        //$cfg['DefaultLang'] = 'de';

        /**
         * default display direction (horizontal|vertical|horizontalflipped)
         */
        //$cfg['DefaultDisplay'] = 'vertical';

        /**
         * How many columns should be used for table display of a database?
         * (a value larger than 1 results in some information being hidden)
         * default = 1
         */
        //$cfg['PropertiesNumColumns'] = 2;

        /**
         * Set to true if you want DB-based query history. If false, this utilizes
         * JS-routines to display query history (lost by window close)
         *
         * This requires configuration storage enabled, see above.
         * default = false
         */
        //$cfg['QueryHistoryDB'] = true;

        /**
         * When using DB-based query history, how many entries should be kept?
         *
         * default = 25
         */
        //$cfg['QueryHistoryMax'] = 100;

        /*
         * You can find more configuration options in the documentation
         * in the doc/ folder or at <http://docs.phpmyadmin.net/>.
         */
        ?>
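
    The symbol lookup errors in the Apache log point at a broken iconv extension rather than at config.inc.php; a quick sanity check from the shell (paths are taken from the log and may differ):

        # does the CLI PHP load iconv cleanly?
        php -m | grep -i iconv
        # is the extension file the one Apache is loading?
        ls -l /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so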

    Read the article

  • Standalone server setup for compute capacity

    - by mikera
    I'm developing an application for my company that will require a lot of compute capacity (running some very big mathematical calculations), and looking for some form of server setup to do this. For various reasons, we want to run this on-site in our office rather than hosting it externally. It's been a while since I last had to set up my own servers, so I thought I would tap into the collective wisdom of serverfault! My broad requirements are:

    - Budget $30-50k, with an aim to get as much compute capacity as possible for that budget
    - 64-bit servers suitable to run Ubuntu Linux + Java
    - Some relatively standalone rack that can be installed in secure office space
    - Fast/low latency network connections between the servers, but don't really care about connectivity to the outside world
    - Storage capacity shared between the servers - they don't necessarily need their own storage providing they can be booted from a common image
    - Downtime can be tolerated (since the calculations are run in batch mode)
    - The software itself is fault-tolerant, so there is no need for extra resiliency in the server setup (cheap replaceable commodity parts will be fine in general)

    Given these requirements, what kind of setup would you recommend and why?

    Read the article

  • 5 x 3 GB drives and 4 x 1500 GB drives: best RAID setup?

    - by Zen_Silence
    Hello, I am building a file server. My plan is to have the operating system on one RAID partition and the data storage on another. I currently have 5 x 3 GB IDE drives that I would like to put the operating system on; these drives are old, but that doesn't matter to me at the moment, and I have a ton of them, so for this RAID array I would want to be able to pull out a dead drive and rebuild the array. My file partition is going to consist of 4 x 1.5 TB SATA drives, and I would like maximum storage with some redundancy. Any suggestion as to which RAID level I should use would be greatly appreciated, as would a recommendation for a PCI or PCI-e RAID controller to handle these arrays. Thanks in advance, Zen_Silence
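
    If software RAID turns out to be acceptable here, the mdadm equivalents of the two arrays being described are short; a sketch with assumed device names:

        # RAID5 across the five old IDE disks for the OS (any one disk can die)
        mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/hd[a-e]1
        # RAID5 across the four 1.5 TB SATA disks: ~4.5 TB usable, one-disk redundancy
        mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[a-d]1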

    Read the article

  • Partition falsely recognized as RAW

    - by Paul Hiemstra
    On my 2 TB data disk I have two primary partitions: one of 1.6 TB for data storage in Linux (ext3) and one of 300 GB for some additional data storage for Windows. I run a dual-boot Windows 7/Ubuntu 12.04 install. The issue I have is that if I start my computer into Windows 7, both partitions on my 2 TB data drive go unrecognized; instead, Windows 7 sees one 1 TB partition of type RAW. However, if I reboot to Linux, and then back to Windows 7, the partitions are correctly recognized. Two screenshots, taken before and after the reboot to Linux, illustrate the situation (screenshots not reproduced here). I have two questions: What could cause this behavior? How can I solve this issue?

    Read the article

  • Experience with MooseFS?

    - by brown.2179
    Anyone have any experience using MooseFS? I want an easy distributed storage platform to store a static data archive of about 10 TB and serve it to 20-40 nodes. I also want to be able to add storage as the archive grows without having to rebuild the filesystem. I don't care if it's a bit slow; I just want it to be simple and stable. Basically, from what I can see, for OS X it's between MooseFS and Gluster. Any other suggestions?

    Read the article

  • innodb memory usage mysql

    - by Tiddo
    I have a small VPS with only 256 MB of RAM, with bursts up to a maximum of 512 MB. When I configure my VPS without InnoDB, it uses only 130 MB of RAM, so that is no problem for me. But when I turn on InnoDB, the memory usage grows to about 300-400 MB. Is it possible to run InnoDB such that it won't exceed 256 MB? Preferably I don't want to use more than 100 MB for InnoDB. I have already come across some sites saying that I can limit the memory usage, but if I limit it to only 100 MB, will the DB run well enough (compared to, for example, the MyISAM storage engine)? If 100 MB is too little memory for InnoDB, can you recommend any other storage engine that supports transactions?
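
    The biggest InnoDB consumer is the buffer pool, which is tunable; a minimal my.cnf sketch aimed at the 100 MB target (the numbers are illustrative starting points to tune, not recommendations):

        [mysqld]
        innodb_buffer_pool_size         = 64M
        innodb_additional_mem_pool_size = 4M
        innodb_log_buffer_size          = 4M
        innodb_thread_concurrency       = 2

    Whether that is "well enough" depends entirely on the working set: a pool smaller than the hot data mostly costs read performance, while transactions and crash safety still work.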

    Read the article

  • DNS Pointer to old server name

    - by TechKnow Dude
    We have an SBS 2003 server that was migrated to a new hardware platform; the computer name has changed but the domain is the same. The desktops are trying to use offline files against the old server name. There is an nslookup entry for the old server name and a DNS entry for the old server. How do we safely remove the old DNS entry without breaking the computers' local offline-folder storage? Can we change the offline file storage to point to the new server name?
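
    A common stopgap, rather than deleting the old record outright, is to alias the old name to the new server so existing offline-file caches keep resolving while the clients are repointed; illustrated here in BIND zone-file notation (in the Windows DNS console this is just a new CNAME record, and SMB access through an alias may additionally need the DisableStrictNameChecking registry value on the server):

        ; hypothetical zone-file line -- names are assumptions
        oldserver   IN   CNAME   newserver.example.local.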

    Read the article

  • Arch Linux doesn't mount one of two USB drives?

    - by unixbhaskar
    One is a USB modem, which is connected and working; the problematic one is a plain vfat USB stick containing data... that is the fellow not mounting :( I have tried looking at it with fdisk... it doesn't mount automatically. Is there a udev rule for that? I ask because I have put in a udev rule to remove the usb-storage module (I had to, otherwise the USB modem won't get connected; it waits for storage to release the port). Any idea or solution would be greatly appreciated. PS: I am running the Arch distribution. Thanks
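
    One guess worth checking: if that rule disables usb-storage wholesale, the vfat stick is left with no driver to bind to. The rule can instead be scoped to just the modem by its USB IDs; a sketch with placeholder IDs (read the real ones from lsusb):

        # /etc/udev/rules.d/99-modem.rules -- illustrative only, IDs are assumptions
        ACTION=="add", ATTRS{idVendor}=="12d1", ATTRS{idProduct}=="1446", RUN+="/usr/sbin/usb_modeswitch -v 12d1 -p 1446"

    usb_modeswitch flips the modem out of its fake-CD storage mode, which removes the need to fight usb-storage for every other device.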

    Read the article

  • SharePoint Session Management - which SQL Server option?

    - by frumious
    We're developing some custom web parts for our WSS 3 intranet and have just run into something we'd like to use ASP.NET sessions for. This isn't currently enabled on the development server. We'd like to use SQL Server as the storage mechanism, because the production environment is a web farm with very simple load-balancing. There are three options you can choose from when setting up SQL Server session storage: tempdb, a default separate DB, or a named DB. Both the tempdb and default-separate-DB options create a new DB to store certain information in; tempdb stores the actual session info in tempdb, which doesn't survive a reboot, while the default separate DB stores everything in the new DB. Since you've got to create the new DB either way, my question is this: why would you ever choose to store the session info in tempdb? The only thing I can think of is if you'd like the ability to wipe the session state by rebooting the server, but that seems quite apocalyptic!
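
    For reference, the storage mode is picked when the session database is provisioned with aspnet_regsql.exe (run from the .NET framework directory); the server name below is an assumption:

        rem -sstype t = tempdb, p = persisted (ASPState DB), c = custom named DB
        aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype p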

    Read the article

  • ZFS Configuration advice

    - by rbarrette
    I need some advice on configuring ZFS. Here is what I have:

    Physical disks:

    - 4x 3 TB
    - 2x 2 TB
    - 2x 1 TB

    What is the best configuration for my vdevs and storage pool? I want to maximize space but still maintain redundancy. Should I just get two more 3 TB drives and create two 3x 3 TB raidz2 storage pools? Or create one 4x 3 TB raidz2 vdev? Can I put redundancy at the pool level, create individual vdevs for each drive, and then add 2x 1 TB + 2 TB striped vdevs to keep all vdevs the same size? Keep in mind I do need to migrate data from the smaller drives, and I am planning on adding more 3 TB drives later on. What do you think?
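
    A sketch of one mixed layout being considered, with placeholder device names; note that zpool warns when vdevs in a pool have mismatched replication levels and needs -f to accept it:

        zpool create tank raidz2 d1 d2 d3 d4 d5 d6   # 6x 3 TB once two more are bought
        zpool add -f tank mirror e1 e2                # 2x 2 TB
        zpool add -f tank mirror f1 f2                # 2x 1 TB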

    Read the article

  • centos 6 nfs: logs not showing anywhere

    - by ancillary
    Can someone please tell me where NFS logs on CentOS 6? Or perhaps where I can tell NFS to send its logs? At the present time, there appears to be no such setting, and trying to get the thing to work without logs is quite frustrating.

        [root@houston netshare]# locate nfs | grep log
        [root@houston netshare]#
        [root@houston netshare]# grep -Rni "nfs" /var/log
        /var/log/anaconda.storage.log:23:20:41:33,962 DEBUG : registered device format class NFS as nfs
        /var/log/anaconda.storage.log:24:20:41:33,962 DEBUG : registered device format class NFSv4 as nfs4

    This is a day-old CentOS 6 install from the live CD, and yum update has been run.
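
    The kernel NFS code on CentOS 6 logs through the kernel ring buffer and syslog rather than to a file of its own, which is why nothing shows up under /var/log by default; verbosity is controlled with rpcdebug from nfs-utils:

        rpcdebug -m nfsd -s all      # turn on all nfsd debug messages
        tail -f /var/log/messages    # watch them arrive (dmesg shows them too)
        rpcdebug -m nfsd -c all      # clear the flags when done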

    Read the article

  • 20GB+ worth of emails in my /home - what is a better solution?

    - by Skinkie
    My email storage requirements are outgrowing anything reasonable with respect to local mail storage. As we speak, 99% of my home partition is filled with personal mail in Thunderbird's mail dirs. Needless to say, this is just painful and badly searchable, and while history has proven to me that backups work, Thunderbird is capable of losing a lot of mail very easily. Currently I have a remote IMAPS server (Dovecot) running for my daily mail, accessible from anywhere, which from my own practice works efficiently up to about 1000 emails; beyond that, some archive directories have to be used to move mail around. I have been looking into DBMail, but I wonder if I would make my case worse or better with such a solution: none of the supported databases employ string deduplication or string compression out of the box, so is this going to help me with 20 GB+ of mail? What about falling back to a plain old IMAP server? A filesystem like ZFS would support things like gzip transparently, which could help. Could someone share their thoughts? The 20 GB mostly consists of mailing lists and normal mail, not things like attachments. To add some clarifications: as we speak, my mail is not server-side indexed at all; only my new mail arrives at the remote IMAP server. It is all local storage from former POP3 accounts, plus locally mirrored Gmail and IMAP accounts. In my perspective it is not Thunderbird that sucks, it's its file format that sucks. Regarding the 1000 mails: on the road I am using Alpine and MobileMail, and I'm quite happy with both of them, but some management is required to actually manage the mail. Sieve helps a lot with that, but browsing through 10,000 e-mails is not fun, especially not on a mobile client. I am quite happy with Dovecot, never had any issues with it; I just wonder if this is the way to go, or if there are any other better solutions. So my question is: what is the best-practice solution that allows 20 GB+ of mail and is remotely accessible on demand, easy to back up, and archive-worthy? It doesn't need to be available 24x7. The final approach I took was installing a local IMAP server (Dovecot), configured as my archive, using the following guide: http://en.gentoo-wiki.com/wiki/Dovecot/InstallThunderbird
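
    On the compression angle, note that Dovecot itself can compress stored mail transparently through its zlib plugin, which may get more out of 20 GB of mailing-list text than a database back end would; a sketch of the relevant dovecot.conf pieces (exact settings vary by Dovecot version):

        mail_location = mdbox:~/mdbox
        mail_plugins  = $mail_plugins zlib
        plugin {
          zlib_save       = gz   # compress newly saved mail
          zlib_save_level = 6
        }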

    Read the article

  • Will a 5 Terabyte NAS drive be compatible with Windows XP SP3 32-bit?

    - by TrevorBoydSmith
    (NOTE: The operating system we are using, in this case Windows XP SP3 32-bit, is not a choice.) I am trying to set up a short-term storage device. First, I found a large 5 TB NAS drive that would, IMO, fulfill my storage requirements. Second, I also found that Windows XP seems to have a hard drive size limit (see 'Is there a limit to the size of a hard drive for Windows XP pre-SP1?'):

        XP should handle up to 2 TB per volume after the service packs are applied. You are correct. There was a 137 GB limit on the original pre-service-pack Windows XP. This was addressed/fixed in SP1.

    My question is: will my Windows XP SP3 32-bit machine see the 5 TB NAS and be able to read/write properly to the NAS drive?

    Read the article

  • Exchange 07 to 07 mailbox migration using local continuous replication

    - by tacos_tacos_tacos
    I have an existing Exchange server ex0 and a fresh Exchange server ex1, both 2007 SP3. The servers are in different sites, so users cannot access mailboxes on ex1 since, from my understanding, a standalone CAS is required for that. I am thinking of doing the following:

    1. Enable local continuous replication of the storage group on ex0 to a mapped drive that points to the corresponding storage group folder on ex1.
    2. At some point when the replication is done (small number of users and volume of mail), say late on a weekend night, disable CAS on ex0 (or otherwise redirect requests on the server side from ex0 to ex1) AND change the public DNS name of the CAS so that it points to ex1.

    Will my plan work? If not, please explain what I can do to fix it.
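
    For step 1, LCR is enabled per storage group from the Exchange Management Shell; a sketch (the identity is an assumption, and note that LCR formally targets a local path, so pointing it at a mapped network drive may be unsupported):

        Enable-StorageGroupCopy -Identity "ex0\First Storage Group"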

    Read the article

  • How to know the level of a symlink in linux?

    - by ???
    For example, given this chain of symlinks:

        a -> b
        b -> c
        c -> d

    the symlink level of a is 3. Is there any utility to get this info? I also want to get the expansion detail of a symlink, which would show me something like:

        1. /abc/xyz is expanded to /abc/xy/z (lrwx--x--x root root)
        2. /abc/xy/z is expanded to /abc/xy-1.3.2/z (lrwx--x--x root root)
        3. /abc/xy-1.3.2/z is expanded to /abc/xy-1.3.2/z-4.6 (lrwx--x--x root root)
        4. /abc/xy-1.3.2/z-4.6 is expanded to /storage/121/43/z_4_6 (lrwx--x--x root root)
        5. /storage/121/43/z_4_6 is expanded to /media/kitty_3135/43/z_4_6 (lrwx--x--x root root)

    so I can diagnose the symlinks. Any ideas?
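
    namei from util-linux comes close to this: it walks a path and prints every component, expanding each symlink it meets, and the -l flag adds owner/permission columns like the listing above; the level can then be read off by counting the hops:

        namei -l /abc/xyz     # one line per component, symlinks expanded step by step
        readlink /abc/xyz     # a single hop; readlink -f jumps straight to the final target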

    Read the article

  • Network Performance issue

    - by qubemarker
    We have three Ubuntu 10.04 servers: one storage server and two servers configured as clients. The storage server has a good amount of capacity and is integrated with a Windows Active Directory server for authentication. I am uploading video files from both clients to the server. When I upload from either client alone, I get about a 26 MB/s data transfer rate; when I upload from both clients simultaneously, I only get about 8 MB/s from each client. I have gigabit Ethernet cards in all of the servers and an L2 managed gigabit switch for connectivity. I don't know why the data transfer rate decreases so much with simultaneous reads and writes. I have tried all of the TCP-stack-related settings suggested here. Can anyone assist with getting better read/write performance out of this setup? Any help is appreciated.
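
    A useful first step is separating network limits from disk limits by measuring raw TCP throughput with iperf; if two simultaneous runs still reach wire speed, the bottleneck is the storage server's disk subsystem rather than the switch or NICs (storage01 below stands in for the server's hostname):

        iperf -s                 # on the storage server
        iperf -c storage01       # on each client -- first alone, then both at once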

    Read the article
