Search Results

Search found 5528 results on 222 pages for 'offsite storage'.


  • Win 7 Explorer backup and long paths

    - by user53299
    I use Explorer to do backups because Win 7's backup program asks me to take previously made backups and put them back in the drive. I am opposed to that idea, since I believe backups should remain in storage. With Explorer backups (burn and burn to disc) I have encountered the "destination path too long" error message, and it shows the name of a folder "Debug" three times. I have hundreds of folders named "Debug", thanks to Visual Studio. At this moment I'm too angry at Microsoft to write a program to determine my three longest paths. (Aside: this is all after coincidentally reading two articles about path junctions earlier this evening, which already made me kind of unhappy.) Please, is there an easy way to continue making backups with Explorer? Edit: I should add that renaming paths wrecks Visual Studio projects, so I really need to isolate the small number of problem paths or find a cleaner solution.
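
    For the "three longest paths" part, a one-liner does it, assuming a Unix-style toolset such as Cygwin or Git Bash is available on the machine:

      # list the three longest paths under the current folder
      find . | awk '{ print length($0), $0 }' | sort -rn | head -3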

    Read the article

  • What is the specific advantage of a blade server for virtualisation?

    - by ChrisZZ
    We are planning to implement a VDI solution. We have had some discussions about blade vs. rack. As we are only planning to implement 75-100 clients, we calculated that we would need 2 servers with dual 8-core processors, plus a shared storage server. This calculation is based on a paper by Oracle which assumes 12 active virtual machines per core. Now, for buying just two servers, a blade does not scale financially. But the blade has some other advantages: a) the interconnectivity between the blades is super-fast; b) I/O virtualisation. Are there other advantages we should consider that would make up for the price, and are these advantages important enough that we should think about investing in the blade?
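
    For reference, a quick sanity check of that sizing rule (a rough sketch; the 12-VMs-per-core figure is the Oracle paper's assumption, not a measured limit):

      # 2 servers x 2 sockets x 8 cores x 12 VMs/core
      echo $(( 2 * 2 * 8 * 12 ))   # => 384, ample headroom for 75-100 clients,
                                   #    so the second box is mostly redundancy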

    Read the article

  • HDD situation - what would be best - data and backup

    - by Sam Johnson
    I just installed W8 on an Intel 330 180 GB SSD. I have three 1 TB HDDs. One HDD will be external, for backup. Two HDDs are then available for my PC. I do not need 2 TB of storage, so I thought I'd set these up to be exact clones of one another, so that if one dies I have a backup inside the computer to go along with my external drive. Is this a good setup? How would this best be accomplished? I've heard people suggest RAID, but I've never done RAID, have no idea what it is, and have no idea how to set it up in my BIOS. Thanks in advance
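
    What is described here is RAID 1 (mirroring): every write goes to both disks, so either disk alone holds a complete copy. On Windows this is typically set up in the motherboard's RAID BIOS or as a mirrored dynamic-disk volume; purely for illustration, this is what creating such a mirror looks like on Linux with mdadm (device names are placeholders):

      # combine two disks into one fault-tolerant mirrored volume
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc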

    Read the article

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app on an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Or should I just start with one server and worry about scaling horizontally later, by dedicating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSs?
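
    If everything shares one box, the main risk is memory contention between Redis and MongoDB's cache; a common mitigation is to cap Redis so it cannot crowd out Mongo (a minimal sketch; the 2 GB figure is illustrative, not a recommendation):

      # /etc/redis/redis.conf
      maxmemory 2gb
      maxmemory-policy allkeys-lru   # evict least-recently-used keys once the cap is reached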

    Read the article

  • What are cheap CDNs for origin pull?

    - by DucDigital
    I've read several threads around Server Fault about this, but I am still not satisfied with the answers, so I am posting a question here. I need an origin-pull CDN that supports big files (more than 200 MB). I don't need a storage place, just something to relay the server. The price should also be affordable, of course not more than $150 a month for the smallest plan. I also need to pay by credit card, since I do not work or stay in the US, so it is hard for me to do a bank wire. Thank you very much

    Read the article

  • Archive Outlook mail items into SQL Server

    - by marc_s
    I am looking for (and so far not finding) a solution to archive e-mail items from my Outlook into SQL Server. My PST is beginning to get really, really big, and I'd love to extract my older e-mail into SQL Server in a way that still lets me easily find mails if needed. I would prefer SQL Server as the storage medium since I'm familiar with it and it's rock solid - I don't want to have a collection of PST files or CHM files or anything like that. Does anyone know of such a solution? I'm a power/home user - I can't afford $5,000 enterprise licenses - I need a sub-$100 solution for private use.

    Read the article

  • Can someone explain the physical architecture of RAID 10 in complete layman's terms?

    - by Hank
    I am a newbie in the world of storage and I am having a hard time digesting the physical architecture of some of the RAID levels. I am particularly interested in RAID 10, and 50. I asked the question specifically about RAID 10, because I feel if I understand that, I'll understand the other. So, I get the definition of RAID 10 - "minimum 4 disks, a striped array whose segments are mirrored". If I've got 4 disks and Disks 1 and 2 are a mirrored pair, and Disks 3 and 4 are a mirrored pair - where does the data get striped? Thanks.
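
    For what it's worth, the striping happens across the mirrored pairs: each pair acts as a single "disk" in a RAID 0, so alternate stripe segments land on alternate pairs. A rough sketch (the mdadm command is Linux-specific, purely for illustration):

      # stripe (RAID 0) laid over two mirrors (RAID 1):
      #   pair A: Disk 1 == Disk 2  -> holds stripe segments 1, 3, 5, ...
      #   pair B: Disk 3 == Disk 4  -> holds stripe segments 2, 4, 6, ...
      mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]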

    Read the article

  • How to configure HA iSCSI for Solaris 10

    - by Noah
    BACKGROUND: We have a StarWind NAS that we are currently using for high-availability storage with our Windows network. StarWind has mirrored drives and multiple IP paths, which the Windows server combines into one HA disk store. QUESTION: How do I accomplish the same thing under Solaris 10? I've looked at ZFS, but the documentation seems to indicate that ZFS wants to do its own RAID/mirroring. I can also attach via iSCSI from Solaris and am presented with both drives being served by the StarWind NAS. So, how do I configure Solaris so that disks M1 and M2 are treated as a single fault-tolerant drive?
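
    Letting ZFS do the mirroring over the two iSCSI LUNs is one way to get exactly that; a minimal sketch, assuming both LUNs are already visible to Solaris (the device names are placeholders):

      # mirror the two iSCSI-backed devices into one fault-tolerant pool
      zpool create hastore mirror c2t1d0 c3t1d0
      zpool status hastore   # shows the mirror and its health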

    Read the article

  • GlusterFS to replicate files to other servers

    - by sbrattla
    I've got multiple servers which all need to have the same content in /home. In other words, if the file /home/user1/test.txt is updated on server A, this needs to be replicated to all other servers in the cluster. Is it possible to use GlusterFS for this purpose? That is, let each server have a full local copy of all data - which that server will be working on - and use GlusterFS solely to take care of replicating this data to the other servers? I'm not interested in combined storage; I'd rather have all data on all machines and only have GlusterFS replicate it to the other machines.
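
    That maps to a plain replicated volume in which the replica count equals the number of bricks, so every server holds a full copy; a minimal sketch with three hypothetical servers (note that reads and writes must go through the GlusterFS mount, not directly into the brick directory, or they will not be replicated):

      # one brick per server, replica count = brick count
      gluster volume create homevol replica 3 serverA:/export/home serverB:/export/home serverC:/export/home
      gluster volume start homevol
      # on each server, mount the volume where the applications look for it
      mount -t glusterfs localhost:/homevol /home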

    Read the article

  • Choosing Truecrypt volume names and keyfile names

    - by Howiecamp
    Any recommendations on what to name TrueCrypt volumes (container files) and where to locate them? Certainly a name like "this is a truecrypt volume.tc" isn't a good idea. Any recommended storage locations? Same question for keyfiles that are generated with TrueCrypt. Finally, let's say you choose an existing file, ymca.mp3, as your keyfile. Given that the file looks innocuous and normal, isn't it easy to forget that it's your keyfile, so that when you get sick of the Village People and delete the song, you're hosed?

    Read the article

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
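
    One storage-driver-agnostic way to get a read-only view, at the cost of copying rather than mounting, is to export the container's filesystem as a tar stream and unpack it (a sketch; the container name is a placeholder):

      # dump a read-only snapshot of the container's filesystem
      mkdir /tmp/rootfs
      docker export mycontainer | tar -x -C /tmp/rootfs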

    Read the article

  • File upload fails for no apparent reason

    - by sufoid
    Hello, I want to implement a file upload. The script should take the image, resize it, and store it, but there seems to be an error somewhere in the upload that I cannot find. Here is the code (getExtension(), make_thumb() and showForm() are helper functions defined elsewhere):

      define("MAX_SIZE", "2000"); // maximum size for uploaded images, in KB
      define("WIDTH", "107");     // width of thumbnail
      define("HEIGHT", "107");    // alternative height of thumbnail (portrait 107x80)
      define("WIDTH2", "600");    // width of (compressed) photo
      define("HEIGHT2", "600");   // alternative height of (compressed) photo (portrait 600x450)

      if (isset($_POST['Submit'])) {
          $success = 0; // count successful uploads (was used below without ever being set)
          // iterate through all upload fields
          foreach ($_FILES as $key => $value) {
              // read the name of the user's file; skip empty upload fields
              $image = $_FILES[$key]['name'];
              if ($image) {
                  // original name of the file on the client's machine
                  $filename = stripslashes($_FILES[$key]['name']);
                  // file extension, lower-cased
                  $extension = strtolower(getExtension($filename));
                  // unknown extensions are rejected; otherwise continue
                  if (($extension != "jpg") && ($extension != "jpeg")
                          && ($extension != "png") && ($extension != "gif")) {
                      echo '<div class="failure">Error with file ' . $_FILES[$key]['name']
                         . ': unknown file type; only .gif, .jpg or .png files can be uploaded.</div>';
                  } else {
                      // $_FILES[$key]['tmp_name'] is the temporary file the upload was stored in on the server
                      $size   = getimagesize($_FILES[$key]['tmp_name']);
                      $sizekb = filesize($_FILES[$key]['tmp_name']);
                      // reject images above the defined maximum size
                      if ($sizekb > MAX_SIZE * 1024) {
                          echo '<div class="failure">Error with file ' . $_FILES[$key]['name']
                             . ': the file could not be uploaded; it exceeds the 2 MB limit.</div>';
                      } else {
                          $rand = md5(rand() * time());           // random, unique file name
                          $image_name = $rand . '.' . $extension;
                          $consname  = "photos/" . $image_name;        // path to big image
                          $consname2 = "photos/thumbs/" . $image_name; // path to thumbnail
                          // BUG in the original: the second copy() overwrote $copied, so a failed
                          // first copy went unnoticed; combine both results instead
                          $copied = copy($_FILES[$key]['tmp_name'], $consname)
                                 && copy($_FILES[$key]['tmp_name'], $consname2);
                          // BUG in the original: "or die()" was attached to the string assignment,
                          // where it never fires; it belongs on the query call
                          $sql = "INSERT INTO photos (galery_id, photo, thumb)"
                               . " VALUES (" . (int)$id . ", '$consname', '$consname2')";
                          $query = mysql_query($sql) or die(mysql_error());
                          if (!$copied) {
                              echo '<div class="failure">Error with file ' . $_FILES[$key]['name']
                                 . ': the file could not be uploaded.</div>';
                          } else {
                              // create the thumbnail, then compress the big image in place
                              // parameters: source image, target name, maximum width and height
                              $thumb = make_thumb($consname, $consname2, WIDTH, HEIGHT);
                              $thumb = make_thumb($consname, $consname, WIDTH2, HEIGHT2);
                              $success++;
                          }
                      }
                  }
              }
          }
          echo '<div class="success">' . $success . ' photo(s) uploaded successfully!</div>';
          showForm(); // redisplay the upload form
      }

    Read the article

  • Symmetrix gatekeepers on Solaris 10

    - by Milner
    I have some Solaris machines that are connected to EMC Symmetrix for SAN storage. Apparently the Symm has a gatekeeper device that is used with the symmetrix CLI. We don't need the CLI, but I have these gatekeeper devices that constantly fill /var/adm/messages and the like with corrupt label errors. Is there anything I can do (short of deleting the devices on machine start) to get rid of them? Or should I just try to get our SAN guy to get the installer for the CLI? These things are getting annoying, and the devfsadmd daemon keeps rediscovering them on boot.

    Read the article

  • Best way to convert from IMAP to POP3?

    - by Brad
    At work, I connect to a corporate Exchange server via IMAP and Thunderbird 3. Over the course of a year or so, I've created quite a few folders on the server and have a lot of mail stored there. I'm hitting the storage limit of my mail account and want to convert to pulling mail down to my local box (running Linux) via POP3. I know that polling mail will only get mail in INBOX, but I'm wondering if there are solutions out there that could be used to pull mail from the other folders as well, or am I doomed to moving mail into the inbox manually and polling over and over again?
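
    Rather than converting to POP3, one option is an IMAP synchronisation tool such as OfflineIMAP, which mirrors every folder (not just INBOX) into local maildirs; a minimal sketch, with hostnames and names as placeholders:

      # ~/.offlineimaprc
      [general]
      accounts = Work

      [Account Work]
      localrepository = Local
      remoterepository = Remote

      [Repository Local]
      type = Maildir
      localfolders = ~/Mail/work

      [Repository Remote]
      type = IMAP
      remotehost = mail.example.com
      remoteuser = brad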

    Read the article

  • How do you passthrough native SATA drives to a guest on ESXi?

    - by John
    I have ESXi 4.0 running on an Intel DX58S0 motherboard with an Intel Core i7 930 processor. VT-d is also enabled. I have three drives in the system; drive 0 is used for ESXi. Drives 1 and 2 contain data from an older machine and show up under the "Storage Adapters" section in configuration. I would like to allow a guest machine to access the data on these drives (as natively as possible). I have enabled passthrough of the motherboard's built-in SATA controller (Intel/Marvell 88SE6121). This controller shows up in my guest OS, but the guest shows no drives aside from the normal virtual drive. I have tried a Linux guest and Windows 7. I have also configured the host machine to try IDE/RAID/AHCI modes for the SATA controller. Any ideas how I can configure one of my guests to get at the raw data on these drives?
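
    If controller passthrough stays uncooperative, an alternative is a raw device mapping (RDM), which hands a single physical disk to one guest; a sketch, with the device identifier left as a placeholder (note that RDMs on local SATA disks are not officially supported on all ESXi versions):

      # find the disk's identifier, then create a physical-mode RDM pointer for it
      ls /vmfs/devices/disks/
      vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/datastore1/rdms/data1.vmdk
      # then attach data1.vmdk to the guest as an existing disk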

    Read the article

  • Transfer many Gigabytes between two servers

    - by Bernhard
    Hello, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. The new root server is accessible by SSH, of course :-) I need to move all the data from the old space, but the amount is just huge. Is there a way to move all the files directly from the old FTP server to the new storage and not via a third station (my local machine)? I've tried it with ftp, but it didn't work; I think I used the wrong commands. Is there a way to do this? Thank you in advance, Bernhard
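
    This works if the transfer is driven from the new server itself, so nothing passes through a third machine; a sketch using lftp's mirror command (host, credentials and paths are placeholders):

      # run on the new root server: pull the whole remote FTP tree into the local web root
      lftp -u olduser,oldpass -e 'mirror --verbose /htdocs /var/www/old-site; quit' ftp.old-host.example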

    Read the article

  • Ways to increase my Ubuntu partition space

    - by Andreas Grech
    I am currently running Ubuntu and Windows 7 as dual-boot on a single HD. The problem is that when I installed Ubuntu, I didn't allocate as much space as I thought I would need, and now I need to 'reinstall' Ubuntu so that I can increase the amount of storage space. Now there are two ways to go about this. Either I use gparted to increase my partition space (but I read that it's not really that safe as regards data loss), or I create a new partition with more space and reinstall Ubuntu there. But if I want to reinstall Ubuntu, is there a way I can somehow "save" my current Ubuntu and install that one? What I mean is that I don't want to lose my currently installed packages and the files that I have on this partition. Is there a way to kind of 'streamline' my current Ubuntu so that I install this one on the new partition? If not, what are your opinions as regards gparted?
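
    If it comes to a reinstall, one common way to carry the current system across is to copy /home over and replay the package list onto the fresh install (a sketch; run the first command on the existing system):

      dpkg --get-selections > ~/packages.list     # on the old install
      # after installing the new system and copying packages.list across:
      sudo dpkg --set-selections < ~/packages.list
      sudo apt-get dselect-upgrade                # installs everything that was selected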

    Read the article

  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files - lawsuits, legislation, letters, etc. - in Windows 7 is proving difficult. In my last job I was a database architect, and I helped build Linux-based servers to track files across an entire department; however, there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS, instead of storing the files in folders that only allow for flat and fixed storage? For example, if I have a file that deals with: ELID organization; Appeals court; John Smith - it really is inconvenient to have to decide which one of these tags to turn into a folder and place the file into it, when it falls under all the categories. Even if I could just place tags on files the way you can on Stack Exchange, it would solve a lot of heartache.
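
    The usual relational answer is a many-to-many tag scheme instead of folders: one table of files, one of tags, and a link table between them, so one file can carry any number of tags. Purely as a sketch, with SQLite (names are illustrative):

      sqlite3 files.db "
        CREATE TABLE files(id INTEGER PRIMARY KEY, path TEXT UNIQUE);
        CREATE TABLE tags(id INTEGER PRIMARY KEY, name TEXT UNIQUE);
        CREATE TABLE file_tags(
          file_id INTEGER REFERENCES files(id),
          tag_id  INTEGER REFERENCES tags(id),
          PRIMARY KEY (file_id, tag_id));"
      # one document can then be tagged 'ELID', 'Appeals court' and 'John Smith' at once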

    Read the article

  • Distribute terabyte files to the public from a web server

    - by MarkJ
    Hi. We need to set up a website which makes two or three large files publicly available - the files will be 1 or 2 terabytes each. Although they will be public, in practice I expect only a relatively small number of scientists will want to download them. What is the best way to allow this? I've had a quick talk with a web-hosting provider (Rackspace) and they suggested a hybrid solution: an entry-level managed server (we predict fairly low traffic for the website, but we do need to install some custom CGI software), plus some cloud storage which hooks into Limelight Networks; this would host the large files, for download by FTP. It sounded OK to me, but I know relatively little about server administration. Does it make sense? Thanks in advance, Mark

    Read the article

  • Any way I can trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 gig), so I went out and purchased an external hard drive. However, Carbonite will not back it up; I just want the external drive to be extra disk space. From their FAQ: The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.

    Read the article

  • Command-line access for Apple Time Machine?

    - by Stefan Lasiewski
    We use Apple's Time Machine to back up our workstations at the office. If I want to restore a file, I need to open up the Time Machine GUI and browse files there. The GUI is ugly eye-candy and gets in my way. Is there a way to browse the Time Machine archive using the Mac's command-line? I'm used to Netapps and other storage appliances. I use backintime for my Ubuntu workstation. To restore a file with one of those systems, you can restore a file with a simple command like: cp .snapshot/daily.0/filename.txt . or cp /backup/backintime/20100611-000002/backup/etc/shadow /etc/shadow Is there an equivalent for Apple's Time Machine?
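
    Time Machine's store is mostly a plain directory tree, so ordinary shell commands work against it once the backup volume is mounted (a sketch; the volume and machine names are placeholders):

      # snapshots live under Backups.backupdb/<machine>/<timestamp>/<volume>;
      # "Latest" is a symlink to the newest snapshot
      cd "/Volumes/Time Machine Backups/Backups.backupdb/my-mac/Latest/Macintosh HD"
      cp Users/stefan/filename.txt ~/filename.txt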

    Read the article

  • Mounting root failed. Dropping into basic maintenance shell

    - by vmsystem
    Hi, I have purchased an AMD Phenom X4 955 3.2 GHz processor with a supporting Gigabyte GA-MA785GM-US2H motherboard, 6 GB of DDR2 RAM and a 500 GB SATA drive for learning the VMware ESX 3.5 product. On this configuration I have installed the Windows XP 64-bit operating system and then VMware Workstation 6.5. From VMware Workstation I am able to install ESX 3.5 Update 2, but it does not start properly; please refer to the error below. "Mounting root failed. Dropping into basic maintenance shell. To collect logs for VMware, connect a USB storage device and run 'bin/vm-support'. Machine will be rebooted when you exit from this shell." The same was tested on Windows 2003 Enterprise Edition Server, Windows 7 32-bit and Windows 7 64-bit. Please help me to resolve the issue.

    Read the article

  • Server location moved - how can I move the files?

    - by Bernhard
    Hello everyone, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. Now we have a new root server which is accessible by SSH, of course :-) Now I need to move all the data from the old space, but there are many GB of files. Is there a way to fetch all files directly from the old FTP server to the new storage and not via a third station (my local machine)? I've tried it with ftp, but without success; I think I used the wrong commands. Is there a way to set something like this up, including all files and directories? Thank you in advance, Bernhard

    Read the article

  • Dell PowerVault MD3000 - not sharing files between servers

    - by Kevin
    I'm a developer who has to set up a Dell PowerVault MD3000 due to lack of resources. I have connected the PowerVault to two Dell 2950 servers via the SAS cables. I performed the setup using Dell's MD Storage Manager software (4 disks, RAID 5 with hot spare). Then I added the disks using Windows 2003 disk management (basic, not dynamic disks, formatted with NTFS). When I add files to the array from one server, they are not visible on the other server (and vice versa). Is the error in the Windows disk management configuration?

    Read the article

  • Completely replacing (upgrading) a RAID 5 array of disks on an ESXi server

    - by jshin47
    I have a development server that runs several VMs on ESXi 5. It has an array of disks in the RAID 5 configuration, where all of the disks are currently the same size. I would like to greatly expand the storage on this box, but I am not sure what the smartest way to go about it would be. My current plan is to:

    1. Turn off all VMs
    2. Copy the VM folders from the server to another location (sketched below)
    3. Verify that I can mount all the VMs in the new location (i.e. that the copy went OK)
    4. Replace all the disks with new, bigger ones
    5. Reinstall ESXi 5
    6. Copy the VMs back over

    This seems like it might take a while to accomplish and is not terribly slick, especially since I will have to reconfigure ESXi 5, but is there a smarter alternative?
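
    Step 2 can be done straight over the network once SSH is enabled on the host (a sketch; the host, datastore and VM names are placeholders):

      # from the backup machine, pull each powered-off VM folder off the host
      scp -r root@esxi-host:/vmfs/volumes/datastore1/myvm /backup/myvm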

    Read the article
