Search Results

Search found 9156 results on 367 pages for 'cloud storage'.


  • Move EFI System Partition to another drive

    - by Pincopallino
    I had a Windows 8 installation on an HDD, using UEFI to boot. The HDD has the following GPT table:

        DISKPART> list partition

        Partizione ###   Tipo        Dim.    Offset
        ---------------  ----------  ------  -------
        Partizione 1     Ripristino  300 Mb  1024 Kb
        Partizione 2     Sistema     100 Mb  301 Mb
        Partizione 3     Riservato   128 Mb  401 Mb
        Partizione 4     Primario    390 Gb  529 Mb
        Partizione 5     Primario    540 Gb  390 Gb

    (I apologize it's in Italian, but the translation is quite straightforward.) I recently bought an SSD drive, connected it and installed a fresh Windows 8. Now I have a working dual boot, but the UEFI partition is on the HDD instead of the SSD. Here's the SSD partition list:

        Partizione ###   Tipo       Dim.    Offset
        ---------------  ---------  ------  -------
        Partizione 1     Riservato  128 Mb  1024 Kb
        Partizione 2     Primario   221 Gb  129 Mb

    I think the best solution would be to have it on the SSD, for two reasons. The first is performance: I guess it would be a little bit faster on the SSD, given the spin-up time of an HDD, though I may be wrong about that. The second is consistency: since I plan to use only the Windows 8 installation on the SSD, and will probably erase the system partition on the HDD to use it as a data storage device, the boot partition should be on the same drive as the OS. So the question is: how do I move the EFI System Partition to the SSD?
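
    A minimal sketch of one common approach, assuming the SSD is disk 1, the new Windows installation is on C:, and C: can be shrunk slightly to make room; the disk/partition numbers are illustrative, so verify them with "list partition" first:

        REM from an elevated prompt in the SSD's Windows 8 installation
        diskpart
        DISKPART> select disk 1
        DISKPART> select volume C
        DISKPART> shrink desired=200
        DISKPART> create partition efi size=200
        DISKPART> format fs=fat32 label=System quick
        DISKPART> assign letter=S
        DISKPART> exit

        REM copy the UEFI boot files onto the new EFI System Partition
        bcdboot C:\Windows /s S: /f UEFI

    After confirming that the firmware boot menu offers, and successfully boots, the new SSD entry, the old 100 Mb system partition on the HDD can be retired.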


  • How to safely move where iTunes saves music, iPhone apps, and metadata to another internal drive?

    - by GingerLee
    In the past, when I have moved my iTunes data from one computer to another, I usually just follow these steps, which revolve around the contents of two folders:

        %USERPROFILE%\Music\iTunes
        %USERPROFILE%\AppData\Roaming\Apple Computer

    1) Install iTunes on the new computer, start it and close it (don't let it search for music).
    2) Copy all the files in the above folders from the old PC to the new PC.
    3) Start iTunes and authorize the new computer (and deauthorize the old one).
    4) Before syncing, update all iPhone apps to current versions, both on my iPhone and in iTunes.
    5) Then sync.

    The above steps always work for me, and iTunes on my new PC basically behaves exactly as it did on the old PC. My question: in the hope of bypassing the above steps in the future, I would like to have iTunes use another internal drive that I use for file storage (e.g. D:\) as the path for the above two directories. Then if I move to a new PC again, I could just point iTunes at the correct path. Is that possible yet, with minimal implications? If so, how?
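
    One low-friction sketch (not an Apple-documented procedure; D:\iTunesData is a placeholder path) is to move the two folders to the storage drive and leave NTFS junctions behind, so iTunes keeps using the paths it expects. With iTunes closed, from an elevated prompt:

        robocopy "%USERPROFILE%\Music\iTunes" "D:\iTunesData\iTunes" /E /MOVE
        robocopy "%USERPROFILE%\AppData\Roaming\Apple Computer" "D:\iTunesData\Apple Computer" /E /MOVE

        mklink /J "%USERPROFILE%\Music\iTunes" "D:\iTunesData\iTunes"
        mklink /J "%USERPROFILE%\AppData\Roaming\Apple Computer" "D:\iTunesData\Apple Computer"

    Alternatively, holding Shift while launching iTunes lets you choose or create a library at any path directly, though that covers only the media library, not the Apple Computer folder under AppData.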


  • SSD as primary or secondary drive on a small Linux server?

    - by Alex Martelli
    I'm pensioning off my 10-year-old home server and replacing it with an Ubuntu 10.04 box. The two storage devices are a Western Digital Caviar Green 2.0TB HD and an Intel X25-M 34nm Gen 2 80GB SATA II 2.5-inch SSD (the box has 8GB RAM and an i5 750, if it matters). I don't care much about boot times, since I don't plan to reboot all that often ;-). The main frequent, performance-demanding task will be (re)building large open source C or C++ software packages from source (as an open source contributor, I do that often).

    So, I thought I'd keep the SSD as the secondary drive and the HD as the primary one, using the SSD mostly for the files that can otherwise demand a lot of seeking (especially in a parallel make). However, the friendly vendor (perhaps more experienced with Windows systems than with Linux ones) thinks the "normal" way to configure the machine would be with the SSD as the primary drive. I'm pretty rusty on configuring and tuning systems, so I thought I'd better double-check on SuperUser... thanks in advance for advice about this choice!
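
    If the SSD does end up as the secondary drive, one sketch of the setup (the device name and mount point are assumptions) is to mount it as a dedicated build area, with noatime to skip access-time writes:

        # /etc/fstab -- assuming the HDD holds / and the SSD is /dev/sdb
        /dev/sdb1  /srv/build  ext4  noatime,errors=remount-ro  0  2

    Source trees and object directories then live under /srv/build, so the seek-heavy parallel-make traffic lands on the SSD while the bulk data stays on the 2.0TB HD.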


  • Postfix auto create Maildir

    - by Eugene
    I've been beating my head against a wall for a while now on this one. Basically, here is the rundown: our MX record points to a frontend SMTP server, which contains aliases for actually routing the mail. No alias, no access to the backend storage server, which is what our clients connect to.

    I'm upgrading the backend email server. Currently, a user is created for every email user on the server, which creates the mailbox. On the new server, everything authenticates through PAM to an LDAP server (all of which is working properly). My goal is to get Postfix to create the Maildir directory for the user automatically. This works fine when /home has 777 permissions, but for obvious reasons that should be avoided. I would like to do this with 775 permissions on /home, with a group owner of whatever user Postfix delivers as, but I can't seem to figure out what user that is. With the 777 permissions, /home/$user/Maildir is created on message delivery. Does anybody know how I can do this without 777 permissions? The system I am working on is a 64-bit Debian Lenny 5.07 install. Any advice would be appreciated.
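
    For background: Postfix's local delivery agent drops privileges and delivers as the recipient user, which is why it can only create /home/$user when /home is world-writable; there is no single "Postfix user" to give group write access to. One sketch of a workaround, given that authentication already goes through PAM/LDAP, is to let pam_mkhomedir create each home directory, seeded with a Maildir skeleton:

        # /etc/pam.d/common-session -- create missing home directories at first login
        session required pam_mkhomedir.so skel=/etc/skel umask=0022

        # seed the skeleton so every new home directory starts with a Maildir
        mkdir -p /etc/skel/Maildir/{cur,new,tmp}
        chmod -R 700 /etc/skel/Maildir

    The caveat is that mail arriving before a user's first login still finds no home directory, so pre-creating homes for all LDAP users (a one-off script over the directory) is the safer variant.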


  • What type of laptop do I need to run an amd64 or i386 VM?

    - by Frank Schwieterman
    I was running an amd64 build of Ubuntu in a VM on a Windows host that was also amd64. Later I found I could not run the same amd64 ISO on my laptop, which is Intel without Hyper-V. I was confused; I thought the chipset mattered, but maybe it does not. When buying a PC or a Mac, is there anything to check about the chipset to make sure it can run different types of VMs? In my case, I was trying to run Ubuntu on a ThinkPad T520. Per the answer below, I did need to enable some BIOS settings.

    I'm still having some issues. Running Ubuntu in VirtualBox, when I try to use ubuntu-12.10-server-amd64.iso for the CD/DVD device to start a new VM, VirtualBox complains: "Failed to open the CD/DVD image . Could not get the storage format of the medium (VERR_NOT_SUPPORTED)." When I try to use ubuntu-12.10-server-i386.iso, the ISO is accepted, but then the VM complains: "FATAL: No bootable medium found! System halted." I had been using an amd64 ISO on my home PC, which is amd64, and it works fine, which is why I suspected a CPU mismatch at first. But it seems I'm having other issues, and maybe this SuperUser thread can be used to verify that the CPU is irrelevant in this case.
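
    For what it's worth, 64-bit guests in VirtualBox require hardware virtualization (Intel VT-x or AMD-V) to be enabled in the BIOS, regardless of whether the host CPU is Intel or AMD; "amd64" is just the architecture's name. A quick sketch for checking the host CPU (Linux and Windows hosts respectively):

        # Linux host: a non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm)
        egrep -c '(vmx|svm)' /proc/cpuinfo

        REM Windows host: on recent versions, the report ends with the
        REM Hyper-V / virtualization requirements, including VT-x status
        systeminfo

    The VERR_NOT_SUPPORTED error on the amd64 ISO is a different matter: VirtualBox could not even parse the file, so verifying the download against the published checksums would be a reasonable first step.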


  • DNS: wildcard vs. CNAME subdomains

    - by Matthew
    Alright, I have to admit I'm confused about how DNS works. I've always just added things until they worked, and now it's time to learn how they actually work.

    One confusing thing to me is that there are sort of two places I can have records. I have an account with Rackspace Cloud Servers, and then there's the place where I registered the domain. Both allow me to edit DNS records. Should I do everything in both places, or is one better than the other, or am I missing the point?

    Subdomains confuse me too. I'd like to be able to just have a wildcard subdomain (I've done this in the past); I just don't like the idea of adding a CNAME record or an A record every time I need a new subdomain. Then I read this and it says:

        The exact rules for when a wild card will match are specified in RFC 1034, but the rules are neither intuitive nor clearly specified. This has resulted in incompatible implementations and unexpected results when they are used.
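
    A sketch of what the two styles look like in a BIND-style zone file (the names and addresses are placeholders). A wildcard matches any name that has no explicit record of its own, so explicit entries quietly override it:

        $ORIGIN example.com.
        www    IN  A      203.0.113.10
        *      IN  A      203.0.113.10   ; catches app.example.com, anything.example.com, ...
        blog   IN  CNAME  www            ; explicit record: the wildcard no longer applies to blog

    As for the "two places": only the name servers listed in the domain's NS records at the registrar are authoritative, so the records belong wherever those NS records point (e.g. Rackspace's DNS), not in both places.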


  • Disable internal display on Macbook Pro without closed lid mode?

    - by jslaker
    I have an early 2007 MacBook Pro running 10.5 that I've recently set up on a KVM with my primary desktop system. The problem I've run into is that I have a 20" 1680x1050 LCD, and OS X only provides options to mirror at the resolution of the built-in display or to span. Since the built-in display runs at 1440x900, this means running my LCD at a non-native resolution and getting a fuzzy picture. There isn't any option that I can find to simply disable the built-in display entirely and run the external LCD at its native resolution.

    I am aware of closed-lid mode, but the MBP was disassembled while in storage for about 6 months (took it apart to pull the HDD) and the cable to the touchpad, which controls the sleep sensor, was damaged, so closed-lid mode won't work. I've looked into replacing the cable, but the cheapest I've been able to find it is $75-100, and I'm trying not to invest any more money into this computer, as it also has a completely dead battery and a few other minor problems. I've found the app SwitchResX, which appears to allow what I need, but it has a lot of functionality I don't need and a ~$20 registration charge attached to it. An odd set of circumstances, I'm aware, but I was hoping somebody might know of an OS hack that would let me just disable the internal display and be done with it. :)


  • Exchange emails not delivering for one user

    - by Cylindric
    We have an Exchange infrastructure going through a migration from 2003 SP2 (call it ExOld) to 2010 (ExNew). All users are now on the new server, but mail is still being directed to ExOld until testing is complete. ExNew sends emails directly to the internet.

    For one particular user, emails don't seem to be reliably delivered, but the odd thing is that it's not all emails. I can see external emails in his inbox. If I send an internal email it works fine. If I send an email from Gmail to him it doesn't get through. If I telnet from outside to ExOld I can send an email to him. If I telnet from outside to ExNew I can send an email to him. This is a transcript that results in a successful send:

        220 ExOldName Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at Mon, 22 Oct 2012 10:55:26 +0100
        EHLO test.com
        500 5.3.3 Unrecognized command
        EHLO test.com
        250-ExOldFQDN Hello [MyTestExternalIp]
        250-TURN
        250-SIZE
        250-ETRN
        250-PIPELINING
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-CHUNKING
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XEXCH50
        250 OK
        MAIL FROM:[email protected]
        250 2.1.0 [email protected] OK
        RCPT TO:[email protected] notify=success,failure
        250 2.1.5 [email protected]
        DATA
        354 Start mail input; end with <CRLF>.<CRLF>
        Subject:Test 1056

        Test 10:56
        .
        250 2.6.0 Queued mail for delivery
        quit
        221 2.0.0 ExOldFQDN Service closing transmission channel

    Emails go through Symantec Cloud, but their "Track and Trace" shows the messages going through, with a "delivered ok" log entry:

        2012-10-22 09:19:56 Connection from: 209.85.212.171 (mail-wi0-f171.google.com)
        2012-10-22 09:19:56 Sending server HELO string: mail-wi0-f171.google.com
        2012-10-22 09:19:56 Message id: CAE5-_4hzGpY2kXFbzxu7gzEUSj5BAvi+BB5q1Gjb6UUOXOWT3g@mail.gmail.com
        2012-10-22 09:19:56 Message reference: 135089759500000177171130001194006
        2012-10-22 09:19:56 Sender: [email protected]
        2012-10-22 09:19:56 Recipient: [email protected]
        2012-10-22 09:20:26 SMTP Status: OK
        2012-10-22 09:19:56 Delivery attempt #1 (final)
        2012-10-22 09:19:56 Recipient server: ExOldIP (ExOldIP)
        2012-10-22 09:19:56 Response: 250 2.6.0 Queued mail for delivery

    I'm not sure where to look on the old (or new) server for information as to where the mails are ending up.
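
    One concrete place to look (a sketch; the address is a placeholder): both Exchange versions keep message tracking logs, so a missing message can be followed hop by hop. On ExNew (2010), from the Exchange Management Shell:

        Get-MessageTrackingLog -Recipients "user@example.com" -Start (Get-Date).AddDays(-1) |
            Sort-Object Timestamp |
            Format-Table Timestamp, EventId, Source, Sender, MessageSubject -AutoSize

    On ExOld (2003), the equivalent is the Message Tracking Center in Exchange System Manager (tracking must be enabled in the server's properties). If ExOld accepts the Gmail messages but they never appear in the inbox, the tracking entries should show whether they are being forwarded, deferred, or swallowed by a rule.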


  • Ubuntu + Win7: "disk error, press any key to restart"

    - by Siddharth
    Apparently, none of the solutions in any other posts and forums worked for me. For some reason I decided to remove Ubuntu from my hard disk drive. My partition table (presently):

        /dev/sda1    fat32   900 MiB      (MBR, I suppose)
        /dev/sda2    ntfs    70 GiB       (Windows 7)
        /dev/sda3    ntfs    314.88 GiB   (personal file storage)
        /dev/sda4    ext4    80 GiB       (Ubuntu 13.04)
        unallocated          1.31 MiB

    So, after moving (cut-paste) everything (for backup) off the fat32 partition using Win7, I booted into Ubuntu and copied off the remaining 3 files (hidden in the Win7 file explorer): bootmgr, bootsect.bak, and one more which I do not remember. TERRIBLE MISTAKE.

    After this I again booted into Windows and deleted the ext4 partition, formatted it to ntfs, and shut down the PC. Then I put in a Win7 bootable USB, and using the command prompt I entered bootrec /fixmbr and bootrec /fixboot. Restarting showed me the GRUB menu; choosing Windows 7 showed me "Disk Error. Press any key to restart." I also installed a fresh Win7 on the 80 GiB partition, expecting a Windows legacy bootloader with two Win7 options, but that did not work either.

    Then I used an Ubuntu live USB to put things back to the present configuration (above), since all methods to restore the MBR failed. I copied back the fat32 partition's backup files but couldn't copy those 3 files; somehow they had been recreated and were non-replaceable. I do not want to format the Win7 partition for a fresh install.

    I have used Boot-Repair. Its "Restore MBR" option brings back "Disk error..." without even going through GRUB, so I reinstalled GRUB and I'm able to boot into Ubuntu. The GRUB menu shows the Win7 option as "Windows 7 (loader) (on /dev/sda1)".

    paste.ubuntu.com/5753710
    paste.ubuntu.com/5775999
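
    For what it's worth, GRUB's "Windows 7 (loader) (on /dev/sda1)" entry chainloads the boot files on the fat32 partition, which is exactly where bootmgr was deleted, so the "Disk Error" is consistent with missing boot files rather than a failing disk. A sketch of one recovery path from the Win7 USB's command prompt (the drive letters are assumptions; confirm them with diskpart first, since the recovery environment often assigns letters differently):

        REM rewrite the boot sector of the active partition and rebuild the BCD store
        bootrec /fixboot
        bootrec /rebuildbcd

        REM recreate bootmgr and the BCD on the fat32 boot partition (here C:),
        REM pointing at the Windows installation (here D:\Windows)
        bcdboot D:\Windows /s C:

    After that, reinstalling GRUB from the Ubuntu side (as already done) should give a GRUB entry that chainloads a now-working Windows loader, without touching the MBR again.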


  • How to turn off Windows Azure's "This copy of Windows is not genuine" message?

    - by Sid
    Is there any setting or configuration item that stops Windows Azure from displaying that error on the screen, or from detecting the condition at all? I've put a screenshot below that shows the message when you RDP into the web role. My web role runs on Windows Azure Guest OS 1.17 (a variant of Windows Server 2008 SP2).

    Background: I was explaining our architecture to some outside engineers (NDA'd and all) and had to demystify the web role, as they were unfamiliar with Azure. I RDP'd into the VMs running the web role, when one of their engineers gasped: "are you guys running pirated copies of Windows in the cloud?" I also noticed that within the RDP screen, the Azure machines had "This copy of Windows is not genuine" in the bottom left corner. Now obviously, Microsoft is running its own OS in its own datacenter, with no influence from me, so no 'piracy' here despite that obvious warning. However, they seemed so distracted by it ("how can it be? really? hmmm?") that we wasted more time talking about it than the actual matter at work. Like I said, they have little exposure to Azure but add value elsewhere. I want to get rid of this so I don't have to explain it in the future.

    PS Microsoft: if you're going to modify Windows Server <XYZ> into Windows Azure <A.B>, you should also modify the code that verifies product integrity.
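
    That watermark generally indicates the guest failed (or has not yet managed) to activate against Microsoft's volume-activation infrastructure, rather than a genuine licensing problem. A sketch of what can be run from an elevated prompt inside the role instance to check and retry (standard Windows commands; whether re-activation sticks on this guest OS image is not guaranteed):

        REM show the current license/activation status
        slmgr /dli

        REM attempt activation again
        slmgr /ato

    Note that anything done by hand inside a web role instance is ephemeral and disappears when the role is reimaged, so if this works it belongs in a startup task.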


  • HP/IBM alternative to Buffalo iSCSI TeraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure to allow for more resilience and future expandability. We have successfully virtualised on single servers with direct attached storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP).

    Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TeraStation III is all I would need to set up an iSCSI SAN for VMware to use. However, I'm a little hesitant to go that way, as it's a bit too "entry level" for my liking; in particular I would be very keen on more redundancy in power, networking, etc. I'm also very aware that you get what you pay for. Can anyone therefore recommend equivalents from the big boys, HP or IBM? I have searched high and low on the HP site and seen many options, but am struggling to work out whether any given option is all the hardware I would need; some appear to need separate controllers, disk enclosures, etc.


  • What hardware would I need (approx) to run ESXi server?

    - by mr.b
    Hi, I am considering purchasing off-the-shelf commodity hardware to build a server that will host virtual machines using ESXi. The intended purpose for this server is NOT mission-critical tasks. It will have to run perhaps 20-50 Windows XP/Vista/7 virtual machines (in total, but closer to the 20 figure). Each guest would have 1-2 GB of RAM, and probably two to three times more disk space than the guest OS needs with a clean install and all updates applied (around 6-8 GB for XP, and I believe closer to 10-15 for Win7). The guests will act as a test ground for a new product, network management software, so they will idle most of the time once initially loaded, but if I give them some task to complete, they should perform reasonably well.

    Now, from what I have learned, CPU is usually not much of an issue (6 cores would do it), and memory should not be lacking but doesn't have to be the sum of all guests, because of overcommitment. That leads me to IO, which, it seems, is the bottleneck. Since I have very little experience with ESXi (and ESX, too), I'd like to ask:

    How much memory could I save by overcommitment, and how does it affect performance?
    Is a 6-core CPU enough to run the system described above?
    Would it be possible to run the entire server off two (or even one) SSD drives hosting the system virtual disks, with a few additional HDDs (2-3) in RAID 0 as secondary storage?
    I read somewhere that ESXi allows something like a "master image": essentially a virtual machine that is "deployed" many times, so that disk space can be saved by storing only each guest's differences instead of copying around whole virtual disks. Is this true, and how can it help me? (See the sketch below.)
    Are there any other things I need to take into consideration when building this off-the-shelf solution?

    I should probably mention that I'm fully aware of issues like SPOF regarding the power supply, RAID 0, etc., but since it's only a testing ground and not a production system, that's not so important to me. Thanks, B.
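
    On the "master image" point: standalone ESXi has no built-in linked clones (that is a vCenter/View feature), but a similar saving comes from keeping one patched master VMDK and cloning it thin-provisioned, so each guest consumes only the blocks it actually writes. A sketch from the ESXi shell (the paths are examples):

        # thin-provisioned copy of a prepared master disk for a new guest
        vmkfstools -i /vmfs/volumes/datastore1/master/master.vmdk \
            /vmfs/volumes/datastore1/xp-guest-01/xp-guest-01.vmdk -d thin

    With 20-50 mostly idle XP/Win7 guests, thin disks plus ESXi's transparent memory page sharing between near-identical guests are where most of the savings in this design would come from.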


  • Half of installed RAM is hardware reserved

    - by user968270
    After a rather arduous and convoluted series of problems that left me without a desktop for ~80 days, I've finally got the thing up and running, having replaced the power supply, motherboard, graphics card and CPU. Now, however, I'm experiencing the 'hardware reserved RAM' issue. Perhaps this is the exhaustion talking, but looking at the question that tends to get pointed to when this kind of topic gets locked as a duplicate hasn't helped.

    I have 16 GB of RAM installed in an MSI 970A-G46, which is spec'd for up to 32 GB. The BIOS recognizes that 16 GB is installed, and the resource monitor also shows the whole 16 GB, only with 8 GB shown as hardware reserved. I've seen suggestions that it's an OS issue, but the particular installation of Windows 7 (64-bit) on my boot drive is the same one that could access the full 16 GB on my previous motherboard (MSI 870A-G54). I've updated the BIOS using the MSI Live Update tool and restarted the machine with no effect, and I cannot locate any 'Memory Remapping' option, as I've seen mentioned. I've physically swapped the RAM between the slots, to no effect. I've unchecked the Maximum Memory box in the msconfig Boot tab's advanced options, also to no effect.

    These are my system's basic specifications:

        OS: Windows 7 Home Premium (64-bit)
        Motherboard: MSI 970A-G46
        CPU: AMD FX-8150
        Graphics card: XFX Radeon HD 6870
        Boot drive: OCZ Agility 3
        Storage drive: Samsung Spinpoint F3 ST1000DM005/HD103SJ 1TB
        PSU: Thermaltake TR-2 TR600 600W ATX12V v2.3


  • What diagnostics are safe to run on an SSD drive?

    - by Peter Mounce
    I have a MacBook Pro (late 2010) with a Crucial RealSSD 256Gb in it; 60Gb is given to the Windows 7 x64 Boot Camp partition, and I have a USB-attached 500Gb drive for (most) data. In the last day or so, I've had a BSOD and several OS freezes (in both Mac OS X 10.6.6 and Win7). In both cases the system will boot fine (at the moment!) and then run things fine; some time later a program stops responding, followed shortly thereafter by the system as a whole, forcing a reboot. This smacks to me of a storage problem.

    Given that I have an SSD and not a regular magnetic HDD, what are my next steps, in both OSes? I haven't seen anything pertinent in Windows' event log, and I'm not sure of the equivalent place to look in OS X; it's never given me reason to find out. What are my options for attempting to save my data from the SSD to another drive, given that after some small amount of time (e.g. half an hour) the OS stops responding? What are the recommended next steps?
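
    Since an SSD has no mechanics to exercise, destructive surface tests add nothing; reading the SMART data is safe and non-destructive under both OSes. A sketch using smartmontools (the device names are assumptions; on the OS X side the tool comes via MacPorts or Homebrew):

        # OS X (device name may differ):
        smartctl -a /dev/disk0

        # Windows, from an elevated prompt:
        smartctl -a /dev/sda

    Full-disk reads are also safe, since only writes consume an SSD, so imaging the drive to the external disk with dd or ddrescue doubles as the data rescue and should be the first step. For this particular drive it may also be worth comparing the reported firmware revision against Crucial's release notes, as early RealSSD/C300 firmware had documented freeze and stall issues.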


  • Software RAID 0 performance with six disks

    - by user134880
    I have some problems with disk performance. I have 6 x WD 500Gb RE4 disks; each disk gives 135Mb/sec throughput. All measurements are made with hdparm with the options "-tT" (I know it is just a synthetic test, but I need some starting point for measurements). I have a controller with a Sil3124, 4 ports, on PCI Express 1x. So...

        RAID0 on controller with 2 disks gives 200Mb/s   - ok, PCIe limit.
        RAID0 on motherboard with 2 disks gives 270Mb/s  - niceeee :)
        RAID0 on controller with 4 disks gives 200Mb/s   - ok, PCIe limit.
        RAID0 on controller with 4 disks + 1 disk on motherboard = 340Mb/s ... :(
        RAID0 on controller with 4 disks + 2 disks on motherboard = 300Mb/s ... why?

    Any ideas? Maybe I need more CPU power? Right now it's a Pentium D dual core 2.8GHz with 4Gb RAM. It is a dedicated box for storage, no other activity.
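
    One way to separate a bus/controller ceiling from software-RAID overhead (a sketch; the device names are examples) is to read all six member disks in parallel, outside of md, and sum the rates. If the combined figure is also around 300-340Mb/s, the bottleneck is the PCIe link and controller rather than the RAID layer or CPU:

        # read 1 GiB from each disk simultaneously, bypassing the page cache
        for d in sda sdb sdc sdd sde sdf; do
            dd if=/dev/$d of=/dev/null bs=1M count=1024 iflag=direct &
        done
        wait

    For reference, a single PCIe 1.0 x1 lane carries roughly 250MB/s of raw signalling (nearer 200MB/s usable), which matches the 200Mb/s plateau seen on the controller.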


  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable.

    The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security work, and low-level backups are also done by the service provider, though we do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course.

    Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?


  • USB Device Not Recognized (Mac)

    - by Nargis
    Unfortunately, my Mac Pro has also made one of my USB storage devices inoperable. The data on that USB device was lost, but other devices, such as another USB drive and a USB keyboard, are unaffected. I have heard that my friend usually triggers this problem by having at least two devices plugged in, typically thumb drives/USB flash drives: once a second flash drive is plugged in, it becomes unrecognized. I have only two USB ports, and at first I thought a port came loose when I connected two USB devices. But later I found that these hidden files (".Spotlight-V100", ".TemporaryItems", ".Trashes", and "._.Trashes") are created by Mac OS, and before the USB device became unrecognized I had deleted those files; my friend had done the same.

    Now I don't want to test whether the next USB device becomes unrecognized, and I won't delete any hidden system file inside the flash drives. But I really want to know why these problems happened. Can I delete these hidden files when the drive is connected only to a virtual machine (Vista), since I used to delete all useless hidden files from USB flash drives? Any suggestions or thoughts to prevent this, or alternative ways to fix the problem without data loss, would be much appreciated.
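
    For context: those hidden items are created automatically by OS X (Spotlight's index, temporary items, and the per-volume Trash), and deleting them normally just causes the OS to recreate them on the next mount; by itself that is an unlikely cause of a dead drive. A sketch for keeping a stick clean without deleting anything by hand (the volume name is a placeholder):

        # stop Spotlight indexing this volume and delete the existing index
        mdutil -i off /Volumes/USBSTICK
        mdutil -E /Volumes/USBSTICK

        # a marker file that tells OS X never to index this volume
        touch /Volumes/USBSTICK/.metadata_never_index

    Deleting the hidden files from a Windows VM is equally harmless in itself; ejecting the volume cleanly before unplugging matters far more than which OS removed the files.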


  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements.

    The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

        2 tables
        lots of INSERTs
        no UPDATEs
        some DELETEs, once a day at most
        some user-generated SELECTs with JOINs
        a huge data set

    What would an optimal server configuration (software and hardware; I assume for example that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes...) if needed.
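
    A sketch of the tuning this workload usually points to (the values are illustrative for a dedicated box with around 16 GB of RAM; the table and column names are placeholders): generous shared buffers, relaxed checkpoints for the insert stream, and a composite index serving the tag-based JOIN:

        # postgresql.conf (illustrative values)
        shared_buffers = 4GB           # ~25% of RAM on a dedicated server
        effective_cache_size = 12GB    # what the OS page cache can realistically hold
        checkpoint_segments = 32       # spread out checkpoint I/O under heavy INSERTs
        wal_buffers = 16MB
        maintenance_work_mem = 512MB   # faster index builds and vacuums

        -- supporting "find ids by tag value", then join to the entries table
        CREATE INDEX tags_name_value_idx ON tags (name, value, entry_id);

    Partitioning the entries table by insertion date is also worth considering: the daily purge becomes a cheap DROP of the oldest partition instead of a large DELETE plus vacuum, and on the hardware side RAID 10, as suggested, suits the random reads these JOINs generate.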


  • Latitude D600 USB port problem

    - by Moab
    Both USB ports stopped communicating on my D600. They still have power (my optical mouse lights up), but no device works on the ports, and everything looks fine in Device Manager under both halves of my XP/W7 dual boot. I checked the BIOS; there's not much in there for USB. No USB device shows up when I use the F12 boot device menu either, so it must be some hardware issue. I have another hard drive with Ubuntu on it; I popped it in and USB does not communicate with it either. The ports appear to have 5V but no communication.

    Any ideas besides another motherboard, or a USB card for the PCMCIA slot? (From my research the PCMCIA cards don't work too well, and I mostly use the ports for mass storage devices, for which PCMCIA slots don't supply enough power.) Thanks to all who answer with last-ditch efforts. I hate to give up on it; it's been good to me and still runs rather well for its vintage.

    EDIT: I did inspect the ports with a flashlight and did a partial disassembly of the laptop in an attempt to check the solder joints, but it would require complete motherboard removal to see them, so that is where I stopped.


  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total: three of them are Macs, and the rest are Windows (1 Vista, 4 Win7 and 2 XPs). I'm very open to what the method should be, but you should also consider the following:

        Very limited resources
        Quite "small" bandwidth: 4 MBs for all (download), 0.4 MBs (upload - yep, that's it), though this might get a little bit better

    One of the main things to back up would be the mail. Considerations: all the Windows computers use Outlook, mainly 2003, and one Mac uses Outlook too (for Mac, of course - not 2011 yet). We also have to back up the files:

        Not a huge amount
        Very few very big files
        Very organized (by machine)

    What I would like is to hear your opinions as to which would be the best method (or combination of methods - preferably one, of course). We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great; just remember that the bandwidth is unbearable. Last thing to consider: we would like to do weekly backups (unless the method is very easy, of course). Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update; please ask for any clarification needed! And please avoid answers like "upgrade all to Windows 7 and throw away your Macs" :) ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it, for a lot of reasons.
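
    With 0.4 MBs of upload, pushing anything but the smallest mail folders to the cloud is effectively ruled out, so one sketch worth considering (host names and paths are examples) is a local pull model: a cheap Linux box or NAS that pulls from every machine on a weekly cron using rsync, which the Macs speak natively and Windows can serve via cwRsync or DeltaCopy. Hard-linked snapshots keep weekly history almost for free:

        # weekly job on the backup box: pull one machine into a dated, deduplicated snapshot
        rsync -a --delete --link-dest=/backup/pc01/last \
            pc01:/Users/ /backup/pc01/$(date +%F)/
        ln -sfn /backup/pc01/$(date +%F) /backup/pc01/last

    For the Outlook 2003 side, the PST files just need to be closed (Outlook not running) when the job fires, so scheduling the pull overnight usually covers it.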


  • WD Caviar Green Extremely Slow

    - by Steven
    I am encountering a really weird problem with my WD Caviar Green HDD. First of all, I have 2 HDDs in my desktop: one 160GB Seagate holding my Win7 Ultimate x64, and the problematic one, a WD 1.5TB Caviar Green used for storage.

    The problem is kind of weird. When I transfer files from my Seagate (C:) to my WD (D:), the speed is good (50-60MB/s). The problem arises when I transfer too "many" large files: the transfer speed goes straight down to kilobytes/s. After I cancel the transfer and access D:, even entering a folder takes some 10 seconds to load. And the problem arises not only when transferring files to D:; it seems my WD can't handle much activity at all. For instance, last time I installed a game on D: I hit a lot of lag after playing for some time. When the same game is installed on C:, no problem arises. Does anyone know what the problem is?

    P/S: There is one temporary workaround I used to try. After the "situation" occurs, I access as many folders on D: as I can and let them load; repeating that and giving it some time brings D: back to speedy transfers. However, large transfers cause the situation to happen again. Does it have something to do with the cache, whatsoever?
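
    One non-destructive check (a sketch; smartmontools runs on Windows too, and the device name is an example) is to read the drive's SMART attributes and compare Load_Cycle_Count against Power_On_Hours. Caviar Greens park their heads very aggressively (IntelliPark), and an enormous cycle count is one classic signature of a Green drive struggling under sustained load, though it may turn out to be unrelated here:

        smartctl -A /dev/sda

    High values in Reallocated_Sector_Ct or Current_Pending_Sector in the same output would instead point at plain media failure.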


  • Azure Virtual Machines - what fault tolerance do they provide?

    - by Borek
    We are thinking about moving our virtual machines (Hyper-V VHDs) to Windows Azure, but I haven't found much about what kind of fault tolerance that infrastructure provides. When I run a VHD in Azure, I've got two questions:

    Is my VHD and all the data in it safe? I think uploaded VHDs use the "Storage" infrastructure, so they should be automatically replicated to multiple disks and geographically distributed, but should I still make a full-image backup just to be safe? (Note that of course I will be backing up the actual data inside the VMs that I care about; I just want to know if there is a chance greater than 0.0000001% that one day I will receive an email from Microsoft telling me that my VM is gone and that I should create or restore it from scratch.)

    Do I need to worry about other things regarding the availability of my VMs? I mean, when I have an on-premise server I need to worry about the hardware itself, about the host operating system, what would happen if my router failed, if my Hyper-V host's C: drive failed, etc. Am I right in thinking that with Azure, their infrastructure takes care of all of this?

    Thanks.


  • How to install Red Hat Enterprise Linux on Apple MacBook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified that the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text all-generic-ide" on the boot line; I have also removed the ide parameter and used just "linux text". The results I get are that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading ata... something (it disappears too fast for me to read). Then the machine freezes. I pressed the Alt function keys to see if I could look at the system log; here is what it says:

        Alt-F3: trying to mount CD device hda
        Alt-F4: status error: hda: lastFailedSense
                hda: Failed opcode was: unknown
                hda: Lost interrupt
                hda: Drive not ready for command
                ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive; is that possible? Thanks again for the support.

    PS: I posted in the Apple support forums (Apple.com Support Discussions > Boot Camp > Installation and Storage) and didn't get an answer.
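
    The Alt-F4 messages do point at the optical drive's IDE/ATAPI handling rather than the target disk. A sketch of boot lines sometimes cycled through for this class of lockup (all standard kernel/installer parameters, with no guarantee that any of them fits this hardware):

        linux text acpi=off
        linux text noapic nolapic
        linux text pci=nomsi

    If none of those helps, sidestepping the CD drive entirely (booting the installer and pulling the packages over HTTP/NFS with "linux text askmethod", or installing from USB) is the other standard way around a flaky ide-cd path.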


  • What is the maximum number of Remote Desktop connections for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30GHz (4 cores, 8 logical processors) with 8 GB RAM. The main HD is a Samsung 840, and the bulk storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure.

    My question is: given this hardware, approximately how many users can the system support via Remote Desktop Connection? Assume there are no licensing limits, and that these are not admin users (I know there is a two-admin limit). This boils down to: what resources does one remote connection require? RAM? A percentage of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know that, you know how many you could fit on the machine before something is maxed out. In reality, users would mostly be in Excel (multi-MB spreadsheets), and I know the approximate resources currently required by each copy of Excel.
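
    A back-of-the-envelope sketch (the per-session figures below are assumptions for illustration, not measurements): RAM is almost always the binding constraint for session hosts, so divide what remains after the OS by the per-user footprint.

        8 GB total - ~2 GB for the OS and services    = ~6 GB for sessions
        idle session (logon, explorer, light browsing): ~100-150 MB
        session running Excel with multi-MB workbooks : ~300-500 MB

        => roughly 6144 / 500 = about 12 concurrent Excel-heavy users,
           or 40+ mostly idle ones, before memory maxes out first

    At those counts, four Xeon cores and gigabit networking rarely saturate before RAM does; substituting the actual per-copy Excel figure you already know refines the estimate directly.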


  • Compatibility of Fedora install on a Hybrid drive

    - by kjh
    I recently bought an ultrabook with a 500GB/32GB HDD/SSD hybrid drive, and I'm having trouble replacing Windows on it with Fedora 17: the installer errors out with an unhandled exception. Is Linux compatible with hybrid drives, or can the operating system on a hybrid drive not be replaced?

    Edit: here are the steps.

        I select "special storage devices", because the installer ignores my hard drives otherwise.
        At this point I get the message: "Disk contains BIOS RAID metadata, disk sda will be ignored."
        I can pick a hostname, select my timezone and set a password.
        At the install type screen, no matter what I select (use all free space, replace Linux systems, create custom partition, etc.), once I click next it says an unhandled exception has occurred, and I can no longer proceed with the installation.

    Here is the error message:

        anaconda 17.29 exception report
        Traceback (most recent call first):
          File "/usr/lib64/python2.7/site-packages/pyanaconda/bootloader.py", line 183
            self.stage1_drive = self_drives[0]
          File "/usr/lib64/python2.7/site-packages/pyanaconda/rw/cleardisks_gui.py", line ...

    ...and tons more lines like that.
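
    The "BIOS RAID metadata" message is usually the real blocker here: hybrid ultrabooks commonly ship with Intel RST (fakeraid) caching configured, and the leftover metadata makes anaconda ignore sda, after which the installer trips over having no usable disk. A sketch of removing the stale metadata from a live session (destructive to the RST caching setup, so back up first; the device name is an assumption):

        # list any fakeraid metadata found on the disks
        sudo dmraid -r

        # erase the stale metadata from the drive
        sudo dmraid -r -E /dev/sda

    If dmraid reports nothing, running "wipefs /dev/sda" (with no options it only lists signatures, wiping nothing) shows what metadata anaconda might be reacting to.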

