Search Results

Search found 9318 results on 373 pages for 'rescue disk'.


  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how best to synchronize the two so I can switch between them more readily.

    I've solved a few obvious things by using online services:

      - Email is hosted on IMAP.
      - Working files are in Dropbox.
      - Source code is managed in git.

    However, the following are things I always miss when jumping on the laptop:

      - Installed applications (current versions)
      - Installed libraries & utilities (/usr/local)
      - Apache VirtualHosts & other configurations (/etc)
      - Disk image files for VMs

    My current method is to connect the MacBook via FireWire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time-consuming due to things like my virtual machine image files, cached emails, etc.

    How can I make this faster and easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
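    A rough sketch of the nightly rsync variant I have in mind, assuming the MacBook is reachable over SSH as "macbook.local" (the hostname and exclude paths are placeholders, not something I've tested):

        # One-way nightly sync of the home directory over the home network,
        # skipping the largest, frequently changing items (VM images, cached mail).
        rsync -a --delete \
          --exclude 'Documents/Virtual Machines/' \
          --exclude 'Library/Mail/' \
          /Users/me/ me@macbook.local:/Users/me/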

    Read the article

  • Why can't I physically access my machine after a remote session?

    - by Steve Crane
    I have a Dell Optiplex 960 desktop running Windows 7 64-bit at work. I typically leave it locked rather than logged off when I go home, so that I'm able to remote in from home and continue working if I wish. This is where the problem comes in. If I don't remote in there is no problem and I can simply unlock it the next morning. It's when I do remote in that I have a problem.

    Remote sessions work as expected, but when I get to work the next morning the machine appears to have gone into a sleep or hibernate state, from which no amount of mouse moving or keyboard pounding will wake it. The machine is not hanging, as remote sessions to it are still possible; it seems that physical access from its own mouse and keyboard is lost. The only way to regain access is to press and hold the power switch for several seconds until the machine shuts down. Of course this means Windows does not shut down gracefully, and after powering up it takes several minutes for the machine to boot and reach the login prompt, presumably while it checks the disk.

    Has anyone else seen something like this?

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI, and plenty that can replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work.

    The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to external hard drive could be managed from the appliance. The benefits of such a setup would be:

      1. Very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance.
      2. Offsite backups could be managed by multiple trusted people without granting access to the server room.
      3. Windows imaging provides poor man's deduplication, so each backup volume can contain a decent backup history.

    I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low to medium cost device. Alternative solutions welcome.

    NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • How to organise storage for media content such as video and music?

    - by thor
    Currently, we have a single server hosting all content: music, video and software. This content is downloaded by users through HTTP. Now free space is coming to an end and we are exploring different ways of extending our storage capacity. We want to do it cheap, simple and reliable (protected from disk/server faults). Currently, we see two ways:

      1. Add a couple of cheap servers with 4 disks each (RAID1?), and run some distributed file system on top, like GlusterFS.
         Pros: hopefully we will see all our disks as a single flat file system, just dump content into it and be done.
         Cons: could be tricky to configure and to handle faults.

      2. Add a couple of cheap servers, all running HTTP servers. Each piece of content (be it a music file or a video) is placed on two randomly selected servers.
         Pros: don't have to deal with RAID, as content is duplicated; a single server failure does not bring down any part of the content; doubled distribution capacity (as any single file could be downloaded from either of the two servers hosting it).
         Cons: requires some scripting for distribution of content and for adding/removing servers.

    Do we miss any other ways? Which of the aforementioned options seems to be the best?
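    For what it's worth, option 1 with a 2-way replicated GlusterFS volume would look roughly like this (hostnames and brick paths are placeholders, untested):

        # Run once on either node after both servers are installed:
        gluster peer probe media2

        # A replica-2 volume: every file is stored on both servers,
        # and clients see one flat namespace.
        gluster volume create mediavol replica 2 \
            media1:/export/brick1 media2:/export/brick1
        gluster volume start mediavol

        # Mount the combined namespace where the HTTP server expects content:
        mount -t glusterfs media1:/mediavol /var/www/content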

    Read the article

  • How is it made sure that magnetic or electric fields from devices like transformers or fans close nearby do not affect hard drives?

    - by matnagel
    Fans and transformers which are inside the server case create magnetic and electric fields. Electric fields can be easily shielded, but what about magnetic fields? They can only be shielded with high-cost materials like mu-metal: http://en.wikipedia.org/wiki/Mu-metal

    If a hard drive is installed too close to an intense transformer field, how is the magnetically stored information on the ferromagnetic surfaces of the disk kept safe? Even if drives are shielded, where are the limits? Is there some technical investigation or recommendation from manufacturers about this? (I never heard about anything and never had any problem, but I am interested in some facts. This is much preferred over what you believe or a habit you developed. Please try to give some solid information.) I have built and repaired many servers and sometimes I did put the hard drive on top of the power supply.

    Edit: This question is not about frequencies that could affect the drive via the power or data connectors of the drive; those are electronically decoupled, and that's another question.

    Edit 2: The Wikipedia page states that the motor inside the drive is shielded with mu-metal. It is obvious that manufacturers have to take care of this. This question is about such influences from outside the drive.

    Read the article

  • I have enabled the hidden administrator in Windows 7 Home, but programs still don't work

    - by Angela
    I have Windows 7 Home Premium, and would like to do some maintenance tasks such as running Disk Defragmenter. However, this and other programs and applications that I'm accustomed to using are now blocked. For these programs, there is a shield icon next to their icons and nothing happens when I click on them. I notice that the screen blinks slightly, but I do not get prompted for a password and the program still does not run. It seems these programs may only be accessible through an Administrator account. However, right-clicking and selecting "Run As Administrator" does not work.

    After some research, I found a way to enable the hidden built-in Administrator account:

      1. I booted the computer into safe mode.
      2. In the command prompt, I typed net user administrator /active:yes.
      3. I gave the account a password.
      4. I rebooted the system.

    There is now an Administrator account on the home screen. However, the locked programs behave no differently for me when I use this account. What could cause this problem? How can I fix it?

    Read the article

  • RTorrent stops my torrents, crashes, and I have to manually re-add torrents and start them. How can I stop this cycle of doom?

    - by meder
    I cannot use Transmission, which is the best torrent client, because it's banned from one of the trackers I use, so I am forced to use rtorrent. Normally I am all for command-line programs, however rtorrent (0.8.6/0.12.6) is simply frustrating. It is not intuitive, imo.

    I have 400 MB left on the HD and that's more than enough to dl this 200 MB avi. rtorrent stops the download, though. It says [CLOSED] near the torrent. I do ctrl-r and that invokes the local hash check, and after that's done rtorrent simply dies (wtf?). Afterwards, it gives me:

        rtorrent: TrackerManager::send_later() m_control->set() == DownloadInfo::STOPPED

    So that leads me to open rtorrent again, then hit ENTER and /home/meder/file.avi.torrent, down arrow, and ctrl-S.

    I am looking for multiple things:

      1. How can I tell rtorrent to not worry about disk space? Again, it stops the torrent if my HD only has 400 MB when the torrent I'm dling is 200 MB (there are no other torrents).
      2. Why does ctrl-R fail hard? Why does it cause rtorrent to crash?
      3. If #2 is not solvable, can someone provide an easy way to add a torrent and start it, a more efficient method than typing the torrent name, hitting the down arrow, and ctrl-S?
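    For items 1 and 3, something along these lines in ~/.rtorrent.rc should help (paths are placeholders and the exact option names can differ slightly between 0.8.x builds, so treat it as a starting point rather than a verified fix):

        # The stock example config closes torrents when free space drops below
        # 100M (checked every 60s); lower the threshold so a 200 MB download
        # on a drive with 400 MB free is left alone.
        schedule = low_diskspace,5,60,close_low_diskspace=10M

        # Auto-load and start any .torrent dropped into a watch directory,
        # instead of typing the path and hitting ctrl-S by hand.
        directory = ~/downloads
        session = ~/.rtorrent-session
        schedule = watch_directory,5,5,load_start=~/watch/*.torrent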

    Read the article

  • Ways to recover data from external hard drive

    - by Howard Benson
    I use an external hard disk for backups of my Mac with Time Machine (OS X 10.5.8). I have done something wrong and I have found important folders in the recycle bin. These folders come from the external HD. They are backup folders (backups.backupdb) and others. I have tried to restore them by dragging and dropping. Some of them came back to the external HD after a while. For the others it takes hours to "prepare to copy" and then it says "there's no space to copy" on the external HD.

    It's strange. The files are now in the recycle bin (180 GB), and the external HD should have lots of free space. But it isn't really so: the external HD is not freed of space even though these files are in the bin. I'm also not able to use Time Machine now (and I have "lost" old backups) for the same reason; the external HD says that it has no free space.

    I ask for advice. Thanks.
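    One thing worth checking first: on OS X, items "deleted" from an external volume are moved into a hidden .Trashes folder on that same volume, so the space is not released until the Trash is emptied. A quick, read-only way to confirm where the space went (the mount point name below is just an example):

        # How much of the external drive is still held by trashed items
        sudo du -sh /Volumes/Backup/.Trashes

        # Overall usage vs. free space on the volume
        df -h /Volumes/Backup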

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine.

    However - as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again.

    What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling.

    Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn, and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:

      - uses a database (MySQL, SQLite, etc.) for configuration and data storage
      - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
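    If the sar fallback wins out, a rough sketch of what that might look like is below. It assumes sysstat and the AWS CLI are baked into the instance image; the bucket name and paths are placeholders:

        #!/bin/bash
        # Collect 1-minute sar samples for the life of the instance and ship
        # the binary data file somewhere durable before termination.
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SA_FILE=/tmp/sa-${INSTANCE_ID}

        # No count argument, so sar keeps sampling until the instance goes away.
        sar -o "$SA_FILE" 60 >/dev/null 2>&1 &

        # ... job processing happens here ...

        # Just before shutdown, archive the file keyed by instance ID so it can
        # be replayed later with `sar -f` or exported with sadf.
        aws s3 cp "$SA_FILE" "s3://perf-archive/${INSTANCE_ID}.sa"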

    Read the article

  • What USB key would you recommend for running a Windows 7 VM from?

    - by Darryl Hein
    Because I can't find a good PHP editor for OS X, I develop in Windows with PhpEd. At the moment, my development time is split between a desktop and a laptop. To partially solve the problem of having two different environments, I have installed a virtual machine (through VirtualBox) and put the hard drive file on an external hard drive. At the moment, I've been connecting it through FireWire 800.

    I have two problems with this setup: (1) the hard drive is fairly large, so to carry the laptop and hard drive I pretty much require a backpack; (2) the hard drive requires quite a bit of power and therefore reduces the battery life (by about 40%).

    My thought is to move the VM hard drive onto a USB key. I realize it will be slower, but as I'm just using it for PHP development, there isn't a lot of disk activity in the VM. The only really intense time is boot-up; otherwise, it just about sits idle. Does anyone have any suggestions on a USB key to use for the VM? It would need to be a minimum of 32GB.

    Read the article

  • Windows File Access Denied

    - by Tom
    I seem to have a general problem with "access denied" on Windows. It manifests itself every time if, e.g.:

      - my bat file calls a compiler that creates a file on disk
      - my bat file renames a file

    But I also have files downloaded (Firefox) to the Windows desktop where Windows gives me "access denied" if I try to delete the file. I have tried disabling AVG and making an exception in the AVG resident shield. (I have also checked with Task Manager and Winternals Process Explorer that there is no still-running process that could be causing the locks.)

    This is Windows 7. My user account is an administrator. All files are created by the same user account. The problem is recent, but some things I first noticed yesterday (when I started calling .bat files again, which I have used for many years).

    I have tried:

      - Starting e.g. Windows Explorer with "run as administrator", but that makes no difference.
      - Right-click - Properties - Security and changing permissions/ownership (I also get "access denied" when trying this, so it does not help).

    Here is a screenshot of what happens if I try to change the security of a "locked" file (the locking occurs continuously, every time the file is created). If I click on it, it states I am not the owner, which baffles me as I just created it. (Yes, through a .bat file calling executables that create the file, but all running under my administrator user account. Interestingly, after having this dialog open, the file somehow sometimes suddenly seems to allow me to delete it.)

    Read the article

  • Failing to load rootfs: Ubuntu 10 + grub2 + rootfs ext4 w/ RAID1

    - by James
    I am having problems booting a new Ubuntu 10 (server) install. My primary HD (/dev/sda) is laid out as follows:

        Device Boot      Start       End      Blocks   Id  System
        /dev/sda1   *        1        18      144553+  83  Linux                  <-- /BOOT
        /dev/sda2           19    182401  1464991447+   5  Extended
        /dev/sda5           19      2207    17583111   fd  Linux raid autodetect
        /dev/sda6         2208     11934    78132096   fd  Linux raid autodetect  <-- / (ROOTFS)
        /dev/sda7        11935    182401  1369276146   fd  Linux raid autodetect

    The rootfs is part of a RAID1 (software) array (currently degraded):

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md2 : active raid1 sda6[1]
              78132032 blocks [2/1] [_U]

    The UUIDs for the partitions are as follows:

        # blkid /dev/sda1
        /dev/sda1: UUID="b25dd301-41b9-4f4d-9b0a-0e31713dd74c" TYPE="ext2"
        # blkid /dev/sda6
        /dev/sda6: UUID="af7b9ede-fa53-c0c1-74be-31ec752c5cd5" TYPE="linux_raid_member"
        # blkid /dev/md2
        /dev/md2: UUID="a0602d42-6855-482f-870c-6f6ecdcdae3f" TYPE="ext4"

    Finally, I have my grub2 menuentry set up as follows:

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Ubuntu, with Linux 2.6.32-25-server' --class ubuntu --class gnu-linux --class gnu --class os {
            insmod ext2
            insmod raid
            insmod mdraid
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set b25dd301-41b9-4f4d-9b0a-0e31713dd74c
            linux /vmlinuz-2.6.32-25-server root=UUID=a0602d42-6855-482f-870c-6f6ecdcdae3f ro nosplash noplymouth
            initrd /initrd.img-2.6.32-25-server
        }

    When I attempt to boot, grub loads OK, however I eventually get the following error message:

        Gave up waiting for root device.
        ALERT /dev/disk/by-uuid/a0602d42-6855-482f-870c-6f6ecdcdae3f does not exist. Dropping to a shell!

    If from the grub bootloader I open a grub command line, I can ls (hd0,) and it lists the correct partitions with the UUIDs as shown above - sda6 shows 'a0602d42-6855-482f-870c-6f6ecdcdae3f' (the RAID UUID). If I ls (md2)/ it properly lists all the files on the RAID1 filesystem (ext4), so it doesn't appear to be an issue accessing the raid device.

    Does anyone have any suggestions as to what the problem might be? I can't figure this one out.
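    For reference, one thing worth checking is whether the initramfs can actually assemble md2 at boot. A rough sketch, from a live CD with the installed system mounted and chrooted at /mnt (untested, paths are the usual Ubuntu defaults):

        # Inside the chroot of the installed system
        mdadm --examine --scan >> /etc/mdadm/mdadm.conf   # make sure md2 is defined
        update-initramfs -u                               # rebuild the ramdisk with mdadm support

        # Sanity check that mdadm and the md modules made it into the initrd
        lsinitramfs /boot/initrd.img-2.6.32-25-server | grep -i -e mdadm -e md-mod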

    Read the article

  • Getting access to physical drives in ESXi v5.5 installation on Dell PowerEdge R710 with PERC 6/i

    - by Big-Blue
    I acquired a Dell PowerEdge R710 server a few days ago, which includes a PERC 6/i RAID controller. The server is now fitted with a SATA SSD, one SAS drive and four SATA HDDs, all of which I would like to be passed through to ESXi in an "as-is" state, without creating any logical drives in the RAID controller.

    Now, the ESXi v5.5 installation image I grabbed from the Dell homepage starts just fine, but only lists the logical drives and connected flash drives as possible installation targets, not any of the physical drives. If I create a small logical drive on my SSD (which the PERC 6/i detects as SATA-SSD type), the ESXi install wizard lists the SSD value on that drive as false, which is far from optimal. I have also tried disabling the RAID controller entirely in the setup, but that also did not help. Everything that should enable passthrough is enabled in the BIOS, but that shouldn't be a concern at this early stage of the ESXi installation.

    How would I be able to install ESXi v5.5 to a part of my SSD that is connected to the storage controller, while giving it entire physical access to the disk (to allow for SMART values to be read, etc.)?

    Read the article

  • BIOS not detecting working SATA hard drive

    - by user28927
    Some time ago my power supply died. It's a long story from then till now, but the important bit is that I ended up with a new hard drive and a new power supply. I tested to see if my original hard drive was still alive, and it booted and worked perfectly until I turned it off. When I started it again it would not boot. I bought new SATA cables, assuming that the one I had was not seating properly (it was cheap and wobbly), but no dice.

    Upon start-up I am presented with a message telling me to insert boot media into the selected drive or add a drive and restart. Neither the new nor the old drive is detected by the BIOS, my Vista install disk, or my bootable Linux USB drive. When I remove all of the RAM the computer ceases outputting visual information, and upon reinstalling the RAM and starting up again it gives me a "failed overclock" error.

    So, does anyone have an idea as to what might be going on? I'm completely lost at this point.

    Read the article

  • Toshiba Satellite P755D USB 3.0 Drivers Missing - Windows 7 Professional

    - by nicorellius
    I bought a Toshiba Satellite P755D recently and installed Windows 7 Professional on the machine. It runs great. But I noticed the exclamation point in the yellow triangle icon in the Device Manager next to the Universal Serial Bus (USB) controller (I'm assuming this is the USB 3.0 controller, because mine doesn't recognize devices). Normally, when this kind of thing happens I go to the manufacturer's website, download the appropriate drivers and call it a day. But not this time...

    I browsed to my model and found no driver for the USB 3.0 controller. I tried other hardware and utility drivers, thinking they would be bundled. No luck. I tried looking up the motherboard in my machine. Generic name, no luck. I then called Toshiba technical support and they tried basic troubleshooting, e.g., uninstall the device and reboot for auto-installation; no luck. I popped the Windows 7 disk back in and tried to get information that way, no luck. Finally, the technical support guy said he would look into the engineer's system to see if there was a specific driver available, and that's where I'm at.

    The technician told me that these USB 3.0 drivers come within the native driver pack in Windows, but that doesn't seem to be the case. Any ideas?

    EDIT - See attached screenshots.

    Read the article

  • Why does my PC successfully boot only when unplugged for more than a few minutes?

    - by philg
    I have an HP Pavilion Elite desktop computer, model HPE-490t. I like it because it didn’t cost too much, boots itself from an SSD, came with 16 GB of RAM, and has 6 CPU cores for editing video and camera RAW images. It has one behavioral quirk that I cannot explain, however. The recent power interruptions here in the Northeast got the machine into a state where it could not be restarted. It would power up for a second or two, shut down, and then power up again, never being able to get to the point of showing anything on the monitor. I unplugged it for about 10 seconds and plugged it back in. Same behavior (fails to boot). I unplugged it and walked away for an hour, then plugged it back in and it worked perfectly! I think something similar happened after installing a second hard disk drive into this machine. So the question is why does the computer behave differently depending on how long it has been unplugged? Where is energy stored that affects the machine’s ability to boot? Capacitors in the power supply? Battery on the motherboard (there is one for the clock, but that wouldn’t be exhausted by being unplugged for an hour, I don’t think)?

    Read the article

  • squid bypass for a domain

    - by krisdigitx
    I am using Squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #

        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent

        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:             1440    20%     10080
        refresh_pattern ^gopher:          1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0       0%      0
        refresh_pattern .                 0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid

        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
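    In case anyone also needs the caching side handled: on top of the acl / always_direct pair above, adding a "cache deny" rule should keep objects from that domain out of the cache entirely. This is a guess at a complete solution (Squid 2.6+/3.x syntax), not something I have verified against adzap:

        acl allow_domains dstdomain www.cnn.com
        cache deny allow_domains
        always_direct allow allow_domains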

    Read the article

  • How to access previous VHD versions of system backup?

    - by feklee
    Quote from the 31 Oct 2009 TechNet article "Learn more about system image backup":

        During the first backup, the backup engine scans the source drive and copies only blocks that contain data into a .vhd file stored on the target, creating a compact view of the source drive. The next time a system image is created, only new and changed data is written to the .vhd file, and old data on the same block is moved out of the VHD and into the shadow copy storage area. Volume Shadow Copy Service is used to compute the changed data between backups, as well as to handle the process of moving the old data out to the shadow copy area on the target. This approach makes the backup fast (since only changed blocks are backed up) and efficient (since data is stored in a compact manner). When restoring the image, blocks will be restored to their original locations on the source disk. If you want to restore from an older backup, the engine reads from the shadow copy area and restores the appropriate blocks.

    For the last few days, a daily system backup of drive C: to drive E: has been scheduled and run by Windows 7 Backup and Restore. Drive C: currently holds 233 GB of data, which fits comfortably on drive E:, a 1 TB drive with 727 GB of free space remaining.

    How do I access the previous version of a VHD? I right-clicked on files and folders in E:\WindowsImageBackup and looked for Previous Versions, but it always says: "There are no previous versions available."

    Read the article

  • external drive and CentOS - Reset high speed USB device number

    - by Phil
    I have 2 external drives (3TB) and both will not work with my CentOS box. I tested them in Windows (different machine): no problems. (Kernel 2.6.32-279.9.1.el6.i686.)

    dmesg reports:

        usb 2-2: new high speed USB device number 3 using ehci_hcd
        usb 2-2: New USB device found, idVendor=2109, idProduct=0700
        usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        usb 2-2: Product: USB 3.0 SATA Bridge
        usb 2-2: Manufacturer: VIA Labs, Inc.
        usb 2-2: SerialNumber: 0000000000006121
        usb 2-2: configuration #1 chosen from 1 choice
        scsi6 : SCSI emulation for USB Mass Storage devices
        usb-storage: device found at 3
        usb-storage: waiting for device to settle before scanning
        usb-storage: device scan complete
        scsi 6:0:0:0: Direct-Access     ST3000DM 001-9YN166 CC4B PQ: 0 ANSI: 2
        sd 6:0:0:0: Attached scsi generic sg3 type 0
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] 5860533165 512-byte logical blocks: (3.00 TB/2.72 TiB)
        sd 6:0:0:0: [sdd] Write Protect is off
        sd 6:0:0:0: [sdd] Mode Sense: 00 06 00 00
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sdd: sdd1
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sd 6:0:0:0: [sdd] Attached SCSI disk

    Trying to use cfdisk / fdisk / gdisk or even fdisk -l results in the program hanging, and dmesg reports:

        usb 2-2: reset high speed USB device number 3 using ehci_hcd
        usb 2-2: reset high speed USB device number 3 using ehci_hcd
        usb 2-2: reset high speed USB device number 3 using ehci_hcd

    I have the same 2 drives physically installed in the computer via SATA. Any ideas?
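    Two cheap experiments that are easy to try from the CentOS box, assuming the drive shows up as /dev/sdd as in the dmesg output above. Neither is guaranteed to help; reading the label with a GPT-aware tool rules out the >2TB/fdisk angle, and capping the transfer size is a commonly suggested workaround for USB bridges that reset on large requests:

        # Read the partition table with a GPT-aware tool
        parted /dev/sdd unit s print

        # Cap the request size for this disk, then retry
        echo 64 > /sys/block/sdd/queue/max_sectors_kb
        parted /dev/sdd unit s print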

    Read the article

  • Small office network setups

    - by user39822
    I work at a small office and we're overhauling our network setup there. We're a web dev company and at the moment we have 50+ production sites running on the same machine that runs our internal email, which is just plain stupid. We're moving all our client hosting off-site and are now looking for something to run our internal office requirements. Below is a brain dump:

      - Roughly equal numbers of Macs & PCs, about 25 machines in total.
      - We need a central "server" to host files that should be accessible to everyone as a "network drive". If possible we'd like to use low-cost hardware for this (Mac or Windows based). Disk space should be upward of 1TB.
      - Ideally we should also be able to run a small web server on this machine (LAMP stack) to run some planning and billing applications we wrote ourselves.
      - We need some sort of MS Exchange alternative for things like a shared calendar and especially being able to set Out of Office replies.
      - We have one printer that is connected to the network.
      - The setup should preferably be manageable via a graphical interface and NOT require command-line skills.
      - Users want to keep using Apple Mail or MS Outlook.

    After a quick Google I came across the Zimbra collaboration suite. Can anyone recommend this, or any other solution for our office?
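    For scale, if a small Linux box ends up as the file server, the "network drive" part is essentially one Samba share that both the Macs and the PCs can mount; a minimal sketch (the share name, path and group are placeholders, and a GUI front end like a NAS appliance would hide this anyway):

        # /etc/samba/smb.conf (fragment)
        [office]
           path = /srv/office
           browseable = yes
           read only = no
           valid users = @staff
           create mask = 0660
           directory mask = 0770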

    Read the article

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers.

    The hardware is a laptop with a single SSD, and I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine, which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task at hand. Sometimes I have to develop with Windows tools, but generally Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM?

    The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk-sharing requirements. However, it would be useful for the sandbox to be able to "mount" the codebase from the natively running OS. Is this Samba or NFS again, depending on the native OS?

    Am I missing a trick which allows this to all work smoothly with all four environments running at once, without losing the SSD performance?
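    For the "sandbox mounts the codebase from the natively running Linux side" case, the NFS variant would look roughly like this (the subnet, paths and host-only network address are placeholders):

        # /etc/exports on the host that has the code partition mounted
        /mnt/codebase   192.168.56.0/24(rw,sync,no_subtree_check)

        # Re-read the exports table after editing
        exportfs -ra

        # Inside the sandbox VM
        mount -t nfs 192.168.56.1:/mnt/codebase /srv/code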

    Read the article

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a Proxmox HA configuration with two Dell R710 machines (dual 6-core processors in each) with enterprise-level RAID arrays. I would be using DRBD with a quorum disk on a third machine, and would dedicate two 1Gb NICs on each server to the DRBD communications. We would have approximately 12 to 14 virtual machines running on this pair of servers.

    The Proxmox manual recommends creating two DRBD resources: one for the virtual machines that normally run on server A and one for the virtual machines that normally run on server B. This is because of the Primary/Primary state in which this configuration runs. If both servers have VMs talking to the same DRBD resource and a split-brain situation occurs, there is potential for data corruption that must be resolved.

    While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way, and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.
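    To make the per-VM idea concrete, what I'm imagining is one small resource file per guest along these lines (hostnames, minor numbers, backing LVs, ports and replication IPs are all placeholders, and each additional VM would get its own port/minor):

        # /etc/drbd.d/vm101.res - one DRBD resource per VM
        resource vm101 {
          protocol C;
          on nodeA {
            device    /dev/drbd101;
            disk      /dev/vg0/vm-101-disk-1;
            address   10.10.10.1:7801;
            meta-disk internal;
          }
          on nodeB {
            device    /dev/drbd101;
            disk      /dev/vg0/vm-101-disk-1;
            address   10.10.10.2:7801;
            meta-disk internal;
          }
        }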

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:

      - Offline access (data must be available offline on each PC).
      - Preserve execution rights: some files are marked executable on a Linux partition. This flag should be replicated.
      - Efficient sync strategy: some of my files are 20GB and are changed quite often, but only in very small parts (VirtualBox images). Delta transmissions are welcome.
      - Efficient handling of space: no history for files; files shouldn't be copied to temp directories "just in case you break it".
      - It must propagate deletions of files.
      - Modification can happen on any of the 4 PCs and should be propagated when the other PCs are connected.

    Other specs of my situation: sync is over a LAN, and the total amount of data to be synced is around 180GB, in some ten thousand files. Changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are solved with "last one wins".

    I haven't found any good solution. I've been trying:

      - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on.
      - SparkleShare: doesn't handle large files nicely. It keeps a history of all your changes that grows indefinitely. They promise it will be fixed in the next releases, but at the moment it still doesn't fit my needs.
      - ownCloud: keeps a history of each file I change.
      - Coda? (Help! I couldn't set it up correctly!)
      - git-annex assistant: transforms all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!). Before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized.

    What should I try next?
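    A unison profile along these lines (paths and hostnames are placeholders, only lightly tested) covers the "last one wins" and permissions points, though it does not remove the long hashing phase on the 20GB images:

        # ~/.unison/sync.prf - run with: unison sync
        root = /home/gab/shared
        root = ssh://pc2//home/gab/shared

        batch = true        # no interactive prompts
        prefer = newer      # "last one wins" conflict handling
        fastcheck = true    # use mtime/size instead of re-hashing unchanged files
        times = true        # preserve modification times
        ignore = Name *.tmp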

    Read the article

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for some guidance on good idea / bad idea strategies. I'm sure I'm not in the "best practices" budget range.

    Currently, I have 3 Dell PowerEdges running XenServer in a pool. Each node has a Ubuntu file server, serving about 6TB. One is the primary; the other two are rsync targets for backup. The 6TB is stored on their respective local storage disks as an LVM of 3x2TB virtual disks. The file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows VMs, and stuff like that. Several of those VMs' disks reside on a QNAP NAS to play with live migration. These VMs are often clients of the primary file server (all the mail, web content and user files are stored on the file server, not on the mail, web, and Samba VMs).

    This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again.

    Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, have the VM images on a SAN or NAS, and use 2 or more NAS boxes as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk.

    I recognize there are plenty of ways to skin the cat. We've already skinned it a few ways. What makes sense?
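    For context, the "lightweight VM images on shared storage" option is cheap to try in XenServer; an NFS SR for the small VMs would be registered with something like this (the NAS hostname and export path are placeholders):

        # A pool-wide NFS storage repository, so the small VMs stay
        # live-migratable between nodes.
        xe sr-create content-type=user shared=true type=nfs \
            name-label="nfs-vm-images" \
            device-config:server=nas01.example.local \
            device-config:serverpath=/export/xen-images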

    Read the article

  • How do I stop linux from trying to mount android phone as usb storage?

    - by user1160711
    When I plug my Motorola Triumph into my Fedora 17 Linux box's USB port, I get an endless series of errors on the Linux box as it desperately attempts to mount the phone as a USB drive. Stuff like this:

        Jun 23 10:26:00 zooty kernel: [528926.714884] end_request: critical target error, dev sdg, sector 4
        Jun 23 10:26:00 zooty kernel: [528926.715865] sd 16:0:0:1: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 23 10:26:00 zooty kernel: [528926.715869] sd 16:0:0:1: [sdg] Sense Key : Illegal Request [current]
        Jun 23 10:26:00 zooty kernel: [528926.715872] sd 16:0:0:1: [sdg] Add. Sense: Invalid field in cdb
        Jun 23 10:26:00 zooty kernel: [528926.715876] sd 16:0:0:1: [sdg] CDB: Read(10): 28 20 00 00 00 00 00 00 04 00

    If I go ahead and tell the phone to allow Linux to mount the USB storage, the messages stop and I get a mounted drive, but if all I want to do is use the debug bridge, my log on Linux will continue to fill with this junk.

    Is there some udev magic I can do to make the system ignore this particular device as far as USB storage goes?

    I just noticed that if I tell the phone to enable USB storage, let Linux recognize the new disk, then tell the phone to disable USB storage again, I get one additional log message about capacity changing to zero, but the endless spew of messages stops. So I guess one workaround is to enable and disable USB right away.
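    A udev-based sketch of what I mean is below. The IDs are placeholders (22b8 is commonly listed as Motorola's vendor ID, but both values should be taken from lsusb for the Triumph), and I haven't verified this on Fedora 17 / udisks2:

        # /etc/udev/rules.d/99-ignore-phone-storage.rules
        # Tell udisks not to probe or automount the phone's mass-storage interface.
        SUBSYSTEMS=="usb", ATTRS{idVendor}=="22b8", ATTRS{idProduct}=="xxxx", \
            ENV{UDISKS_IGNORE}="1", ENV{UDISKS_AUTO}="0"

        # Heavier hammer: make the usb-storage driver skip the device entirely
        # (quirk flag "i" = ignore), via a module option instead of udev:
        #   echo 'options usb-storage quirks=22b8:xxxx:i' > /etc/modprobe.d/phone.conf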

    Read the article
