Search Results

Search found 17847 results on 714 pages for 'virtual disk'.

  • Why does Windows 7 have three system partitions?

    - by Ben
    I am using Windows 7 and wanted to make a system image (using the built-in Windows 7 tool), but Windows checked three partitions as System: the 100 MB partition, C (the install partition) and D (my partition for files; all programs are installed on C). I don't want to back up my D partition, but that is not really the point. I don't want Windows messing with my other partitions and marking them as system. Is there a way to limit Windows 7 to just partition C (the install partition)? If there is no way to stop Windows from marking other partitions as system, can I at least delete the files that make partition D a system partition?

    PS: All three of these partitions are on one physical disk; partitions on other disks are not treated as System.

    FACTS: desktop PC, no OEM partitions, I personally installed Windows 7 (many times) on the C partition. Why is my D partition checked as a System partition when I try to create a system image (using the Windows 7 Ultimate built-in tool), even though Windows (and all the software) is installed on the C partition? Is there a way to make D a "normal", non-system partition? Here is a picture of how it looks when I try to create a system image. Once again, why is D also a system partition?

  • iptables to block VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 with Apache, Tomcat, Asterisk and mail services. Now we needed to add VPN support for a special program. We installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling.

    To achieve the goal of tunneling only a single program through the VPN, we start the program with sudo -u username -g groupname command and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets.

    Problem: if the VPN fails, there is a chance that the special to-be-tunneled program communicates over the normal eth0 interface.

    Desired solution: all marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first.

    I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    The problem might be that the above iptables rules didn't work because the packets are first marked, then put into tun0 and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before or if they are going to the gateway of my VPN provider. Does anyone have an idea for a solution?

    Some config info:

        iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target   prot opt in out source    destination
        1     591K   50M MARK     all  --  *  *   0.0.0.0/0 0.0.0.0/0  owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK all  --  *  *   0.0.0.0/0 0.0.0.0/0  owner GID match 1005 CONNMARK save

        iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target prot opt in out  source    destination
        1       15  1052 SNAT   all  --  *  tun0 0.0.0.0/0 0.0.0.0/0  mark match 0x2a to:VPN_IP

        ip rule add from all fwmark 42 lookup 42
        ip route show table 42
        default via VPN_IP dev tun0
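
    One possible fail-safe, as a minimal sketch: it assumes the encapsulated packets leaving eth0 are generated by the OpenVPN daemon itself and therefore carry neither the group owner nor the mark, so a mark-based reject on eth0 only hits the leak case.

        # reject marked packets that try to leave via eth0 directly;
        # traffic that entered tun0 is re-sent by the openvpn process as
        # new, unmarked packets, so the tunnel itself still works
        iptables -A OUTPUT -o eth0 -m mark --mark 42 -j REJECT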

  • Windows 8 x64 with VMWare Workstation or inside ESXi

    - by Dommer
    I need to run several virtual machines on a Core i7-920 box with 12GB of RAM and a 256GB SSD to host the VMs. It also has a Highpoint RocketRAID 2720SGL RAID controller with a 12TB RAID 5 array. I want one of my VMs to run Windows 8 x64, to have access to the RAID array as a native disk (not as networked drives; it needs to run at full speed) and to be able to send files quickly across the network.

    Initially I thought I'd try to do this using ESXi 5, but I have been unable to find any working RAID drivers for the RR2720SGL, and it is not on the HCL for ESXi 5. In light of this, I have installed Windows 8 x64 on the hardware and am thinking of installing VMware Workstation and running my VMs inside there. I guess my questions are these:

    1. How does VMware Workstation 9 perform compared to ESXi 5, in the real world? Presumably installing Win 8 as the host OS will give me far better performance for that Win 8 machine than Win 8 running under ESXi? I should stick with Windows 8 x64 as the host OS, right?
    2. If I install a domain controller VM inside my Win 8 box and join the Win 8 machine to that domain, am I insane? (I would guess the Win 8 machine wouldn't see the domain controller until it finished starting everything up, but I don't think that matters.)
    3. Is it feasible to give metrics like this, and if so, what is the likely value of x? 25%? 50%? 75%? "Win 8 under ESXi runs x% as fast as Win 8 installed bare metal."

  • Server Recovery from Denial of Service

    - by JMC
    I'm looking at a server that might be misconfigured to handle denial of service. The database was knocked offline by the attack and was unable to restart itself once the attack subsided.

    Details of the attack: the attacker, either intentionally or unintentionally, sent thousands of search queries through the application's search query URL within a couple of seconds. It looks like the server was overwhelmed, which caused the database to log the messages below.

    Server specs: 1.5GB of dedicated memory.

    Are there any obvious misconfigurations here that I'm missing?

    mysql.log:

        121118 20:28:54 mysqld_safe Number of processes running now: 0
        121118 20:28:54 mysqld_safe mysqld restarted
        121118 20:28:55 [Warning] option 'slow_query_log': boolean value '/var/log/mysqld.slow.log' wasn't recognized. Set to OFF.
        121118 20:28:55 [Note] Plugin 'FEDERATED' is disabled.
        121118 20:28:55 InnoDB: The InnoDB memory heap is disabled
        121118 20:28:55 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121118 20:28:55 InnoDB: Compressed tables use zlib 1.2.3
        121118 20:28:55 InnoDB: Using Linux native AIO
        121118 20:28:55 InnoDB: Initializing buffer pool, size = 512.0M
        InnoDB: mmap(549453824 bytes) failed; errno 12
        121118 20:28:55 InnoDB: Completed initialization of buffer pool
        121118 20:28:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121118 20:28:55 [ERROR] Plugin 'InnoDB' init function returned error.
        121118 20:28:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121118 20:28:55 [ERROR] Unknown/unsupported storage engine: InnoDB
        121118 20:28:55 [ERROR] Aborting

    ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 13089
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    httpd.conf:

        StartServers         10
        MinSpareServers       8
        MaxSpareServers      12
        ServerLimit         256
        MaxClients          256
        MaxRequestsPerChild 4000

    my.cnf:

        innodb_buffer_pool_size=512M
        # Increase Innodb Thread Concurrency = 2 * [numberofCPUs] + 2
        innodb_thread_concurrency=4
        # Set Table Cache
        table_cache=512
        # Set Query Cache_Size
        query_cache_size=64M
        query_cache_limit=2M
        # A sort buffer is used for optimizing sorting
        sort_buffer_size=8M
        # Log slow queries
        slow_query_log=/var/log/mysqld.slow.log
        long_query_time=2
        #performance_tweak
        join_buffer_size=2M

    php.ini:

        memory_limit = 128M
        post_max_size = 8M
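
    For what it's worth, the line "InnoDB: mmap(549453824 bytes) failed; errno 12" means the 512 MB buffer pool allocation itself was refused (ENOMEM). A back-of-the-envelope sketch, assuming Apache prefork children of roughly 20 MB each (the child size is an assumption, not from the logs):

        # rough memory budget for the 1.5 GB box:
        #   MaxClients 256 x ~20 MB  =~ 5 GB  potential Apache footprint
        #   InnoDB buffer pool       =  512 MB
        # under a query flood Apache alone can exceed RAM, so when
        # mysqld_safe restarts mysqld there is no 512 MB left to mmap
        MaxClients 24                     # httpd.conf: cap workers to fit in RAM
        innodb_buffer_pool_size = 256M    # my.cnf: leave headroom for Apache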

  • File Open/Save Dialog always 'Not Responding'

    - by Amanda
    I am aware that this question has been asked before, but the solution didn't work for me. Whenever I go to open or save a file in any program, the dialog does not come up and the application goes to 'Not Responding'. This goes on for a few minutes, then stops, but the dialog still does not open. There have been a few occasions where the dialog has suddenly worked for a while, but then the problem comes back.

    I have tried many solutions given around the internet: I have cleaned the machine with CCleaner, defragged the disk, and sorted the index. Nothing works. Does anybody have any idea what the problem is? This is Windows Vista. I'm not quite sure what kind of information you would need about my laptop, but I'll give you it if you need it. :)

    Solutions I have tried:
    - Deleting the 'Shellicon' folder in the registry, which wasn't even in there.
    - Looking for mapped network drives lingering around; I haven't come across any.
    - Rebuilding the Windows Search indexes... no difference.

  • How to make a bootable USB drive out of a bootable DVD or CD

    - by Svish
    Is there a "universal" way of how you can make a bootable USB drive out of a bootable dvd or cd? What makes a USB drive bootable? What makes a dvd and cd bootable? For example there is a program called UNetBootin which can make bootable USB drives, but seems like it only works with various linux distributions. (Tried it with a Win7 image and the SystemRescueCD, which didn't work so well...). Main reason I ask is that I have a Support DVD which came with an Asus EEE, and it of course doesn't have an external dvd drive. So I am curious if I can sort of move that dvd over to a USB drive so that I can use it without buying one. Not asking just specifically about this one case though, I am curious to know a bit more about this in general. So, if you have a general bootable DVD or CD (Or a DVD or CD image for that matter), could be linux distro, windows install disk, support disks, etc., is it possible to "move" it over to a USB drive and make that work like the DVD or CD did? (Being bootable and all).

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a swf movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. Results of each test are sent back to the launching script across a port and written out to a local xml file. Once the tests are completed, the movie closes down and the build script interrogates the results to see if all the tests passed.

    The FlexUnit wiki provides information about how to achieve this on a Unix server, by using Xvnc to provide a virtual space for the flash movie to run its tests in. I've passed this information on to our sysadmin team (along with the link to the article), and I've been told that because this is a Solaris box, we can't use that approach: Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
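
    One commonly suggested alternative, assuming the box has a standard X11 installation, is Xvfb, which plays the same "fake display" role as Xvnc without the VNC part (the standalone player binary name below is an assumption):

        # start a virtual framebuffer on display :99 and point the player at it
        Xvfb :99 -screen 0 1024x768x24 &
        DISPLAY=:99 flashplayer tests.swf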

  • Apache2 doesn't serve PHP-scripts correctly

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze. The problem is that it stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the contents of the script. This is of course not a good thing. It does serve common html files perfectly... I've been at this for quite a while now, and after all the googling and troubleshooting, I thought it would be a good time to ask you guys. Here's what I've got:

    - The php5 and libapache2-mod-php5 packages are installed
    - /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory
    - The /etc/php5/ directory is left untouched since the installation

    Here's the contents of /etc/apache2/mods-available/php.load:

        LoadModule php5_module /usr/lib/apache2/modules/libphp5.so

    And /etc/apache2/mods-available/php.conf:

        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            <IfModule mod_userdir.c>
                <Directory /home/*/public_html>
                    php_admin_value engine Off
                </Directory>
            </IfModule>
        </IfModule>

    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some settings which cause this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
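
    A hedged checklist for this symptom (a script offered as a download usually means the PHP handler never ran):

        apache2ctl -M | grep php5                  # is php5_module actually loaded?
        a2enmod php5 && service apache2 restart    # (re)enable it if missing
        # look for a vhost or .htaccess overriding the handler or MIME type:
        grep -R "x-httpd-php" /etc/apache2/sites-enabled 2>/dev/null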

  • Autounmounting USB keys with FAT filesystem on Linux (RHEL5)

    - by niXar
    For security reasons, I have two workstations in front of me, and I can only transfer data between them through a USB key. As you can imagine, this gets tiresome quickly, but the most annoying part is having to unmount the keys before removing them. Not unmounting them results in missing files most of the time, even if I remove them a while after having last written to them.

    Now, since they're only used for transferring smallish files, and each is basically written once and read once, I don't need the fancy-pants caching infrastructure that makes clean unmounting a necessary step. And since the data is always a copy of something I have at hand, I don't care if the filesystem croaks from time to time. But anyway, the system doesn't need to force that on me; it could simply make sure everything is committed within a second, and work synchronously. Then when I remove the key, nothing is lost. Is there a way to do this? I would appreciate any other tips on handling this situation.

    Edit: it appears the situation has changed between RHEL5 and Fedora up to F11 on one hand, and F12 on the other. The latter uses DeviceKit-disks, and I haven't quite figured out how to do this there. The method provided below in gconf does not work anymore.
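
    A minimal sketch of the mount-option approach, assuming FAT-formatted keys and a fixed mount point; "flush" is a vfat-specific option that pushes writes out as soon as possible without the full cost of "sync":

        # manual mount with eager writeback
        mount -t vfat -o flush /dev/sdb1 /mnt/usbkey
        # or system-wide via /etc/fstab (device name is an assumption):
        # /dev/sdb1  /mnt/usbkey  vfat  user,flush,noatime  0  0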

  • Apache 2.4 with PHP-FPM

    - by tubaguy50035
    I'm trying to set up Apache 2.4 with PHP-FPM 5.4 using the new modules in Apache 2.4. The following is what I currently have in my virtual host file:

        <VirtualHost *:80>
            ServerAdmin root@localhost
            DocumentRoot /var/www

            #Directory permissions
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Require all granted
            </Directory>

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    I have PHP-FPM running using Unix sockets, with a sock file located at /var/run/php5-fpm.sock. How do I proxy my requests to this sock file? I've seen some sites say to use ProxyPassMatch and others say RewriteRule; are there pros or cons on either side? Also, most sites show ProxyPassMatch with a regex that only passes .php files. Could I also send it .html files? For whatever reason, we have a ton of PHP inside .html files.

    Edit: As noted in the comments, it looks like mod_proxy_fcgi doesn't support Unix sockets. Is there another module I should be using?
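
    A hedged sketch of the ProxyPassMatch route, assuming PHP-FPM is switched from the unix socket to a TCP socket (which sidesteps the mod_proxy_fcgi limitation noted in the edit):

        # /etc/php5/fpm/pool.d/www.conf:   listen = 127.0.0.1:9000
        # vhost: hand every .php (and, per the question, .html) file to FPM
        ProxyPassMatch ^/(.*\.(php|html))$ fcgi://127.0.0.1:9000/var/www/$1

    Sending .html through FPM works but means PHP parses every page, and the pool's security.limit_extensions setting must be extended to include .html before FPM will execute those files.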

  • How to access previous VHD versions of system backup?

    - by feklee
    Quote from the 31 Oct 2009 TechNet article "Learn more about system image backup":

        During the first backup, the backup engine scans the source drive and copies only blocks that contain data into a .vhd file stored on the target, creating a compact view of the source drive. The next time a system image is created, only new and changed data is written to the .vhd file, and old data on the same block is moved out of the VHD and into the shadow copy storage area. Volume Shadow Copy Service is used to compute the changed data between backups, as well as to handle the process of moving the old data out to the shadow copy area on the target. This approach makes the backup fast (since only changed blocks are backed up) and efficient (since data is stored in a compact manner). When restoring the image, blocks will be restored to their original locations on the source disk. If you want to restore from an older backup, the engine reads from the shadow copy area and restores the appropriate blocks.

    For the last few days, a daily system backup of drive C: to drive E: has been scheduled and run by Windows 7 Backup and Restore. Drive C: currently holds 233 GB of data, which fits comfortably on drive E:, a 1 TB drive with 727 GB of free space remaining.

    How do I access the previous versions of the VHD? I right-clicked on files and folders in E:\WindowsImageBackup and looked for Previous Versions, but the answer is always: "There are no previous versions available".

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500GB of data that's currently stored across my various machines. I don't care about data retention rates, as this is only a backup of, not primary storage for, my data. If the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static and mostly consists of things like installers for Visual Studio and installer disk images for all of my games.

    I have found two services which meet most of this:

    - Mozy
    - Carbonite

    However, both services impose low bandwidth caps, on the order of 50kb/s, which prevent me from backing up a data set of this size effectively (somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that it tries to ignore pretty much every file in my backup set by default, because the files are mostly iso files and vmdk files, which aren't backed up by default. There are other services such as EC2 which don't have such bandwidth caps, but such services are typically stored on highly redundant servers and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for storage of this kind of data set. (At $50/month I could build my own NAS to hold the data, which would pay for itself after 2-3 months.) (To be fair, they're offering quite a bit more service than I'm looking for at that price, such as public HTTP access to the data.)

    Does anything exist meeting those requirements, or am I basically hosed?

  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have spent two days trying to fix this issue, with no success. The server is a MySQL database server.

    Hardware: Dell PowerEdge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16GB RAM, 2x 146GB SAS (software RAID 1)
    Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41

    Issue: while MySQL is not used and runs with no database, everything seems alright. As soon as I install a database, something brings all 8 cores to 100% with low memory consumption, so as you can imagine the load average goes high (I saw a load average of 212 for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the installed project.

    Additional info: the database used is no more than 24MB, and it was moved from a server with fewer resources and a lot of larger databases, so it's not the database/project. my.cnf is not the reason either, as I have used both the default one and the one I use on the same distribution on another server. What is interesting is that MySQL doesn't close any processes and runs up to the limit of max_connections. The logs are quiet; nothing there. I switched to this Ubuntu version after I suspected some problems in the newly installed Ubuntu 11.10 server. That one worked alright for an hour after I made a kernel upgrade to 3.0.1 (it was eating the memory as well). I tested disk speed, which seems alright. I also gathered more output on the running server with dstat -cndymlp -N total -D total 3 and htop (output not included here).

    Any ideas? Did anyone meet the same problem? Any fix you can think of?
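
    A hedged first round of checks for this kind of pattern, to see what the busy threads are actually executing before blaming the hardware or the distribution:

        mysql -e "SHOW FULL PROCESSLIST\G"        # which queries are running/stuck?
        mysql -e "SHOW ENGINE INNODB STATUS\G"    # lock waits, semaphore spins
        pidstat -t -p $(pgrep -x mysqld) 1 5      # per-thread CPU (sysstat package)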

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04.

    When I have a working configuration and the server starts successfully, everything seems fine. However if, for whatever reason, I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then seven times out of ten it will run, but occasionally it will just shut down 'silently'.

    My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
            -T 127.0.0.1:6082 \
            -s malloc,1GB \
            -f /home/deploy/mysite.vcl \
            -u deploy \
            -g deploy \
            -p obj_workspace=4096 \
            -p sess_workspace=262144 \
            -p listen_depth=2048 \
            -p overflow_max=2000 \
            -p ping_interval=2 \
            -p log_hashstring=off \
            -h classic,5000009 \
            -p thread_pool_max=1000 \
            -p lru_interval=60 \
            -p esi_syntax=0x00000003 \
            -p sess_timeout=10 \
            -p thread_pools=1 \
            -p thread_pool_min=100 \
            -p shm_workspace=32768 \
            -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin path
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow for old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such inconsistent behaviour.
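
    A hedged way to catch the silent exit, since a panicking varnishd often writes nothing to syslog but does write to its shared-memory log:

        varnishd -d -a :80 -f /home/deploy/mysite.vcl -s malloc,1GB   # foreground
        varnishlog     # in a second shell; panics and assert messages land here
        dmesg | tail   # rule out the kernel OOM killer

    The OOM check is worth doing: -s malloc,1GB plus the large classic hash table (-h classic,5000009) may simply not fit in a small test VM's memory on every start.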

  • Advice: USB Monitoring Programming

    - by Kashif
    I need advice about USB programming in Linux. I have to design a USB monitoring program that will keep checking the USB ports of a CentOS Linux box. As soon as a USB key or external hard disk is connected, the program will shoot an email to a specific person with details of the device (such as size, mount point, time). When the device is disconnected, it will again shoot an email to that person with the same kind of information. Meanwhile the program will also write logs to syslog/messages under the program's name for easy tracking.

    Now I want to ask what the best way is to develop this program. As I'm new to this field, I know little about it. Should I use Perl, bash scripting or some other language? I have no idea what the right approach is, because this program will keep running all the time to keep a check on the USB ports. I know a few commands, like lsusb, fdisk (to check attached USB devices) and df -h (to get device details), but I don't know how to achieve what I have in mind using them. One more thing: in the future I will also need to adapt this program for Ubuntu and Citrix XenServer, and it should work the same everywhere.
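
    For the mechanism itself, one event-driven option is a udev rule instead of a polling loop; a minimal sketch follows (the rule file path and helper script are hypothetical, and udev exists on CentOS, Ubuntu and XenServer's dom0 alike, which helps the portability requirement):

        # /etc/udev/rules.d/90-usb-watch.rules
        ACTION=="add",    SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-mail.sh add %k"
        ACTION=="remove", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-mail.sh remove %k"

        # /usr/local/bin/usb-mail.sh (hypothetical helper), $1=add|remove, $2=kernel name
        #!/bin/sh
        logger -t usb-watch "USB block device /dev/$2: $1"    # -> syslog/messages
        { date; df -h /dev/"$2"* 2>/dev/null; } | mail -s "USB $1: /dev/$2" admin@example.com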

  • Instructions to setup primary and only domain controller

    - by Robert Koritnik
    Where could I get good step-by-step instructions (with some simple explanations) on how to set up a domain controller on Windows Server 2008 R2 Server Core? I don't know what I need. Do I need DNS as well as AD, and so on and so forth? I don't know enough about these things, but I need to set them up to prepare a development environment. I would also like to know how to configure the firewall on the DC machine to make it visible to other machines, because I've set up a DC somehow but I can't connect to it...

    This is my HW config:

    - Linksys internet router with DHCP
    - my dev machine is Windows 7
    - my DC machine is a VM inside my dev machine
    - my dev machine has a hardware network adapter to the Linksys and a virtual network adapter to the DC
    - the DC machine has two network adapters: one to the Linksys (to be internet connected so it can be updated etc.) and one to the host (my dev Win7 machine)

    Edit: My development machine should access the domain controller and log on using domain credentials. The development machine would access the internet directly via the Linksys router. My domain controller machine would only serve authentication and (if I'm able to configure it right) should also have Active Directory Federation Services in a workable condition. I hope this is a bit clearer now. At least a small bit.

  • Getting access to physical drives in ESXi v5.5 installation on Dell PowerEdge R710 with PERC 6/i

    - by Big-Blue
    I acquired a Dell PowerEdge R710 server a few days ago, which includes a PERC 6/i RAID controller. The server is now fitted with a SATA SSD, one SAS drive and four SATA HDDs, all of which I would like to be passed through to ESXi in an "as-is" state, without creating any logical drives in the RAID controller.

    Now, the ESXi v5.5 installation image I grabbed from the Dell homepage starts just fine but only lists the logical drives and connected flash drives as possible installation targets, not any of the physical drives. If I create a small logical drive on my SSD (which the PERC 6/i detects as SATA-SSD type), the ESXi install wizard lists the SSD flag on that drive as false, which is far from optimal. I have also tried disabling the RAID controller entirely in the setup, but that did not help either. Everything that should enable passthrough is enabled in the BIOS, but that shouldn't be a concern at this early stage of the ESXi installation.

    How would I be able to install ESXi v5.5 to a part of my SSD that is connected to the storage controller, while giving it full physical access to the disk (to allow for SMART values to be read etc.)?
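
    If ESXi can at least be installed onto a small logical disk, one hedged workaround for the "as-is" requirement is a physical-mode raw device mapping (RDM) per disk; the device name below is a placeholder:

        # list the physical devices ESXi can see, then wrap one in a
        # physical-mode RDM pointer file that a VM can use as a native disk
        ls /vmfs/devices/disks/
        vmkfstools -z /vmfs/devices/disks/vml.0123456789abcdef \
            /vmfs/volumes/datastore1/rdm-sas0.vmdk

    Physical-mode RDMs pass most SCSI commands straight through, which is what lets a guest read SMART data; whether the PERC 6/i will expose the disks individually at all is the open question.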

  • Failing to load rootfs: Ubuntu 10 + grub2 + rootfs ext4 w/ RAID1

    - by James
    I am having problems booting a new Ubuntu 10 (server) install. My primary HD (/dev/sda) is laid out as follows:

        Device     Boot  Start     End      Blocks     Id  System
        /dev/sda1  *         1      18      144553+    83  Linux                  <-- /BOOT
        /dev/sda2           19  182401  1464991447+     5  Extended
        /dev/sda5           19    2207    17583111     fd  Linux raid autodetect
        /dev/sda6         2208   11934    78132096     fd  Linux raid autodetect  <-- / (ROOTFS)
        /dev/sda7        11935  182401  1369276146     fd  Linux raid autodetect

    The rootfs is part of a RAID1 (software) array (currently degraded):

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md2 : active raid1 sda6[1]
              78132032 blocks [2/1] [_U]

    The UUIDs for the partitions are as follows:

        # blkid /dev/sda1
        /dev/sda1: UUID="b25dd301-41b9-4f4d-9b0a-0e31713dd74c" TYPE="ext2"
        # blkid /dev/sda6
        /dev/sda6: UUID="af7b9ede-fa53-c0c1-74be-31ec752c5cd5" TYPE="linux_raid_member"
        # blkid /dev/md2
        /dev/md2: UUID="a0602d42-6855-482f-870c-6f6ecdcdae3f" TYPE="ext4"

    Finally, I have my grub2 menuentry set up as follows:

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Ubuntu, with Linux 2.6.32-25-server' --class ubuntu --class gnu-linux --class gnu --class os {
            insmod ext2
            insmod raid
            insmod mdraid
            set root='(hd0,1)'
            search --no-floppy --fs-uuid --set b25dd301-41b9-4f4d-9b0a-0e31713dd74c
            linux /vmlinuz-2.6.32-25-server root=UUID=a0602d42-6855-482f-870c-6f6ecdcdae3f ro nosplash noplymouth
            initrd /initrd.img-2.6.32-25-server
        }

    When I attempt to boot, grub loads OK, however I eventually get the following error message:

        Gave up waiting for root device.
        ALERT! /dev/disk/by-uuid/a0602d42-6855-482f-870c-6f6ecdcdae3f does not exist. Dropping to a shell!

    If I open a grub command line from the bootloader, I can ls (hd0,) and it lists the correct partitions with the UUIDs shown above; sda6 shows 'a0602d42-6855-482f-870c-6f6ecdcdae3f' (the RAID UUID). If I ls (md2)/ it properly lists all the files on the RAID1 (ext4) filesystem, so it doesn't appear to be an issue accessing the raid device. Does anyone have any suggestions as to what the problem might be? I can't figure this one out.
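
    For what it's worth, a common cause of this exact message is an initramfs that never assembles /dev/md2, so the ext4 UUID never appears under /dev/disk/by-uuid. A hedged sketch of the usual fix, run from a rescue/chroot shell:

        mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array
        update-initramfs -u                              # rebuild with md assembly
        # quick test from the (initramfs) prompt on the next failed boot:
        #   mdadm --assemble --scan && exit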

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple: my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine.

    However, as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again.

    What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them; if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, and so on. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling.

    Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file, far from ideal for an environment with a high amount of churn, and for the short amount of time each node will be running I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:

    - uses a database (MySQL, SQLite, etc.) for configuration and data storage
    - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
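
    A minimal sketch of that sar fallback, assuming the sysstat package, the EC2 metadata service, and some reachable object store (the bucket name and s3cmd tooling are assumptions):

        ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        sar -A -o /tmp/sar-$ID 60 >/dev/null 2>&1 &    # 1-minute samples, binary file
        SAR_PID=$!
        # ... run the job ...
        kill $SAR_PID
        s3cmd put /tmp/sar-$ID s3://metrics-bucket/    # ship before termination

    sadf -d can later turn that binary file into CSV keyed by the instance ID already tracked in the internal database.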

  • Small office network setups

    - by user39822
    I work at a small office and we're overhauling our network setup. We're a web dev company, and at the moment we have 50+ production sites running on the same machine that runs our internal email, which is just plain stupid. We're moving all our client hosting off-site and are now looking for something to cover our internal office requirements. Below is a brain dump:

    - Equal numbers of Macs and PCs, about 25 machines in total.
    - We need a central "server" to host files that should be accessible to everyone as a "network drive". If possible we'd like to use low-cost hardware for this (Mac or Windows based). Disk space should be upward of 1TB.
    - Ideally we should also be able to run a small web server on this machine (LAMP stack) to run some planning and billing applications we wrote ourselves.
    - We need some sort of MS Exchange alternative for things like a shared calendar, and especially for being able to set Out of Office replies.
    - We have one printer that is connected to the network.
    - The setup should preferably be manageable via a graphical interface and NOT require command-line skills.
    - Users want to keep using Apple Mail or MS Outlook.

    After a quick google I came across the Zimbra collaboration suite. Can anyone recommend this, or any other solution for our office?

  • What is the best private cloud storage setup

    - by vdrmrt
    I need to create a private cloud and I'm searching for the best setup. These are my 2 most important requirements 1. Disk and system redundant 2. Price / GB as low as possible The system is going to be used as backup setup which will receive data 24/7 over SFTP and rsync. High throughput is not that important. I'm planning to use glusterfs and consumer grade 4TB hard-drives. I have worked out 3 possible setups 3 servers with 11 4TB HDD Setup up a replica 3 glusterfs and setup each hard drive as a separate ext4 brick. Total capacity: 44TB HDD / TB ratio of 0.75 (33HDD / 44TB) 2 servers with 11 4TB HDD The 11 hard-drives are combined in a RAIDZ3 ZFS storage pool. With a replica 2 gluster setup. Total capacity: 32TB (+ zfs compression) HDD / TB ratio of 0.68 (22HDD / 32TB) 3 servers with 11 4TB consumer hard-drives Setup up a replica 3 glusterfs and setup each hard-drive as a separate zfs storage pool and export each pool as a brick. Total capacity: 32TB (+ zfs compression) HDD / TB ratio of 0.68 (22HDD / 32TB) (Cheapest) My remarks and concerns: If a hard drive fails which setup will recover the quickest? In my opinion setup 1 and 3 because there only the contents of 1 hard-drive needs to be copied over the network. Instead of setup 2 were the hard-drive needs te be reconstructed by reading the parity of all the other harddrives in the system. Will a zfs pool on 1 harddrive give me extra protection against for example bit rot? With setup 1 and 3 I can loose 2 systems and still be up and running with setup 2 I can only loose 1 system. When I use ZFS I can enable compression which will give me some extra storage.

  • FreeBSD jail for a small company - checklist - what I shouldn't forget

    - by cajwine
    Looking for an checklist for an "small company freebsd/jail server". Having pretty common starting point: FreeBSD jail (remote/headless) for the company: public web, email, ftp server, and private (maybe in the future partially public) wiki (foswiki) 4 physical persons, (6 email addresses) + one admin - others will never use ssh) have already done usual hardening on the host side (like pf, sshguard etc). my major components are: dovecot, exim, apache22, proftpd, perl5.14. Looking for an checklist, what I shouldn't forget. My plan: openssl self-signed certificates for exim, dovecot and proftpd (wildcard keys) openssl self-signed certificate for apache (later will go for "trusted-signed" key) My questions are: is is an "good practice" having one pair of wildcard SSL-certificates for many programs? (exim, dovecot, proftpd) - or should I generate one key for each service? should I add all 4 persons as standard (unix) users, or I should go with virtual users? Asking because: have only small count of users, and it is more simple to configure everything (exim, dovecot) for local users ($HOME/Maildir), plus ability to set $HOME/.forward/vacation and etc. is here some (special) things what I should consider? (e.g. maybe, in the future we want setup our own webmail - will make this any difference?) any other recommendation? Thank you, hoping that this question fit into the http://serverfault.com/faq under the: Server and Business Workstation operating systems, hardware, software Operations, maintenance, and monitoring Looking for an checklist, but please explain why you're recommending it. See Good Subjective, Bad Subjective. related: What's your suggested mail server configuration for a FreeBSD server?

  • Completely unintuitive Apache/PHP memory-freeing behavior

    - by David
    Okay, this one's weird. I have a Turnkey Linux server with a gig of dedicated RAM. It's running WP3.2 with a boatload of plug-ins. It's a new site, so it has very limited traffic (other than search engines, maybe 20 hits a week). Now, for a few weeks, every few days, it would max out on main RAM, start eating up virtual RAM, and then crash. It's had this behavior for a while and I've been trying to figure out which element was causing the crash. Nine days ago, I pointed my external server monitor to this server. I wrote a 5-line HTML file (not PHP and not WP) that the server monitor accesses every minute, to see if the server is up. So, now, nine days later, the server has been rock solid, up all the time, no memory leak at all. I changed NOTHING on the server itself to see this behavior change. Have you EVER seen anything like this? All the server monitor is doing is retrieving a single, super-simple HTML file and all the memory leak problems have gone away. Weird, eh?

  • HP/Lenovo alternative to Buffalo iSCSI TerraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with direct attached storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two).

    I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TerraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little hesitant to go that way, as it's a bit too "entry level" for my liking. In particular I would be very keen on more redundancy in power, networking, etc. I'm also very aware that you "get what you pay for".

    Can anyone therefore recommend equivalents from the big boys, HP or Lenovo? I have searched high and low on the HP site and seen many options, but am struggling to work out whether that is all the hardware I will need. Some options appear to need controllers separate from the disk enclosures, etc.

  • Is a wildcard SSL the only option in this multiple VHOST/1IP setup?

    - by solsol
    I have a web app set up that needs SSL encryption on the following:

    - secure.myapp.com
    - www.myapp.com/login
    - www.myapp.com/signup

    If I'm correct, I could run one SSL certificate for all my www.myapp.com/* pages. The problem is that I have a subdomain called secure.myapp.com that needs to be on a separate IP address to work with SSL. Right now I have one server, one public IP and a number of virtual hosts in Apache to make this work. I'd rather not buy an expensive wildcard SSL certificate to secure just one subdomain. What is your advice on this? If it IS the only solution, any tips on getting a reasonably priced wildcard SSL cert would be appreciated.

    I have read about SNI, which allows the use of multiple SSL certs, but not all browsers (IE6!) support it. Since we are building a web app for the public, we cannot have IE6 users falling back to unencrypted connections. Thanks for your help.
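
    One often-overlooked middle ground between a wildcard and extra IPs is a single multi-domain (SAN/UCC) certificate covering both hostnames: it is one plain certificate on one IP, so even old non-SNI browsers accept it. A hedged CSR sketch (the openssl.cnf fragment below is an assumption):

        # openssl.cnf fragment for the SAN request
        [ req ]
        distinguished_name = dn
        req_extensions     = v3_req
        prompt             = no
        [ dn ]
        CN = www.myapp.com
        [ v3_req ]
        subjectAltName = DNS:www.myapp.com, DNS:secure.myapp.com

        # generate the key and CSR to send to the CA:
        # openssl req -new -nodes -newkey rsa:2048 -keyout myapp.key \
        #     -out myapp.csr -config openssl.cnf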
