Search Results

Search found 7637 results on 306 pages for 'cd drives'.


  • EC2: How dangerous is it to turn off fsck for EBS volumes?

    - by Janine
    I have been tearing my hair out trying to figure out why my EC2 instances (made from my own custom AMIs) were taking many tries to come up properly. They would fail with the following error for both of the EBS volumes I was attaching during startup:

      fsck.ext3: No such file or directory while trying to open /dev/sdf

    Finally, I figured out the problem. I had put this in /etc/fstab:

      /dev/sdf /export  ext3 defaults 1 2
      /dev/sdi /export2 ext3 defaults 1 2

    The 2 tells the system to fsck the drives on the way up. Changing this to

      /dev/sdf /export  ext3 defaults 1 0
      /dev/sdi /export2 ext3 defaults 1 0

    avoids the problem completely, but now the volumes are never going to be fsck'd. How much does this matter? Once the instance goes into production it's going to be running pretty much 24/7, so not many fscks would be happening anyway, but still... this just feels like a bad idea. I have not been able to find anyone else even reporting this problem (there are people with the same error message, but different causes). It seems unbelievable that I could be the only person to ever make this mistake, but perhaps I'm just talented that way. :) If there is another solution to the problem I would love to hear it; I have not been able to find one.
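
    One option, if the boot-time pass stays disabled, is to run the check by hand during a planned maintenance window. This is a minimal sketch, assuming the volumes can be taken offline briefly; the device names and mount points are the ones from the post above, and the schedule is an assumption:

      # stop anything writing to the volumes, then unmount them
      umount /export
      umount /export2
      # force a full check even if the filesystems are marked clean
      fsck.ext3 -f /dev/sdf
      fsck.ext3 -f /dev/sdi
      # remount when done
      mount /export
      mount /export2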

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about the naming schemes and, once that is resolved, whether there is compatibility. I will gladly rework this question to be more general if there is not already an article helping with the confusion of SATA naming and standards. I see similar, but not identical, questions and will accept this as a duplicate if it is judged as such.

    The specifications on the eCommerce site for the NAS say "Controller Interface Type Serial ATA-150"; the product home page for the manufacturer says "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say "Interface Type Serial ATA-300"; the product home page for the manufacturer says "Interface SATA 3.0 Gbps". Wikipedia says many things about different naming conventions, the closest being: "SATA II 3.0 Gbit/s, which was colloquially referred to as 'SATA 3G' [bps] or 'SATA 300' [MB/s], since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both 'SATA 1.5G' [b/s] or 'SATA 150' [MB/s]. Therefore, they will operate with negligible differences between them."

    Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article

  • Deploying workstations - best practices?

    - by V. Romanov
    Hi guys, I've been researching the subject of workstation deployment for a while and have found a ton of info and dozens of different methods and tools, but no "best practice" method that doesn't lack at least one feature I consider required for the solution to be perfect. I'm currently interested in Windows workstation deployment, but if the tools can be extended to Linux, then that's added value. I want the deployment tools I use to be able to do the following:

    - Hardware independent: I want my image or installation to have a minimum of hardware and driver dependency, so that I can use a single image/package for all workstations.
    - Easily updatable: I want to be able to update my image as easily as possible without redeploying/rebuilding/reimaging all configurations.
    - PXE bootable deployment: I want the tools to be bootable off the network so that I don't need a boot CD/DOK.
    - Scriptable for minimum human input: ideally, the tool should run automatically after being booted and perform a "default" deployment (including partitioning) unless prompted otherwise. I.e., take a PC, hook it up, power on, PXE boot and forget about it until the OS is deployed.

    I found no single product or environment that does all this. The closest I came to is Windows Deployment Services and the WIM image format. I also checked out numerous imaging and deployment tools including Clonezilla, Ghost, g4u, WPKG and others, but most of them lack the hardware independence and updatability features. We currently have a Symantec Ghost server setup that does imaging over the network, but I'm not satisfied with it as it has all the drawbacks I listed above. Do you have suggestions on how to optimize the process of workstation deployment? How do you deploy them in your organization? Thanks! Vadim.

    Read the article

  • Virtualmin & git integration

    - by weby3456
    I've installed Virtualmin on my VPS to manage my websites. It has been working as expected for nearly a year now. Recently I wanted to add some features to one of my sites, and I need git integration. I've correctly installed git & gitweb on my server, and I can create repositories and watch them under http://sub.domain.com/git/gitweb.cgi

    Here is the current relevant directory tree:

      /home/user/domains/sub.domain.com/public_html/git/
      drwxr-sr-x user   user .
      drwxr-x--- user   user ..
      -rw-r--r-- user   user git-favicon.png
      -rw-r--r-- user   user git-logo.png
      -rwxr-xr-x user   user gitweb.cgi
      -rw-r--r-- user   user gitweb.css
      drwxrwx--- apache user reponame.git

      /home/user/domains/sub.domain.com/public_html/git/reponame.git/
      drwxrwx--- apache user .
      drwxr-sr-x user   user ..
      drwxrwx--- apache user branches
      -rwxrwx--- apache user config
      -rwxrwx--- user   user description
      -rwxrwx--- apache user HEAD
      drwxrwx--- apache user hooks
      drwxrwx--- apache user info
      drwxrwx--- apache user objects
      drwxrwx--- apache user refs

    But I have some questions:

    1. When I'm visiting http://sub.domain.com/git/gitweb.cgi, the owner is listed as 'Apache'. Why? How can I change that?
    2. Usually, to create a new git repository, I'll do something like:

      $ mkdir proj
      $ cd proj
      $ git init
      Initialized empty Git repository in /home/user/proj/.git/
      // here I'm creating the files or copying them from somewhere else
      $ git add *.php
      $ git add README
      $ git commit -m 'initial version'

    But after creating the repository in Virtualmin, I can find a new dir named 'reponame.git' but not the '.git' dir. When I'm trying to run any git command (e.g. git status) I'm receiving "fatal: This operation must be run in a work tree". How can I work with that repository?
    3. Currently I need to explicitly grant access for users to be able to view the repositories via gitweb. How can I make certain repositories public?
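
    On question 2: a bare repository like reponame.git has no working tree, which is why git status fails inside it; the usual way to work with it is to clone it somewhere else and push back. A minimal sketch, assuming you clone into a scratch directory under your home (the clone location is an assumption; the repository path is the one from the listing above):

      # clone the bare repo to get a working tree
      git clone /home/user/domains/sub.domain.com/public_html/git/reponame.git ~/proj
      cd ~/proj
      # work as usual, then push back to the bare repository
      git add *.php README
      git commit -m 'initial version'
      git push origin master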

    Read the article

  • No digital audio output with Asus Xonar DG

    - by Lunatik
    I've purchased an Asus Xonar DG as a replacement for faulty onboard audio in a Medion 8822, as it has an optical output, which is all I really need to feed my HTPC. I uninstalled the previous drivers/devices, switched the PC off, inserted the Asus card, powered up, disabled the onboard audio in the BIOS, then installed the driver that came on the CD (same version as on Asus' website as of today) and everything went perfectly - no errors. I set the audio devices up in Windows and in the Asus utility (SPDIF enabled, 6-ch audio) as I would expect to see them work, but the only thing is I have no digital audio from test tones within Windows/the Asus utility, PCM audio or Dolby Digital from DVD. Analogue audio is fine. I've uninstalled things and reinstalled a couple of times now, as well as trying almost all combinations of analogue/digital outputs, but can't get it sorted. Does anyone have any tips on how to get this working? This card has just been released so there isn't much out there to go on.

    Notes:
    - The light on the toslink port is lit.
    - OS is Vista 32-bit SP2 and all up to date, pretty much a fresh install with almost no 3rd party applications installed.
    - This page seems to suggest that a digital output device in Windows is not needed with Xonar cards as it was with the previous Realtek, so I have it set to Analog. The only other output device is S/PDIF pass-thru.

    Read the article

  • ASUS EAH5450 Graphics Card (ATI Radeon HD5450 - 1 GB DDR3) on Windows 2003? Anybody got it to work?

    - by JJarava
    Hi all! I've just bought an ASUS EAH5450 graphics card (ATI Radeon HD5450, 1 GB DDR3) for my main system, but I haven't been able to make it work under Windows 2003 (my OS on that system). When I plugged in the card, I got a couple of "installing drivers" prompts for things such as "ATI High Definition Audio Device" that sorted themselves out over the Internet, and then a "Standard VGA Graphics Adapter". The CD that came with the card installs something called "ATI Catalyst Install Manager" and .NET 2.0, but no drivers. I've downloaded the latest (WinXP 32-bit) drivers from ATI, and the experience is the same: I don't get any drivers installed.

    My motherboard is an ASUS A8N-SLI with the nVidia nForce 4 chipset (for an Athlon 64 X2, somewhat old), but my previous card was an ATI Radeon X700, so it has been working with ATI cards before. On POST, during boot, I see a "Display Card" device (Vendor ID 1002-68F9-0300) and a "Multimedia Device" (1002-AA68-0403), and when viewing the properties of the "Standard VGA" adapter, they match the device ID. Any hints? I'd really hate having to get rid of the card, and I'm sure it's not that strange what I'm trying to do...

    Read the article

  • Virtualization and best hardware sharing scenario for me

    - by azera
    Hello, following this thread on Super User, I now want to start installing all my VMs on the hardware. As a reminder, I have a (powerful enough) server on which I want to install 3 OSes: a Debian (general dev testbed purposes), an IPCop (network control/firewall) and a FreeNAS (local network file sharing). I'm wondering which scenario would be best for me and whether I will be able to share the hardware to do what I want; either

    a) install a hypervisor like the free VMware ESX and run all three VMs in it, or
    b) install Debian, with the other two running inside it with VirtualBox.

    My needs are that:
    - the IPCop should handle all network traffic to the internet, meaning all traffic from my main computer but also all traffic from the other two VMs;
    - the FreeNAS shares should be accessible from the other two VMs and my main computer too;
    - I don't really care about the Debian access, I only need to access it from my main computer, not the other VMs.

    Will I need to install additional network cards for each VM or can they all share the same one happily? (Right now I have two: one linking the server to my router [which only IPCop is gonna use] and one linking it to my switch [which I would like all three to use].) As for hard drives, I was going to use one hard drive cut into 3 partitions to install all three OSes, then add to that the FreeNAS drives - will that be correct? Thanks a lot to anyone who can help me; this is kind of a vast area and I'm not sure which way to go at all.

    Read the article

  • scp No such file or directory

    - by Joe
    I've a confusing question for which Super User doesn't seem to have a good answer, and neither does Google. I'm trying to scp a file from a remote server to my local machine. The command is this:

      scp user@server:/path/to/source/file.gz /path/to/destination

    The error I get is:

      scp: /path/to/source/file.gz: No such file or directory

    user is my username on the server. The command syntax appears fine to me. ssh works fine and I can cd to the file, and it doesn't seem to be an access control issue. Thanks.

    Edit: Thank you John, I spotted the issue. ls returned this:

      -r--r--r-- 1 nobody users 168967171 Mar 10  2009 /path/to/source/file.gz

    So, the file was on a read-only file system and user is able to read it but not scp. I just copied the file to a different directory, chowned it, and it worked fine. It would be good if someone could explain why this is the case, though.
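
    When scp reports "No such file or directory" for a path that does exist, it can help to confirm the exact path and the filesystem it lives on from the remote shell before suspecting the scp syntax itself. A quick sketch of that check (the paths are the placeholders from the post):

      # confirm the file, its permissions, and the filesystem it sits on
      ssh user@server 'ls -l /path/to/source/file.gz; df -P /path/to/source/file.gz'
      # then retry the copy
      scp user@server:/path/to/source/file.gz /path/to/destination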

    Read the article

  • How expensive is it to run a PC 24/7, and how can you determine it?

    - by jasondavis
    I realize this question is difficult to answer, as it would differ based on the user's location, what their PC is doing and what hardware it consists of, along with other factors, but I am hoping someone could give me a very rough estimate. I have always run many PCs in my home 24/7 and I am just now looking at it from a money/cost-of-electricity point of view.

    1. I live in Central Florida. Can anyone guesstimate/estimate the average monthly or daily cost of running your average PC? Intel quad core processor, 1 SSD drive for OS and programs and 4-5 1-2 TB hard drives in a RAID setup for data, 750 W PSU. What would your guess be?
    2. Also, is there an accurate way to figure this out (non-super-technical and confusing to a non-math person, please)? Also, I have seen those Kill A Watt devices - do they figure this kind of stuff out for you?
    3. Does a larger PSU make your PC consume more power?

    Thanks for any help; you can most likely tell I am somewhat lost about this!
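
    As a rough worked example of the arithmetic (the 200 W average draw and the $0.12/kWh rate below are assumptions, not measurements - a Kill A Watt style meter gives the real draw at the wall): watts / 1000 × hours × rate gives the cost, so 200 W running 24/7 for 30 days is 144 kWh, or roughly $17 a month. A small shell sketch:

      # rough monthly running cost; substitute a measured wattage and your utility's rate
      watts=200            # assumed average draw at the wall, not the PSU's 750 W rating
      rate=0.12            # assumed dollars per kWh
      hours=$((24 * 30))   # one month of 24/7 operation
      echo "$watts $hours $rate" | awk '{printf "%.2f USD/month\n", $1/1000 * $2 * $3}'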

    Read the article

  • Linux Mint 13 is not booting on dual boot computer

    - by Brian
    Thanks in advance for your time. I have 2 hard drives in my computer: a 300 GB drive, which is my primary drive for Windows 7, and a 1.5 TB drive that I'd used for storage. When I got it I partitioned 500 GB for use in Linux. So, I created a bootable USB and clicked the "Install by Current Operating System" option from Mint. It installed to the free 500 GB like I'd hoped it would. Now, I can't get it to boot, though. I've tried using EasyBCD to create the boot entry and it hangs on a black screen. Thanks.

    EDIT @ Ryhuk: It presents a menu with two options: 1) Windows and 2) Mint. This was a menu I created with EasyBCD. When I select option 1 it boots to Windows fine. When I select option 2 it hangs on a black screen with just a white bar flashing (can't remember what it's called - it marks the current cursor location on a text field) and won't respond to any key presses but Alt-Ctrl-Del.

    Read the article

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the below error:

      $ sudo gem install sqlite3
      Building native extensions.  This could take a while...
      ERROR:  Error installing sqlite3:
              ERROR: Failed to build gem native extension.
      /usr/bin/ruby extconf.rb
      checking for sqlite3.h... no
      sqlite3.h is missing. Try 'port install sqlite3 +universal'
      or 'yum install sqlite3-devel' and check your shared library search path (the
      location where your sqlite3 shared library is located).
      extconf.rb failed
      *** Could not create Makefile due to some reason, probably lack of
      necessary libraries and/or headers.  Check the mkmf.log file for more
      details.  You may need configuration options.

      Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/usr/bin/ruby
        --with-sqlite3-dir
        --without-sqlite3-dir
        --with-sqlite3-include
        --without-sqlite3-include=${sqlite3-dir}/include
        --with-sqlite3-lib
        --without-sqlite3-lib=${sqlite3-dir}/lib

      Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
      Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides if possible. What can I do to get this thing to compile? Thanks for any insight!

    From a brand new instance, here's what I've done:

      $ sudo yum install rubygems ruby-devel
      $ sudo gem update --system
      $ sudo gem install rails
      $ rails new app
      $ cd app
      $ rails server
      Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
      $ sudo yum install sqlite-devel
      $ sudo gem install sqlite (or sqlite3 -- same result)

    See breakage above.
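
    A couple of things worth checking before adding repositories (a sketch, not a confirmed fix): the build is looking for sqlite3.h, which sqlite-devel should drop under /usr/include, and if the header is present but still not found, the include/lib locations can be handed to the gem directly through the configuration options the error already lists. The /usr/include and /usr/lib64 paths below are the usual 64-bit locations and are assumptions:

      sudo yum install sqlite-devel
      # confirm the header actually landed where the compiler will look
      ls -l /usr/include/sqlite3.h
      # pass the locations through to extconf.rb explicitly
      sudo gem install sqlite3 -- \
          --with-sqlite3-include=/usr/include \
          --with-sqlite3-lib=/usr/lib64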

    Read the article

  • Windows XP recovery console without Ntfs.sys? (0x00000024 BSOD)

    - by Kalle
    I have two physical disks in a computer; for simplicity let's call them C and D. C: has Windows XP and D: has some data. The problem is that whenever I have D: connected I can't boot Windows. I get a BSOD called 0x00000024/NTFS_FILE_SYSTEM. Same thing if I boot up Windows with D: disconnected and then connect it once Windows has loaded. The KB article about this problem says that I have to run chkdsk, but I can't get to somewhere where I can run it, because I get a BSOD whenever the disk is connected! Even the recovery console BSODs if D: is connected.

    The final option in the KB is to boot the computer with the Windows 2000 setup disks, where you edit some file to manually disable the ntfs.sys driver and then run chkdsk. The problem is that I don't have any floppy drive. Is there any way to boot the built-in recovery console with ntfs.sys disabled, or to burn the floppy version to a CD after you've extracted and modified it on the hard drive? Right now the Windows XP bootable floppy creator (2) is asking me which floppy drive to extract to, which I can't answer because I have none :/ Other solutions to the root problem are also appreciated :)

    (2) http://www.microsoft.com/downloads/details.aspx?FamilyID=55820edb-5039-4955-bcb7-4fed408ea73f&displaylang=en
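
    One other avenue, offered as a sketch rather than a known fix: a Linux live CD can usually see the disk even when Windows BSODs on it, and the ntfsfix tool (part of ntfsprogs/ntfs-3g) resets the NTFS journal and flags the volume so Windows runs chkdsk on the next boot. The /dev/sdb1 device name below is an assumption - check what the D: disk actually shows up as with fdisk -l first:

      # from a Linux live CD, identify the data disk
      fdisk -l
      # reset the journal and schedule a chkdsk for the next Windows boot
      ntfsfix /dev/sdb1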

    Read the article

  • lxc bandwidth control using tc

    - by kumar
    I am trying to restrict bandwidth inside my containers. I have tried using the following commands, but I think it is not taking effect:

      cd /sys/fs/cgroup/net_cls/
      echo 0x1001 > A/net_cls.classid  # 10:1
      echo 0x1002 > B/net_cls.classid  # 10:2

      tc qdisc add dev eth0 root handle 10: htb
      tc class add dev eth0 parent 10: classid 10:1 htb rate 40mbit
      tc class add dev eth0 parent 10: classid 10:2 htb rate 30mbit
      tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup

    Here A and B are containers created with these commands:

      lxc-execute -n A -f configfile /bin/bash
      lxc-execute -n B -f configfile /bin/bash

    whereas configfile contains only this entry:

      lxc.utsname = test_lxc

    After starting the containers, I started vsftpd inside container A and tried to access the files using an ftp client from another machine. Then I killed vsftpd in container A, started vsftpd in container B and tried to access the files using the ftp client from another machine. I cannot observe any difference in performance; for that matter it is nowhere near 40mbit/30mbit. Please correct me if anything is wrong here.
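
    One detail that might explain the lack of effect (a guess, offered as a sketch rather than a verified fix): the cgroup filter only marks packets from processes whose PIDs are listed in that net_cls cgroup's tasks file, so the vsftpd started inside each container has to actually be a member of A or B - and with lxc the container's processes may land under a different cgroup path (e.g. .../net_cls/lxc/A). The container names and the use of pidof below are assumptions:

      # make sure the ftp daemon's PID(s) really are in container A's net_cls cgroup
      for pid in $(pidof vsftpd); do
          echo "$pid" > /sys/fs/cgroup/net_cls/A/tasks
      done
      # sanity-check membership and the classid
      cat /sys/fs/cgroup/net_cls/A/tasks
      cat /sys/fs/cgroup/net_cls/A/net_cls.classid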

    Read the article

  • Windows - Website inaccessible only on Windows PCs in LAN

    - by DorentuZ
    For several days now, a website has been inaccessible from a single PC in the LAN. On the other PCs, it works just fine, and it's just a single website that's not accessible as far as I know. The website generates a timeout in every single web browser I've tried (IE8, Firefox and Chrome). However, traceroute, nmap and telnet all work just fine. I've even tried multiple user accounts and safe mode, but that didn't work either. As a side note: using a Linux live CD did work and I could access the website without any problems. The hosts file is the Windows default, and the IP and DNS settings on the network adapter are normal as well. No strange processes are running and no viruses found. According to TCPView and netstat there are connections to the domain, but every request in the browser results in a timeout. Any idea what's happening?

    Update: All of the computers on the network running Windows (any version) are showing this problem now. The website is still working under Linux and Mac OS X. So, it has to be related to some kind of Windows update (although I haven't installed any on one computer in the past week, which I've set to do manual updates only)...
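
    One thing the tests above don't cover is packet size: traceroute, nmap and telnet exchange small packets, while a full HTTP response does not, so a path MTU / fragmentation problem can produce exactly this "connects but the browser times out" pattern on some stacks and not others. A sketch of an MTU probe from one of the affected Windows machines (the host name is a placeholder; 1472 = 1500 minus 28 bytes of IP/ICMP headers):

      rem send pings with the don't-fragment bit set; lower -l until it succeeds
      ping -f -l 1472 www.example.com
      ping -f -l 1400 www.example.com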

    Read the article

  • Computer Comparison - which is "better"

    - by David Murdoch
    A company I work with recently replaced their old server and gave it to me. Their old server is a Dell PowerEdge 2600. I've been playing with the machine and even installed Windows Server 2008 on it... and it seems to run it pretty well. Here are the specs for the two machines:

    Dev machine:
    - AMD Athlon64 3000+ @ 2.38 GHz (overclocked from 1.8 GHz [@ 280x8.5] - it is stable-ish)
    - Memory (RAM): 1x1GB OCZ PC3200 (dual-channel)
    - 300 GB HD
    - OS: Windows XP Pro (32-bit)
    - SuperPi 1M digit test: 40 seconds

    Dell PowerEdge 2600 server:
    - Intel Xeon CPU 2.8 GHz
    - Memory (RAM): 512MBx2 (PC2700, not dual-channel)
    - 68 GB HD (RAID 5)
    - OS: Windows Server 2000 (32-bit)
    - SuperPi 1M digit test: 56 seconds [using 1 processor] (themes and Aero Glass UI turned off, of course)

    I use my computer to regularly run Photoshop CS5, Illustrator CS5, Flash CS5, 5 browsers (Chrome, FF, IE, Safari, Opera), iTunes, Visual Studio 2010, and Kaspersky Internet Security 2010 [sometimes simultaneously :-) ]. The SuperPi test has my dev machine coming in about 30% faster than the server machine... though this could be due to the server running "Vista" with background processes prioritized.

    Do you think it would be realistic/advantageous for me to move from my dev machine to the Dell PowerEdge 2600? Is it possible to install additional DVD drives/burners on the server? Can I install my internal 300 GB hard drive on the server? Can I add some USB 2.0 ports?

    Note: I'll probably install Win XP Pro on the dev machine if I do switch. If not, are there any creative and useful ways for me to take advantage of this server (with the goal of faster computing)?

    Read the article

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've set up a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like they are some base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not; git-show says it's an unknown revision.

    I set up the repo as follows:

      git svn init http://svn.ip.addr/repo
      git svn fetch svn-remote

    My script looks like this:

      cd /gitsvn/dir
      git svn fetch svn-remote
      git svn push pub

    The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this:

      fatal: unable to run 'git-svn'
      Counting objects: 21, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (10/10), done.
      Writing objects: 100% (11/11), 59.08 KiB, done.
      Total 11 (delta 8), reused 0 (delta 0)
      To /gitpub/repo.git
         360faf5..a153b0d  trunk -> trunk

    The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how do I stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.
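
    The "fatal: unable to run 'git-svn'" line appearing under cron but not in an interactive shell often comes down to cron's minimal PATH: git locates its git-svn helper via PATH/exec-path, and when it can't, partial temporary files can be left behind in .git. This is a guess at the cause, not a confirmed one; a sketch of the script with the environment made explicit (use `git --exec-path` to see where the helpers actually live):

      #!/bin/sh
      # cron runs with a very small PATH; include git's helper directory explicitly
      PATH=/usr/local/bin:/usr/bin:/bin:$(git --exec-path)
      export PATH
      cd /gitsvn/dir || exit 1
      git svn fetch svn-remote
      # the original script runs "git svn push pub"; a plain push to the "pub" remote is assumed here
      git push pub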

    Read the article

  • Huge host CPU usage in idle vmware guest. Ubuntu 10.04 host, Vista SP2 guest

    - by themesandmodules
    I'm experiencing huge host CPU usage with an idle VMware guest.

    Host: Ubuntu 10.04 32-bit, 2.6.32-24-generic-pae (very new install, i.e. 24 hours ago). Hardware is a Dell XPS M1530 laptop, 4GB RAM, Intel Core 2 Duo T9300 2.50GHz. The virtualization setting "VT" or something is enabled in my BIOS.

    Guest: Completely fresh install of Windows Vista, upgraded to the latest SP2 with all Windows updates installed. 1024 - 1512MB RAM allocated. Absolutely no other software installed on it, apart from VMware Tools.

    Situation: When the guest is doing absolutely nothing, I watch with the Sysinternals process watcher on the guest. This shows that the System Idle Process is between 70 and 99%, usually around 95%. No actual process is doing anything. On the host, I watch with top; I get CPU usage of 20% - 80%, usually around 30%.

    What I have tried:
    - Single and dual processor available to guest - no change.
    - Turning off all peripherals to the guest - no network, drives, USB etc. - no change.
    - Turning off 3D acceleration for the guest - perhaps a small improvement, or no change.
    - Upping allocated RAM to the guest from 1024MB to 1512MB - no change.
    - Yelling at VMware - no change.

    I have experienced a similar issue in the past, which was solved by setting the guest to have 1 CPU. This time that hasn't worked.

    Read the article

  • Why can't I install Java?

    - by Patrick
    I've been searching high and low for someone who can help me. I've been trying for a month - a month - to install Java on my new PC, to no avail. No tech support forum can seem to help me.

    It all started while playing Tekkit one day. I kept running out of memory (using the 32-bit JRE 7u45), so I decided to install the 64-bit version. I uninstalled the 32-bit version first, for some reason, and downloaded the 64-bit runtime. In the installer, I go through all the normal screens until the installation progress bar appears. Then, it just sits there. No progress is made. No CPU is used by the installer, or any of its dependencies. The installer will stay like this for hours, days, and in one case a whole week without doing anything at all. I've tried installing older versions, the 32-bit version, even Java 6, and none of them will install. UAC is disabled; I've run regedit, CCleaner, and any other "fix-it" program there is. It's getting to the point where I may just have to wipe my hard drives and start over. I have several applications that require Java, so this is an absolute necessity. Please, please, someone have the answer.

    Here are my system specs:
    - Intel i7-3770k
    - ASRock Z77 Extreme3
    - Samsung 840 Pro SSD
    - WD Caviar Black 1TB

    Read the article

  • Debian software RAID 1: boot from both disks

    - by bsreekanth
    I newly installed Debian Squeeze with software RAID, in the way also given in this thread. I have 2 HDDs with 500 GB each. For each of them, I created 3 partitions (/boot, / and swap):

    - I selected the hard drive and created a new partition table.
    - I created a new partition of 1GB, specified to use it as a physical volume for RAID, used it for /boot and enabled the bootable flag.
    - I created another partition of 480 GB, specified to use it as a physical volume for RAID, and used it for /.
    - I created another partition and used it for swap.

    Then the RAID configuration: through the Configure RAID menu - create MD device - (2 for the number of drives, 0 for spare devices). Next, select the partitions you want to be members of /dev/md0: I selected /dev/sda1 and /dev/sdb1 (for /boot). Next, select the partitions you want to be members of /dev/md1: I selected /dev/sda6 and /dev/sdb6 (for /). No RAID for the swap partitions. 'Finish partitioning and write changes to disk' -- then finish the rest of the install like normal.

    Everything is OK now, except I am not sure how to test my RAID config. When I pull the power of one HDD, it only boots from one disk. I read in some forum that I may have to install GRUB manually on the other. In Debian Squeeze, there is no grub command. I am not sure how to make my software RAID bootable from both disks. Also, please comment on my steps above - anything unusual? I configured the /boot partitions of both disks to be boot=yes. Not sure whether that is OK. Thanks, Bsr
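
    On Squeeze the bootloader is GRUB 2 (the grub-pc package), and by default it is written only to the first disk's MBR, so losing that disk leaves the mirror unbootable even though the RAID-1 /boot is intact. A sketch of putting GRUB on the second disk as well - the /dev/sda and /dev/sdb device names are assumed from the partition names above:

      # as root, install GRUB 2 to the MBR of the second disk too
      grub-install /dev/sdb
      update-grub
      # or choose both disks interactively and let the package remember the choice
      dpkg-reconfigure grub-pc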

    Read the article

  • Dedicated server with a lot of storage and good support - and cost-effective

    - by Martin Burger
    Hello, I am from Germany and looking for a dedicated server located in the US with a lot of storage: 750 - 1500 GB. CPU speed and amount of memory are secondary; the server will host large amounts of media files via http and ftp - the basic task is to help people exchange media files.

    In Germany, there are some good offers, like "Root Server EQ6" at www.hetzner.de. For example, that company provides support of high quality, and their plans are very cost-effective. The plan mentioned above costs about $90 per month and provides two 1500 GB SATA-II HDDs (software RAID 1). In the US, I found (amongst others) Go Daddy and Rackspace. Go Daddy offers some "Storage Monster" plans that include 2 x 1,000 GB hard drives for about $180 per month - already twice as much as Hetzner above. However, I found some blog and forum entries that complain about the support provided by Go Daddy. Rackspace seems to provide decent support, but they are very "upscale". Their dedicated servers are customizable and start at $419 - thus, about 4.5 times as much as Hetzner.

    Can anybody recommend a solution / plan that is comparable to the one by Hetzner? Or are prices for dedicated servers in general much higher than in Germany? Regards, Martin

    Read the article

  • SQL 2008 disk layout on a budget (this is for database mirroring)

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007. Right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have; the organization is currently all allocated on SAN storage, and it was like pulling teeth to even get what I have to work with now.

    My disk setup on the principal is as follows:
    - RAID 1 for OS
    - RAID 10 for logs
    - RAID 10 fiber on SAN for high-IO databases
    - RAID 10 SATA on SAN for content databases

    My question in regards to the principal server is: where should I place tempdb? I thought about placing it on the fiber RAID 10, which will be hosting my high-IO SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against.

    Now let's talk about the mirror server: it is not connected to the SAN; it is all local - 6 15k SAS drives. My question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything? Any help you can provide is much appreciated.

    Read the article

  • LSI MegaRAID on Linux back to Optimal after degradation, but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

      Adapter 0 -- Virtual Drive Information:
      Virtual Drive: 0 (Target Id: 0)
      Name                 :
      RAID Level           : Primary-1, Secondary-0, RAID Level Qualifier-0
      Size                 : 2.727 TB
      Mirror Data          : 2.727 TB
      State                : Optimal
      Strip Size           : 256 KB
      Number Of Drives per span: 2
      Span Depth           : 3
      Default Cache Policy : WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
      Current Cache Policy : WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
      Default Access Policy: Read/Write
      Current Access Policy: Read/Write
      Disk Cache Policy    : Disk's Default
      Encryption Type      : None
      Is VD Cached         : No

    But now I'm getting this RAID BIOS POST message:

      Your battery is either charging, bad or missing, and you have VDs configured
      for write-back mode. Because the battery is not currently usable, these VDs
      will actually run in write-through mode until the battery is fully charged
      or replaced if it is bad or missing.

    (Image: http://cl.ly/image/1h1O093b1i2d) So could a battery issue have caused the problem? Here is the information about the battery:

      BatteryType: iBBU
      Voltage: 4001 mV
      Current: 0 mA
      Temperature: 22 C
      Battery State : Operational
      BBU Firmware Status:
        Charging Status              : None
        Voltage                      : OK
        Temperature                  : OK
        Learn Cycle Requested        : No
        Learn Cycle Active           : No
        Learn Cycle Status           : OK
        Learn Cycle Timeout          : No
        I2c Errors Detected          : No
        Battery Pack Missing         : No
        Battery Replacement required : No
        Remaining Capacity Low       : No
        Periodic Learn Required      : No
        Transparent Learn            : No
        No space to cache offload    : No
        Pack is about to fail & should be replaced : No
        Cache Offload premium feature required     : No
        Module microcode update required           : No

    Where can the problem be? I've disabled the alarms, but I get them if they're enabled. I don't know how to find the root of the problem.
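
    Given the "Charging Status: None" and the fallback to WriteThrough under the "No Write Cache if Bad BBU" policy, a common next step is to check the BBU from the OS and force a relearn cycle; if the battery never comes back healthy, it likely needs replacing. A sketch using MegaCli - the binary name and install path vary (often MegaCli64 under /opt/MegaRAID/MegaCli), so treat those as assumptions:

      # current battery state as the controller sees it
      MegaCli -AdpBbuCmd -GetBbuStatus -aALL
      # kick off a manual learn cycle
      MegaCli -AdpBbuCmd -BbuLearn -aALL
      # once the BBU is healthy again, the VD should return to WriteBack on its own,
      # given the "No Write Cache if Bad BBU" policy shown above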

    Read the article

  • Input/output error when connecting to a server via ssh

    - by Shehzad009
    Hello, I seem to be having a problem when connecting to an Ubuntu server via ssh. When I log in, I get this error:

      Could not chdir to home directory /home/username: Input/output error

    It seems like my home folder is corrupt or something. I cannot ls in the home directory, and I can't cd into my username directory. As root I cannot ls in the home directory either, or in any directory in /home. I notice as well that when I save or quit in vim, I get this error at the bottom of the page:

      E138: Cannot write viminfo file /home/root/.viminfo!

    Any ideas?

    EDIT: this is what happens if I type in these commands:

      # mount
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      none on /sys type sysfs (rw,noexec,nosuid,nodev)
      fusectl on /sys/fs/fuse/connections type fusectl (rw)
      none on /sys/kernel/debug type debugfs (rw)
      none on /sys/kernel/security type securityfs (rw)
      none on /dev type devtmpfs (rw,mode=0755)
      none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
      none on /dev/shm type tmpfs (rw,nosuid,nodev)
      none on /var/run type tmpfs (rw,nosuid,mode=0755)
      none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
      /dev/mapper/RAID1-lvvar on /var type xfs (rw)
      /dev/mapper/RAID5-lvsrv on /srv type xfs (rw)
      /dev/mapper/RAID5-lvhome on /home type xfs (rw)
      /dev/mapper/RAID1-lvtmp on /tmp type reiserfs (rw)

      # dmesg | tail
      [1213273.364040] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213274.084081] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213309.364038] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213310.084041] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213345.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213346.084042] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213381.365036] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213382.084047] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213417.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213418.084063] Filesystem "dm-4": xfs_log_force: error 5 returned.

      # fdisk -l /dev/sda
      Cannot open /dev/sda
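
    The repeated "xfs_log_force: error 5" lines (error 5 is EIO) mean two of the XFS filesystems have hit I/O errors and shut themselves down, which matches the chdir failure on /home - and "Cannot open /dev/sda" suggests the underlying disk or RAID device needs checking first. As a sketch of the filesystem-level recovery for /home once the storage is readable again (unmounting may require single-user or rescue mode; the device name is taken from the mount output above):

      # unmount the affected filesystem
      umount /home
      # dry run first, then the real repair
      xfs_repair -n /dev/mapper/RAID5-lvhome
      xfs_repair /dev/mapper/RAID5-lvhome
      mount /home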

    Read the article

  • Unable to install mod_wsgi on CentOS 5.5 VPS...

    - by jasonaburton
    I am trying to install mod_wsgi on my VPS, but it won't work. This is what I am doing: wget http://modwsgi.googlecode.com/files/mod_wsgi-2.5.tar.gz tar xzvf mod_wsgi-2.5.tar.gz cd mod_wsgi-2.5 ./configure --with-python=/opt/python2.5/bin/python After I run the above command, I get this error: checking for apxs2... no checking for apxs... no checking Apache version... ./configure: line 1298: apxs: command not found ./configure: line 1298: apxs: command not found ./configure: line 1299: /: is a directory ./configure: line 1461: apxs: command not found configure: creating ./config.status config.status: creating Makefile config.status: error: cannot find input file: Makefile.in Through some research I've discovered that I need to modify my command: ./configure --with-apxs=/usr/local/apache/bin/apxs \ --with-python=/usr/local/bin/python But, /usr/local/apache/ doesn't exist, or so that's what it is telling me. If it doesn't exist, how do I create it with all the files needed, or if apache is located elsewhere on my VPS where would it be located? I'd also like to mention that I ran a command to install apache before this entire deal: yum install httpd so I assumed that was all I needed but apparently not (I am very new at all this server administration stuff so please be gentle) EDIT: This is the tutorial that I have been using to get this all set up: http://binarysushi.com/blog/2009/aug/19/CentOS-5-3-python-2-5-virtualevn-mod-wsgi-and-mod-rpaf/ I got stuck at the heading "Installing mod_wsgi" Thanks for any help!

    Read the article
