Search Results

Search found 18191 results on 728 pages for 'single board'.

Page 545/728 | < Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >

  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is:

        Warning: Could not boot.
        Warning: /dev/root does not exist

    To try and determine which part of the process is failing, I have broken the process down into separate stages.

    Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (available here) to a pen drive and see if that will install.

        dd if=/path/to/iso of=/dev/sdc

    Burning this image was successful and it installed without issue.

    Step 2: Extract the ISO, repackage it and burn it to a pen drive and see if that will install. PLEASE NOTE: the final command in this section has been broken across multiple lines for ease of reading; it was actually run as a single command on one line.

        mkdir -p /mnt/linux
        mount -o loop /tmp/linux-install.iso /mnt/linux
        cd /mnt/
        tar -cvf - linux | (cd /var/tmp/ && tar -xf - )
        cd /var/tmp/linux
        xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso \
            -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
            -boot-load-size 4 -boot-info-table \
            -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    This ISO was then burnt to a pen drive as before.

        dd if=/path/to/iso of=/dev/sdc

    The ISO burnt to the pen drive with no problem and will boot. I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19" I receive the errors highlighted above. This means the kickstart file is not to blame, but rather the repackaging of the ISO. Is there something I am missing in the repackaging process? Any input would be great! NOTE: if it is of any help, I attempted Step 2 with an Ubuntu Server ISO and the process was successful.
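    A hedged aside (not something stated in the post): with Fedora media, dracut's "/dev/root does not exist" warning often means the installer could not find its stage2 image by volume label, because the kernel lines in isolinux.cfg reference the original ISO label while the rebuilt ISO was given a new label via -V. A minimal check against the extracted tree from Step 2:

        # list every label the boot entries expect; the label passed to -V
        # when rebuilding should match one of these exactly
        grep -o 'LABEL=[^ :]*' /var/tmp/linux/isolinux/isolinux.cfg | sort -u

    If the output differs from "NewFedoraImage", rebuilding with -V set to that exact label (spaces appear escaped as \x20 in the cfg) is worth trying.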

  • Tri-head linux system with Xmonad: is it possible to have HW acceleration

    - by progo
    What means exist to have three monitors, all controlled by Xmonad, and have hardware 3D acceleration as well? I had the pleasure of using three monitors earlier this year, and while Xmonad and Xinerama handle three monitors easily, I had to throw in an extra display driver and also let go of Nvidia's own TwinView (which is a hack on top of Xinerama). This left me with no HW acceleration and some flickering, as double buffering wouldn't work with certain applications. However, the three monitors handled so beautifully that I had a hard time coming back to two.

    I understand the easiest way to achieve a HW-accelerated tri-head combo is to split into two Xorgs. I wouldn't be able to switch windows between the Xorgs, so I'm not really into this solution. What's more, having a cheap and old PCI card along with an even slightly better PCIe one seemed to slow things down. Even when I occasionally disabled the third monitor in the Xorg configuration, I couldn't get HW acceleration to work. Only after I physically disconnected the old PCI card could I get the games back in business.

    Would a Matrox Dual/Tri-head2go and a powerful Nvidia GPU do the trick? I understand Xmonad can be configured to "believe" that a "single" 3360x1050 display (as the Dualhead2Go will merge it) is actually two different ones, so that Xmonad's Mod-w and Mod-e would work properly there?

  • Domain Controller DNS Best Practice/Practical Considerations for Domain Controllers in Child Domains

    - by joeqwerty
    I'm setting up several child domains in an existing Active Directory forest and I'm looking for some conventional wisdom/best practice guidance for configuring both the DNS client settings on the child domain controllers and the DNS zone replication scope.

    Assuming a single domain controller in each domain, and assuming that each DC is also the DNS server for its domain (for simplicity's sake): should the child domain controller point to itself for DNS only, or should it point to some combination (primary vs. secondary) of itself and the DNS server in the parent or root domain? If a parent/child/grandchild domain hierarchy exists (with a contiguous DNS namespace), how should DNS be configured on the grandchild DC?

    Regarding the DNS zone replication scope: if each domain's DNS zone is stored on all DNS servers in the domain, then I'm assuming a DNS delegation from the parent to the child needs to exist and that a forwarder from the child to the parent needs to exist. With a parent/child/grandchild domain hierarchy, does each child forward to the direct parent for the direct parent's zone, or to the root zone? Does the delegation occur at the direct parent zone or from the root zone? If all DNS zones are stored on all DNS servers in the forest, does that make the above questions regarding the replication scope moot? Does the replication scope have some bearing on the DNS client settings on each DC?

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.

    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), which states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is there to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that addresses in fc00::/7 are meant to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world?

    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.

    Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there is almost no explanation at the smaller home-network level. Can I see a scope ID AND CIDR notation used together? I.e., fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
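    As a side note grounded in RFC 4193 rather than in the question itself: locally assigned ULAs are the fd00::/8 half of fc00::/7 (the L bit set to 1), with a 40-bit Global ID that is supposed to be generated randomly rather than hand-picked. A minimal sketch of generating such a prefix:

        #!/bin/bash
        # pull 5 random bytes (40 bits) and format them as an RFC 4193 /48 prefix,
        # e.g. fd3a:1b2c:3d4e::/48; home subnets then become /64s underneath it
        # (fd3a:1b2c:3d4e:4::/64 would be a hypothetical stand-in for 192.168.4.0/24)
        GLOBAL_ID=$(od -An -tx1 -N5 /dev/urandom | tr -d ' \n')
        printf 'fd%s:%s:%s::/48\n' \
            "${GLOBAL_ID:0:2}" "${GLOBAL_ID:2:4}" "${GLOBAL_ID:6:4}"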

  • Shell not finding binary when attempting to execute it (it's _definitely_ there)

    - by eegg
    I have a specific set of binaries installed at ~/.GutenMark/binary/<binaries...>. These were previously working correctly, but for seemingly no reason, when I attempt to execute them the shell claims not to find them:

        james@anubis:~/.GutenMark/binary$ ls -al
        ...
        -rwxr-xr-x 1 james james 2979036 2009-05-10 13:34 GUItenMark
        ...
        -rwxrwxrwx 1 james james   76952 2009-05-10 13:34 GutenMark
        ...
        -rwxr-xr-x 1 james james   10156 2009-05-10 13:34 GutenSplit
        ...
        james@anubis:~/.GutenMark/binary$ ./GutenMark
        bash: ./GutenMark: No such file or directory
        james@anubis:~/.GutenMark/binary$

    I've tried to isolate the cause of this, with no success. The same happens with zsh, bash, and sh (all giving their appropriate "file not found" error -- this is definitely not a strange output from the binary itself). The same happens either as user james or as root. Nor is it directory specific; if I move the whole directory installation, or just a single binary, to anywhere else, the same happens when attempting to execute it. The same even happens when I put the directory in $PATH and just execute "GutenMark". It also happens when I execute it from a script (I've tried Python's commands module -- though this appears to just call sh).

    The problem appears to be specific to the binaries themselves, yet they appear to never actually get executed. Any ideas?
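    A hedged guess at a diagnostic path (not confirmed from the post): when the shell reports "No such file or directory" for a binary that is plainly present and executable, the file it cannot find is usually the ELF interpreter (dynamic loader) recorded inside the binary -- the classic case being a 32-bit executable on a 64-bit system without 32-bit compatibility libraries. A quick check:

        # what kind of binary is it, and which loader does it ask for?
        file ~/.GutenMark/binary/GutenMark
        readelf -l ~/.GutenMark/binary/GutenMark | grep -i interpreter
        # if it requests /lib/ld-linux.so.2, confirm that loader actually exists
        ls -l /lib/ld-linux.so.2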

  • Looking to replace Ghost with FSArchiver or Clonezilla, few questions about capabilities

    - by Daniel Wright
    I work for a PC repair company and we are looking into setting up a dedicated machine with externally accessible SATA bays to clone hard drives as a safety net in case something goes wrong during a repair. We currently use a SATA/PATA-to-USB bridge called MagicBridge and Norton Ghost on any workstation, but we're looking to move away from Ghost. We have a computer with a large RAID5 array with Windows Server 2008 Standard currently installed, but this can be replaced with a flavour of *nix. I have some experience with Clonezilla, but FSArchiver also seems like a suitable replacement. My Head Technician wants to know if my chosen solution (probably Clonezilla or FSArchiver, but I'm open to free suggestions) is capable of:

    - Cloning a degraded RAID, such as a single drive from a RAID1 mirror, without complaining
    - Producing images that are easily mountable (he'd prefer them to be mountable in Windows, but if there is no other easy way, *nix should be fine), akin to Ghost Explorer, so individual files can be restored, as well as being able to do bare-metal restores

    My apologies for the wordiness, but I wanted to be thorough in my explanation. Thanks for any suggestions or tips :)

    EDIT: I've just found out that Clonezilla has a workaround for cloning RAID1 drives.
    EDIT2: Found the answer to both of my questions; apparently I wasn't phrasing my searches right. Could this question be deleted please?
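    On the FSArchiver side specifically, a hedged sketch (the archive path below is hypothetical): .fsa images are not directly mountable, but the tool can list the filesystems inside an archive and restore a single one to a spare partition, from which individual files can then be copied out:

        # inspect what an archive contains, then restore filesystem 0 to a partition
        fsarchiver archinfo /mnt/backups/disk1.fsa
        fsarchiver restfs /mnt/backups/disk1.fsa id=0,dest=/dev/sdb1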

  • open_basedir problems with APC and Symfony2

    - by Stephen Orr
    I'm currently setting up a shared staging environment for one of our applications, written in PHP 5.3 and using the Symfony2 framework. If I only host a single instance of the application per server, everything works as it should. However, if I then deploy additional instances of the application (which may or may not share the exact same code, dependent on client customisations), I get errors like this:

        [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Warning: require(/var/www/vhosts/application1/httpdocs/vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php): failed to open stream: Operation not permitted in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193
        [Tue Nov 06 10:19:23 2012] [error] [client 127.0.0.1] PHP Fatal error: require(): Failed opening required '/var/www/vhosts/application1/httpdocs/app/../vendor/doctrine-common/lib/Doctrine/Common/Annotations/AnnotationRegistry.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/vhosts/application2/httpdocs/app/bootstrap.php.cache on line 1193

    Basically, the second site is trying to require the files from the first site, but due to open_basedir restrictions it can't do that. I'm not willing to disable open_basedir, as that only masks the problem instead of solving it, and it creates a dependency between applications that should not be present. I initially believed this was related to a Symfony2 error, but I've now tracked it down to an issue with APC; disabling APC also solves the error, but I'm concerned about the performance impact of doing so. Does anyone have any suggestions on what I might be able to do?

  • My Computer hangs for a few minutes just after startup, and then is fine.

    - by EvilChookie
    So I just built myself a reasonably beefy computer, and I installed Windows 7 on it. However, I start the machine up each morning and within a few minutes the computer will semi-hang. That is, the mouse is responsive, and most of the time I can open Task Manager or a new tab in Chrome. Occasionally windows will be labelled as "Not responding". Then the machine will get over its problem and will be nice and quick until I turn it off.

    Here are my specs:

        CPU: AMD Phenom-II X4 955 Black (Quad Core, 3.2GHz)
        RAM: 4GB of DDR3 1300
        MOBO: ASUS M4A785T-M (latest BIOS)
        HARD DRIVES: 2x1TB Western Digital Caviar Blacks in RAID-0
        OS: Windows 7 Ultimate x64
        GPU: ASUS GT240 1GB

    I believe this issue relates to the RAID array, as I didn't have the lockup problem before I created the array. I purchased a second drive and reformatted after creating the RAID array, since the single drive was a little on the pokey side (compared to the rest of the computer).

    What I have tried:

        Updated RAID drivers
        Malware checks
        Windows Updates
        Disabling unnecessary services
        Checking CPU and disk activity, which appears to be low (via Resource Monitor)
        No strange errors in the error log

    Any thoughts?

  • "Delivered-To" Header in Exchange

    - by Kaii
    In some SMTP server implementations (e.g. Postfix) you can enable Delivered-To and X-Original-To (or [X-]Envelope-To) headers that will be added to your email. This is very helpful with distribution lists to determine which e-mail address the mail has been redirected to. So, when the mail has been sent to [email protected], you can see in the Delivered-To or Envelope-To header that it has been redirected (distributed) to [email protected], which is one of many other e-mail addresses that are linked to a single mailbox.

    How do I find which address was used to deliver a mail to a specific mailbox on Microsoft Exchange 2010? Looking at the plain message (with all headers), I cannot find any information that the mail arrived via the address [email protected]. I think I need the Delivered-To header (or a similar one) to be set by Microsoft Exchange when a mail is delivered via distribution lists. Is there any way to enable such a header in Exchange 2010? I need it so that our ticket system (OTRS) correctly recognizes where the ticket belongs. Adding all the e-mail addresses of all distribution lists to the system configuration is not the right solution.

    And if there is a solution for Exchange 2010, is it possibly also applicable to Exchange 2007?

  • Trying to run a codeigniter app on custom php

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with PHP 5.2, and my dev box has 5.3, so some stuff doesn't work anymore. I didn't want to upgrade PHP and risk the other app on the server having issues. Anyway, I compiled a custom PHP and added the following to a single .conf file, /etc/httpd/conf.d/zcid.conf, with all the other conf files:

        <VirtualHost *:80>
            DocumentRoot /var/www/cid/app
            ServerName sub.example.co.nz
        </VirtualHost>

        <Directory "/var/www/cid/app">
            authtype Basic
            authname "oh dear how did this get here i am no good with computer"
            authuserfile /path/to/auth
            require valid-user

            RewriteEngine on
            RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin)
            RewriteRule ^(.*)$ /index.php/$1 [L]

            AddHandler custom-php .php
            Action custom-php /cgi-bin/php53.cgi
        </Directory>

    In /var/www/cid/app I have the cgi-bin folder and the php53.cgi that I copied from /usr/local/php53/bin/php-cgi. But now when I navigate to the subdomain it says:

        The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server.

    And if I try to browse to /cgi-bin it says (as it is supposed to?):

        You don't have permission to access /cgi-bin/ on this server.

    Quite confused now. Anyone know what to do here? Thanks :)
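    One hedged observation (not verified against the rest of the server's configuration): the Action line makes Apache request the URL /cgi-bin/php53.cgi, so that URL has to be mapped to the directory that really contains the wrapper, the directory must allow CGI execution, and the wrapper must be executable. A sketch of the directives that appear to be missing, written in the same global style as the existing <Directory> block; adjust the URL prefix if another site on the box already owns /cgi-bin/:

        ScriptAlias /cgi-bin/ /var/www/cid/app/cgi-bin/
        <Directory "/var/www/cid/app/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>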

  • Computer turns itself on after any off mode

    - by Patrick
    Whenever I shut down my computer, or put it in sleep/hibernate, it turns on after two seconds. It doesn't POST, it just powers on and then idles. To actually turn it off, I switch off the PSU. The problem is that now, whenever I switch the PSU on and try to boot, it doesn't always turn on. It takes a good amount of flicking the PSU switch on and off before the motherboard lights up.

    So far I've determined the things it's not:

    - It's not caused by the mouse or network waking up the computer. I've been able to go into hibernate for the past year, and all "wake on X" settings in the BIOS are disabled.
    - It's not a scheduled task waking up the computer at a given hour; it occurs every single time.
    - It's not due to an upgrade or new installation, since I haven't done either in a very long time.

    I'm sure it's a hardware issue. So I'd like to know: is my PSU dead, or the motherboard? The PSU is an Antec Earthwatts 600W; the motherboard is an Asus P5Q-E, both one year old.

  • tail -f and then exit on matching string

    - by Patrick
    I am trying to configure a startup script which will start Tomcat, monitor catalina.out for the string "Server startup", and then run another process. I have been trying various combinations of tail -f with grep and awk, but haven't got anything working yet. The main issue I am having seems to be with forcing the tail to die after grep or awk have matched the string. I have simplified it to the following test case. test.sh is listed below:

        #!/bin/sh
        rm -f child.out
        ./child.sh > child.out &
        tail -f child.out | grep -q B

    child.sh is listed below:

        #!/bin/sh
        echo A
        sleep 20
        echo B
        echo C
        sleep 40
        echo D

    The behavior I am seeing is that grep exits after 20 seconds; however, the tail will take a further 40 seconds to die. I understand why this is happening -- tail will only notice that the pipe is gone when it writes to it, which only happens when data gets appended to the file. This is compounded by the fact that tail seems to be buffering the data and outputting the B and C characters as a single write (I confirmed this by strace). I have attempted to fix that with solutions I found elsewhere, such as using the unbuffer command, but that didn't help.

    Anybody got any ideas for how to get this working as I expect? Or ideas for waiting for a successful Tomcat start (I'm thinking about waiting for a TCP port to know it has started, but suspect that will become more complex than what I am trying to do now)? I have managed to get it working with awk doing a "killall tail" on match, but I am not happy with that solution. Note I am trying to get this to work on RHEL 4.
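    A hedged sketch of one way around it (assuming a reasonably standard RHEL 4 userland where pkill is available): read the tail through a while loop and kill just that one tail once the marker line shows up, instead of using killall:

        #!/bin/sh
        # block until "Server startup" appears in catalina.out, then clean up
        tail -n +1 -f catalina.out | while read -r line; do
            case "$line" in
                *"Server startup"*)
                    # the tail feeding this pipe is a child of the main script,
                    # so kill only that one rather than every tail on the box
                    pkill -P $$ -x tail
                    break
                    ;;
            esac
        done
        # ...start the follow-up process here...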

  • Multi domain on my dedicated server with Apache2

    - by x4vier
    I set up a server with Ubuntu 10.04 Server Edition. It has worked for a long time with a single domain name. Now I want to add another domain which will point to a new directory. I tried to change my Apache2 configuration, but it does not seem to work properly. Here is my /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            DocumentRoot /var/www/mydomain
        </VirtualHost>

    Here is my /etc/hosts:

        127.0.0.1       localhost
        **.***.133.29   sd-***.****.fr sd-****
        **.***.133.29   mediousgame.com

        # The following lines are desirable for IPv6 capable hosts
        ::1      localhost ip6-localhost ip6-loopback
        ****::0  ip6-localnet
        ****::0  ip6-mcastprefix
        ****::1  ip6-allnodes
        ****::2  ip6-allrouters
        ****::3  ip6-allhosts

    With this configuration, when I try to access mydomain it redirects to the /var/www/ content. Do you have any idea how to redirect it to the right folder?
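    A hedged first check rather than an answer: on Apache 2.2 (Ubuntu 10.04), name-based hosting only kicks in when a NameVirtualHost *:80 directive matches the <VirtualHost *:80> blocks, and any request that matches no ServerName/ServerAlias falls through to the first vhost -- which here is the one serving /var/www/. Apache can report how it has mapped things:

        # dump the parsed virtual host table and which vhost is the default
        apache2ctl -S
        # confirm the configuration parses and NameVirtualHost is in effect
        apache2ctl configtest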

  • Virtual machines with failover setup

    - by kimmmo
    We have three servers, and our plan is to run a number of virtual machines on them in such a manner that, if one of the nodes blows up, we can either quickly or seamlessly get a spare running on another node. In addition to the normal networking, they're interconnected via dual 10Gbit NICs, so networked RAID/mirroring shouldn't be a problem. The guest VMs are mostly going to be running text-mode Linux, but of course it wouldn't hurt to be able to spin up a non-mission-critical Windows guest for running Visual Studio or checking IE compatibility of a web app.

    We've spent some time trying to get some magical cloud setup running using Stackops and Crowbar, but it started to look like they were offering way too much and were too complicated for our needs. The next candidate, I think, is Ubuntu 11.04 Server + KVM + Ganeti + DRBD, unless you can come up with a suggestion for a better solution that we have missed.

    Requirements:

    - Installation should be simple, or at least understandable without being in the dev team
    - A browser interface for creating and managing VMs is a nice bonus
    - A single node's hardware failure should cause minimal downtime for VMs that were running on that node
    - Adding more nodes should be possible without shutting down the VMs
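    For what it's worth, a hedged sketch of what the Ganeti route looks like once a cluster exists (node and instance names are hypothetical, and option spellings should be checked against the installed Ganeti version):

        # create a VM whose disk is a DRBD mirror between two nodes
        gnt-instance add -t drbd -n node1.example.com:node2.example.com \
            -o debootstrap+default -s 20G vm1.example.com
        # planned move of the VM to its secondary node (e.g. before maintenance)
        gnt-instance failover vm1.example.com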

  • Have it fixed or buy a new one?

    - by Workshop Alex
    My dual-monitor system has just become a single-monitor system again: the older monitor decided it would be nice to just turn to black. It's a Samsung LCD monitor and is over three years old. I'm not sure if the warranty is still valid, but I wonder which option would be more efficient: 1) have the monitor fixed for a small amount, or 2) buy a new monitor for a slightly bigger amount. When monitors were still expensive I wouldn't have doubted about this and would just have had my monitor repaired. But prices are so low nowadays (and repairs are expensive) that I wonder if it's worth the trouble... Of course, I'm in no hurry, since I still have another monitor. It's just that I liked the dual-monitor setup.

    Solved! Just ordered a new monitor: a Samsung SyncMaster T260HD 25.5". Much more than it would cost me to just have my old one repaired, but I noticed that this one has a built-in TV tuner, plus speakers. It's way more expensive than a repair, but it's worth the additional value it provides.

  • How to set up the jdbc driver to connect to hsqldb from libreoffice?

    - by rumtscho
    I am trying to "split" a LibreOffice .odb file into an HSQL database and an OpenOffice document containing forms and macros. I am trying to follow the instructions from this thread:

        Within a few minutes you can convert your embedded HSQLDB to a stand-alone HSQLDB which is just a very fine database engine.
        1) Download and extract the current version from http://hsqldb.org/ and point the Java class path in Tools > Options > Java to the new hsqldb.jar
        2) Extract the database folder from your embedded database and rename the files data, properties, script to name.data, name.properties, name.script where "name." is an arbitrary name prefix.
        3) Connect a Base document to an existing JDBC database such as jdbc:hsqldb:file:/home/chenier/hsqldb/name;default_schema=true;shutdown=true;hsqldb.default_table_type=cached;get_column_name=false (again, "name" refers to your own file name prefix). This local single-user connection gives you much more than the embedded HSQLDB.
        4) Copy queries, forms and reports from the old database over to the new one.

    The wizard presents me with a window expecting two inputs: a "Datasource URL" and a "JDBC driver class". As far as I can tell, the tutorial above only tells me what to put into the Datasource URL. As for the JDBC driver class, I have no idea what to write into this field. I tried the fully-qualified name of the Java class, org.hsqldb.jdbc.JDBCDriver, as given in the HSQLDB documentation. When that failed, I tried the physical path /var/lib/hsqldb/lib/hsqldb.jar (although that should have been unnecessary, because I had first pointed to this path as described under 1 and then restarted LibreOffice). In both cases, "Test class" failed with the message "The JDBC driver could not be loaded". OpenOffice's documentation doesn't say anything sensible about the field; it was something like "enter the JDBC driver in this box". Any ideas what I should enter there to get the connection working?
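    A hedged note on the driver class itself: 1.8.x-era hsqldb.jar files only ship org.hsqldb.jdbcDriver, while the org.hsqldb.jdbc.JDBCDriver name exists in the 2.x series, so it is worth confirming which class the jar on the class path actually contains (jar path taken from the post):

        unzip -l /var/lib/hsqldb/lib/hsqldb.jar | grep -Ei 'jdbc.*driver'

    Whichever of the two class names shows up there is the one to type into the "JDBC driver class" field.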

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and an NFS server. On the NFS server (a Linux server) we have several EBS volumes that are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point.

    Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, this would render our site unusable for the hours while Amazon is working on it. We want to try to prevent this downtime. I was thinking that we can prevent server downtime by setting up a new server temporarily, attaching the EBS drives (the RAID volume) to that server, and having our web servers point there during the maintenance. This is a very high-risk operation, since it involves several terabytes of our production data.

    What would be the safe way to move our logical RAID drive (md0) over to a new Amazon instance? I was hoping that I could start by building the new server, mounting the EBS volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works and thus have it mounted on two instances at the same time, but I don't believe that is possible with the way filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
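    A hedged outline of the volume move itself, assuming the array is quiesced first (the mount point, volume ID and instance ID below are placeholders, and the detach/attach shown with the AWS CLI could equally be done from the console or the API tools of the day):

        # on the current NFS server: stop exports, unmount, stop the array cleanly
        umount /export/media
        mdadm --stop /dev/md0
        mdadm --detail --scan          # note the array UUID for reassembly

        # detach each EBS volume and attach it to the standby instance
        aws ec2 detach-volume --volume-id vol-11111111
        aws ec2 attach-volume --volume-id vol-11111111 \
            --instance-id i-22222222 --device /dev/sdf

        # on the standby instance: reassemble from the superblocks and mount
        mdadm --assemble --scan
        mount /dev/md0 /export/media

    Mounting the same md array on two instances at once is indeed not possible with an ordinary filesystem, so a clean stop on one side before assembly on the other is the safe sequence.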

  • Scheduling Automatic Backups for Virtual Private Web Server running CENTOS 6.3 and WHM

    - by Oliver Farrell
    I'm pretty new to administering my own VPS, but thus far I am finding it quite a compelling experience. There's something quite refreshing about having complete control over everything it does. One thing that I would like to look at is a suitable backup solution (a few times a day).

    My current setup is as follows: I'm running a CentOS 6.3 VPS with a single 25GB hard drive, solely for the purpose of hosting websites, and I'm using WHM & cPanel for administering them. I now plan on adding an additional hard disk and hooking it up to my VPS.

    What I'm not sure about is how I get the two disks talking and get the backup process going. I'm not a seasoned SSH user, so I don't really know where to start. I'm hosting with Serverlove (one of the best hosting providers I've used) and am provided with a number of unique identifiers for each hard disk, so I imagine these may play a part in linking them together. I appreciate that this is a little vague (I'm clutching at straws), but any assistance is very much appreciated.
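    A hedged sketch of the plumbing once the second disk shows up on the VPS (the device name, mount point and schedule are assumptions; WHM's own backup tool can be pointed at the new mount once it exists):

        # one-time setup: filesystem, mount point, persistent mount
        mkfs.ext4 /dev/vdb
        mkdir -p /backup-disk
        echo '/dev/vdb  /backup-disk  ext4  defaults  0 2' >> /etc/fstab
        mount /backup-disk

        # a few times a day, mirror the cPanel/WHM backup directory to the new disk
        # (add via "crontab -e" for root; /backup is cPanel's usual default):
        #   0 */8 * * * rsync -a --delete /backup/ /backup-disk/backup/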

  • Drivers and firmware for the LiteOn ihap-122-9 DVD drive

    - by Sandy
    I'm trying to replace the DVD drive in my old PC. LiteOn.com is a mess and I can't find a single working driver or firmware update there, or anywhere. Windows XP tries to use a default, generic driver dated 2001 (about 9 years before this drive even existed).

        http://www.firmwarehq.com/download_1..._6L0H.EXE.html

    This correctly finds my LiteOn ihap-122-9 DVD drive. It correctly finds that I'm currently using firmware 6L0F. It correctly tries to install 6L0H. It completes 100% but then just fails and says "contact your vendor". Does anyone know why? Where can I actually get drivers... and firmware updates... that actually work for the ihap-122-9? Apparently, the newest driver IS the one made 9 years before the drive existed. (Unbelievable.) And the latest firmware is the one that is already in the drive. (Common.)

    No other drive I've had in this computer ever had a problem. This brand new LiteOn is doing this:

    - Opening MY COMPUTER now takes 60 seconds.
    - MY COMPUTER marking the drive as "DVD F:" takes another 30 seconds.
    - MY COMPUTER showing the "Batman II" title takes another 15 seconds.
    - Clicking and running the movie takes another 30 seconds for the main menu to appear.
    - The movie starts about 20 seconds later.
    - The movie runs fine for 1-2 seconds... then stops for 5 seconds... then starts again and plays for 1-2 seconds. This repeats for 2 hours.

    (It happens with all store-bought DVDs and all home-made DVDs.)

  • Installing and running a guest OS on KVM-qemu with only serial console access

    - by nixnotwin
    I am trying to install a BSD distro with virt-install. With a Linux distro I used this:

        virt-install -n debian -r 1024 --vcpus=1 --accelerate -v \
            --disk /var/kvm/installation-disks/debian.img,size=6 \
            --nographics --network=bridge:br0,model=ne2k_pci,mac=52:54:00:66:68:09 \
            -l http://ftp.de.debian.org/debian/dists/squeeze/main/installer-amd64/current/images/ \
            -x console=ttyS0,115200

    This loads the installer directly from the online mirror. With Fedora I used this mirror:

        http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/

    Are there such mirrors for FreeBSD or OpenBSD? The reason I want directly installable FTP/HTTP mirrors is that I can access my physical server only via ssh, and it doesn't have an X server or a window manager to give me a VNC GUI.

    When I tried installing CentOS 6 with an online mirror, I was able to finish the installation via the serial console, but after I rebooted it the serial console never worked for me. I tried everything possible -- editing the menu.lst, inittab and securetty files. Fedora 16 booted fine from the serial console, but got stuck when it loaded the anaconda installer. I tried editing the FreeBSD ISO installation media by adding a serial console option to the boot options, and the installation was successful, but I couldn't boot into it because it wasn't giving console access. I couldn't edit any files, as the UFS partition cannot be mounted with write access on my Ubuntu Server 10.04. Only Debian Squeeze worked well; it worked for me even without editing a single configuration file.

    I want to have CLI versions of Fedora/CentOS and FreeBSD/OpenBSD. But it looks like there isn't any hope for me to have them, as I have to depend on a serial console to do everything.

  • ProxyPass for specific vhost with mod_rewrite

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
            <IfModule mod_rewrite.c>
                # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
                RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
                RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
                RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
            </IfModule>
        </VirtualHost>

    This makes it so that www.foo.server.company.com will serve the document root of server.company.com:/home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it to be applied to that one site. I tried something like:

        <VirtualHost *:80>
            <Directory /home/www/foo>
                UseCanonicalName Off
                ProxyPreserveHost On
                ProxyRequests Off
                ProxyPass /services http://www-test.foo.com/services
                ProxyPassReverse /services http://www-test.foo.com/services
            </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
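    A hedged sketch of an alternative layout (the vhost and hostname below are assumed examples that would sit alongside the existing catch-all rewrite vhost; with name-based hosting, a vhost whose ServerName matches is chosen before the default one): ProxyPreserveHost belongs at virtual-host level, and inside a <Location> block ProxyPass takes no local path, which is exactly what the two error messages are complaining about:

        <VirtualHost *:80>
            ServerName www.foo.server.company.com
            DocumentRoot /home/www/foo
            ProxyPreserveHost On
            ProxyRequests Off
            <Location /services>
                ProxyPass http://www-test.foo.com/services
                ProxyPassReverse http://www-test.foo.com/services
            </Location>
        </VirtualHost>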

  • How does one open multiple tabs in TextWrangler?

    - by Closure Cowboy
    No, I'm not bluffing. I really can't figure this out. The setup: I went to File -> Open, and then selected a directory rather than a file. As expected, a directory tree opened on the left side of my document. Hooray! I can easily view the files' structure in my Rails project. So, I make a few changes in a file, and then I click on a different file in the directory tree. My problem: TextWrangler then asks me whether I want to save my changes. Huh? I say "No", and the new document doesn't open at all. Great. I try hitting Command+N (new document). A new window opens. Ughhhh. How the heck do I open documents in a new tab? Note: I have set the "New & opened documents" behavior to "Open in the front window". This does not change the behavior described (i.e. when a directory is opened rather than a single file).

  • Can't pin modified shortcuts to the Windows 7 task bar

    - by Coder
    I have a shortcut to a .bat file which I pin to the task bar using a workaround (via another icon), and this seems to work. Now I make a copy of that shortcut, point it to a different .bat file, rename it, and I can't pin this one to the task bar. I have to find some other new unused icon to pin, pin it, then modify it manually.

    The other problem this causes is that Windows seems to track which icons were pinned even if they are modified after the fact. As such, if I use Media Player as my dummy icon, pin it, then alter its name and shortcut to point to a .bat file, I can't re-pin Windows Media Player, and if I select unpin from Windows Media Player, it unpins my shortcut to my .bat file.

    I can't believe how ridiculous this is. Is there a way to pin anything I want to the taskbar (i.e. a .bat file in my case) that does not cause problems like this? Is there an easy way I can copy an existing shortcut, modify it and re-pin it to the taskbar? The reason I want to copy it is that I start a .bat file (in particular git bash) and I set properties on the window like Quick Edit, increase the screen buffer, and set its position and size manually. I don't want to have to do this to every single icon I want to pin, since they will be identical aside from the shortcut URL.

  • Dell R320 RAID 10 with CacheCade

    - by Geekman
    I'm looking for a higher-performance build for our 1RU Dell R320 servers, in terms of IOPS. Right now I'm fairly settled on a 4 x 600 GB 3.5" 15K RPM SAS RAID 1+0 array. This should give good performance, but if possible I also want to add an SSD cache into the mix; I'm just not sure if there's enough room. According to the tech specs, there are only up to 4 total 3.5" drive bays available. Is there any way to fit at least a single SSD drive alongside the 4x3.5" drives? I was hoping there's a special spot to put the cache SSD drive (though from memory, I doubt there'd be room). Or am I right in thinking that the cache drives are simply drives plugged in "normally", just as any other drive, but nominated as CacheCade drives in the PERC controller? Are there any options for having the 4x600GB RAID 10 array and the SSD cache drive too? Based on the tech specs (with up to 8x2.5" drives), maybe I need to use 2.5" SAS drives, leaving another 4 bays spare -- plenty of room for the SSD cache drive. Has anyone achieved this using 3.5" drives, somehow?

  • GA 8KNXP Rev1.0: 4GB installed, only 3.5 recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB memory modules into my GA-8KNXP system, which should add up to 4 GB. The specs from the manual say: "Maximum memory support: 4GB. If all six slots are utilized, slots 5+6 may only be equipped with single-sided RAM modules." And so I did.

    Anyway: the BIOS counts up to 3.5 GB and finishes there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory-mapping issue or a hardware issue.

    I've tried removing only one of the 512 MB memory modules, leaving 5 modules in place, but that just stopped the system from powering on correctly (the screen stays black, although fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports only memory in slots 1 to 4 and ignores slots 5+6, so it only detects 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think technically the chipset should be able to offer and utilize the complete 4 GB memory range.

    Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?
