Search Results

Search found 8472 results on 339 pages for 'boot'.


  • Arch linux - strange behaviour after installing fglrx

    - by kosto
    I have a problem with drivers on Arch Linux. I installed Catalyst through the unofficial catalyst repo, as the wiki says:

        pacman -S catalyst catalyst-utils
        aticonfig --initial

    After this operation I rebooted the system. KDM loaded successfully, but when I tried to switch to a console (Ctrl+Alt+F1/F2/F3) I saw only strange dots, as if the pixels of the text had been scattered across the whole screen. I was able to go back to KDM and enter my account details, though. That gave me a hang just before KDE loaded. Here's a video showing the above actions: http://glothriel.org/arch/arch_problem.ogg Does anybody know what caused the problem? I can still chroot in to fix things. Thanks for any interest. The same thing happens with GNOME/GDM; this is my second try at installing Catalyst on Arch. The open drivers drain the battery twice as fast. EDIT: OK, I found a solution, so I'm posting it in case someone else shares my problem. Catalyst does not support KMS, so you need to disable it in GRUB. You must know where your /etc and /boot partitions are mounted; if you have only one partition for /, it's even simpler. Mount / on /mnt:

        mount /dev/sdaX /mnt   # X is the number of the partition where / is installed
        arch-chroot /mnt
        nano /etc/default/grub

    and add the line:

        GRUB_CMDLINE_LINUX="nomodeset"

    Save and quit, then run (this will delete your Windows GRUB configuration):

        grub-mkconfig -o /boot/grub/grub.cfg
        exit
        umount /mnt
        reboot
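
    A quick way to confirm the fix actually took effect after rebooting - a minimal check, assuming the radeon module was the KMS driver previously in use:

        grep -o nomodeset /proc/cmdline   # confirms the flag reached the kernel
        lsmod | grep radeon               # should print nothing once KMS is off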

    Read the article

  • Kickstart installation from USB -- Kickstart location

    - by dooffas
    After managing to get a Fedora ISO to rebuild successfully (for a USB stick) after adding a kickstart file (http://serverfault.com/questions/548405/), I now have an issue with locating the kickstart file on the USB media. When installing from a CD-ROM you can kickstart simply by adding this parameter to the boot line:

        linux ks=cdrom

    This will kickstart, provided the kickstart file is named ks.cfg and is in the root of the disc. Obviously this will be different for the USB drive, so from my research I assumed that this line would do the job:

        linux ks=hd:sdb1:/ks.cfg

    Evidently it does not. I get an error informing me that the drive is already mounted and cannot be remounted. EDIT: actual error message:

        mount: /dev/sdb1 is already mounted or /run/install/tmpmnt0 busy
        Warning: Can't get kickstart from /dev/sdb1:/ks.cfg

    To test that the syntax was correct, I placed the kickstart file on another USB stick and used the same command to grab ks.cfg from the new location:

        linux ks=hd:sdc1:/ks.cfg

    This does work (provided the USB sticks are enumerated in order: boot - sdb1, kickstart - sdc1). The install will kickstart and complete with no issue. Obviously, having to use two pen drives is somewhat frustrating and unreliable. Is there a way around this?
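
    One way to stop depending on device enumeration is to refer to the boot stick by filesystem label rather than device name - a sketch, assuming this Fedora's Anaconda understands the LABEL= syntax (worth verifying for the version in use) and a FAT-formatted stick; the label KSBOOT is illustrative:

        # from any Linux machine: label the stick's first partition
        dosfslabel /dev/sdb1 KSBOOT
        # then at the boot prompt, refer to it by label instead of device
        linux ks=hd:LABEL=KSBOOT:/ks.cfg

    With the label in place it no longer matters whether the stick comes up as sdb1 or sdc1, so the single-stick setup should work.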

    Read the article

  • Windows 32-bit and 64-bit and GPT

    - by MrLane
    I know similar questions have been asked before across several sites, but the answers, at least to me, have been confusing and conflicting. My understanding has always been that 64-bit Windows will create and use GPT disks just fine, but will not boot from them without a UEFI BIOS. Also, my understanding WAS that 32-bit Windows could not use GPT at all and so is always restricted to 2.2 TB disks, which was another reason to move to 64-bit, on top of the 4 GB memory limit. But I have now read that this isn't correct: 32-bit Windows will create and use GPT disks just as 64-bit does. The only restriction is that you can't boot 32-bit Windows from GPT even if you DO have a UEFI BIOS? I don't think much of the literature has explained this well. There are several tools floating around for creating virtual disks or 2.2 TB + 0.8 TB partition schemes and such for 32-bit systems - why, when it seems you can use GPT in 32-bit Windows anyway? It also seems that people blame MS for lagging behind with respect to all of this, but the issue appears to be BIOS manufacturers not supporting UEFI rather than MS not supporting GPT... Is my new understanding now correct?
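
    The data-disk half of this is easy to verify directly on a 32-bit system - a sketch using diskpart, assuming a spare disk is available (disk 1 below is illustrative, and clean erases it completely):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> clean
        DISKPART> convert gpt
        DISKPART> create partition primary

    If this succeeds (it should on 32-bit Vista or later; 32-bit XP is the version with no GPT support at all), then the remaining restriction really is only about booting, which needs both UEFI firmware and 64-bit Windows.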

    Read the article

  • wrt54gl reboots; troubleshooting steps?

    - by Bill
    I am using about 10 WRT54GLs in a small school, with a combination of stock firmware and Tomato 1.25, slowly moving towards all Tomato. We have had these devices installed for several years without problems. Recently, more and more of the units have started to reboot spontaneously, usually during high-traffic times (but not always). For the most part the rebooting is not critical for us, but during boot the WRT54GLs temporarily revert to 192.168.1.1 on the LAN ethernet ports and conflict with a critical server that's already installed at that IP. (Yes - we plan to move the server off that address, but it is an involved process.) Both Tomato and the stock firmware (several versions, from recent to several years old) exhibit the same problem: random reboots, and reverting to 192.168.1.1 until the firmware boot process finishes. Here are my questions: 1) Is there any way to prevent the WRT54GLs from reverting to 192.168.1.1 during the boot process? I was thinking of doing a custom firmware mod, although I hate to go that direction. 2) What steps can I take to troubleshoot the reboots? Only some of the WRT54GLs reboot, which is odd; others stay online for weeks and months without issues. Thanks.
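
    For question 2, a timeline of exactly when each unit resets is the most useful evidence, and it can be gathered without touching the firmware - a sketch for a Linux box on the same LAN, assuming interface eth0 and that the critical server's MAC is known (the address below is illustrative):

        #!/bin/sh
        # Any MAC other than the server's answering on 192.168.1.1 means a
        # WRT54GL is mid-boot; log it with a timestamp.
        SERVER_MAC="00:11:22:33:44:55"
        while true; do
            MAC=$(arping -c1 -I eth0 192.168.1.1 2>/dev/null |
                  grep -oE '([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}' | head -n1)
            if [ -n "$MAC" ] && [ "$MAC" != "$SERVER_MAC" ]; then
                echo "$(date) $MAC on 192.168.1.1 (router rebooting?)" >> /var/log/wrt-watch.log
            fi
            sleep 10
        done

    Correlating the timestamps with traffic load, temperature, and which units are affected helps separate load-triggered crashes from failing hardware. Tomato can also send its syslog to a remote host, which gives the same timeline per device with less guesswork.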

    Read the article

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the GRUB 2 bootloader. The motivation: I have 4 x 1 TB 7200 rpm disks. Two are newer and faster (up to 200 MB/s) and the other two are slower (up to 140 MB/s), and I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a total speed of up to ~480 MB/s. That is roughly 4 x 120 MB/s, so the RAID-0 runs at the speed of its slowest device. My idea is to first create a separate RAID-0 device, md0, from 500 GB partitions of the slower hard disks. Theoretically, this md0 device should run at 2 x 140 = 240~280 MB/s. After that, I'd add md0 to a RAID-0 with the faster disks, finishing at up to 3 x 200 = 600 MB/s. The stripe width for this outer RAID would be twice that of the underlying RAID of slow disks. My questions: Is this possible, or am I missing something? Will it work as expected? Can I boot from such a consolidated RAID device? Any better ideas? Any pitfalls? I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters, and so on). PS: The speed is needed for a home virtualization server, and just for experience/fun. Reliability is provided via regular automatic backups to a separate device. PPS: I also considered using different stripe widths for the hard disks of different speeds within a single RAID, but mdraid does not seem to support that.
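
    A minimal sketch of the nested layout with mdadm (device names and partition sizes are illustrative, and these commands destroy whatever is on the partitions):

        # inner stripe across the two slower disks
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
        # outer stripe across the two faster disks plus the inner array
        mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/md0
        # record both arrays so the initramfs assembles them in the right order
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u

    On the boot question: the root filesystem can live on md1, since the initramfs assembles nested arrays, but keeping a small separate /boot (a plain partition or a RAID-1 pair) that GRUB 2 can read on its own is the safer arrangement than asking GRUB to walk a stripe-on-stripe.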

    Read the article

  • Can't Get Mac Mini to turn on - no screen, no beep, only the fan, power light, and optical drive noise

    - by pibyers
    I have an Intel-based Mac mini, mid-2007 (model A1176). This is the computer my kids use, so I don't use it regularly. It had been working fine until one day my kids told me it no longer works. The computer will not boot up: when I turn it on, the fan spins, the white power light on the front turns on, and there is a sound that appears to come from the optical drive (rather than the hard drive). I get nothing on the monitor, and no chime or other startup sounds from the computer. Here is what I've tried so far, to no avail: 1) Swapped out the monitor early on, since I figured that was the weak link - no change. 2) Reset the PMU - no change. 3) Tried to boot from the system disk - the mini loaded the DVD into the drive, but nothing else (and I can't eject the disc to get it back). 4) Started the computer in target disk mode connected to another Mac - I never got a chime, and the disk never showed up on the other Mac. I'm about out of ideas, apart from scrapping the computer. Does anyone have any ideas I can try? Again, nothing has been done to the computer in at least six months, since I upgraded the RAM. I'm also still on Leopard. Thanks.

    Read the article

  • How to recover deleted NTFS partitions?

    - by Frank
    Last night I made a terrible mistake. I was reinstalling Windows and I accidentally deleted all the partitions on all my drives. I realized my mistake before I had created any new partitions, so nothing has been written to any of the disks. I'm currently at my wits' end about what I'll do if I don't manage to recover the data. I have two 1 TB drives and a 2 TB drive; one of the 1 TB drives was the one I was supposed to be reformatting, so there is nothing to recover there. I am currently in a Linux live CD. In this article, http://support.microsoft.com/kb/245725, Microsoft advises recreating the exact same partition but choosing not to format it, and then recovering the backup boot sector from the end of the NTFS volume. But none of the drives I want to recover are bootable drives. Does that mean I do not need to rewrite the boot sector? As in, if I simply recreate a partition of the same size, will it see all my data? Or would I be better off using the TestDisk utility? http://www.cgsecurity.org/wiki/TestDisk Please help, I'm desperate!
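
    TestDisk is usually the less error-prone route here, since it finds the old NTFS boundaries itself instead of relying on you to recreate identical partitions by hand - a sketch from the live CD (the package name assumes a Debian/Ubuntu-based live environment; /dev/sdb is illustrative):

        sudo apt-get install testdisk
        sudo testdisk /dev/sdb
        # in the menus: [Proceed] -> [Intel] -> [Analyse] -> [Quick Search];
        # if the old NTFS partitions appear, select them and [Write] the
        # recovered table, then reboot and check the data before touching
        # the next drive.

    Because nothing was written after the deletion, a quick search normally finds the partitions via their backup boot sectors, so the KB article's manual size-matching isn't needed.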

    Read the article

  • Fedora 9 not recognizing hard drive

    - by Andrew Jones
    I am installing Fedora 9 on a PC (specifications at the bottom) and have had a lot of trouble getting it to recognize the hard drive. To get the Fedora installer to recognize the disk in the first place, I had to pass "ata_generic.all_generic_ide=1 pci=nomsi" to the kernel, after which it installed OK. However, when I boot the installed OS, I get a "could not find filesystem '/dev/root'" error. I tried passing the same arguments to the kernel at boot as I did when installing, but to no avail. I have tried using the default LVM layout and defining manual ones, but it made no difference. There is no option in the BIOS to enable AHCI or anything like that; in fact the BIOS is very limited in most respects. I can get into the system by using the installation CD in rescue mode (with those extra kernel parameters), but I'm not sure what to do once in there... Unfortunately, using a more recent version of Fedora, or another Linux distribution altogether, isn't an option because of outside constraints - which is annoying, since I know for a fact that Ubuntu works fine on this setup. I have not been using Linux that long, so treat me like an idiot - I am one. Any help would be greatly appreciated, thanks! System spec: Intel Atom Z530 CPU @ 1.6 GHz, Intel US15W chipset, 1 GB DDR2, 160 GB SATA hard disk (Samsung HM16HI), 1000 Mbit/s Ethernet port, Phoenix BIOS.
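
    Since the installed kernel needs the same workaround as the installer, the first things to check from rescue mode are the GRUB config and the initrd - a sketch, assuming Fedora 9's GRUB legacy and a chroot /mnt/sysimage from the rescue prompt (the kernel version string below is illustrative):

        # /boot/grub/grub.conf - the parameters must be on the kernel line itself
        title Fedora (2.6.25-14.fc9.i686)
                root (hd0,0)
                kernel /vmlinuz-2.6.25-14.fc9.i686 ro root=/dev/VolGroup00/LogVol00 ata_generic.all_generic_ide=1 pci=nomsi
                initrd /initrd-2.6.25-14.fc9.i686.img

    If the parameters are already there and '/dev/root' still fails, the initrd probably lacks the ata_generic driver; rebuilding it from the chroot should help:

        mkinitrd --with=ata_generic -f /boot/initrd-2.6.25-14.fc9.i686.img 2.6.25-14.fc9.i686

    (Substitute the installed kernel's actual version string; in the rescue chroot, uname -r reports the rescue kernel, not the installed one.)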

    Read the article

  • Laptop GPU apparently blew up; motherboard doesn't even turn on its power LED with the GPU installed

    - by leladax
    EDIT (was: "Laptop automatically shuts down after 2 seconds"): If I take the GPU out, the motherboard's LED turns on, but when it attempts to power up and boot it turns off after 2 seconds (the fans spin normally during that short period). With the GPUs in, there is not even an attempt to boot. It's an SLI motherboard for a Toshiba (model X200-219). If I take out just one of the GPUs (they sit on top of each other), that surprisingly also lets the motherboard turn on (as when both are out), but it still turns off after 2-3 seconds - the same behaviour. I wonder if it's the GPU that produces the "turns off after turning on" behaviour, or something else. Has anyone seen this behaviour with a blown GPU, or could it be another component? Original question (before the edit; the earlier post was closed as a duplicate): I'm trying to investigate which component produces this behaviour. Other indications suggest it may be the GPU, but I wonder if anyone knows more. It's a Toshiba Satellite X200. Description: AC power shows power being fed normally; when turned on, the fan works and it appears to be starting up, but after about 2 seconds (sometimes closer to 4) it shuts down, with only the "AC power connected" LED still on.

    Read the article

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple-monitors world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD 4300/4500 Series display adapter, leaving the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same adapter. Both monitors displayed the boot sequence after I fired up good old Sarastro2 (Asus P5Q Pro Turbo - Dual Core E5300 - 2.60 GHz). The Asus lagged half a second behind the Princeton until the Windows 7 Ultimate SP1 boot-up was complete. Then the Asus displayed "HDMI NO SIGNAL" and went into standby; the Princeton stayed lit up as before. Both monitors appear in the Screen Resolution setup display, and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton over to the still-sleeping Asus. "Multiple displays:" is set to "Extend these displays", the orientation is "Landscape", and the resolutions are both set to the "recommended" values. Both monitors report that they are working properly in the advanced properties display. What am I doing wrong; what am I missing? Never mind any opinions about the different resolutions of the two monitors - I can always unhook the Princeton and give it to a Goodwill store if I don't like the setup. I just would like to make it work. Any constructive help is very much appreciated. Thank you.

    Read the article

  • Data recovery on a data HDD (no OS)

    - by aCuria
    I am helping a family member with a dead hard disk: a Seagate 200 GB 3.5" HDD in one of those old-school external enclosures. The problem was that Windows failed to detect the hard disk when it was plugged in through USB. I removed the disk from its enclosure and plugged it into my desktop PC. The BIOS detects it at POST, but Windows 7 then refuses to boot: it gets stuck on the loading screen with the glowing Windows logo, and Safe Mode doesn't help either. What options do I have before resorting to professional data recovery? EDIT: Someone changed the title to something completely different from what I was asking; I've changed it back. To recap: 1) There are two drives, disk A (dead) and disk B (my OS disk). 2) With only B connected, everything works fine. 3) With A and B connected, the machine POSTs fine but Windows won't load. 4) A has NO OS; it is pure data. It came from an external HDD enclosure that doesn't belong to me, and I'm trying to do data recovery on it.
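
    Since Windows hangs whenever the suspect disk is attached, the usual next step is to image the disk from a Linux live environment and work on the copy - a sketch, assuming GNU ddrescue (package gddrescue on Debian/Ubuntu), that /dev/sdb is the dying disk (verify with dmesg first; picking the wrong device here is destructive), and a third disk mounted at /mnt/backup with 200+ GB free:

        # pass 1: grab the healthy areas quickly, skipping bad spots
        sudo ddrescue -n /dev/sdb /mnt/backup/diskA.img /mnt/backup/diskA.map
        # pass 2: retry the bad areas a few times
        sudo ddrescue -r3 /dev/sdb /mnt/backup/diskA.img /mnt/backup/diskA.map
        # expose the partitions inside the image (needs a recent util-linux)
        sudo losetup --find --show --partscan --read-only /mnt/backup/diskA.img
        sudo mount -o ro /dev/loop0p1 /mnt/recovered

    The map file lets ddrescue resume across runs and records where the bad sectors are; files can then be copied out of /mnt/recovered at leisure, without the failing disk spinning any longer than necessary.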

    Read the article

  • Setting up Windows SBS 2008 network on Xen

    - by samyboy
    I'm trying to install a Windows SBS 2008 server in a Xen environment. The OS boots fine; unfortunately, I can't figure out how to get the network settings working. Dom0 is a Debian Lenny host running around 10 virtual servers. Here are the settings I'm using in the hosted Windows SBS:

        IP address: 10.20.0.8
        Network mask: 255.255.0.0
        Gateway: 10.20.0.1

    Note that during the installation stage, Windows set the netmask to 255.255.255.0 without letting me choose. Gross. Windows SBS tells me I have a "limited connection". I can't ping the gateway or any other IP except localhost and its own IP (10.20.0.8). Here is the Xen config file:

        kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
        builder = 'hvm'
        memory = '4096'
        device_model='/usr/lib/xen-3.2-1/bin/qemu-dm'
        acpi=1
        apic=1
        pae=1
        vcpus=1
        name = 'winexchange'

        # Disks
        disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w', 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]

        # Networking
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0' ]

        # Video
        stdvga=0
        serial='pty'
        ne2000=0

        # Behaviour
        boot='c'
        sdl=0

        # VNC
        vfb = [ 'type=vnc' ]
        vnc=1
        vncdisplay=1
        vncunused=1
        usbdevice='tablet'

    This config works with other Windows XP domUs. I tried changing the ne2000 value between 0 and 1 with no effect. I am far from having good Windows administration skills, so I definitely need some help on this one. Thanks.
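
    One thing worth trying is pinning the emulated NIC model explicitly on the vif line - a sketch, assuming this Xen 3.2 qemu-dm honours the model= key (worth verifying against the xm documentation for this version); rtl8139 is a NIC that SBS 2008 has an in-box driver for:

        # Networking: pin the emulated NIC model
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, model=rtl8139, bridge=xenbr0' ]

    After a guest reboot, also confirm from dom0 (brctl show) that the guest's tap interface actually ends up enslaved to xenbr0 - a "limited connection" with an unreachable gateway often just means the frames never make it onto the bridge.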

    Read the article

  • Switch Windows 8 from a hybrid MBR/GPT => GPT only on Macbook Pro Retina

    - by Sid
    I used Disk Utility + the Boot Camp wizard to set up my hard drive for Windows 8 (final MSDN). Somewhere in that process, the Apple tools turned my GPT disk into a hybrid MBR/GPT. All 4 primary MBR partitions are used up, so when I try turning on BitLocker in Windows 8, it complains about not finding a System drive. I know that on Windows 8, BitLocker setup tries to create the ~200 MB system partition if it's missing; with all 4 MBR partitions filled, I suspect it can't create the system drive, therefore can't find it, and throws back an error like "BitLocker Setup could not find a target system drive. You may need to manually prepare your drive for BitLocker". I've already tried disabling hibernation, the swap file, etc. Now I'm thinking that if I were to get rid of the MBR scheme altogether, perhaps I could get along fine in the GPT world without MBR's 4-primary-partition limit. So: how can I get rid of the MBR tables in the hybrid scheme in a manner that leaves both Mac OS and Windows 8 working? Details: the hardware is a MacBook Pro Retina. The primary MBR partitions are consumed as follows:

        1. EFI partition
        2. HFS+ partition (encrypted, therefore "Apple_CoreStorage")
        3. HFS+ partition (recovery partition, contains the unencrypted Mac bootloader)
        4. NTFS partition (Windows 8 all-in-one partition)

    diskutil list output:

        sid-mbpr:~ sid$ diskutil list
        /dev/disk0
           #:                      TYPE NAME          SIZE       IDENTIFIER
           0:     GUID_partition_scheme              *251.0 GB   disk0
           1:                       EFI               209.7 MB   disk0s1
           2:         Apple_CoreStorage               160.0 GB   disk0s2
           3:                Apple_Boot Recovery HD   650.0 MB   disk0s3
           4:      Microsoft Basic Data Win8          90.1 GB    disk0s4

    GPT vs MBR addresses:

        sid-mbpr:~ sid$ sudo gptsync /dev/rdisk0
        Password:
        Current GPT partition table:
         #      Start LBA      End LBA  Type
         1             40       409639  EFI System (FAT)
         2         409640    312909639  Unknown
         3      312909640    314179175  Mac OS X Boot
         4      314179584    490233855  Basic Data

        Current MBR partition table:
         # A    Start LBA      End LBA  Type
         1              1       409639  ee  EFI Protective
         2         409640    312909639  ac  Apple RAID
         3      312909640    314179175  ab  Mac OS X Boot
         4 *    314179584    490233855  07  NTFS/HPFS

        Status: GPT partition of type 'Unknown' found, will not touch this disk.**

    **: Ignore this message; the gptsync tool is old and doesn't understand the UUID for "Apple_CoreStorage" / FileVault 2 partitions. Since the LBA addresses line up, it is safe to ignore.
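
    One way to drop the hybrid entries is gdisk's expert mode, which can replace the hybrid MBR with a fresh protective MBR while leaving the GPT untouched - a sketch, assuming gdisk is installed on the Mac side and, crucially, that Windows 8 boots via UEFI rather than BIOS/CSM (Boot Camp installs traditionally boot BIOS-style from the hybrid MBR, in which case this change makes Windows unbootable - verify that first):

        sudo gdisk /dev/disk0
        Command (? for help): x          # expert menu
        Expert command (? for help): n   # new protective MBR, discarding hybrid entries
        Expert command (? for help): w   # write table to disk and exit

    A protective MBR is a single 0xEE entry spanning the disk, so the four-primary-slot limit stops mattering; whether BitLocker is then satisfied still depends on Windows being able to create or find a GPT-style System partition.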

    Read the article

  • Setting up a dual boot by installing cloned partitions using Clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system with Windows 7 and Linux Mint. Here's the kicker: both are partition images I saved using Clonezilla, from different machines, and to make matters worse the Linux Mint image is formatted as LVM. I need both of these images specifically: the Windows one is a corporate image that I must use, and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't stop Clonezilla from messing up Windows' partition table when restoring Mint, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then the restored partition is unusable and I'm not sure how to fix the partition table. Here's what I'm doing:

        1. Use GParted to create the partitions: sda1, 40 GB, NTFS (Windows); sda2, extended, 70 GB; sda5, LVM2 PV, 69.99 GB (Linux); sda3, 500 MB (GRUB).
        2. Restore the Windows image into the sda1 partition with Clonezilla (keeping the partition table).
        3. Restore the Linux image into the sda5 partition with Clonezilla (not recreating the partition table).

    After all that, I can boot into Windows using the default MBR. I can use a rescue/repair CD to reinstall GRUB, which sees Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working, or where I might be going wrong? If there is any additional detail you need, please let me know and I'll edit - I know this is a lot.
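
    A sketch of making the restored LVM root visible to GRUB from a live CD (the volume group and LV names are illustrative - check yours with vgs and lvs after activating):

        sudo vgscan
        sudo vgchange -ay                      # activate the restored volume group
        sudo mount /dev/mint-vg/root /mnt      # mount the Mint root LV
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt
        update-grub                            # os-prober should now list Windows 7 too
        grub-install /dev/sda
        exit

    If vgscan finds no volume group at all, the image restored with -k1 may have landed at a different offset than the original PV occupied, in which case the LVM metadata needs inspecting before anything else will help.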

    Read the article

  • Reconfiguring, then deleting obsolete pagefile.sys from C: in one go using a batch script

    - by DanielSmedegaardBuus
    I'm trying to set up an automated script for a Windows XP installer. It's a batch script that runs on first boot after installation, and among other things I'm trying to remove the pagefile from C: entirely and put a 16-768 MB pagefile on D: instead. Here are my batch file instructions:

        echo === Creating new page file on D: ...
        cscript %windir%\system32\pagefileconfig.vbs /create /i 16 /m 768 /vo d: >nul
        echo.
        echo === Removing old page file from C: ...
        cscript %windir%\system32\pagefileconfig.vbs /delete /vo C:
        attrib -s -h c:\pagefile.sys
        del c:\pagefile.sys

    My problem is that while these are sane commands, removing the pagefile on C: requires a reboot before the last two commands can succeed. In other words, I have to first create the D: pagefile, then reboot and delete c:\pagefile.sys - otherwise I'm stuck with a c:\pagefile.sys that isn't even recognized by Windows itself (it reports a pagefile on D: and none on C:), obviously because pages have already been written to C:\pagefile.sys. The C: pagefile is in use at the time, so I cannot delete it, and Windows won't remove it when I configure it to stop paging to C:; it just leaves a large orphaned c:\pagefile.sys behind. So how would I accomplish this in one go - or two gos, if that's "batch scriptable"? :) I don't necessarily need it in one go; registering the last two commands to run after a reboot would also be great :) TIA, Daniel :)
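
    One way to register the deletion for the next boot is the same mechanism installers use: MoveFileEx with MOVEFILE_DELAY_UNTIL_REBOOT. Sysinternals ships a small wrapper, movefile.exe, that exposes it to batch files - a sketch, assuming movefile.exe has been copied alongside the script (an empty destination means "delete at next boot"):

        echo === Scheduling removal of old page file at next boot ...
        movefile.exe c:\pagefile.sys ""

    Windows processes the resulting PendingFileRenameOperations registry entry very early in the next boot, before the old pagefile can be opened again, so the orphan disappears without a second script run.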

    Read the article

  • Deploying workstations - best practices?

    - by V. Romanov
    Hi guys, I've been researching the subject of workstation deployment for a while, and have found a ton of info and dozens of different methods and tools, but no "best practice" method that doesn't lack at least one feature I consider required for the solution to be perfect. I'm currently interested in Windows workstation deployment, but if the tools can be extended to Linux, that's added value. I want the deployment tools I use to be able to do the following: 1) Hardware-independent - I want my image or installation to have a minimum of hardware and driver dependency, so that I can use a single image/package for all workstations. 2) Easily updatable - I want to be able to update my image as easily as possible, without redeploying/rebuilding/reimaging all configurations. 3) PXE-bootable deployment - I want the tools to boot off the network, so that I don't need a boot CD or USB key. 4) Scriptable, for minimum human input - ideally, the tool should run automatically after being booted and perform a "default" deployment (including partitioning) unless prompted otherwise; i.e., take a PC, hook it up, power it on, PXE boot, and forget about it until the OS is deployed. I have found no single product or environment that does all this. The closest I came is Windows Deployment Services and the WIM image format. I also checked out numerous imaging and deployment tools, including Clonezilla, Ghost, g4u, WPKG and others, but most of them lack the hardware independence and updatability features. We currently have a Symantec Ghost server set up that does imaging over the network, but I'm not satisfied with it, as it has all the drawbacks listed above. Do you have suggestions on how to optimize the process of workstation deployment? How do you deploy workstations in your organization? Thanks! Vadim.

    Read the article

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint onto my SSD - I forgot my PC actually came with one. When the installer detected a ~31 GiB disk to install to, I was a bit confused, since I had freed up 30 GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into Windows, and an Intel RAID disk utility came up saying I should disable acceleration on a disk, and that something could not be found. I cancelled it, but whatever I tried - recovery tools, setups, etc. - I just couldn't access the drive, which was apparently using the SSD as a cache. Since then I've been stuck. I tried setting the 'raid' flag on the disk from GParted; still nothing. I tried the diskraid utility from the Windows recovery disk; it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as RAW type, and I can't access anything. I can, however, mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment, so I can't simply do a factory re-install. Please, how do I go about solving this? Precisely, I would like to know how to boot into the drive again. Thanks!

    Read the article

  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu Server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

        start on started mysql
        stop on stopping mysql

        chdir /home/myuser/mydir/project
        exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
        respawn
        respawn limit 10 90

    As myuser I can start, stop, and reload the job fine - it works perfectly:

        $ start sensors
        sensors start/running, process 1332
        $ stop sensors
        sensors stop/waiting

    The problem is that the job does not start automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin at bootup, if I use sudo to restart mysql, it does start my job. The following commands are run as myuser after a fresh startup:

        $ status sensors
        sensors stop/waiting
        $ sudo restart mysql
        mysql start/running, process 1209
        $ status sensors
        sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically at system startup? I know I could just throw something in rc.local to start it, or move my sensors.conf to /etc/init, but I'm curious whether there is a way to do it using just Upstart.
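
    If moving the job to /etc/init turns out to be the pragmatic answer, it can still run the process as myuser - a sketch, assuming the Upstart shipped with 11.04 (which predates the setuid/setgid stanzas, hence the su wrapper):

        # /etc/init/sensors.conf - a system job, so events emitted during boot reach it
        start on started mysql
        stop on stopping mysql
        respawn
        respawn limit 10 90
        chdir /home/myuser/mydir/project
        exec su -s /bin/sh -c 'exec /home/myuser/mydir/env/bin/python manage.py sensors' myuser

    A plausible reading of the symptom is that the user job only gets registered after the boot-time "started mysql" event has already been emitted, so nothing triggers it until mysql is restarted later; a system job exists from early boot and doesn't have that window.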

    Read the article

  • Explorer.EXE ordinal 423 not found in urlmon.dll after updates/IE8 install

    - by Zoot
    I'm setting up a brand new Dell Optiplex 980 with Windows XP SP3, and everything started up fine on the first boot. My first task was to install system updates, including IE8 and WGA. After the required reboot following the updates, I now get this error message:

        Explorer.EXE - Ordinal not found.
        The ordinal 423 could not be located in the dynamic link library urlmon.dll

    From my cursory Google search, this forum thread places the blame squarely on IE8, and the solution provided is to enter Safe Mode and remove IE8. Unfortunately, when I press F8 to boot into Safe Mode, I only get the option "Windows XP SP3 Professional" and no Safe Mode options. Any other ideas? Thanks in advance. FYI, I can get to the Windows Task Manager by pressing Ctrl-Alt-Delete, but programs don't seem to run properly when I launch them from there. I tried chatting with Dell support, and we tried to start System Restore at c:\windows\system32\restore\rstrui.exe, but that gave a similar "ordinal 423 not found in urlmon.dll" error.
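
    Since Task Manager still opens, one avenue is launching IE8's own uninstaller from File > New Task (Run...) - a sketch, assuming the uninstall files were kept at their default location (they're absent if IE8 was installed without backup files):

        %windir%\ie8\spuninst\spuninst.exe

    Rolling IE8 back restores the pre-IE8 urlmon.dll that the installed shell expects - which is why uninstalling from Safe Mode is the commonly cited fix; running the same uninstaller from Task Manager is just a way around the missing Safe Mode boot options.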

    Read the article

  • Relax Linux - it's just me! (filesystem permissions)

    - by Xeoncross
    One of my favorite things about Linux is also the most annoying - file system permissions. On production machines and web servers I love how everything is so secure and locked down, but on development machines it really slows me down. I'll give one example out of the many I discover weekly. Like most people, I dual-boot Ubuntu and Windows so I can continue using the Adobe CS4 suite. I often design web themes and other things while I'm still in Windows. Later I'll boot into Ubuntu to take the themes and write the backend PHP for them. After mounting the Windows C: drive partition, I can copy the template files over so I can begin editing them. However, thanks to Linux's desire to protect me, I find that after copying the files I end up with a totally locked set of files where even I don't have read-write permissions. So after careful consideration of the tremendous risks that HTML files pose to me, I chmod them so that I and Apache can begin using them. Now granted, the chmod process isn't that hard - but after you chmod enough files per day, you get sick of doing it. I'm constantly creating, fetching, editing, and removing files from my user, git repos, PHP, or other random processes. This is a personal development machine, after all; everything changes on a day-by-day basis. So my question is: how can I get Linux to relax about what I'm doing with my HTML/JS/PHP/TXT/SQL/etc. files, so that I can work faster without constantly stopping to chmod things? I pinky-promise I won't hack into my account with an HTML file. ;)
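
    For the NTFS-copy case specifically, the permissions can be decided once at mount time instead of with chmod afterwards - a sketch of an /etc/fstab line, assuming the ntfs-3g driver and that your uid/gid are 1000 (check with id; the device and mount point are illustrative):

        # /etc/fstab: every file on the Windows partition appears owned by uid 1000;
        # fmask=133 gives files 644, dmask=022 gives directories 755
        /dev/sda2  /mnt/windows  ntfs-3g  uid=1000,gid=1000,fmask=133,dmask=022  0  0

    Files copied from a mount like this arrive owned by you with sane modes, so the follow-up chmod disappears. The shorter umask=022 does the same but leaves every file executable, which is usually not what you want for web templates.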

    Read the article

  • How do I configure nVidia drivers on a Portable Ubuntu setup?

    - by Nicholas Flynt
    I've been pulling my hair out over this one for a couple of days now; Google is no help. I've created a wonderful (until this issue) portable copy of Ubuntu Linux that will boot on almost anything, by using a USB enclosure for my laptop's 80 GB SATA drive. So far so good: it boots and runs on everything, and on non-nVidia setups it was even detecting the hardware and letting me install the required drivers for hardware acceleration and Compiz. Because, you know, the wobbly windows are the most awesome thing ever. Anyway, my desktop machine has an nVidia card, so I figured I'd just install the nVidia drivers like before and everything would work happily. Not so: now the desktop and any other nVidia machines work great, but it seems to have completely broken every other graphics card. When the kernel module detects that an nVidia card isn't present, it shoots up a nasty little dialog box giving me the option to boot into "low graphics" mode, which doesn't even allow me to use the correct screen resolution, much less see the installed graphics card and try to configure a driver for it. Is there any way to configure Ubuntu (with the dreaded nVidia kernel module installed) so that it uses nVidia's driver when an nVidia card is present, and defaults to the normal (not low-graphics) setup otherwise, so that it has a fair chance of using what's actually present? I'm not afraid to muck with config files, I just don't know the underlying system well enough to feel comfortable diving in without a push in the right direction. Thanks guys!
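
    A crude but workable approach is keeping two xorg.conf variants and choosing between them before X starts - a sketch, assuming /etc/X11/xorg.conf.nvidia and xorg.conf.default have been prepared in advance (both names are illustrative) and that the script runs early enough in boot to beat GDM:

        #!/bin/sh
        # pick-gpu: select an xorg.conf to match the hardware actually present
        if lspci -nn | grep -qi 'VGA.*NVIDIA'; then
            cp /etc/X11/xorg.conf.nvidia /etc/X11/xorg.conf
        else
            cp /etc/X11/xorg.conf.default /etc/X11/xorg.conf
            # keep the nvidia kernel module out of the way on foreign hardware
            rmmod nvidia 2>/dev/null || true
        fi

    The remaining wrinkle is libGL: nVidia's .run installer replaces Mesa's libGL system-wide, which is a big part of what breaks other cards, so using the distro's packaged driver (which keeps both GL stacks switchable) rather than the upstream installer makes this scheme much more likely to hold together.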

    Read the article

  • How do I run a stable Windows XP kvm guest on Ubuntu 10.04?

    - by Jean-Paul Calderone
    I have three Windows XP guests running on a recently upgraded 64-bit Ubuntu 10.04 system. Occasionally (on the order of once every few days), one of the guests becomes unresponsive, and the kvm process running that guest on the host starts consuming 100% CPU. It will continue to do so until it is killed. When restarted, the guest is fine for a while, and then the issue repeats. The kvm command line used to run all three guests is this:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1 -name bigdog21vmxp1 \
        -uuid ea47ff84-125b-16f7-9a4d-a6d0d8bab46a \
        -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/bigdog21vmxp1.monitor,server,nowait \
        -monitor chardev:monitor \
        -localtime \
        -boot c \
        -drive file=/var/lib/libvirt/images/windowsxp-1.qcow2,if=ide,index=0,boot=on,format=qcow2 \
        -net nic,macaddr=54:52:00:02:06:0e,vlan=0,name=nic.0 \
        -net tap,fd=58,vlan=0,name=tap.0 \
        -chardev pty,id=serial0 \
        -serial chardev:serial0 \
        -parallel none \
        -usb \
        -usbdevice tablet \
        -vnc 127.0.0.1:1 \
        -k en-us \
        -vga cirrus \
        -soundhw es1370

    Why do the guests misbehave this way sometimes? And what configuration can I change to fix it? Or, if the problem is due to a bug in kvm, what is the process for isolating a kvm failure so that the developers have a chance of fixing it?
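
    For isolating the failure, a few low-impact probes from the host help classify the spin before filing a bug - a sketch, assuming the guest's process can be found by its -name argument (kvm_stat ships with the kvm/qemu-kvm sources; whether 10.04 packages it as a binary is an assumption to check):

        # is the CPU burned in guest/kernel context or in a qemu userspace thread?
        top -H -p $(pgrep -of bigdog21vmxp1)

        # global VM-exit counters; a runaway guest usually floods one exit type
        sudo kvm_stat

        # sample the userspace side: take this twice, a few seconds apart
        sudo gdb -batch -ex 'thread apply all bt' -p $(pgrep -of bigdog21vmxp1)

    Identical backtraces across samples point at a qemu-side loop; changing ones suggest the guest itself is spinning. That data, together with the command line above, is roughly what the kvm developers ask for in a bug report.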

    Read the article

  • Windows Server 2003 DC hangs after network drivers update

    - by tcv
    Earlier today, we attempted to update the Broadcom BCM5716C network drivers on a Windows Server 2003 machine (a Dell PowerEdge T310, FWIW). Since then, we have not been able to boot the server in any normal mode. Safe Mode works; Safe Mode with Networking and regular boot-ups hang at "Applying Network Settings". I haven't tried Last Known Good Configuration, nor Directory Services Restore Mode. I should also mention that the longest I've allowed "Applying Network Settings" to sit was perhaps 30 minutes. I spoke to Dell, since the server is under a basic warranty, and they sent me the original Broadcom drivers. The trouble is that since I can only boot in Safe Mode, I can't install the driver package as given; in Safe Mode I receive the error "The system administrator has set policies to prohibit this installation". I can install the drivers independently, but that doesn't bring the NICs up - the most I've gotten are Code 10 errors on each NIC. I plan to get back to the site tomorrow to attempt installing a different NIC. I'm wondering what else I can try.
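
    If Device Manager in Safe Mode can't cleanly remove the half-updated NICs, Microsoft's devcon.exe (the command-line Device Manager from the Support Tools/WDK) can - a sketch, assuming devcon has been copied onto the box and that the adapters really are Broadcom (PCI vendor ID 14E4; confirm the hardware IDs with the first command before removing anything):

        REM list the Broadcom devices and their hardware IDs first
        devcon findall *VEN_14E4*
        REM remove them so Plug and Play redetects from scratch on the next boot
        devcon remove "PCI\VEN_14E4*"

    On the next full boot, Windows should redetect the NICs as new devices, giving the Dell-supplied package a clean slate. Last Known Good Configuration is also worth one try before any of this, since a driver update is exactly the kind of change it rolls back.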

    Read the article

  • A tiered approach to cloning Linux partitions

    - by Djurdjura
    I'm looking at a strategy for cloning Linux (root) partitions without having to use a live CD. The literature rightly suggests that the source and target partitions must be unmounted to get a clean clone, which assumes you need a live CD. I was wondering whether a third partition that emulates the live CD functionality could achieve the same thing. In other words, at a high level, a system with three partitions (all bootable): 1) a rescue partition (live CD emulation), 2) a running partition (source), 3) a backup partition (destination). All three partitions are LVM volumes. When it's time to clone the source partition to the backup (destination) partition, we would boot into the rescue partition, unmount the other two partitions (is that required?), run a disk check on the source, copy it to the destination (dd, or a simple file copy to avoid replicating the fragmentation from the source), run a disk check on the destination partition, update the GRUB menu to allow booting from either partition, and reboot into whichever one. My question: is this an approach you'd recommend? Where does the MBR stand in all this? Any gotchas or extra checks required? Thanks, D. PS: On recommendation from members, posting here instead of stackoverflow.com.
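
    A sketch of the backup pass as run from the rescue partition, assuming all three roots are LVs in a volume group called vg (names are illustrative), ext3/ext4 filesystems, and a destination LV at least as large as the source - the rescue root stays mounted, which is fine because it is neither source nor destination:

        e2fsck -f /dev/vg/running                        # check the source first
        dd if=/dev/vg/running of=/dev/vg/backup bs=4M    # raw block copy
        e2fsck -f /dev/vg/backup                         # verify the copy
        tune2fs -U random /dev/vg/backup                 # new UUID so fstab/GRUB can tell them apart
        update-grub                                      # regenerate menu entries

    dd replicates free space and fragmentation; a mkfs on the destination followed by an rsync between the two mounted filesystems avoids that, at the cost of not being a literal image. The MBR is untouched throughout: GRUB itself stays where it is and only the menu entries change. If update-grub's os-prober misses the clone, a manual menu entry pointing at the new UUID does the job.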

    Read the article

  • Computer locking up, looking for bootable hardware diagnostic tool.

    - by Carl Menke
    Well, today I helped my friend build a computer. All went pretty well until we got to installing Windows 7 - or so we thought: it was crashing constantly. I adjusted pretty much every setting in the BIOS and removed as much hardware as possible to try to prevent a crash. No dice. So far I've tried running an Ubuntu live CD without the hard drive installed - nope, crashed on boot. Then I tried Microsoft's RAM utility disk, and it eventually locked up on that too (the RAM passed, though). So it seems to me like it's either the CPU (AMD Phenom II X3) or the motherboard that is bad, but I don't know how to test them individually for problems. I thought it could be an overheating issue, but the BIOS reports that the CPU temperature is fine, idling around 34°C. Any advice, or a bootable diagnostic disk that could help me out? TL;DR: Computer locks up frequently during use (cannot even boot/install an operating system); memory is fine; probably CPU or mobo; BIOS says CPU temps are fine. What should I try?

    Read the article
