Search Results

Search found 5849 results on 234 pages for 'partition scheme'.


  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's an Ubuntu-specific StackExchange site, but I thought I'd ask here because it's a server-specific question. If my logic is wrong... well, you people are better at this than I am! O=) On with the show!

    I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format   crypt(3) salt format
          -g          generate random password
          -h hash     password scheme
          -n          omit trailing newline
          -s secret   new password
          -u          generate RFC2307 values (default)
          -v          increase verbosity
          -T file     read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After much Google searching and forum trawling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
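
    The slapadd complaint about olcRootPW suggests that the admin password debconf fed to the postinst script was not accepted. One thing worth trying (a sketch, not a verified fix) is to check that slappasswd can produce a valid hash for your intended password and then re-run the package configuration so the postinst regenerates its LDIF:

        # generate an {SSHA} hash for the admin password (you will be prompted for it)
        sudo slappasswd -h '{SSHA}'

        # re-run the interactive slapd configuration, entering the plain-text
        # password when prompted, then let apt finish the broken install
        sudo dpkg-reconfigure slapd
        sudo apt-get -f install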

  • Ubuntu dual boot and grub error 18

    - by srboisvert
    I've attempted to install 9.04 on an older Toshiba laptop with a new 300 GB drive and am getting the dreaded GRUB error 18, which indicates that GRUB is looking beyond the BIOS-readable area of the HD and failing. I just let Ubuntu roll with its default selections when installing and ended up with this drive layout:

        /dev/sda1  ntfs        128 GB     boot
        /dev/sda2  extended    170 GB     lba
        /dev/sda5  ntfs        167.59 GB
        /dev/sda6  ext3        2.33 GB
        /dev/sda7  linux-swap  172 MB

    I'd like to make the system dual-bootable without having to reinstall Windows (a real pain, since I would have to go through an obstructionist IT dept). I know I can make Windows bootable with a rescue disk and fixmbr, but is there something I can do to make it dual-bootable using the Ubuntu live CD? Alternately, what should I have done at the partition stage of the Ubuntu installation to avoid this?
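
    Error 18 generally means the files GRUB needs ended up beyond the area the old BIOS can read, which the default layout did here by tucking the ext3 root in at the very end of the 300 GB drive. A hedged sketch of the usual workaround (device names are illustrative, not taken from the question): shrink an early partition with GParted from the live CD, create a small ext3 /boot partition there, then copy /boot onto it and reinstall GRUB:

        # assuming the Ubuntu root is sda6 and the newly created small partition is sda3
        sudo mkfs.ext3 /dev/sda3
        sudo mkdir -p /media/newboot
        sudo mount /dev/sda6 /mnt
        sudo mount /dev/sda3 /media/newboot
        sudo cp -a /mnt/boot/. /media/newboot/
        # then point /boot at the new partition in /mnt/etc/fstab and
        # re-run grub-install from a chroot into /mnt

    At install time, the equivalent answer is simply to use manual partitioning and make sure /boot (or the whole root) sits early enough on the disk for the BIOS to read it, rather than letting the installer place it at the end.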

  • Deleting Time Machine in Mac OS X 10.6.4

    - by cappuccino
    Does anyone know how to delete Time Machine backups in Mac OS X 10.6.4? Before answering, here is what does not work:

        1. sudo rm -rf /whateverthetimemachineis: does not work.
        2. Disabling the ACL permissions first with sudo fsaclctl -p /whatever -d: does not work either ("sudo: fsaclctl: command not found").
        3. Using the "delete all backups" feature in Time Machine: this is slow as hell and would take days. I need a command-line solution.
        4. No, I don't want to reformat the drive; I have other content on it. And no, don't tell me I should have separated things onto two partitions or two drives. I did it this way since partitions cannot be dynamically resized, and two drives is annoying (what's the point of having a big drive?), plus that has no relation to the issue at hand.

    I've already googled for hours and read everything on Super User; nothing is working, and all the suggested solutions are the four above. Any clues?
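
    For what it's worth, on 10.6 the usual reason rm -rf fails on a Backups.backupdb folder is the deny-delete ACLs Time Machine puts on every item, so a commonly cited workaround (a hedged sketch; the volume and path names here are placeholders) is to strip the ACLs recursively and then remove the tree:

        # remove the ACL entries Time Machine adds, then delete the backup store
        sudo chmod -R -N "/Volumes/BackupDrive/Backups.backupdb"
        sudo rm -rf "/Volumes/BackupDrive/Backups.backupdb"

    Even this can take a long time on a large backup set, since every file still has to be touched once for the chmod and once for the rm, but it avoids reformatting the drive.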

  • AWS: Multi-region setup using single RDS instance

    - by Ion
    I'm trying to scale our web application (PHP, MySQL, memcache) in a multi-region scheme. Currently we are using a setup with two EC2 instances behind an ELB and an RDS instance, all of them in the US-EAST (Virginia) region. We would like to have a presence in the EU (Ireland) region as well. This means at least a new EC2 instance there (identical to the others, serving the same application). I have copied the desired AMI, set up the new instance, set up the same ELB configuration (required for SSL termination) and configured latency-based routing in Route 53, and it works as intended. But clients from the EU have speed problems. This is due to the fact that the EU EC2 instance connects to the US-based RDS instance. As far as I know, Amazon has not yet enabled RDS multi-region replication. Do you have any suggestions on how to properly speed up the whole setup while using the single RDS instance? Also, any ideas in general on how to scale things up? Ideally we would like to continue using the RDS technology for various reasons. Nevertheless, I am open to suggestions (I guess the next idea would be to host our own MySQL servers).

  • Uninstalled Ubuntu, no GRLDR?

    - by user32965
    So I'm a big fat idiot. I installed Ubuntu 11.04 on my school's laptop, and now the time has come to turn it back in. I wrote GRUB to the Master Boot Record, thinking it wasn't going to be permanent. Fast forward to yesterday: I decided "to hell with this", popped in my Windows 7 CD, deleted the whole partition, formatted it to NTFS, and installed Windows 7 on it. Then, while I was surfing the web, the computer overheated (totally typical). Now when I boot up, I get this:

        Try (hd0,0): FAT32: No GRLDR
        Try (hd0,1): invalid or null
        Try (hd0,2): invalid or null
        Try (hd0,3): invalid or null
        Try (hd1,0): NTFS5: No grldr
        Try (hd1,1): invalid or null
        Try (hd1,2): invalid or null
        Try (hd1,3): invalid or null
        Cannot find GRLDR.
        Press space bar to hold the screen, any other key to boot previous MBR...
        Timeout: 5

    The timeout part just counts down from 5 to 0. I need to turn this thing in before tomorrow. Please, can someone help me out?
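
    Since Windows 7 is already installed and only the boot code left in the MBR is stale, the standard fix (sketched here from memory; run it from the Windows 7 DVD's recovery environment, i.e. boot the CD, choose "Repair your computer", then open a Command Prompt) is to rewrite the MBR and boot sector:

        bootrec /fixmbr
        bootrec /fixboot
        rem only if Windows still does not appear in the boot menu afterwards:
        bootrec /rebuildbcd

    That should replace the leftover GRLDR-style loader in the MBR with the stock Windows 7 boot code.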

  • Take Complete Image of CRM Server Application

    - by nicorellius
    I have heard of snapshots or ghost images that do something like this, but I have never used this kind of tool to actually clone a hard drive. I think Norton Partition Magic can do something like this as well, but I haven't tried it. So my question is this: how can I duplicate a CRM server application exactly, so that I can transfer it to another system? I have a CRM server running LAMP (Linux, Apache, MySQL, and PHP) and I urgently need to transfer this data to another system without installing and configuring the dependencies from scratch and then doing the same for the software itself. Has anyone done this, or does anyone know how to do this?
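
    As a rough sketch (device names and paths below are placeholders, and the disk-level copy assumes the target drive is at least as large as the source), there are two common ways to do this from a live CD or an otherwise quiescent system: clone the whole disk block for block, or replicate the filesystem plus a database dump:

        # 1) block-level clone of the entire drive (run from a live CD with both disks attached)
        sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync

        # 2) file-level copy to the new machine, plus a consistent MySQL dump
        sudo rsync -aAXv --exclude={/proc/*,/sys/*,/dev/*,/tmp/*} / root@newhost:/
        mysqldump -u root -p --all-databases > /tmp/all-databases.sql

    The block-level clone gives an exact image, boot loader included; the rsync/mysqldump route is smaller and lets the new machine keep its own kernel and drivers, but fstab, network config and the boot loader will need fixing up afterwards.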

  • Video converters don't work anymore after reinstalling Windows

    - by tassiekev
    A few days ago, I decided to reinstall Windows 7 as my HD partition seemed to be nearly full and things were slowing down. I'd been using Handbrake almost exclusively to convert TV recordings and used Freemake on occasion. Following the reinstall, I can't get either to work: Handbrake says it's encoding for about 2 seconds and then says it's finished, but there are no converted files of any size. Freemake just says 'Conversion Error' and won't go any further. As an experiment I tried two programs that I don't normally use, VideoReDo & Any Video Converter. Both worked fine. Anyone got any clues?

  • Denying access to website via htaccess based on http header

    - by neekster
    I've been trying for ages to get this to work and I can't put my finger on it. What I'm trying to do is block access to a site from a number of countries, based on the CF-IPCountry header added by CloudFlare. I figured .htaccess was a suitable way to do this. We are running LiteSpeed 4.2.4 on top of DirectAdmin as a control panel. The problem we are having is that the .htaccess rule doesn't seem to do anything. Here's the rule we tried:

        SetEnvIf CF-IPCountry AU UnwantedCountry=1
        Order allow,deny
        Deny from env=UnwantedCountry
        Allow from all

    That makes no difference at all; connections are still accepted. Just to check that the rule was at least being processed, I changed Allow from all to Deny from all, and connections were refused. So it appears to be a problem with the variable. Here are the relevant headers that come in with the request:

        Connection: Keep-Alive
        Accept-Encoding: gzip
        CF-Connecting-IP: xx.xx.xx.xx
        CF-IPCountry: AU
        X-Forwarded-For: xx.xx.xx.xx.xx
        CF-RAY: c9062956e2d04b6
        X-Forwarded-Proto: http
        CF-Visitor: {"scheme":"http"}
        Zone-Name: xx.com.au

    Hopefully someone can help me out; this has been driving me nuts for too long. Thanks
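
    One hedged thing to try (not tested on LiteSpeed 4.2.4 specifically, though LiteSpeed implements most of Apache's mod_rewrite syntax): express the same block with RewriteCond against the request header instead of SetEnvIf, which sidesteps the question of whether env-based Deny rules are honoured at all:

        RewriteEngine On
        # return 403 for visitors whose CloudFlare-reported country is AU
        RewriteCond %{HTTP:CF-IPCountry} ^AU$ [NC]
        RewriteRule ^ - [F,L]

    If that version works, the original problem most likely lies in how the server handles SetEnvIf / Deny from env rather than in the header itself.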

  • ExtX file system on my usb key

    - by yves Baumes
    Hi all. If I format my USB key with an extX file system, copy some files onto it, and then give it to a friend so he can add files or modify the existing ones, he is rejected by his own system, because neither his user ID (UID) nor his GID is the same as mine on my machine. How do I get rid of this limitation? Is it possible to disable user rights on an ext2/ext3 partition? Of course, I would really rather not rely on any other file system.
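
    ext2/ext3 has no equivalent of FAT's uid=/gid= mount options, so the usual workarounds are either to relax the permissions stored on the key itself or to use a tool such as bindfs. A minimal sketch of the first approach (the mount point is a placeholder):

        # make the key's top-level directory writable by everyone,
        # and give the existing files group/other read-write access
        sudo chmod 0777 /media/usbkey
        sudo chmod -R a+rwX /media/usbkey

    Files your friend creates will still be owned by his UID, but with world-writable permissions both of you can edit everything on the key.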

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information

        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
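
    One possibility worth checking (a hedged guess based on how e2fsck's -c option works): -c runs a badblocks scan and writes the result into the filesystem's bad-blocks inode, and updating that inode is itself counted as a modification even when the list comes back empty, so the message would appear on every -c run. In a script, the exit status is usually more useful than the message, since e2fsck returns a bitmask:

        #!/bin/sh
        # run the check and interpret e2fsck's documented exit codes
        fsck.ext3 -c /dev/sdx1
        status=$?
        # 0 = no errors, 1 = errors corrected, 2 = corrected + reboot advised,
        # 4 = errors left uncorrected, 8 = operational error
        if [ $((status & ~3)) -eq 0 ]; then
            echo "card OK (exit code $status)"
        else
            echo "card has real problems (exit code $status)" >&2
            exit 1
        fi

    Treating exit codes 0 to 3 as success and anything with the 4 or 8 bits set as failure usually matches what an automated formatter actually cares about.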

  • Moving data to lower sectors on an SSD without defragging

    - by David Freitag
    I bought an Intel SSD drive a while back and now I want to dual-boot with it. But for some reason there are sectors near the end of the drive filled, and I can't seem to find a way to remove the data so that I can safely shrink the partition. I know I have sectors near the end full because I am using Defraggler to analyze my drive (not to defrag it). I can see which files need to be moved or deleted, but short of actually deleting some drivers and/or other necessary files, I am completely stuck. According to Defraggler's disk map, I am only able to shave off the last 1.72 GB of space from the drive, which isn't even enough for the most minimal Linux install.
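
    On Windows, the unmovable data near the end of a volume is very often the hibernation file, the page file, or System Restore shadow copies rather than anything Defraggler can relocate. A hedged sequence to try before shrinking (re-enable these afterwards if you want them back), plus temporarily disabling the page file from System Properties > Advanced > Performance Settings:

        rem run from an elevated Command Prompt
        rem remove the hibernation file (hiberfil.sys)
        powercfg -h off
        rem clear System Restore points / shadow copies
        vssadmin delete shadows /all

    After a reboot, Disk Management's "Shrink Volume" (or your partitioning tool of choice) should be able to reclaim far more than 1.72 GB.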

  • Make UEFI, GPT, Bootloader, SSD, USB, Linux and Windows work together

    - by user129552
    I like to use the latest hardware and the latest software; thus I have a laptop (Lenovo X220) with UEFI instead of a BIOS, an SSD instead of an HDD, a GPT partitioning scheme instead of MBR, and USB to boot from instead of optical disks. I need to use both Windows and Linux. I tried to make them work alongside each other, but I didn't succeed.

    Most Linux distribution ISOs don't even really work on UEFI systems booted from USB (not even the self-proclaimed cutting-edge Fedora; I also tried Linux Mint Debian Edition and Sabayon Linux, according to this guide, and they did not work). Only Ubuntu worked for me.

    I first installed Windows 8, which created sda1: Recovery, sda2: EFI system, sda3: msftres, sda4: NTFS Windows. Windows worked without a problem. I then created sda5: linux-swap and installed Ubuntu into sda6: btrfs. After rebooting, I was not presented with GRUB 2 as expected; instead my system just booted straight into Ubuntu, and I could no longer access Windows. After fixing dpkg in the btrfs Ubuntu, I followed the Ubuntu documentation on UEFI booting. The result left me with a broken GRUB 2, but interestingly, when I wanted to select the device to boot from, I was presented not only with the internal SSD, an attached USB device, or LAN, but also with Grub2 (broken), Ubuntu and Windows.

    The result is not very satisfying to me. What would I have to do to fix everything? Or, asked differently: what operating system should I install at what point, given my possibilities and requirements, so that I have a working boot loader in my UEFI GPT system which presents me with a working Linux and a working Windows?
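
    In case it helps as a starting point, here is a hedged sketch of how a UEFI GRUB repair usually looks from an Ubuntu live session, chrooted into the installed system, with the EFI System Partition assumed to be sda2 as described above (with btrfs the root may actually live in a subvolume, commonly @, so the first mount may need -o subvol=@; that detail is an assumption):

        sudo mount /dev/sda6 /mnt                  # the btrfs root
        sudo mkdir -p /mnt/boot/efi
        sudo mount /dev/sda2 /mnt/boot/efi         # the EFI System Partition
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
        sudo chroot /mnt update-grub
        # inspect (and if necessary reorder) the firmware's boot entries
        sudo efibootmgr -v

    The extra firmware entries you saw (Grub2, Ubuntu, Windows) are NVRAM boot entries; efibootmgr can list and prune them once a working GRUB entry exists.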

  • How do I recover files from a corrupt VDI file?

    - by Eric P
    Is it possible to repair a corrupt VDI file? The OS on the VDI (XP) doesn't boot at all; it just hangs at a black screen. I was getting file errors on its last boot, but now it's not working at all. A sector viewer shows "Invalid partition table / Error loading operating system / Missing operating system". I tried mounting the file from the host OS, but it just says that the drive isn't formatted. I don't need to be able to run the VDI, but I do need some files that are on it. Is there any way to recover files from the corrupt VDI file?
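
    One hedged approach for getting files out without repairing the guest: convert the VDI to a raw image with VirtualBox's own tooling and then point data-recovery tools (or a loop mount) at the raw image. File names and the partition offset below are placeholders:

        # flatten the VDI to a raw disk image
        VBoxManage clonehd corrupt.vdi corrupt.img --format RAW

        # let TestDisk/PhotoRec hunt for the lost partition table and files
        testdisk corrupt.img
        photorec corrupt.img

        # or, if the partition table turns out to be readable, mount read-only
        sudo mount -o loop,ro,offset=$((63*512)) corrupt.img /mnt/vdi

    Since the sector viewer already reports an invalid partition table, TestDisk's partition search is probably the more promising route.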

  • How to dual-boot 32-bit and 64-bit Windows 7 Ultimate

    - by Cyril Horad
    I have a problem with my nVidia driver not running on 64-bit. I decided to install both the 32-bit and the 64-bit version on my ASUS K42JC (with a 4 GB RAM upgrade) so that the nVidia card works under the 32-bit install. My question is: how can I make my laptop boot into either the 32-bit or the 64-bit OS? How many partitions am I supposed to use: one, two, or three?

    From an answer: Well, when I installed the nVidia driver, either from the ASUS site or the driver prescribed by the NVIDIA site via System Requirements Lab, both ended up freezing my laptop at the point where the desktop is about to finish booting. I have tried reformatting and fixing the problem three (3) times, to no avail. I filed a ticket with ASUS support but have had no reply yet. What bothers me is why the nVidia driver won't run on 64-bit, yet it runs perfectly on 32-bit.

  • My 4 GB microSD card only allows me to use 1 GB

    - by James Litewski
    My phone came with a 4 GB microSD card. On the card it lists that 3 GB goes to Muve Music which is Cricket's music program, and I get 1 GB... Well, I don't pay for Muve Music, so why waste the space? I thought I'd be able to simply buy an adapter and reformat the microSD card to get the full 4 GB; but that wasn't the case... I could only find the 1 GB partition on the card. I even tried reformatting the disk, but I had no luck. How can I get the full 4 GB? BTW, I'm running Mac OS X v10.7 (Lion).
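
    If the goal is simply one 4 GB FAT32 volume and losing the Muve Music data is acceptable, here is a hedged sketch of doing it from Lion's command line (the disk identifier disk2 is a placeholder, so check it with diskutil list first; note that some carrier-branded cards lock the hidden partition, in which case this may still fail):

        diskutil list
        # repartition the whole card as a single FAT32 volume with an MBR partition map
        diskutil partitionDisk /dev/disk2 1 MBR "MS-DOS FAT32" SDCARD 100%

    Disk Utility's Erase tab only re-formats the 1 GB volume it can see; repartitioning the whole device is what reclaims the rest of the card.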

  • Mount encrypted hfs in ubuntu

    - by pagid
    I am trying to mount an encrypted HFS+ partition in Ubuntu. An older post describes quite well how to mount HFS+, but lacks information on how to handle encrypted partitions. What I have found so far is:

        # install required packages
        sudo apt-get install hfsprogs hfsutils hfsplus loop-aes-utils

        # try to mount it
        mount -t hfsplus -o encryption=aes-256 /dev/xyz /mount/xyz

    But once I run this I get the following error:

        Error: Password must be at least 20 characters.

    So I tried to type it in twice, but that results in this:

        ioctl: LOOP_SET_STATUS: Invalid argument, requested cipher or key (256 bits) not supported by kernel

    Any suggestions? Thx

    Edit: One thing I'm not sure about is whether I am using the right password. My assumption is that it is my default one for these situations, but I'm not sure whether Mac OS X chose another password (internally) for that.

  • JBoss https on port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances, where one runs on 8080 and the other on 8081. We need to have HTTPS enabled for the 8081 server. First we tried enabling HTTPS on the 8080-port instance by generating the keystore and editing server.xml, and it worked successfully. However, when we tried the same thing for 8081 it did not; note that we removed HTTPS from the 8080 server first before enabling it for 8081. The server.xml below was used for both 8080 and 8081. The only difference was that the port was changed from 8080 to 8081 when trying to enable HTTPS for the 8081-port instance. What am I doing wrong and what needs to be changed?

    NOTE: When I say it was enabled for 8080, I mean that when you visit https:// URL:8484 you are actually visiting the 8080-port instance. However, when SSL is enabled for 8081 and I visit https:// URL:8484, I get that the web page is unavailable.

    COMMENTLESS VERSION

        <Server>
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Service name="jboss.web">
            <!-- https -->
            <Connector port="8080" address="${jboss.bind.address}"
                       maxThreads="350" maxHttpHeaderSize="8192"
                       emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true" clientAuth="false"
                       sslProtocol="TLS" address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       truststorePass="aaaaaa" />
            <!-- https1 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>

    WITH COMMENTS VERSION

        <Server>
          <!--APR library loader. Documentation at /docs/apr.html -->
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
          <Listener className="org.apache.catalina.core.JasperListener" />
          <!-- Use a custom version of StandardService that allows the connectors to be started
               independent of the normal lifecycle start to allow web apps to be deployed before
               starting the connectors. -->
          <Service name="jboss.web">
            <!-- A "Connector" represents an endpoint by which requests are received
                 and responses are returned.
                 Documentation at :
                 Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
                 Java AJP Connector: /docs/config/ajp.html
                 APR (HTTP/AJP) Connector: /docs/apr.html
                 Define a non-SSL HTTP/1.1 Connector on port 8080 -->
            <Connector port="8080" address="${jboss.bind.address}"
                       maxThreads="350" maxHttpHeaderSize="8192"
                       emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <!-- Define a SSL HTTP/1.1 Connector on port 8443
                 This connector uses the JSSE configuration, when using APR, the connector should be
                 using the OpenSSL style configuration described in the APR documentation -->
            <!--
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true"
                       keystoreFile="${jboss.server.home.dir}/conf/zara.keystore"
                       keystorePass="zara2010" clientAuth="false" sslProtocol="TLS" compression="on" />
            -->
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true" clientAuth="false"
                       sslProtocol="TLS" address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       truststorePass="aaaaaa" />
            <!-- Define an AJP 1.3 Connector on port 8009 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <!-- The JAAS based authentication and authorization realm implementation that is
                   compatible with the jboss 3.2.x realm implementation.
                   - certificatePrincipal : the class name of the
                     org.jboss.security.auth.certs.CertificatePrincipal impl used for mapping
                     X509[] cert chains to a Princpal.
                   - allRolesMode : how to handle an auth-constraint with a role-name=*, one of
                     strict, authOnly, strictAuthOnly
                     + strict = Use the strict servlet spec interpretation which requires that
                       the user have one of the web-app/security-role/role-name
                     + authOnly = Allow any authenticated user
                     + strictAuthOnly = Allow any authenticated user only if there are no
                       web-app/security-roles -->
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <!-- A subclass of JBossSecurityMgrRealm that uses the authentication behavior of
                   JBossSecurityMgrRealm, but overrides the authorization checks to use JACC
                   permissions with the current java.security.Policy to determine authorized access.
                   - allRolesMode : how to handle an auth-constraint with a role-name=*, one of
                     strict, authOnly, strictAuthOnly
                     + strict = Use the strict servlet spec interpretation which requires that
                       the user have one of the web-app/security-role/role-name
                     + authOnly = Allow any authenticated user
                     + strictAuthOnly = Allow any authenticated user only if there are no
                       web-app/security-roles
              <Realm className="org.jboss.web.tomcat.security.JaccAuthorizationRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              -->
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <!-- Uncomment to enable request dumper. This Valve "logs interesting contents from
                     the specified Request (before processing) and the corresponding Response (after
                     processing). It is especially useful in debugging problems related to headers
                     and cookies." -->
                <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve" /> -->
                <!-- Access logger -->
                <!-- <Valve className="org.apache.catalina.valves.AccessLogValve"
                           prefix="localhost_access_log." suffix=".log" pattern="common"
                           directory="${jboss.server.log.dir}" resolveHosts="false" /> -->
                <!-- Uncomment to enable single sign-on across web apps deployed to this host.
                     Does not provide SSO across a cluster. If this valve is used, do not use the
                     JBoss ClusteredSingleSignOn valve shown below.
                     A new configuration attribute is available beginning with release 4.0.4:
                     cookieDomain configures the domain to which the SSO cookie will be scoped
                     (i.e. the set of hosts to which the cookie will be presented). By default the
                     cookie is scoped to "/", meaning the host that presented it. Set cookieDomain
                     to a wider domain (e.g. "xyz.com") to allow an SSO to span more than one
                     hostname. -->
                <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
                <!-- Uncomment to enable single sign-on across web apps deployed to this host AND
                     to all other hosts in the cluster. If this valve is used, do not use the
                     standard Tomcat SingleSignOn valve shown above. Valve uses a JBossCache
                     instance to support SSO credential caching and replication across the cluster.
                     The JBossCache instance must be configured separately. By default, the valve
                     shares a JBossCache with the service that supports HttpSession replication.
                     See the "jboss-web-cluster-service.xml" file in the server/all/deploy directory
                     for cache configuration details.
                     Besides the attributes supported by the standard Tomcat SingleSignOn valve
                     (see the Tomcat docs), this version also supports the following attributes:
                     cookieDomain   see above
                     treeCacheName  JMX ObjectName of the JBossCache MBean used to support
                                    credential caching and replication across the cluster. If not
                                    set, the default value is
                                    "jboss.cache:service=TomcatClusteringCache", the standard
                                    ObjectName of the JBossCache MBean used to support session
                                    replication. -->
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <!-- Check for unclosed connections and transaction terminated checks in
                     servlets/jsps. Important: The dependency on the CachedConnectionManager in
                     META-INF/jboss-service.xml must be uncommented, too -->
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>
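
    One thing the description leaves open (an assumption on my part, not something the poster confirms) is which SSL port the 8081 instance tries to bind. If both instances keep the stock SSL connector and redirectPort on 8443, the second instance cannot bind that port while the first one is running, and its HTTPS connector fails to start. A sketch of what the 8081 instance's connectors might look like with their own SSL port:

        <!-- second JBoss instance: HTTP on 8081, SSL moved off 8443 to avoid a port clash -->
        <Connector port="8081" address="${jboss.bind.address}" protocol="HTTP/1.1"
                   maxThreads="350" enableLookups="false" redirectPort="8444" />
        <Connector port="8444" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true" clientAuth="false"
                   sslProtocol="TLS" address="${jboss.bind.address}"
                   keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                   keystorePass="aaaaaa" />

    If a port clash is the cause, the instance's server.log normally shows a BindException at startup, so that is an easy thing to check first.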

  • Double Click to open Office docs is slow, File -> Open is fast.

    - by Keith
    I have 2 unique networks. They both share a similar architecture:

        - Windows 2003 SBS SP2
        - Running Symantec Endpoint
        - Running Symantec Information Foundation
        - Shared drives off a data partition
        - Clients running Office 2003 or 2007
        - Connect to the file server through mapped drives

    When users try to open a file from their local PC by double-clicking, it will take 30-60 seconds to open. When they do File -> Open, those same documents open up almost immediately. So far I've tried the following:

        - CCleaner to parse the registry of outdated mapped drives
        - Disabled "using DDE"
        - Disabled A/V
        - Reboot

    Any ideas beyond that? I figured this question belongs here instead of SU since it's the same issue on different networks.

  • Linux kernel startup problems: how to analyze?

    - by java.is.for.desktop
    Hello, everyone! After manually updating the kernel from 2.6.33 to 2.6.34 on my openSUSE 11.2 notebook, it stops after the message

        Loading drivers, configuring devices...

    This stop can be interrupted with Ctrl-C, but when the system enters runlevel 5, no partitions are mounted (except the root partition), many services fail to start, and other strange things are going on. No X11. NOTE: I have manually updated the kernel many times before, and it worked. Yes, I know that in the case of NVIDIA the driver has to be recompiled. The question is: how can I analyze the cause of the problem? Doing dmesg gives me soooo much output that I can't "map" it to the output I see at startup. The output does not contain the string "Loading drivers, configuring devices", or anything similar.

  • Alerting when a RAID Array disk fails locally on VMWare ESX or ESXi System

    - by Tim K
    With ESX and ESXi, we recently had two systems where the boot partition became degraded due to a failed disk. The only alert we managed to capture was the visual alert on the Dell servers; we failed to receive any electronic alerts regarding the failed or degraded array. Does anyone have any experience with monitoring for these types of failures? In both cases the servers were running a RAID 5 SCSI configuration (5 disks on one system, 3 disks on the other); if we had been running a Windows Server OS, we would have had an alert created in the Event Viewer. Where should I begin looking for a solution? Can it be configured in vCenter or vFoglight?

  • dual boot install--no GRUB

    - by Jim Syyap
    My computer recently had a hardware upgrade and now runs on Windows 7. I decided to install Ubuntu 11.04 as dual boot using the ISO I got from ubuntu.com, downloaded onto my USB stick. Restarting with the USB stick, I was able to install Ubuntu 11.04 choosing the option: Install Ubuntu 11.04 side by side with Windows 7 (or something like that). No errors were encountered on installation. However, on restarting there was no GRUB; the system went straight into Windows 7. Looking for answers, I found these:

    http://essayboard.com/2011/07/12/how-to-dual-boot-ubuntu-11-04-and-windows-7-the-traditional-way-through-grub-2/
    http://ubuntuforums.org/showthread.php?t=1774523

    Following their instructions, I got:

        Boot Info Script 0.60 from 17 May 2011

        ============================= Boot Info Summary: ===============================

        => Windows is installed in the MBR of /dev/sda.
        => Syslinux MBR (3.61-4.03) is installed in the MBR of /dev/sdb.
        => Grub2 (v1.99) is installed in the MBR of /dev/sdc and looks at sector 1 of
           the same hard drive for core.img. core.img is at this location and looks
           for (,msdos7)/boot/grub on this drive.

        sda1: _________________________________________________________________________
            File system:       ntfs
            Boot sector type:  Windows Vista/7
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:
            Boot files:        /grldr /bootmgr /Boot/BCD /grldr

        sda2: _________________________________________________________________________
            File system:       ntfs
            Boot sector type:  Windows Vista/7
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:  Windows 7
            Boot files:        /Windows/System32/winload.exe

        sdb1: _________________________________________________________________________
            File system:       vfat
            Boot sector type:  SYSLINUX 4.02 debian-20101016
            Boot sector info:  Syslinux looks at sector 1437504 of /dev/sdb1 for its
                               second stage. SYSLINUX is installed in the directory.
                               The integrity check of the ADV area failed. According
                               to the info in the boot sector, sdb1 starts at sector 0.
                               But according to the info from fdisk, sdb1 starts at
                               sector 62.
            Operating System:
            Boot files:        /boot/grub/grub.cfg /syslinux/syslinux.cfg /ldlinux.sys

        sdc1: _________________________________________________________________________
            File system:       ntfs
            Boot sector type:  Windows XP
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:
            Boot files:

        sdc2: _________________________________________________________________________
            File system:       Extended Partition
            Boot sector type:  -
            Boot sector info:

        sdc5: _________________________________________________________________________
            File system:       swap
            Boot sector type:  -
            Boot sector info:

        sdc6: _________________________________________________________________________
            File system:       swap
            Boot sector type:  -
            Boot sector info:

        sdc7: _________________________________________________________________________
            File system:       ext4
            Boot sector type:  -
            Boot sector info:
            Operating System:  Ubuntu 11.04
            Boot files:        /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

        sdc8: _________________________________________________________________________
            File system:       swap
            Boot sector type:  -
            Boot sector info:

    Going back into Ubuntu and running sudo fdisk -l, I got these:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002f393

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              13       19458   156185600    7  HPFS/NTFS

        Disk /dev/sdb: 2011 MB, 2011168768 bytes
        62 heads, 62 sectors/track, 1021 cylinders
        Units = cylinders of 3844 * 512 = 1968128 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f2ab9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           1        1021     1962331    c  W95 FAT32 (LBA)

        Disk /dev/sdc: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00261ddd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1   *           1       60657   487222656+   7  HPFS/NTFS
        /dev/sdc2           60657      121600   489527681    5  Extended
        /dev/sdc5          120563      121600     8337703+  82  Linux swap / Solaris
        /dev/sdc6          120073      120562     3930112   82  Linux swap / Solaris
        /dev/sdc7           60657      119584   473328640   83  Linux
        /dev/sdc8          119584      120072     3923968   82  Linux swap / Solaris

    Should I proceed and do the following?

    Assuming Ubuntu 11.04 was installed on device sdb1, do this:

        sudo mount /dev/sdb1 /mnt

    Then do this:

        sudo grub-install --root-directory=/mnt /dev/sdb

    Notice there are two dashes in front of the root directory, and I'm not using sdb1 but sdb. Since the command in step 15 had reinstalled GRUB 2, we now need to unmount /mnt (i.e. sdb1) to clean up:

        sudo umount /mnt

    Reboot and remove the Ubuntu 11.04 CD/DVD from the disk tray. Log into Ubuntu 11.04 (you have no choice; it will make you log into Ubuntu 11.04 at this point). Open up a terminal in Ubuntu 11.04 (using the real installation, not the live CD/DVD) and execute this command:

        sudo update-grub

    Then reboot the machine.
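
    For what it's worth (an observation based only on the output quoted above, so treat it as a hedged sketch rather than a definitive fix): the Boot Info Script says Ubuntu actually lives on /dev/sdc7, while /dev/sdb is the USB installer stick, so pointing grub-install at sdb1 would reinstall GRUB for the wrong disk. A live-CD session aimed at the disk the BIOS boots first (assumed here to be sda, since Windows currently owns its MBR) might instead look like:

        sudo mount /dev/sdc7 /mnt
        # GRUB 1.99 accepts either --boot-directory=/mnt/boot or the older --root-directory=/mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sda
        sudo umount /mnt

    followed by sudo update-grub from the installed Ubuntu once it boots, so the Windows 7 entry gets picked up.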

  • Why doesn't Spotlight find Applications on OS X Server?

    - by Clinton Blackmore
    On our Mac OS X Server 10.5.x boxes, using Spotlight (i.e. the magnifying glass in the top corner) does not find applications and utilities, but it does on Mac OS X client. So we all use the keyboard shortcut and end up frustrated: it either gives us nothing or, without us realizing it until later, an application from another partition. It isn't clear to me whether we've done something strange setting up our servers, but they are all like this. Any idea what caused it and how we fix it? Everything (including Applications) is set to show up in a Spotlight search in System Preferences.
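
    A hedged first step that often sorts out missing Spotlight results is simply forcing the index on the boot volume to be rebuilt, after confirming that indexing is actually enabled for each volume:

        # show indexing status for every mounted volume
        sudo mdutil -s -a

        # turn indexing on and erase/rebuild the index for the boot volume
        sudo mdutil -i on /
        sudo mdutil -E /

    If the servers still refuse to return items from /Applications after re-indexing, it would be worth checking whether those paths ended up in Spotlight's Privacy list.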

  • Where are Wireless Profiles stored in Ubuntu

    - by LonnieBest
    Where does Ubuntu store the profiles that allow it to remember the credentials of private wireless networks it has previously authenticated to and used? I just replaced my uncle's hard drive with a new one and installed Ubuntu 10.04 on it (he had Ubuntu 9.10 on his old hard drive). He is at my house right now, and I want him to be able to access his private wireless network when he gets home. Usually, when I upgrade Ubuntu, I have his /home directory on another partition, so his wireless profile for his own network persists. Right now, however, I'm trying to figure out which hidden folder I need to copy from his /home/user folder on the old hard drive to the new hard drive, so that he will have wireless Internet when he gets home. Does anyone know with certainty exactly which folder I need to copy to the new hard drive to achieve this?
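
    I can't state this with certainty for a 9.10 to 10.04 move, so treat it as an assumption to verify: in that era NetworkManager kept per-user connections under GConf (with the actual WPA secrets in the GNOME keyring) and system-wide connections under /etc/NetworkManager/system-connections. If that holds, the candidates to copy from the old drive would be:

        # per-user NetworkManager connection settings (old GConf-based storage)
        cp -a /old-disk/home/user/.gconf/system/networking /new-disk/home/user/.gconf/system/
        # the keyring that stores the actual WPA secrets
        cp -a /old-disk/home/user/.gnome2/keyrings /new-disk/home/user/.gnome2/

        # system-wide connections, if the network was shared with all users
        sudo cp -a /old-disk/etc/NetworkManager/system-connections /etc/NetworkManager/

    Copying the whole /home/user over, as you normally do, would of course bring both of the user-level pieces along automatically.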

  • Is there a way to do a Windows 7 repair install when you are unable to start/boot Windows 7?

    - by irrational John
    My understanding is that the only way to perform a "repair install" in Windows 7 is to run the install setup.exe within the Windows 7 installation you want to repair. This seems a little brain dead to me since usually the reason I wanted to perform the repair install was because the existing installation was so broken that I could no longer boot and use it. It seems Microsoft is saying my only option in that case is to do a clean install and then reinstall all my apps. So I'm wondering if anyone knows of a way to perform a Windows 7 repair install ... one that preserves your existing OS settings and application installs ... on a Windows 7 partition that cannot be booted.

  • How can I install Gentoo in VirtualBox (on Ubuntu 12.04)?

    - by Curious Apprentice
    I've been googling for a while about how to install Gentoo in VirtualBox. The Handbook says much less about installing it in VirtualBox than about installing on a real partition. I thought there would be a GUI tool to install Gentoo (now I think there is not). Whenever I boot into the Gentoo LiveDVD environment, fdisk returns "command not found!" (I'm not sure whether this is a bug or whether I'm using the wrong command). I'm not a very experienced user, but I do like to learn and play with Gentoo. Any helpful link will be appreciated.

    Downloaded file: livedvd-x86-amd64-32ul-2012.iso (Do I need to use "Gentoo 64" as the OS version in VirtualBox?)
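
    A couple of hedged suggestions, since I can't reproduce that exact LiveDVD: inside VirtualBox the install is the same as on real hardware (the Handbook steps apply unchanged, with the virtual disk showing up as /dev/sda), and partitioning tools usually need root, so the "command not found" may simply be a PATH/privilege issue for the desktop user. From a terminal on the LiveDVD, something like this is worth trying before switching media:

        sudo su -
        which fdisk parted
        fdisk -l        # or: parted /dev/sda print

    If the LiveDVD really does not ship the tools, the Gentoo Minimal Installation CD (a much smaller ISO intended exactly for Handbook installs) boots straight into a root shell with fdisk/parted available and works fine as the install medium inside VirtualBox; the LiveDVD is mainly a demo environment. For a 64-bit ISO like that one, selecting the 64-bit Gentoo OS type for the VM and enabling VT-x should be all that is needed.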
