Search Results

Search found 7936 results on 318 pages for 'kernel modules'.

  • RAM module randomly stopped working

    - by nhinkle
    My laptop is a Dell Inspiron e1505. It came with 2x512 MB of RAM installed, and about a year and a half ago I decided to upgrade it to 2x1 GB. I bought two 1 GB memory modules from Newegg and installed them, and all worked fine. Just last night my laptop was working fine; this morning I booted up and there was a BIOS warning saying "amount of system memory has changed". I tried reseating the modules, but that didn't fix it. Then I removed each one individually and determined that one of the sticks appears to no longer be working. This happened very abruptly - I hadn't noticed any problems which might have been indicative of impending failure. Does anybody have any clue what may have caused this, and whether there's any hope of making it work again? The memory only had a 30-day warranty, so I can't RMA it.

    Read the article

  • Unable to boot Ubuntu 64-bit in Virtualbox on Mac OS X

    - by Aamir
    I have a latest-generation MacBook Pro 7,1 (Intel Core 2 Duo) running Mac OS X 10.6.6. I installed VirtualBox 4.0.2 and tried to boot an Ubuntu 10.10 64-bit ISO. The boot options screen from the live CD came up. However, when I continued to load the live session (or the installer, for that matter), I encountered the following error: "This kernel requires an x86-64 CPU, but only detected an i686 CPU. Unable to boot - please use a kernel appropriate for your CPU." I am not sure whether VT-x is enabled or even supported on the Core 2 Duo in my MacBook Pro, but at least I have both I/O APIC and VT-x enabled for hardware virtualization, as described in the VirtualBox manual.
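
    This error usually means the guest isn't being offered a 64-bit CPU - either VT-x isn't actually active for this VM, or the VM's OS type is set to a 32-bit profile so VirtualBox never exposes long mode. A sketch worth trying (the VM name "Ubuntu" is a placeholder; run these with the VM powered off):

        VBoxManage modifyvm "Ubuntu" --ostype Ubuntu_64          # 64-bit guest profile
        VBoxManage modifyvm "Ubuntu" --hwvirtex on --ioapic on   # VT-x and I/O APIC, as the manual suggests
        VBoxManage showvminfo "Ubuntu" | grep -iE 'Guest OS|VT-x|IOAPIC'

    If showvminfo still reports a 32-bit guest OS or VT-x off after that, the host CPU/firmware side (rather than the ISO) is the next thing to look at.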

    Read the article

  • Ubuntu 9.04 Cannot Connect to visible open wifi ap (reason 6)

    - by Andrew Bolster
    I'm travelling currently, so the last network I connected to successfully was my home WPA-PSK network. I hadn't tried anything until I got to my accommodation, which has an open network (the one I'm on now, from the Win7 partition on my laptop). The network (and a similar archetypal 'linksys' open network, as well as some protected local networks) is correctly displayed in network-manager, and upon selection it happily spins around to its heart's content for a while before saying 'no chance boy'. /var/log/syslog spills out the usual combination of wpa_supplicant and kernel messages, the most interesting of which is the kernel deauthentication reason 6 response. Reason 6 apparently means class2FrameFromNonAuthStation... the client attempted to transfer data before it was authenticated. Anyone seen anything like this? I've already tried going closer to the router, to no avail. I don't remember seeing this any other time I've connected to an open AP, even if that AP is far away. (Signal strength for this AP is good; kismet says it's around -57 dBm, well above the threshold of -80 dBm, and I've tried all the suggestions from the 'Related Questions'.)

    Read the article

  • Any way to back up nginx before recompiling

    - by JM4
    I am looking to install the HttpGeoipModule for NGINX, but I have learned that I have to recompile the whole thing from source in order to do so. I have a new Media Temple DV 4.0 server that comes with nginx v1.3.0 stock. I have never had to recompile from source before, and I'm a bit nervous about making changes without being able to revert to a previous state in the event something messes up (that, and the fact that it affects a live server, so I have no idea what the downtime would be). My plan was to copy all the existing modules used (nginx -V to list them all and copy the modules already compiled), then rebuild from source with that copied info, including the ./configure --with-http_geoip_module option. Is it possible to back up the existing nginx configuration in the event something goes wrong?
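
    For what it's worth, recompiling only replaces the nginx binary - the configuration files aren't touched - so a rollback plan can be as simple as recording the current configure arguments and keeping copies of the binary and the config tree. A sketch (paths are assumptions for a typical install, not confirmed for the Media Temple DV layout):

        nginx -V 2>&1 | tee ~/nginx-configure-args.txt   # exact flags the current binary was built with
        sudo cp -a /usr/sbin/nginx /usr/sbin/nginx.bak   # the running binary (location may differ)
        sudo cp -a /etc/nginx /etc/nginx.bak             # the configuration tree

    Rolling back is then just stopping nginx, restoring the .bak copies, and starting it again; building the new binary completely before swapping it in keeps the downtime to a single restart.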

    Read the article

  • How to set up Drupal Plugin Manager on MAMP in a secure way?

    - by Andrei
    Hi, I use MAMP PRO as a global web server. First of all, is that a good idea? Secondly, my objective is to run a Drupal website with management that is as easy as possible. Now I want to use the Plugin Manager module to install additional modules and themes for my website. It wants to use FTP for that, and I know that if I open up the FTP port the IT-department guys will come to me and ask me to shut it down. So I wonder: is there a way to allow Plugin Manager to install modules while keeping port 21 closed?
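
    Not part of the question itself, but a common way around the FTP requirement entirely is to install modules from the command line with Drush, which writes straight to the filesystem and needs no FTP daemon at all. A sketch, assuming shell access to the MAMP host, Drush installed, and 'views' standing in for whichever module is wanted:

        cd /path/to/drupal-root      # the site's document root (placeholder path)
        drush dl views               # downloads into sites/all/modules
        drush en -y views            # enables it

    The Plugin Manager UI can still be used for discovery while the actual installs go through Drush, so port 21 can stay closed.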

    Read the article

  • How to use UMLFS?

    - by Vi
    I'm trying to mount what is inside a UML session as a FUSE filesystem on the host. There's a "uml_mount" program which looks like the tool for this purpose, but it fails. What is UMLFS (I haven't found any documentation at all), and how do I mount it? uml_mount mounts a FUSE filesystem and runs uml_mconsole <umid> umlfs <file descriptor>, which tries to send this file descriptor to the UML kernel (to deal with further FUSE things), but the send fails. Also, I haven't found any signs of FUSE inside the UML kernel. Do I need some special patch for this?

    Read the article

  • Forwarding 80 to 443 on Nagios woes

    - by Ethabelle
    I perhaps just need some extra insight, because I don't see where I'm going wrong. I used an SSL cert to secure our Nagios server. We want to specifically require all traffic to Nagios (like 2 users, lol) to use SSL. So I thought: oh, mod_rewrite + a RewriteRule in .htaccess, right? So I went into the DocumentRoot, did a vi .htaccess (one didn't already exist), and then I put in the following rule:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} 80
        RewriteRule ^(.*)$ https://our.server.org/$1 [R,L]

    This does absolutely nothing. Does nada. Whhhyy.. Note: AllowOverride All in httpd.conf is on. Also, I verified that the LoadModule line for mod_rewrite is not commented out... but note, I couldn't find the mod_rewrite module installed, so I copied it over from another server and placed it in modules/mod_rewrite.so. It was weird because it was enabled in the httpd.conf file but then didn't exist in modules ... I'm a baddie :(
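
    For what it's worth, a port-80 to HTTPS redirect doesn't have to depend on mod_rewrite or .htaccess at all; it can be done with a plain Redirect in the port-80 virtual host (mod_alias, which is compiled in by default). A sketch, assuming the hostname from the rewrite rule above and that the site answers on a *:80 vhost:

        <VirtualHost *:80>
            ServerName our.server.org
            # Send everything to the SSL side; the request path is preserved.
            Redirect permanent / https://our.server.org/
        </VirtualHost>

    If the .htaccess route is kept instead, it is worth confirming with "httpd -M" that mod_rewrite actually loads, since a .so copied from another server only works if it was built for the same Apache version and architecture.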

    Read the article

  • Linux servers seeing bad download performance behind Sonicwall firewall

    - by Joshua Penix
    I'm working with a pair of co-located CentOS Linux servers sitting behind a Sonicwall PRO 2040 Enhanced firewall running in transparent bridge mode. These servers are having a strange problem downloading files more than a few megabytes in size. For example, if I try to wget or FTP a copy of the Linux kernel from kernel.org, the first ~1-2MB will download at 600+K/s, and then throughput will drop off a cliff to 1K/s. I've reviewed all the firewall configuration settings for anything suspicious, but found nothing. More interestingly, I performed the same download with a Windows server sitting behind the same firewall, and it sailed right through at 600+K/s the whole way. Has anyone seen this? Where should I start looking to troubleshoot this problem?
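
    One pattern reported with SonicWALL devices of that era (an assumption here, not a confirmed diagnosis for this firewall) is mishandling of TCP window scaling, which a modern Linux stack uses far more aggressively than an older Windows one - and which tends to show up exactly like this: a fast start, then a collapse once the window grows. A quick, reversible test on one of the CentOS boxes:

        # Temporarily disable TCP window scaling, then repeat the wget/FTP download test.
        sysctl -w net.ipv4.tcp_window_scaling=0
        # ...rerun the download here and compare throughput...
        # Restore the default when finished:
        sysctl -w net.ipv4.tcp_window_scaling=1

    If the slowdown disappears with scaling off, the fix belongs on the firewall side rather than permanently crippling the servers' TCP stacks.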

    Read the article

  • No virtual console on ubuntu 12.10

    - by Buzzzz
    When I try Ctrl-Alt-F(1-6) in Ubuntu 12.10 I only get a black screen with a blinking cursor, but no login prompt. Any ideas on what could be wrong? It is a fresh install of 12.10 using an AMD Radeon 5850 graphics card. I have tried different things in my /etc/default/grub, but at the moment I use the following:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vga=normal"
        #GRUB_CMDLINE_LINUX="vga=0x0376"
        #RUB_CMDLINE_LINUX_DEFAULT="vga=0x014c"
        #GRUB_CMDLINE_LINUX="vga=0x014c"
        #GRUB_GFXPAYLOAD_LINUX=1600x1200x24
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
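
    Not something stated in the question, but with the open-source radeon driver doing kernel mode setting, the vga= parameter is deprecated and can leave the text consoles blank while X still works. A sketch of a change worth trying in /etc/default/grub (an assumption, not a confirmed fix):

        # Drop vga=... from the kernel command line and let KMS drive the consoles,
        # or force plain text mode for them:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_GFXPAYLOAD_LINUX=text

        # Then regenerate the config and reboot:
        sudo update-grub

    If the consoles are still blank afterwards, checking whether getty processes are actually running on tty1-tty6 (ps aux | grep getty) helps separate a display problem from a missing login prompt.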

    Read the article

  • How to enable CDR on AsteriskNow 1.5

    - by Michal Niklas
    I have upgraded the PBX to Asterisk 1.6.2.7 and now CDR files are not created. It looks like such logging is disabled:

        Connected to Asterisk 1.6.2.7 currently running on pbx2 (pid = 5824)
        Verbosity is at least 3
        pbx2*CLI> cdr show status
        pbx2*CLI>
        Call Detail Record (CDR) settings
        ----------------------------------
          Logging:  Disabled
          Mode:     Simple

    Asterisk shows that the CDR modules are loaded:

        pbx2*CLI> module show like cd
        Module            Description                                Use Count
        cdr_manager.so    Asterisk Manager Interface CDR Backend     0
        cdr_csv.so        Comma Separated Values CDR Backend         0
        app_cdr.so        Tell Asterisk to not maintain a CDR for    0
        app_forkcdr.so    Fork The CDR into 2 separate entities      0
        func_cdr.so       Call Detail Record (CDR) dialplan functi   0
        cdr_custom.so     Customizable Comma Separated Values CDR    0
        6 modules loaded

    How do I enable creation of CDR CSV files?
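
    In Asterisk 1.6 the master CDR switch lives in /etc/asterisk/cdr.conf; a "Logging: Disabled" status usually just means that file is missing or has the switch turned off. A sketch of the stock settings (assumed defaults, not copied from this box):

        ; /etc/asterisk/cdr.conf
        [general]
        enable=yes          ; master switch - without this, no backend writes anything

        ; /etc/asterisk/cdr_csv.conf  (settings for the cdr_csv backend)
        [csv]
        usegmtime=yes       ; optional
        loguniqueid=yes     ; optional

    After editing, restart Asterisk (the [general] switch is read at startup) and re-run "cdr show status"; with cdr_csv loaded the records typically land in /var/log/asterisk/cdr-csv/Master.csv.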

    Read the article

  • How to check use of userva boot option on Win 2K3 server

    - by Tim Sylvester
    I have some 32-bit Win2K3 servers running an application that fails now and then apparently due to heap fragmentation. (Process virtual bytes grows, private bytes does not) I do not have access to the source code or build process of this application. I have modified the boot.ini file on one of these servers to include /userva=2560, half way between the normal mode of operation and the /3GB option. Normally it takes weeks to reach the point of failure, but I'd like to see right away whether this has actually had any effect. As I understand it, this option limits the kernel to the remaining address space (1536MB instead of 2048), but does not necessarily give an application the extra address space, depending on the flags in the application's PE header. How can I determine whether the O/S is allowing a particular application, running in production, to access address space above 2GB? Additionally, what's the best way to monitor the system to ensure that the kernel is not starved for address space, and more generally how should I go about finding the optimal value for this setting?
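
    Whether a particular executable is allowed to use the space above 2 GB is governed by the IMAGE_FILE_LARGE_ADDRESS_AWARE flag in its PE header, so that flag is worth checking directly. A sketch, assuming the Visual Studio / Windows SDK tools are available somewhere the binary can be inspected (the path below is a placeholder):

        dumpbin /headers C:\path\to\app.exe | findstr /i "large"
        rem Prints "Application can handle large (>2GB) addresses" when the flag is set.

    If the flag is set, watching the process's Virtual Bytes counter in Performance Monitor climb past 2 GB on the /userva=2560 box confirms the extra address space is really being granted; on the kernel side, the Memory object's Free System Page Table Entries and the paged/nonpaged pool counters are the usual indicators that the kernel isn't being starved.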

    Read the article

  • How do I use memmap to reserve memory on boot?

    - by alexl
    I've got a laptop with some corrupted RAM addresses, so I'm trying to use memmap to reserve them before Linux boots up. I've been trying to use memmap=10M$1024M as a kernel boot option, but Linux crashes (with no errors) and restarts. If I use a different syntax for memmap, like memmap=1023M@0M, it boots fine. Do I have to specify a certain size of block to reserve, or could my kernel version not support reserving memory with memmap? Maybe I'm better off using memmap=exactmap, and if so, could somebody point me to a good FAQ on how to use it?
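
    A classic gotcha with this particular option (a guess, since the boot loader isn't mentioned): the $ sign gets eaten by GRUB/shell quoting, so the kernel may only ever see memmap=10M - which caps the machine at 10 MB of RAM and would explain a silent crash and reboot. With GRUB 2 the $ has to be escaped; a sketch, with the caveat that the exact escaping depends on the GRUB version, so the generated file should be checked afterwards:

        # /etc/default/grub - make sure the $ survives into the final command line
        GRUB_CMDLINE_LINUX_DEFAULT="memmap=10M\\\$1024M"

        # Regenerate and verify the $ is still there:
        sudo update-grub
        grep memmap /boot/grub/grub.cfg

    After booting, "dmesg | grep -i 'user-defined physical RAM map'" shows whether the reservation was actually applied.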

    Read the article

  • kill SIGABRT does not generate core file from daemon started from crontab.

    - by Guma
    I am running CentOS 5.5 and working on a server application for which I sometimes need to force a core dump so I can see what is going on. If I start my server from a shell and send it SIGABRT with kill, a core file is created. If I start the same program from crontab and then send it the same signal, the server is "killed" but no core file is generated. Does anyone know why that is, and what needs to be added to my code or changed in the system settings to allow core file generation? Just a side note: I have ulimit set to unlimited in /etc/profile, and I have set the following in /etc/sysctl.conf:

        kernel.core_uses_pid = 1
        kernel.core_pattern=/var/cores/%h-%e-%p.core

    Also, my server app was added to crontab under the same login id I use when running it from a shell. Any help is greatly appreciated.
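
    A likely explanation (hedged, since it depends on how cron launches the server): cron jobs don't run through a login shell, so the ulimit set in /etc/profile never applies to them, and the core size limit for the cron-started process stays at 0. Two common fixes are raising the limit in a wrapper script, or calling setrlimit(RLIMIT_CORE, ...) early in the server's own code. A wrapper sketch (the server path is a placeholder):

        #!/bin/bash
        # Started from crontab instead of the server itself.
        ulimit -c unlimited        # cron doesn't read /etc/profile, so raise the limit here
        exec /path/to/my_server "$@"

    Running a small test cron job that just writes the output of "ulimit -c" to a file confirms what limit cron-started processes actually get before changing anything.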

    Read the article

  • apache not starting in vagrant vm

    - by jimmyjambles
    I used Puphpet.com to create a Vagrant VM to be used for web development. The problem I am having is that the VM cannot start Apache on boot.

        $ sudo /etc/init.d/apache2 start
         * Starting web server apache2
         *
         * The apache2 configtest failed.
        Output of config test was:
        apache2: Syntax error on line 36 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/authz_default.load: Cannot load /usr/lib/apache2/modules/mod_authz_default.so into server: /usr/lib/apache2/modules/mod_authz_default.so: cannot open shared object file: No such file or directory
        Action 'configtest' failed.
        The Apache error log may have more information.

    The system is Ubuntu 12; I'm not sure what modifications I have to make to the Puppet config to fix the problem.
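
    That error means an authz_default module is enabled (there's a symlink for it in mods-enabled) but the corresponding .so isn't present in this Apache build - typically a sign that the enabled-modules list and the installed Apache version have drifted apart (mod_authz_default, for example, no longer exists at all in Apache 2.4). A diagnostic sketch to run inside the VM (not a change to the Puppet config itself):

        # Which Apache is actually installed, and is the module file really missing?
        apache2 -v
        ls /usr/lib/apache2/modules/ | grep authz

        # If mod_authz_default.so genuinely isn't shipped for this Apache version,
        # disable the stale entry and retest:
        sudo a2dismod authz_default
        sudo apache2ctl configtest && sudo service apache2 start

    Whatever works manually can then be mirrored in the Puphpet/Puppet configuration so the fix survives a vagrant destroy/up cycle.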

    Read the article

  • information about /proc/pid/sched

    - by redeye
    Not sure this is the right place for this question, but here goes: I'm trying to make some sense of the /proc/pid/sched and /proc/pid/task/tid/sched files for a highly threaded server process, but I was not able to find a good explanation of how to interpret these files (just a few bits here: http://knol.google.com/k/linux-performance-tuning-and-measurement# ). I assume this entry in procfs is related to newer kernel versions that run the CFS scheduler? This is a CentOS distro running kernel 2.6.24.7-149.el5rt with the preempt-rt patch. Any thoughts?

    Read the article

  • Multiple OS's and GRUB chainloading

    - by Kent
    Hi, I want to have multiple OS installations and I have been advised that chain loading using GRUB is a good way to handle this. I have looked at tutorials on the web but I still have some questions before I can start.

    I want:
      - Windows XP: 20 GB. For running some school stuff and a game which does not work through WINE.
      - Xubuntu 9.04: 85 GB. My main OS.
      - Another Linux distribution: 15 GB. For experimenting and trying Linux distributions out.

    I will:
      - Wipe and install various distributions quite often on the 15 GB partition.
      - Use dd to make a copy of my Windows partition after installing it and getting things to work as I like. My experience is that Windows needs to be re-installed maybe once per year to not get bloated and slow.

    I have been told:
      - To use GRUB chain loading. It will make it easier when kernel upgrades are made in the Linux distributions, as they modify the GRUB boot menu.

    To my understanding I need to (I might very well be mistaken):
      1. Install Windows first.
      2. Then install Xubuntu and let it write over the MBR with GRUB (I guess this is the default).
      3. Get the GRUB on the MBR to start Windows XP if I want to (it's done by default), start Xubuntu using the kernel of my choice, or defer execution to the boot sector of my other Linux distribution. The actual chain loading will only occur when I want to start my experimental install of Linux.

    I wonder:
      - Is step 3 above correct and a good way to handle this?
      - Is it also a good way to use chain loading for both Xubuntu and my experimental Linux installation?
      - How do I get a Linux distribution to install the boot loader it comes with to the boot sector of its partition and not to the MBR?
      - If I can't get it to not touch the MBR, then I could make a backup of the MBR using dd and write it back after installing my experimental Linux installation. But then, how would I get the boot loader (let's say GRUB) into the boot sector of the experimental Linux installation? And how would it work if said Linux installation gets a new kernel update and needs to update the GRUB menu?
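
    For what it's worth, Xubuntu 9.04 still shipped GRUB legacy, so a chainload entry for the experimental install would go in /boot/grub/menu.lst on the Xubuntu side and look roughly like this (a sketch - (hd0,2) is a placeholder for whichever partition the experimental distribution lands on):

        title   Experimental Linux (chainload)
        root    (hd0,2)
        chainloader +1

    Most installers let the boot loader be written to the distribution's own partition instead of the MBR - usually an "advanced" choice on the final installation screen (pick something like /dev/sda3 rather than /dev/sda). With that in place, kernel updates in the experimental install only rewrite the menu inside its own partition, and the chainloader entry above keeps working without touching the MBR.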

    Read the article

  • Puppet: Could not find init script for 'squid'

    - by chris
    I'm using Puppet to install ufdbGuard, which requires Squid 2.7 (which is correctly installed and working properly). Here is the relevant class:

        class pns_client::squid {
          package { 'squid':
            ensure => present,
            before => File['/etc/squid/squid.conf'],
          }

          if $::ufdbguard_installed == "true" {
            $squidconf = 'puppet:///modules/pns_client/squid.conf_ufdbguard'
          } else {
            $squidconf = 'puppet:///modules/pns_client/squid.conf'
          }
          notify { $squidconf: }

          file { '/etc/squid/squid.conf':
            ensure => file,
            mode   => 644,
            source => $squidconf,
          }

          service { 'squid':
            ensure     => running,
            enable     => true,
            hasrestart => true,
            hasstatus  => true,
            subscribe  => File['/etc/squid/squid.conf'],
          }
        }

    When running, I get this error:

        err: /Stage[main]/Pns_client::Squid/Service[squid]: Could not evaluate: Could not find init script for 'squid'

    This happens on all freshly-installed Debian 6 and Ubuntu 10.04/11.04 machines. Any ideas?
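
    The manifest itself orders things sensibly (package before file, service subscribed to the file), so a first step is to confirm what the failing machines actually end up with; a diagnostic sketch to run on one of them (not a fix in itself):

        dpkg -l squid                  # is the squid package really installed by the time the service runs?
        ls -l /etc/init.d/squid        # does an init script exist under the name Puppet is looking for?
        puppet agent --test --debug    # shows which init script / provider the service resource probes

    If the init script turns out to exist under a different name on some of these releases (squid3, for instance), pointing the service at it or adjusting the package name accordingly would explain why only the fresh installs fail.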

    Read the article

  • Brocade 200E Switch - Fibre Channel

    - by Arthor
    What I have: a Fujitsu-Siemens PRIMERGY BX600 and a Brocade 200E (16 port, 4 Gbit fibre).

    My question: imagine a QNAP with a 10 Gbit fibre card connected to the Brocade 200E (16 port, 4 Gbit fibre). Would this work; would the card drop down to 4 Gbit? Are 10 Gbit fibre cards backwards compatible?

    Update: I have the specs of my server now. Fujitsu-Siemens PRIMERGY BX600 S3 Blade Ecosystem Blade Chassis comprising:
      - 2 x A3C40073243 Blade Management modules
      - 2 x A3C40089238 GBE Switch Blade SB9F 30/12
      - 2 x A3C40085736 4Gb 10 port pass through blades
      - 1 x A3C40083767 Digital KVM Modules
      - 2 x A3C40073245 Fan enclosures + cooling fans
      - 4 x A3C40073262 Power Supplies

    My goals and objectives:
      - To have a blade system in place: 8 blades for video rendering, the other 2 for database and scripts etc.
      - The system will be built on VMware ESXi 5.
      - Use iSCSI on the QNAP to support HA and vMotion if needed.
      - Users to access the QNAP for video editing.
      - The QNAP has 12 drives (2 x 6 HDD in RAID 10).

    Read the article

  • KVM to Xen migration

    - by qweet
    I've recently been appointed to create some VMs for production use, and went gung ho into making a KVM-based VM instead of finding out what our production server uses. I've only recently found out, though, that our own servers run XenSource and don't look like they're going to be upgraded in the near future. So for the moment I'm stuck with one of two choices: attempting to convert the KVM VM into a Xen VM, or rebuilding what I have as a new Xen VM. Being the lazy person I am, I would rather not have to rebuild the VM. I've looked for some documentation on a procedure to do this, but the only thing I can come up with is an ancient article with some vague instructions. So this is my question, Server Fault: can one migrate a VM running under a KVM kernel to a Xen kernel? And if so, how?

    Read the article

  • eTrayz: Replace base system with a bootstrapped Debian

    - by knoopx
    I bought an eTrayz NAS some time ago. The device is more or less good, but it ships with a closed-source custom Linux and a bunch of broken web-apps. I wanted to replace the whole system with a raw Debian installation. I have successfully bootstrapped a Lenny Debian into a chroot and I'm able to use it. However, I would like it to be the default system that the box boots into automatically. The device itself ships with a bundled 2.6.24.4 kernel. I think the kernel is on a dedicated flash memory, so I would prefer not to re-flash it. What do you think is the best way to accomplish this?

    Read the article

  • Can a named (bind) crash make a server unreachable?

    - by giorgio79
    My server recently became unreachable, and after restarting it, a named error was the last thing I found in /var/log/messages from before the restart:

        Jun 26 00:15:06 host named[1303]: error (network unreachable) resolving 'dlv.isc.org/DNSKEY/IN': 2001:500:71::29#53
        Jun 26 06:38:55 host kernel: imklog 5.8.10, log source = /proc/kmsg started.
        Jun 26 06:38:55 host rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1294" x-info="http://www.rsyslog.com"] start
        Jun 26 06:38:55 host kernel: Initializing cgroup subsys cpuset

    Can a named crash make a server unreachable? I doubt it, as I assume I should still be able to log in with SSH via the IP, but the server did not respond... So I am trying to make heavy guesses here.

    Read the article

  • Hiding "Syntax OK" from apache2ctl output

    - by Oscar Barrett
    I am checking whether a particular apache module is installed using apache2ctl -M. When listing the modules, apache runs a syntax check on the configuration files which prints out "Syntax OK" if everything is fine. However, this message doesn't seem to be coming from STDOUT or STDERR as it shows even if all output is redirected to /dev/null. i.e.

        $ sudo apache2ctl -M
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         ...
        Syntax OK
        $ sudo apache2ctl -M >/dev/null
        Syntax OK

    How is this being outputted, and is it possible to hide?
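
    One detail visible in the transcript above: only stdout is redirected (>/dev/null), so the message may simply be arriving on stderr after all. apache2ctl is also just a shell script, so it can be traced to see exactly which command prints the line. A sketch (path assumed to be the Debian/Ubuntu default):

        # If the message is on stderr, this hides it:
        sudo apache2ctl -M 2>/dev/null

        # To see which command inside the wrapper emits "Syntax OK":
        sudo sh -x /usr/sbin/apache2ctl -M 2>&1 | less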

    Read the article

  • Centos 6 - How to upgrade module located inside initramfs?

    - by anonymous-one
    We have recently upgraded our e1000e (Intel Ethernet) module on one of our CentOS 6.0 boxes. Even though the module compiled and installed fine, the old version is still being used. We have tracked this down to the fact that the e1000e.ko module is located inside the initramfs file for the booting kernel, and thus, even though the module located in /lib/modules/.... was being updated, the old one is still being loaded from inside the initramfs file. After some research, we have found that creating a new initramfs file in CentOS should be as simple as: /sbin/dracut <initramfs> <kernel-version>. Can someone confirm that this is a safe way to basically recreate the initramfs file? This is a non-locally hosted (1000's of km away...) box, and getting support to resolve this if a reboot is unsuccessful will lead to quite a bit of downtime. Thanks.
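
    For what it's worth, dracut is indeed the standard tool for this on CentOS 6, and the usual safety net for a remote box is to keep the old image around so the previous state is one rename away. A sketch for the running kernel (adjust the version string if the target kernel is a different one):

        # Keep the old image so it can be restored if the new one misbehaves
        cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak

        # Rebuild the initramfs for this kernel (-f overwrites the existing file)
        dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

        # Confirm the rebuilt image actually contains the new e1000e module
        lsinitrd /boot/initramfs-$(uname -r).img | grep e1000e

    Leaving an older kernel entry untouched in the GRUB menu gives a second fallback if the box ever has to be recovered via remote hands.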

    Read the article

  • Minimum size of a boot partition on debian

    - by zebonaut
    I'm setting up an old box with Debian. First etch (4.0), because this is the last version that still had boot floppies; then the box is to be upgraded to lenny (5.0) and squeeze (6.0). Therefore, I will end up having a couple of different kernel versions in the boot partition. If I don't want to be wasteful, and if I end up needing a separate boot partition, how large should it be? I've used 10 MB long ago, but that was woody, with only one kernel in the boot partition, and this seems to be too small for what I want to do now.
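
    As a rough worked estimate (ballpark figures for stock Debian kernels of that era, not measurements from this box): a vmlinuz is roughly 1.5-2.5 MB, its generated initrd roughly 5-15 MB, and System.map/config plus GRUB's own files add a few MB more. With two or three kernels installed side by side during the dist-upgrades, that comes to somewhere around 20-60 MB, so a /boot of about 100 MB leaves comfortable headroom without being wasteful.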

    Read the article
