Search Results

Search found 7936 results on 318 pages for 'kernel modules'.

Page 270 of 318.

  • Windows Modules Installer delaying login, Server 2008 R2

    - by Kyle
    We updated our servers this weekend (Windows updates). Everything went fine, except that one of our terminal servers now hangs at login with the message "Waiting for Windows Modules Installer." It eventually times out and leaves an event log message that the service has stopped unexpectedly. I have disabled the service and users can now log in within a reasonable time frame, but we will need to re-enable the service in order to install further updates. I'm not sure where to start with this one; I'm an entry-level admin and my colleagues are on vacation today, thank God this isn't a serious problem. Further details:

    - It affects all users.
    - The only third-party software on the server is our ERP software and ScrewDrivers from Tricerat.
    - The only event log message is that the service has stopped unexpectedly.
    - The Server Manager screen does not display any information about roles; it just says "error".
    - The Remote Desktop roles all seem to be functioning properly; RemoteApp works, as does standard RDP.

    Let me know if there are any further details I can provide. I will be checking this frequently throughout the day.
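    If the service only needs to run at patch time, one stopgap (a sketch, not a confirmed fix; TrustedInstaller is the service name behind Windows Modules Installer) is to keep it disabled day-to-day and flip it back on from an elevated prompt before installing updates:

        rem A sketch: toggle the Windows Modules Installer around patch windows.
        rem Run these from an elevated command prompt.
        sc config trustedinstaller start= disabled
        rem ...and before the next round of updates:
        sc config trustedinstaller start= demand
        net start trustedinstaller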


  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This works fine: all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this happens in the install script of the module). In that case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml :
        File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters and it will update any known .ps1xml file, but that only works if the file has already been loaded. Can I list the loaded format data files somewhere?

    Here is a bit of background. I'm building a custom module that is installed using a script. The install script looks like this:

        [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
                if (!(Test-Path $target)) {
                    New-Item -ItemType Directory -Path $target | Out-Null
                }
                Get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    Copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed.
        You can import it using :
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    Since I updated my $profile to always load this module, Update-FormatData has already been called by the time I run the install script. The install script then force-imports the module, which runs the module code, and therefore the Update-FormatData call, a second time.
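    One possible guard (a sketch, assuming the only failure mode on re-import is the "already present" collision shown above) is to make the error terminating and swallow just that case:

        # A sketch, not a confirmed fix: tolerate the "already present" error
        # that Update-FormatData raises when the module is force re-imported.
        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot "*.ps1xml") -ErrorAction Stop
        }
        catch {
            if ($_ -notmatch 'already present') { throw }   # re-throw anything unexpected
        }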


  • LightTPD and PHP not working if htdocs is outside of the LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
            LightTPD/
                htdocs/
            PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem starts when I try to put htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
            LightTPD/
            PHP/

    I update the document root to server_root + "/../../htdocs"; static HTML pages keep working, but PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at their default values in php.ini, and everything works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

        server.modules = (
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_cgi",
            "mod_status",
        )
        include "variables.conf"
        include "mimetype.conf"

        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"

        server.upload-dirs = ( temp_dir )
        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml",
                             "index.html", "index.htm", "default.htm" )
        server.event-handler = "libev"
        url.access-deny = ( "~", ".inc" )
        $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" }
        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
        server.errorlog = server_root + "/logs/error.log"

        ######### Options that are good to be but not neccesary to be changed #######
        dir-listing.activate = "enable"

        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )
        status.status-url = "/server-status"
        status.config-url = "/server-config"
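    "No input file specified" comes from the PHP CGI binary when it cannot map the request to a script, which on Windows often happens when the document root contains "..". One thing to try (a sketch; the drive and path are assumed, adjust them to the real layout) is giving lighttpd an absolute document root instead of a relative one:

        # A sketch: point lighttpd at the shared htdocs with an absolute path.
        # "C:/testing" is an assumed location, not the real one.
        var.basedir = "C:/testing"
        server.document-root = basedir + "/htdocs"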


  • Trying to retrieve data with a Thermaltake BlacX enclosure: Windows 7 believes the drive to be "uninitialized"

    - by Peeter Joot
    I have a laptop that won't boot. It appears to be a power problem: the laptop auto-turns-off within about 10 seconds of pressing the power button (the power buttons light up temporarily, and there is no display with or without an external monitor). I've followed the Dell troubleshooting guide, which suggested reseating the memory modules and the hard drives, but that didn't help. Before having the laptop serviced, I wanted to get some data off the hard drive. I bought a Thermaltake BlacX enclosure, intending to use it first to retrieve the drive's data and later as external storage. Following the instructions (insert cables, insert drive, power on) goes fine, and Windows 7 on another laptop installs the device driver software. However, no drive letter shows up in 'Computer'. Under Computer → Manage → Storage I can see the drive is there, and there's an option to "initialize" it. The Windows "initialize" dialog gives me the option to pick between "MBR" and "GPT" partitioning, which sounds like a good way to destroy the data on the drive. I'm thinking that I've purchased the wrong device for the job (or that my old drive is damaged). The old drive to recover info from is a Western Digital 500 GB / 7200 rpm SATA drive, if that is relevant. Both the original laptop and the one I'm using for recovery are running Windows 7. Does anybody have experience with using a BlacX enclosure to recover data off an already-formatted drive?


  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried this config (taken from the docs), but it didn't work:

        # /etc/nginx/nginx.conf
        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        # /etc/nginx/sites-available/website
        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings; I need to control caching from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 seconds. Instead, nginx will store it for 1h because of those directives. I was also wondering whether I should try proxy_cache instead; I'm not sure what the difference is. The docs for both the fastcgi and proxy modules say this:

        The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48,
        Cache-Control: private and no-store only since 0.7.66, though.
        Vary handling is not implemented.

    nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy that is offered by Symfony2 (which I turned off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
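    For what it's worth, in nginx the fastcgi_cache_valid times are only a fallback: a Cache-Control or Expires header from the backend takes precedence, unless nginx is explicitly told to ignore those headers. A sketch of a config that leans on the backend's headers (the socket path is an assumption):

        # A sketch: let the backend's Cache-Control drive freshness.
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php-fpm.sock;   # assumed socket path
            include fastcgi_params;
            fastcgi_cache website;
            fastcgi_cache_valid 200 1m;   # fallback ONLY for responses with no cache headers
            # make sure nothing like the following is set anywhere, or the
            # backend's headers are ignored:
            # fastcgi_ignore_headers Cache-Control Expires;
            add_header X-Cache $upstream_cache_status;
        }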


  • Apache segmentation fault

    - by lush
    I can't start Apache; it fails with the following errors:

        [root@web]# /etc/init.d/httpd start
        Starting httpd: /bin/bash: line 1: 19232 Segmentation fault /usr/sbin/httpd
        [root@web]# /usr/sbin/apachectl -k start
        /usr/sbin/apachectl: line 102: 19919 Segmentation fault $HTTPD $OPTIONS $ARGV

    I use the Webmin control panel and I've already tried re-installing Apache from scratch. Can someone advise what else I should try? Many thanks.

    UPDATE: Only one line is ever written to the error logs, and it doesn't seem very important:

        [Mon Nov 14 19:00:09 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)

    UPDATE 2: I've recently had the error below in the logs. It looks like some modules are incompatible, so I've disabled the fileinfo and mcrypt extensions in my php.ini; I should be able to start the web server without them.

        PHP Warning: PHP Startup: fileinfo: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match
        in Unknown on line 0
        PHP Warning: Module 'mcrypt' already loaded in Unknown on line 0

    UPDATE 3:

        [root@web]# file /usr/sbin/httpd
        /usr/sbin/httpd: ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, stripped
        [root@web]# uname -m
        x86_64
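    When httpd segfaults before logging anything, one way to find the culprit (a sketch of a standard debugging approach, not specific to this box) is to run it in single-process mode under gdb and look at the backtrace; a crash inside a PHP extension's .so would tie back to the module mismatch above:

        # A sketch: run Apache in the foreground under gdb and capture the crash
        gdb /usr/sbin/httpd
        (gdb) run -X          # -X = single process, stays attached to the console
        (gdb) backtrace       # after the SIGSEGV, shows which library faulted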


  • Files showing in smbclient but not smbmount

    - by Staale
    I have a Samba folder that I try to access through smbclient, and I can browse it just fine. However, when mounting it through smbmount, all the folders under the share appear empty: I can list the folders directly under the share fine, but they all appear empty. smbclient:

        # smbclient //server/share -U username -W workgroup password

    smbmount:

        # sudo smbmount //server/share mntpoint -o user=username,workgroup=workgroup,password=password

    I have also tried domain=workgroup instead of workgroup=workgroup; both give the same result. There are no error messages and everything mounts fine, but all the folders under mntpoint are empty, despite the same folders being non-empty when using smbclient. Are these using different libraries? How can I debug the error? Additionally, if I try to mount //server/share/folder, doing an ls results in a segmentation fault. Using dmesg I find:

        kernel BUG at /build/buildd/linux-2.6.28/fs/cifs/cifs_dfs_ref.c:315!

    Full trace: http://pastebin.com/m70adc213

    Using a credentials file, I first get empty dirs, then "Resource temporarily unavailable". In my dmesg I see the following output:

        CIFS VFS: compose_mount_options: Failed to resolve server part of \\srv\share to IP: -11
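    The dmesg lines suggest the kernel's CIFS client (which smbmount uses, unlike smbclient's userspace library) is tripping over DFS referrals rather than permissions. A sketch of gathering more detail with the in-kernel CIFS debug switch (the mount point is an example):

        # A sketch: turn on verbose CIFS logging, retry the mount, read dmesg
        echo 1 > /proc/fs/cifs/cifsFYI
        mount -t cifs //server/share /mnt/point -o username=username,workgroup=workgroup
        ls /mnt/point/somefolder
        dmesg | tail -50
        echo 0 > /proc/fs/cifs/cifsFYI    # switch the noise back off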


  • How to get the IP address of a PPP (Point-to-Point Protocol) network interface?

    - by Xsmael
    I have a Linux machine with two network interfaces, and I'd like to get the IP address of the PPP interface w1g1, but it doesn't show up in ifconfig. There is a public IP on the PPP interface, but there is no internet connection. I'm trying to troubleshoot, but I need the IP address of the interface and can't get it. ifconfig:

        eth0  Link encap:Ethernet  HWaddr 00:30:48:8D:F0:2C
              inet addr:192.168.2.254  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: fe80::230:48ff:fe8d:f02c/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:9970 errors:0 dropped:567 overruns:0 frame:0
              TX packets:4338 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              RX bytes:1441024 (1.3 MiB)  TX bytes:915814 (894.3 KiB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:675 errors:0 dropped:0 overruns:0 frame:0
              TX packets:675 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:50659 (49.4 KiB)  TX bytes:50659 (49.4 KiB)

        w1g1  Link encap:Point-to-Point Protocol
              UP POINTOPOINT RUNNING NOARP  MTU:240  Metric:1
              RX packets:748994 errors:0 dropped:0 overruns:0 frame:0
              TX packets:748992 errors:0 dropped:0 overruns:0 carrier:3
              collisions:0 txqueuelen:100
              RX bytes:179758560 (171.4 MiB)  TX bytes:179758080 (171.4 MiB)
              Interrupt:177 Memory:f881c400-f881e3ff

    w1g1 is connected to a modem by an RJ45-to-serial cable, and the modem is connected to the phone line. The modem is a Nokia DNT2Mi (you can see it here). Routing table:

        192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.254
        169.254.0.0/16 dev eth0  scope link
        default via 192.168.2.180 dev eth0
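    Note that this ifconfig output shows no inet addr line for w1g1 at all, so the interface may simply have no address bound yet. The iproute2 tools give a second opinion and also display peer addresses on point-to-point links; a sketch:

        # A sketch: query the interface with iproute2 instead of ifconfig
        ip addr show w1g1       # prints "inet <local> peer <remote>" if an address is bound
        ip route show dev w1g1  # any routes tied to the PPP link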


  • How to choose the right PHP framework for web development?

    - by liliwang
    Hi! I'm a PHP pro, but I haven't used any PHP framework until now, so I have no clue how to choose one. Do you have some tips to help me choose the right PHP framework? I want a stable and secure PHP framework for projects with about 400 hours of development time. It should be possible to use the framework on shared-hosting web servers. I don't need AJAX support (I'm using Ext JS). It would be nice if the framework supported rapid application development and object-relational mapping. Some of the standard functions (authentication, form validation) would also be nice, and caching would be useful, but isn't required. To summarize, my needs for a PHP framework:

    - Runs on shared-hosting web servers
    - Suits projects between 200 and 400 hours of work
    - Supports rapid application development
    - Supports object-relational mapping
    - If possible: caching
    - Ready-made modules (e.g. authentication, form validation)
    - Easy to learn

    Which PHP framework is the right one for me?


  • Erratic response times with Apache 2.0.52 on Red Hat 4

    - by Kevin
    Under load, we've noticed that response times from Apache vary greatly for the same 7 KB image. They can range anywhere from 0.01 seconds to 25 seconds or more. Unfortunately, due to corporate policy constraints, we are pretty much stuck on Apache 2.0.52. I'm at best an Apache novice, so I'm in over my head with this problem. My focus recently has turned to our choice of MPM modules. We use the worker model on a dual-core, hyper-threaded blade. It doesn't appear that swapping is an issue, and I don't see any signs of a hardware problem. I've read that worker is optimal on hardware with many CPUs, while prefork is more suitable for our specific hardware profile. I can see conceptually how choosing the wrong MPM could result in this erratic behavior, but I'm not confident that it's the root cause here. Has anyone else seen this type of range in response times for simple static content? What else should I be looking into?
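    As a first check (a sketch of standard commands; paths assume the stock RHEL layout), it's worth confirming which MPM the binary was actually compiled with, and how its limits are tuned, before blaming the model itself:

        # A sketch: identify the compiled-in MPM and its tuning directives
        /usr/sbin/httpd -V | grep -i mpm    # 2.0 prints -D APACHE_MPM_DIR="server/mpm/..."
        grep -A8 "<IfModule worker.c>" /etc/httpd/conf/httpd.conf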


  • Why is my computer randomly restarting? How can I fix it?

    - by kinglime
    I have a custom-built desktop computer that I've been using since about a year ago. The main specs are:

    - ASUS P8Z68-V LE motherboard
    - Intel Core i7 2600K
    - Corsair Vengeance 16 GB RAM
    - ASUS ENGTX570 GPU
    - Corsair TX650M PSU

    I was running an overclock of 4.4 GHz on my CPU (I have a Hyper 212 EVO) and 1600 MHz on my RAM, but I currently have it turned off due to my issues. I am also currently running Windows 8, but this problem occurred in Windows 7 too. Basically, my issue is that, seemingly randomly and with no pattern, my PC will reset itself and ASUS Anti-Surge will alert me that something went wrong. This issue is not related to system stress: I can run the machine fine for an hour maxed out in Prime95, then later be watching a mere YouTube video when it randomly resets. This has been occurring for about the last two weeks, and it seems to be getting worse. I believe it might be related to the power supply, but when I monitor it in the BIOS and in Windows, it appears to be putting out the proper voltages. Also, possibly related or not, my NVIDIA drivers frequently fail temporarily and then warn me of some kind of kernel error. If I have to buy a new power supply, that is what I'll do, but I want to make damn sure that's the only issue at hand. Thank you everyone in advance; please help me diagnose the issue and tell me what I can do to fix it. If you need any additional info about my setup, please ask.


  • How to know who accessed a file, or whether a file has an 'access' monitor, in Linux

    - by J L
    I'm a noob and have some questions about viewing who accessed a file. I found there are ways to see if a file was accessed (not modified/changed) through the audit subsystem and inotify. However, from what I have read online (http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html), to watch/monitor a file I have to set a watch first, using a command like:

        # auditctl -w /etc/passwd -p war -k password-file

    So if I create a new file or directory, do I have to use an audit/inotify command to set a watch first in order to see who accessed the new file? Also, is there a way to know whether a directory is being watched through the audit subsystem or inotify? And how/where can I check the log for a file?

    Edit: from further googling, I found this page (http://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html), which says:

        The inotify API provides no information about the user or process that
        triggered the inotify event.

    So I guess this means I can't figure out which user accessed a file with inotify? Can only the audit subsystem be used to figure out who accessed a file?
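    For reference: inotify reports that something happened, while the audit subsystem records who did it. A sketch of the audit-side workflow (the file path and key name here are hypothetical examples):

        # A sketch: watch reads on a file, then ask the audit log who touched it
        auditctl -w /home/user/secret.txt -p r -k secret-read   # hypothetical file and key
        auditctl -l                    # list active rules: is the file being watched?
        ausearch -k secret-read -i     # query events recorded under that key
        aureport -f -i --summary       # per-file access summary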


  • How do I prevent my computer from freezing when it starts to swap?

    - by cdauth
    I work as a Java programmer, so I often have to run several programs at the same time that consume a lot of memory. When my memory is full and Linux starts swapping, my computer almost completely freezes. I can see that it is writing heavily to the hard disk, and everything reacts really slowly, often not at all. Moving the mouse in X sometimes doesn't work at all, sometimes it has a delay of several seconds; clicking usually has a delay of several minutes. Sometimes it is possible to change to a TTY (with a long delay); there I can usually type without delay, but when I try to log in, it takes several minutes after typing the user name until the password prompt appears, and usually an error message tells me that the login timed out. So the only option is usually to restart the computer. I have noticed that other intensive writing to the hard disk also slows down my computer significantly. Sometimes I have used rsync to limit the bandwidth when copying files around on my own computer, because otherwise the system would be almost unusable. How can this be? At the moment it seems more useful to me to turn off swapping completely. That might crash some processes, which is unfortunate, but the alternative at the moment is to crash all processes by turning off my computer. I am using Gentoo Linux with kernel 3.6.2-gentoo, and I have a 10 GB swap partition on an HDD.
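    One knob worth trying before disabling swap entirely (a sketch; the values are illustrative, not tuned for this machine) is vm.swappiness, which controls how eagerly the kernel pushes pages out:

        # A sketch: discourage swapping instead of turning it off completely
        sysctl vm.swappiness=10                         # default is 60; lower = less eager
        echo "vm.swappiness = 10" >> /etc/sysctl.conf   # persist across reboots

        # the nuclear option mentioned above:
        swapoff -a    # trades freezes for OOM-killer casualties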


  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10 Gb Ethernet, in a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount uses the following options:

        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52

    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
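    One avenue (a sketch, under the assumption that FS-Cache suits this kernel and workload; note it caches to local disk, not RAM) is enabling client-side caching for the mount with cachefilesd and the fsc mount option:

        # A sketch: persistent client-side caching for NFS via FS-Cache
        sudo apt-get install cachefilesd
        sudo sed -i 's/#RUN=yes/RUN=yes/' /etc/default/cachefilesd
        # then add "fsc" to the mount options, e.g. in the autofs map:
        #   server:/export/home  -rw,hard,intr,fsc
        sudo service cachefilesd start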


  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command:

        pfiles `fuser -c / 2>/dev/null`

    indicates that one mysqld process has 485 file/device descriptors (almost all of them for files), so I don't know how reliable the tuning script is, but that is still far below 8K, and the listing finds no other process anywhere near its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to constantly stream the following errors?

        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files

    Everything appears to actually be operating fine, but the issue constantly floods the admin console, and it starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console.

    EDIT: not only is the actual file descriptor count well within sane limits, the issue also persists (appearing immediately) even with the file limit raised to 65535, and always only for hosts.allow/deny.
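    Those warnings are worded like the TCP wrappers (libwrap) check that runs on each incoming connection, so one thing worth verifying (a sketch; the tools may differ on a Nexenta hybrid, hence the fallbacks) is what limit and descriptor count the live mysqld process itself has when the warning fires:

        # A sketch: inspect the running process, not the shell that launched it
        pgrep mysqld                             # find the pid
        ls /proc/$(pgrep mysqld)/fd | wc -l      # descriptors actually open right now
        plimit $(pgrep mysqld) 2>/dev/null       # per-process limits, if plimit exists here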


  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd, which is loaded into RAM, and everything is executed from there. We chose to run everything off the initrd for two reasons:

    - NFS was not an option for serving the filesystem's extra contents.
    - Fast file access from RAM. No persistent storage is needed; data and config are pulled dynamically through a SOAP service.

    Now, our initrd is about 450 MB in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the downloading, and if yes, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (the cheapest HDD at €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup, and how do we do this? Thank you for your time! Yvan Janssens
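    A point that may help frame this: PXELINUX itself doesn't decompress the initrd, it just copies it into memory; the kernel's initramfs loader does the unpacking. So the usable formats are whichever ones the kernel was built with (gzip is almost always enabled, while LZMA/XZ need CONFIG_RD_LZMA / CONFIG_RD_XZ). A sketch of recompressing an existing gzip'ed initrd:

        # A sketch: recompress the initrd (assumes it is currently gzip'ed)
        zcat initrd.img | xz -9 --check=crc32 > initrd.img.xz   # needs CONFIG_RD_XZ in the kernel
        zcat initrd.img | gzip -9 > initrd-max.img              # safe with virtually any kernel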


  • Can next hop address be same as destination address?

    - by Raj
    For example, if the host address is 100.0.0.1, the next hop address is 100.0.0.2, and the destination IP address is also 100.0.0.2, is this a valid use case? Is there any real-life usage? Here is the command on a router, inside a VRF (the destination IP comes first, then the next hop):

        ip route 100.0.0.2 255.255.255.255 100.0.0.2 weight 1 next-hop-vrf GlobalRouter

    100.0.0.2 is pingable from the host. 100.0.0.1 and 100.0.0.2 are IP addresses assigned to a VLAN on the host and destination respectively. On a Linux box, such a configuration is valid:

        [root]# netstat -r -n
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
        55.55.55.55     55.55.55.55     255.255.255.255  UGH      0 0          0 eth0
        [root]# ip route show
        55.55.55.55 via 55.55.55.55 dev eth0

    As per my understanding, if a destination IP is reachable (i.e. in the same subnet as the host IP), we don't need a next hop. I came across one application of using a next hop for a destination IP in the same subnet (i.e. for VPN); see "Will packets sent to the same subnet go through routers?". If next hop != destination IP but both are in the same subnet as the host, that is a valid scenario for VPN. So I am wondering: what are the applications of next_hop == dest_ip, with the subnet the same as the host's? This is my first post on Super User. Extremely happy with the quick and warm response.


  • CentOS VM serving multiple public IPs: how to configure the network interface?

    - by Glasnhost
    I have a CentOS 5.6 VM (vSphere client) already responding to two different public IPs on eth0 and eth0:1, and I'm trying to add eth0:2. I copied the eth0 config file and restarted the network service, but I don't understand which other steps are needed...

        ifconfig
        eth0    Link encap:Ethernet  HWaddr 00:40:46:B9:00:41
                inet addr:10.1.12.10  Bcast:10.1.12.255  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:163371837 errors:77 dropped:0 overruns:0 frame:0
                TX packets:168210961 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:1891221045 (1.7 GiB)  TX bytes:855899500 (816.2 MiB)
                Interrupt:59 Base address:0x2000

        eth0:1  Link encap:Ethernet  HWaddr 00:40:46:B9:00:41
                inet addr:10.1.12.11  Bcast:10.1.12.255  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Interrupt:59 Base address:0x2000

        eth0:2  Link encap:Ethernet  HWaddr 00:40:46:B9:00:41
                inet addr:10.1.12.12  Bcast:10.1.12.255  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Interrupt:59 Base address:0x2000

        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:188976973 errors:0 dropped:0 overruns:0 frame:0
                TX packets:188976973 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:2015642664 (1.8 GiB)  TX bytes:2015642664 (1.8 GiB)

        more /etc/resolv.conf
        nameserver 10.1.12.1

        route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.1.12.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
        0.0.0.0         10.1.12.1       0.0.0.0         UG    0      0        0 eth0
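    For reference, a sketch of what the alias config usually looks like on CentOS 5 (values mirror the output above; the DEVICE line must match the alias name, a common gotcha when the file was copied from ifcfg-eth0):

        # A sketch of /etc/sysconfig/network-scripts/ifcfg-eth0:2
        DEVICE=eth0:2
        BOOTPROTO=static
        IPADDR=10.1.12.12
        NETMASK=255.255.255.0
        ONPARENT=yes    # aliases use ONPARENT instead of ONBOOT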


  • Apache /server-status gives a 404 Not Found

    - by user57069
    I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests of the Munin setup, but I think my issue is related to Apache, and my skill set there is lacking. First, system info for the monitored server: CentOS 5.3, kernel 2.6.18-128.1.1.el5, Apache/2.2.3. The "server-status" directive in httpd.conf (I've cross-compared this with another system on which I did a successful parallel install of Munin, correctly showing Apache stats, and the directive below is the same on both):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    I ran lynx http://localhost/server-status and got HTTP/1.1 404. A look at the Apache access_log:

        127.0.0.1 - - [13/Oct/2010:07:00:47 -0700] "GET /server-status HTTP/1.0" 404 11237 "-" "Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8e-fips-rhel5"

    mod_status is also loaded:

        % grep "mod_status" /etc/httpd/conf/httpd.conf
        LoadModule status_module modules/mod_status.so

    iptables is turned off. I also noticed that the ownership of httpd.conf on this system is root.root, whereas on the system that displays correctly it is apache.www (not certain that this matters?). It's got to be a permission issue, but I'm not certain where the permissions are messed up. Any thoughts on why the test of server-status gives me a 404?
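    A 404 (rather than a 403) for /server-status usually means the handler never saw the request: something else, such as a RewriteRule, an Alias, or a competing Location block in an included config, matched first. A sketch of checks along those lines (paths assume the stock CentOS layout):

        # A sketch: confirm the module is live and nothing swallows the URL
        /usr/sbin/httpd -M 2>&1 | grep status        # is status_module really loaded?
        grep -rn "server-status\|RewriteRule\|Alias" /etc/httpd/conf /etc/httpd/conf.d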


  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CentOS 5 server I am setting up, trying to copy the configuration from another web server of mine. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php, it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php, I get an ugly Apache 404 Not Found error message. I looked in both httpd.conf files, and each has only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have this code:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file, which looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/
        <files ~ "\.tpl$">
            order deny,allow
            allow from none
            deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to. Thanks.
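    One classic cause of exactly this symptom (a sketch, not a confirmed diagnosis): a stock CentOS httpd.conf ships the document root's Directory block with AllowOverride None, which makes Apache ignore the .htaccess entirely, so unknown URLs fall through to the plain 404. Worth comparing on both servers:

        # A sketch: let .htaccess files take effect under the document root
        <Directory "/var/www/html">
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        # then: service httpd configtest && service httpd restart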


  • VMware Player won't run on CentOS due to missing /dev/vmmon; what could be the problem?

    - by Graphics Noob
    So I've tried installing VMware Player 3.1.4 and 3.1.3, and both times had the same problem: when I try to load a VM, I get the error "Could not open /dev/vmmon". When I ls /dev/ I can see there is no "vmmon" device present. When I try running:

        sudo /etc/init.d/vmware start

    I get the output:

        Starting VMware services:
           VMware USB Arbitrator                          [  OK  ]
           Virtual machine monitor                        [FAILED]
           Virtual machine communication interface        [  OK  ]
           VM communication interface socket family       [  OK  ]
           Blocking file system                           [  OK  ]
           Virtual ethernet                               [FAILED]

    which shows that the virtual machine monitor fails to load. I tried following the advice on this site and ran:

        vmware-modconfig --console --install-all

    There are no errors during the compilation, but at the end I get:

        Starting VMware services:
           VMware USB Arbitrator                          [  OK  ]
           Virtual machine monitor                        [FAILED]
           Virtual machine communication interface        [  OK  ]
           VM communication interface socket family       [  OK  ]
           Blocking file system                           [  OK  ]
           Virtual ethernet                               [  OK  ]
        Unable to start services

    Out of curiosity I tried:

        sudo /sbin/insmod /lib/modules/2.6.18-238.9.1.el5xen/misc/vmmon.ko

    But got the error message:

        insmod: error inserting 'vmmon.ko': -1 Invalid module format

    I have a feeling this may be the root of the problem, but I don't know what could be causing it or how to fix it.
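    "Invalid module format" usually means the .ko was built against headers that don't match the kernel actually running; note the path above says 2.6.18-238.9.1.el5xen, i.e. a Xen kernel. A sketch of the usual repair (the package name is an assumption for CentOS 5; use kernel-devel instead on a non-Xen kernel):

        # A sketch: rebuild the VMware modules against the running kernel's headers
        uname -r                              # confirm which kernel is actually booted
        sudo yum install kernel-xen-devel     # headers matching a ...el5xen kernel
        sudo vmware-modconfig --console --install-all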


  • How can I get mouse capture to work in virtualbox when I move guest window to second monitor?

    - by Dan
    I'm running VirtualBox on a Windows 7 host. The guest OS is CentOS. My setup is a laptop (screen 1) with a huge external monitor (screen 2); I'm only telling CentOS there is one monitor, though. When I start up VirtualBox on screen 1, mouse capture works fine. I don't have mouse integration because (I think) the kernel on CentOS is too old to support it. That's fine; I don't mind doing the Right-Control thing. The problem is that when I drag the whole VM window over to my second monitor, mouse capture doesn't work right anymore. I click inside the VM and can move the VM cursor a little, but I can't always reach the edges of the VM screen: before I get all the way to an edge, the cursor will escape from the VM as if I had hit Right-Control. But it's still captured according to the icon, and if I then hit Right-Control, the guest cursor jumps to a different screen location. My workaround: if I keep the VM window mostly on screen 2, with a small corner of it still on screen 1, then mouse capture works correctly. Is there a setting to make this work better?


  • TrueCrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my PC hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before. The only side effect appeared when I accessed one of my file-based hidden TrueCrypt volumes shortly thereafter. Some of the files in the volume - NOT all - seem to be entirely corrupted. The directory and file names are garbled characters, but a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TrueCrypt volume are still intact. Is this not weird? The only other real change I can think of is that the hard drives were connected to different SATA ports on the motherboard. I really don't know the TrueCrypt encryption internals well enough to know what could cause this... and the fact that not all the files were corrupted makes it more bizarre still. So, first off (and I'm not too hopeful on this point), would it be possible to restore these files? I had a backup of most, but not all, of the files involved. Other than that, I'm just curious how this happened and how I can prevent it next time. Thanks!


  • Debian at 100% CPU every 30 minutes but not loggable?

    - by user654123
    I have a Debian 7 x64 machine running with DigitalOcean that hits 100% CPU usage for about one minute every 30 minutes. A couple of days ago it stayed there for a couple of hours, so the server finally crashed and I had to repair my MySQL databases. The server is a pure web server running Apache2 and MySQL. I tried tracing which processes use the CPU, but with no luck. The script I used:

        #!/bin/sh
        while true; do
            ps -A -eo pcpu,pid,user,args | sort -k 1 -r | head -3 >> proclog.txt;
            echo "\n" >> proclog.txt;
            sleep 2;
        done

    I was monitoring htop as well while this was happening, but the top processes' CPU usage didn't even add up to ~15%, despite htop's CPU meter showing a constant 100%. htop was configured to show all users' processes, and both user and kernel threads.

    Edit: By stopping Apache2 and MySQL prior to the expected 100% usage, I can tell that neither is responsible; the 100% usage occurred anyway. This is what the graph looked like over the past hours: [graph not included]
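    One limitation of the 2-second ps loop is that it misses short-lived processes and kernel-side time entirely. A sketch of alternatives that catch both (pidstat ships in the sysstat package; the sampling window is arbitrary):

        # A sketch: one-second samples across the suspect window
        apt-get install sysstat
        pidstat 1 120 > cpu-samples.log       # per-process %CPU, incl. brief spikes
        # or record everything and replay it later:
        # apt-get install atop && atop -w /var/log/atop.raw 1 120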


  • PHP extensions & Apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped, to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it. Basically, last Thursday everything was fine on our server. I come in on Friday and it's a mess: PHP extensions are missing or not working, and Apache modules are gone. (For example, oci_* was gone completely, odbc_* was still there but not working, and the Apache ntlm_auth module for single sign-on was gone, so the website wasn't even loading in IE.) I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else. Now I'm wondering if those extensions and such, which we installed months ago, were somehow only saved in some kind of local memory that the restart wiped? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little or no sense to me. To expand on an example of something that's gone very wrong, the PHP odbc_ extension: it's still on the server, and it doesn't return "undefined function" or anything, but it just cannot connect to the data source any more. I've tested through the command line and it works perfectly fine with that data source and those login details, but all of a sudden the PHP odbc_connect() function just can't connect:

        [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source.

    But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function. Does anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x, by the way.
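    Given that the same connection works from the terminal but not through Apache, one mundane possibility (a sketch of checks, not a diagnosis) is that the CLI and the Apache SAPI are reading different php.ini / odbc.ini files, and the reboot brought the Apache side up with a different environment:

        # A sketch: compare the configs each SAPI actually loads
        php -i | grep -E "Loaded Configuration|odbc"
        # vs. the same rows in a phpinfo() page served through Apache;
        # also check which ODBC config files unixODBC/FreeTDS resolve:
        odbcinst -j          # prints the driver and data-source file paths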

