Search Results

Search found 5638 results on 226 pages for 'debian sys maint'.

Page 150 of 226

  • Security and Windows Login

    - by Mimisbrunnr
    I'm not entirely sure this is the right place for this question, but I cannot think of another, so here goes. In order to log in to the Windows machines at my office, one must press the almighty Ctrl-Alt-Delete combo first. Finding this very frustrating, I decided to look into why, and found claims from both my sysadmin and Microsoft stating that it's a security feature: "Because only Windows can read the Ctrl-Alt-Delete sequence, it helps ensure that an automated program cannot log in." Now, I'm not a master of the Windows operating system (I generally use *nix), but I cannot believe that "only Windows can send that signal" bull. It just doesn't sit right. Is there a good reason for the Ctrl-Alt-Delete-to-log-in thing? Is it something I'm missing? Or is it another example of antiquated legacy security measures?

    Read the article

  • Unknown module on my server that renders PHP errors in HTML tables

    - by Javier Novoa C.
    Sorry to ask this... I manage Apache and PHP on my computer, but having installed a lot of things, I've lost track of some of them (things I find really useful to have at my job, or to restore in case of emergency). The problem is that I have installed something which displays PHP errors in a nice, colored HTML table, but I can't remember what I installed or configured to make it work like that. Can you give me a hint? I'm using Debian Lenny, Apache 2.2 and PHP 5.2. (Screenshot not included.) Thank you very much for reading. Javier
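
    One way to audit what the PHP install has loaded (a sketch; colored error tables like this are often produced by an extension such as Xdebug, but that is only a guess):

        php -m                    # list the extension modules PHP has loaded
        php --ini                 # show every ini file PHP parses, to track down stray config
        php -i | grep -i xdebug   # check for Xdebug specifically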

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. The directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones.

    We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages:

        ext3_dx_add_entry: Directory index full!

    The disk in question has plenty of inodes remaining:

        Filesystem            Inodes   IUsed      IFree IUse% Mounted on
        /dev/sda3           60719104 3465660   57253444    6% /

    So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at -- I want to tune it down; this enormous directory caused all sorts of issues.

    Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here:

    1. rm -rf (dir). I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact.

    2. unlink(2) on the directory. Definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop the entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh.

    3. A shell loop:

        while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done

    This is actually the shortened version; the real one I'm running, which just adds some progress reporting and a clean stop when we run out of files to delete, is:

        export i=0;
        time ( while [ true ]; do
            ls -Uf | head -n 3 | grep -qF '.png' || break;
            ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null;
            export i=$(($i+10000));
            echo "$i...";
        done )

    This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so. Now, for the questions:

    1. As mentioned above, is the per-directory entry limit tunable?

    2. Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by ls -U, and perhaps ten minutes to delete the first 10,000 entries with the command in #3, yet now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it's since taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed?

    3. Is there a better way to do this sort of thing? Not "store millions of files in a directory"; I know that's silly, and it wouldn't have happened on my watch. Googling the problem and looking through SF and SO offers a lot of variations on find that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking.

    Thanks for reading the small novel; feel free to ask questions, and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that. Final script output:

        2970000...
        2980000...
        2990000...
        3000000...
        3010000...

        real    253m59.331s
        user    0m6.061s
        sys     5m4.019s

    So, three million files deleted in a bit over four hours.
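
    One variant often suggested for exactly this situation, since it avoids re-running ls and xargs for every batch and unlinks in raw readdir order (a sketch, run from inside the directory; the path is hypothetical):

        cd /path/to/dir
        # readdir also returns "." and ".."; unlink simply fails on those, which is harmless here
        perl -e 'opendir(D, "."); while (defined($f = readdir(D))) { unlink $f; }'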

    Read the article

  • The meaning of thermal throttle counters and package power limit notifications in Linux

    - by Trustin Lee
    Whenever I do some performance testing on my Linux-installed MacBook Pro, I often see the following messages in dmesg:

        Aug 8 09:29:31 infinity kernel: [79791.789404] CPU1: Package power limit notification (total events = 40365)
        Aug 8 09:29:31 infinity kernel: [79791.789408] CPU3: Package power limit notification (total events = 40367)
        Aug 8 09:29:31 infinity kernel: [79791.789411] CPU2: Package power limit notification (total events = 40453)
        Aug 8 09:29:31 infinity kernel: [79791.789414] CPU0: Package power limit notification (total events = 40453)

    I also see the throttle counters in sysfs increase over time:

        trustin@infinity:/sys/devices/system/cpu/cpu0/thermal_throttle $ ls
        core_power_limit_count  package_power_limit_count
        core_throttle_count     package_throttle_count
        $ cat core_power_limit_count
        0
        $ cat core_throttle_count
        41912
        $ cat package_power_limit_count
        67945
        $ cat package_throttle_count
        67565

    What do these counters mean? Do they affect the performance of the CPU or the system? Do they result in increased deviation of performance numbers? (That is, do they prevent me from getting reliable performance numbers?) If so, how do I avoid these messages and the increasing counters? Would running the performance tests on a well-cooled desktop system help?
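
    To watch those counters while a benchmark runs, a small sketch (grep prefixes each value with its file name; the interval is arbitrary):

        watch -n 5 'grep . /sys/devices/system/cpu/cpu*/thermal_throttle/*'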

    Read the article

  • Linux webserver tutorials (WordPress)

    - by HannesFostie
    Hi, I will be setting up a Linux webserver to host WordPress on. The problem is that although I know how to do it, I don't know how to do it properly. So I'm now looking for semi-advanced tutorials that are complete and, above anything else, secure. I don't really mind trying a new distro, but I prefer Ubuntu/Debian. I read this post: "Any good resources for setting up a webserver in Linux?" But those are very limited; so far, not a lot of luck finding good guides and howtos. This should probably be a community wiki, but I can't seem to transform it myself. Thanks

    Read the article

  • Hylafax and "No response to MPS"

    - by Joril
    We have a HylaFAX 5.2.5 installation on CentOS 5, hosted inside a Xen virtual machine. It works quite well, but now I'm in the process of upgrading/migrating it to a KVM virtual machine running Ubuntu 10.04 and HylaFAX 5.5.1 (compiled from source using http://sourceforge.net/projects/hylafax/files/hylafax%20debian%20build%20files/ ). The problem I'm having is that, while receiving works fine, sending faxes is extremely unreliable: I get lots of "No response to MPS repeated 3 tries" or "Failure to transmit clean ECM image data." The line, modem and configuration files I'm using are the same as before, so I thought it could be a KVM scheduling issue, but even setting cpu_shares to 10240 instead of 1024 doesn't change a thing... What else could I try? Here's an example log file: http://pastebin.com/cN01cpEs
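
    For reference, one way the cpu_shares change mentioned above is typically applied to a libvirt/KVM guest (a sketch; the domain name "faxserver" is hypothetical):

        virsh schedinfo faxserver                          # show the current scheduler parameters
        virsh schedinfo faxserver --set cpu_shares=10240   # raise the shares for the running guest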

    Read the article

  • Random flickering on 2.6.32 kernels after suspend

    - by whitequark
    I have Xubuntu 9.10 installed on a Toshiba NB200 netbook with an Intel video card handled by the i915 driver. With the stable, recommended 2.6.31 kernel, everything but WiFi works fine: the Atheros ath9k WiFi shows too little signal power and sometimes loses packets in 'bursts'. With 2.6.32-* (I tested -9 to -11 from Ubuntu's kernel unstable PPA) everything works fine just prior to the first suspend (echo mem >/sys/power/state). After it, random unidentified fullscreen 'one-frame' flickering begins in Xorg, and after a couple of minutes everything eventually hangs, showing a filled grey screen (not white; it is like the default button colour); no X keys work: Ctrl+Alt+Fn don't, nor does blind typing in the console. Magic SysRq still works, and I was able to reboot. There is also one out-of-tree kernel module, called omnibook, that is required to turn on WiFi and Bluetooth. Any advice?

    Read the article

  • Ubuntu VPS - email and webserver

    - by xZero
    I have a VPS based on Ubuntu with the whole LAMP stack installed; everything needed for a web server is there, and it works perfectly. But I'm still not able to configure it as a mail server. I have configured the MX record for my domain, mail.mydomain.com, and that part is OK. I also installed Postfix, Dovecot and Roundcube, and configured them using this tutorial: http://ubuntuguide.org/wiki/Mail_Server_setup. After hours of configuring, it doesn't work. I have experience with Linux and web hosting, and I successfully configured a mail server once in the past, on Debian 6, with help from that same guide. When I try to send email to myself, Gmail says:

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the server for the
        recipient domain mydomain.com by dc147738a1117e4c12273.mydomain.com. [MY_SERVER_IP].
        The error that the other server returned was:
        554 5.7.1 <[email protected]>: Relay access denied

    Also, I have the Webmin control panel, which works perfectly. How do I configure Postfix and Dovecot from there? Thanks in advance.
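
    "Relay access denied" on inbound mail usually means Postfix does not consider the domain one of its own. A minimal sketch of the /etc/postfix/main.cf lines that govern this (the domain names stand in for the real ones):

        # domains this server accepts final delivery for
        mydestination = mydomain.com, mail.mydomain.com, localhost
        # networks allowed to relay outbound mail through this server
        mynetworks = 127.0.0.0/8

    After editing, apply with: postfix reload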

    Read the article

  • Nginx won't start with "fastcgi_split_path_info" error

    - by Ke
    I heard that nginx is faster and since I'm on a VPS with low RAM I thought I would try it out. I got through this tutorial http://www.howtoforge.com/installing-php-5.3-nginx-and-php-fpm-on-ubuntu-debian But I now get the following error: unknown directive "fastcgi_split_path_info" in /etc/nginx/sites-enabled/default:28 What might be causing the problem? I can't find any reference to the problem on Google. Also I have heard conflicting things about nginx vs Apache. Some say use one, some say the other. I'm using all sorts of things such as rewrite rules, proxies etc. Am I setting myself up for a fall by using nginx? If I go for Apache: how can I tweak it so that it performs better on a low RAM VPS?
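
    For what it's worth, fastcgi_split_path_info only exists in nginx 0.7.31 and later, so an "unknown directive" error frequently just means the packaged nginx predates it. A quick check (sketch):

        nginx -v   # print the installed version
        # the directive from the tutorial, valid only on nginx >= 0.7.31:
        # fastcgi_split_path_info ^(.+\.php)(/.+)$;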

    Read the article

  • Tomcat: how to change location of manager and host-manager to a subdirectory

    - by rolandpish
    Hi there. I'm running Tomcat 6.0.28 on port 8080 on a Debian Squeeze box; I'm a Tomcat newbie. I would like to change the location of the manager and host-manager applications: instead of going to http://myserver:8080/manager/html I would like it to be http://myserver:8080/somesubdirectory/manager/html. Is this possible, and if so, how can I achieve it? I would really appreciate any help with this. I've been trying to change the context in /etc/tomcat6/Catalina/localhost/manager.xml from /manager to /somesubdirectory/manager, with no success. I also tried to create a symlink under /var/lib/tomcat6/webapps/ROOT/somesubdirectory/manager, with no success. Thanks in advance. Cheers.
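
    In Tomcat 6, an application deployed through a descriptor in Catalina/localhost takes its context path from the descriptor's filename, with "#" standing for "/" (a path attribute inside the XML is ignored there). A sketch using the Debian layout mentioned above:

        cd /etc/tomcat6/Catalina/localhost
        # "#" in the filename maps to "/" in the URL,
        # so this deploys the manager at /somesubdirectory/manager
        mv manager.xml somesubdirectory#manager.xml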

    Read the article

  • kerberos5 unable to authenticate

    - by wolfgangsz
    We have a Debian file server, configured to serve up Samba shares, using winbind and Kerberos. It is configured to authenticate against a Windows 2003 DC. All worked fine until recently, when I did a maintenance update on all packages. Since then, all attempts to connect to any of the shares (and also to just log into the box) fail. The logs contain this message, which seems to be at the root of the evil:

        [2009/09/14 12:04:29, 10] libsmb/clikrb5.c:get_krb5_smb_session_key(685)
          Got KRB5 session key of length 16
        [2009/09/14 12:04:29, 10] libsmb/clikrb5.c:unwrap_pac(280)
          authorization data is not a Windows PAC (type: 141)
        [2009/09/14 12:04:29, 3] libads/kerberos_verify.c:ads_verify_ticket(430)
          ads_verify_ticket: did not retrieve auth data. continuing without PAC

    From there on, it fails to find the user account on the DC, subsequently remaps the user to user nobody, and then (rightly) refuses to grant access to the share. However, the following works just fine:

        wbinfo -a user%password

    I was wondering whether anybody has had this problem and could provide some insight. I would be happy to provide neutralised config files.
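
    A few sanity checks that are commonly useful when ticket verification starts failing after an update (a sketch; all standard Samba/Kerberos tools):

        wbinfo -t            # verify the machine account's trust secret with the DC
        net ads testjoin     # confirm the domain join is still valid
        kinit Administrator  # confirm Kerberos tickets can still be obtained at all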

    Read the article

  • .htaccess has no effect

    - by Primož Kralj
    I am losing hours on this (should-be) simple task. I want to restrict access to my website, which is on my server in /var/www/. I've created the /etc/apache2/passwords file with htpasswd successfully (user primoz), and I've put an .htaccess in /var/www/ with this content:

        AuthType Basic
        AuthName "RestrictedFiles"
        AuthBasicProvider file
        AuthUserFile /etc/apache2/passwords
        Require user primoz

    My website is still accessible. I also tried editing /etc/apache2/sites-enabled/000-default, changing the line AllowOverride None to AllowOverride All. No need to mention that it didn't change anything. Should restricting access really be this frustrating?

    EDIT: /etc/apache2/httpd.conf is empty by default because I run the server on Debian, which uses apache2.conf instead. Here is the whole apache2.conf.
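
    Two things commonly bite here: AllowOverride has to change in the <Directory> block that actually covers /var/www, and Apache has to be reloaded afterwards. A sketch against the stock Debian default vhost:

        <Directory /var/www/>
            # AuthConfig is enough for auth directives in .htaccess to be honoured
            AllowOverride AuthConfig
        </Directory>

    followed by: /etc/init.d/apache2 reload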

    Read the article

  • squidGuard hangs during setup

    - by richard
    I have a Squid proxy on my Debian GNU/Linux laptop, configured to block some web sites. I can set a browser to use this proxy, but I can also configure it not to. Since I am using the proxy to block some sites, I do not want any application to be able to bypass it. Is it possible to configure a firewall to block outgoing traffic except when it is sent by the proxy application or user? I would like a simple configurator if possible.
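
    iptables can match outgoing packets by the owning user, which is the usual way to enforce this. A sketch, assuming Squid runs as the user "proxy" (the Debian default):

        # let traffic generated by the proxy user out
        iptables -A OUTPUT -m owner --uid-owner proxy -j ACCEPT
        # block direct web access for everything else
        iptables -A OUTPUT -p tcp --dport 80 -j REJECT
        iptables -A OUTPUT -p tcp --dport 443 -j REJECT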

    Read the article

  • OpenSolaris with no gcc vs. Nexenta with no ext3

    - by Jake Wharton
    I'm attempting to migrate my server from Linux to a Solaris variant during a hardware upgrade. The machine is based around an Abit AN-M2 board, which has an NForce chipset. I have what seems to be a chicken-and-egg problem of sorts: OpenSolaris 2009.06 does not recognize the NIC, and I cannot compile the drivers for it because it also lacks gcc. I haven't tested whether I can mount an ext3 partition yet, but it's moot if there is no networking. Nexenta 3.0b3 recognizes the NIC, but I cannot get the ext3 drives mounted, because FSWfspart refuses to install; I do not know much about Solaris, but I wager this is because Nexenta is based around Debian as well. While I am reusing the mobo/CPU combo, I did just spend a lot of money on the other hardware around it and would very much like to get it up and running smoothly and quickly. Does anyone have any suggestions that are not "get a new mobo/CPU", "run another OS", or "use an alternate NIC"?

    Read the article

  • How can I download django-1.2 and use it across multiple sites when the system default is 1.1?

    - by meder
    I'm on Debian Lenny, and the latest backported Django is 1.1.1 final. I don't want to use sid, so I probably have to download Django myself. I have my sites located at /www/, and I plan on using mod_wsgi with Apache2 reverse-proxied behind nginx. Now that I have downloaded pip, and virtualenv through pip, can someone explain how I could get my /www/ sites, which are yet to be made, to all use django-1.2? Question 1.1: Where do you suggest I put the django-1.2 download? I know you can store it anywhere, but where would you store it? Question 1.2: After installing it, how do you actually tie that django-1.2, instead of the system-default Django 1.1, into the reverse-proxied Apache conf? I would prefer answers that are specific rather than vague, with examples of setups.
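
    A sketch of the virtualenv approach (all paths hypothetical): give each site its own environment with Django 1.2 installed, so the system Django 1.1 never enters the picture.

        virtualenv /www/envs/mysite
        /www/envs/mysite/bin/pip install Django==1.2   # pin the 1.2 release

    The site's mod_wsgi script can then activate that environment at the top, e.g. by running the environment's bin/activate_this.py via execfile, before importing the Django WSGI handler.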

    Read the article

  • Perl script segfaulting after 64-bit upgrade

    - by Brent
    I recently upgraded a 32-bit Debian server to 64-bit by re-installing and copying my data into place. Since then, a Perl script that repeats the following segfaults on the tell line:

        seek(FIN, $ps, 0);
        tell(FIN, $ps);
        $line = <FIN>;

    I don't speak Perl, so I'm not sure exactly what is going on here. I can get the script to run (apparently successfully) by commenting out every occurrence of tell, but this is obviously not the best solution. I suspect that tell is calling a 32-bit binary or something, and that is the cause of the segfault, but I don't know. Can someone explain what tell does, and if it is indeed a separate binary, what package it belongs to (or how it is installed, i.e. CPAN)? Or perhaps I am on the wrong track?
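
    For context: tell is a Perl built-in, not an external binary, and in standard Perl it takes at most a single filehandle argument (perldoc -f tell), so the two-argument call above is already suspect. A sketch of what that line was presumably meant to do:

        seek(FIN, $ps, 0);   # position the filehandle at byte offset $ps
        $ps = tell(FIN);     # tell() RETURNS the current offset; it takes one argument
        $line = <FIN>;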

    Read the article

  • How can I prevent the warning "No xauth data; using fake authentication data for X11 forwarding"?

    - by Sorin Sbarnea
    Every time I initiate an SSH connection from my Mac to a Linux (Debian) host, I get this warning: "No xauth data; using fake authentication data for X11 forwarding." This also happens with tools that use ssh, like git or mercurial. I just want to make a local change to my system to prevent this from appearing. Note: I do have an X11 server (XQuartz 2.7.3 (xorg-server 1.12.4)) on my Mac OS X (10.8.1) machine and it is working properly; I can successfully start xclock locally or remotely.
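
    A local change that typically silences this warning is marking X11 forwarding to the host as trusted, so ssh requests real xauth data instead of generating the untrusted kind. A sketch for ~/.ssh/config on the Mac (the host alias is hypothetical):

        Host mydebianbox
            ForwardX11 yes
            ForwardX11Trusted yes

    The same effect per connection is ssh -Y instead of ssh -X.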

    Read the article

  • Firefox / Iceweasel hangs at exit

    - by mxp
    On my Debian (testing) system, I've found that for some time now Firefox hangs when exiting: no window is visible anymore, and the process pegs one CPU core at 100%. No other instance can be started until that process is killed. I tried the Basic Troubleshooting guide, but that didn't help. Starting it with iceweasel -safe-mode, choosing none of the options and just clicking "quit", caused the same behavior. Creating a new profile also didn't change anything. Any ideas what else I could try?

    Read the article

  • No resizing/autofit and no Unity with Ubuntu 9.10 as host

    - by andyt25
    Hi! Problem 1: I have Ubuntu 9.10 and Workstation 7.0.1 installed. None of the VMs I run (WinXP, Win7, Debian) resize any more. Problem 2: the Unity mode "bar" works briefly and then disappears after a few seconds; programs opened in Unity mode work well, but the bar disappears. History: I upgraded from Ubuntu 9.04 to 9.10; no VM was moved or changed. The latest VMware Tools are installed, and "Autofit Guest" is activated. I am quite sure that resizing/autofitting and Unity mode worked well with Ubuntu 9.04. Thanks for a hint; I hope I didn't overlook something.

    Read the article

  • Testing domains on intranet/local network?

    - by meder
    This may sound like a very silly question, but how can I set up domains (e.g. www.foo.com) on my local network? I know that all a domain is, is a name registered with a nameserver; that nameserver has a zone record, and within the zone the A record is the most important, since it dictates which machine a lookup resolves to. I basically want to be able to refer to my other computer/webserver as www.foo.com, make my local sites accessible that way, and mess with virtual host records in Apache and zone records for the domain, all locally, so I can explore and fiddle around and learn instead of relying on the domains I own at a public registrar, which I can only access through the internet. Once again, I apologize if this is a silly question or if I'm completely thinking backwards. Background information: my OS is Debian, and I'm a novice at Linux. I've made very small edits to zone records on a BIND9 server, but that's the extent of my networking experience.
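
    A minimal sketch with BIND9 on Debian (zone name and addresses hypothetical): declare a master zone in /etc/bind/named.conf.local and point its records at the local machines.

        // /etc/bind/named.conf.local
        zone "foo.com" {
            type master;
            file "/etc/bind/db.foo.com";
        };

        ; /etc/bind/db.foo.com
        $TTL 86400
        @    IN  SOA  ns1.foo.com. admin.foo.com. ( 1 604800 86400 2419200 86400 )
        @    IN  NS   ns1.foo.com.
        ns1  IN  A    192.168.1.10
        www  IN  A    192.168.1.20

    Clients then need their resolver pointed at the BIND box; for a one-machine test, an /etc/hosts entry is even simpler.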

    Read the article

  • Does Dynamic DNS require separate subdomains?

    - by kce
    Hello. I have a functioning DHCP/DNS server (ISC BIND 9.6, DHCP 3.1.1) running on Debian, to which I would like to add dynamic DNS functionality. I have a pretty simple question: does dynamic DNS require (or recommend) separate subdomains? I have seen a few tutorials where the clients that acquire their IP addresses and other networking information via DHCP are on a different subdomain than the servers, which are statically configured (both in terms of IP and DNS). For example: all the clients are on ws.example.org and the servers are on example.org. Right now all of our servers and clients are in the same domain (example.org), but spread across different zone files (because we have multiple subnets). The clients are configured with DHCP and the servers are configured statically. If I want to set up dynamic DNS for the clients, should I use a separate subdomain? What's the best practice here, and why would or wouldn't it be a bad idea to do otherwise? Thanks.
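
    For reference, the mechanics that usually motivate a separate subdomain: the dynamic zone must accept updates, and BIND rewrites that zone file itself, which mixes badly with hand-maintained records. A sketch of a dedicated dynamic zone in named.conf (the key name is hypothetical):

        zone "ws.example.org" {
            type master;
            file "/var/cache/bind/db.ws.example.org";
            // allow the DHCP server to push updates signed with this TSIG key
            allow-update { key "ddns-key"; };
        };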

    Read the article

  • Freezing with black screen after booting OS with AMD Catalyst driver

    - by Oleg
    I have an ATI Mobility Radeon HD 5470 in my HP laptop. A few weeks ago I ran into a weird problem with the proprietary drivers for my graphics card. When I start the X server (or just boot Windows), the system sometimes totally hangs with a black screen. The most annoying thing is that the issue appears absolutely at random (2 successes, 3 failures, 1 success, 7 failures, etc.). It appears both on Linux (Arch, Debian 7) and on Windows XP. I've tried reinstalling the OS, the drivers, etc. I've also tried updating the BIOS; booting then succeeded for the next few times before the issue appeared again (probably just a coincidence, because further BIOS updates gave no results). I really don't know what to do. Any ideas?

    Read the article

  • HP Envy 14, Ubuntu 10.10 and trouble with the graphics cards

    - by Carsten Gehling
    A few days ago I bought an HP Envy 14, which contains two graphics cards: an integrated Intel card and an ATI HD 5650. I've installed Ubuntu 10.10 32-bit on the machine. Most things work fine out of the box, but the graphics cards are giving me trouble. When booting, I get the message "failed to get i915 symbols, graphics turbo disabled", and then the screen blanks out for the remainder of the boot. I am able to get the display working by switching to one of the consoles, then closing and reopening the laptop's lid. It seems that Ubuntu gets confused about which card to use. I've read here: http://www.andreas-demmer.de/en/2010/07/18/testbericht-linux-auf-dem-hp-envy-14 that I should be able to turn off one of the cards by echoing keywords into /sys/kernel/debug/vgaswitcheroo/switch, but that path is not available on my system. The BIOS does not offer any way to switch off the ATI card. Help anyone? /Carsten
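
    One common reason that path is missing is that debugfs isn't mounted (vga_switcheroo also needs kernel support and KMS on both drivers). A sketch of checking and using the interface:

        mount -t debugfs none /sys/kernel/debug            # expose the vgaswitcheroo dir, if supported
        cat /sys/kernel/debug/vgaswitcheroo/switch         # list both cards; "+" marks the active one
        echo OFF > /sys/kernel/debug/vgaswitcheroo/switch  # power down the inactive card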

    Read the article
