Search Results

Search found 541 results on 22 pages for 'squeeze my lime'.

Page 18/22 | < Previous Page | 14 15 16 17 18 19 20 21 22  | Next Page >

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is processed properly and login works fine, but the CSS files are not loaded, and Firefox throws the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0
        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related? Edit: It seems to be nginx-related. The Content-Type isn't set to anything other than text/html. I had to manually include the following declarations to force the CSS and JS content types. That's ugly, and I never had this problem before... any idea?

        location ~ \.css { add_header Content-Type text/css; }
        location ~ \.js  { add_header Content-Type application/x-javascript; }
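
    One likely cause (an assumption, not stated in the question): nginx strips the query string before matching the extension, so ?s=... should not matter; a default of text/html for everything usually means the mime.types table was never included. A minimal sketch of the relevant nginx.conf section, assuming stock Debian paths:

        # /etc/nginx/nginx.conf (sketch)
        http {
            include       /etc/nginx/mime.types;       # maps .css -> text/css, .js -> application/x-javascript
            default_type  application/octet-stream;    # fallback for unmapped extensions
            # ...
        }

    With mime.types loaded, the per-extension location blocks above should be unnecessary.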

    Read the article

  • xt_TCPMSS: bad length messages

    - by Matic
    Hey! I'm getting loads of messages like these in my dmesg/syslog:

        Jun 23 10:24:20 awakening kernel: [ 1691.596823] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:21 awakening kernel: [ 1692.663362] xt_TCPMSS: bad length (1448 bytes)
        Jun 23 10:24:21 awakening kernel: [ 1692.663495] xt_TCPMSS: bad length (1448 bytes)
        Jun 23 10:24:21 awakening kernel: [ 1692.663588] xt_TCPMSS: bad length (1448 bytes)
        Jun 23 10:24:21 awakening kernel: [ 1692.663671] xt_TCPMSS: bad length (1440 bytes)
        Jun 23 10:24:26 awakening kernel: [ 1697.062914] xt_TCPMSS: bad length (474 bytes)
        Jun 23 10:24:26 awakening kernel: [ 1697.305525] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:27 awakening kernel: [ 1698.946633] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:36 awakening kernel: [ 1707.481198] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:37 awakening kernel: [ 1708.723526] xt_TCPMSS: bad length (805 bytes)
        Jun 23 10:24:38 awakening kernel: [ 1709.599461] xt_TCPMSS: bad length (805 bytes)
        Jun 23 10:24:41 awakening kernel: [ 1712.211052] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:41 awakening kernel: [ 1712.260588] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:41 awakening kernel: [ 1712.976058] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:43 awakening kernel: [ 1714.225209] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:43 awakening kernel: [ 1714.914961] xt_TCPMSS: bad length (1492 bytes)
        Jun 23 10:24:55 awakening kernel: [ 1726.192696] xt_TCPMSS: bad length (1480 bytes)
        Jun 23 10:24:55 awakening kernel: [ 1726.192825] xt_TCPMSS: bad length (1480 bytes)

    This Linux machine is, among other things, used as an internet gateway. The connection is over PPPoE. I have the following line in my iptables script:

        $IPT -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu # PPPoE fix

    The frequency of these messages increased tenfold when I upgraded from Debian Lenny with 2.6.27 to Squeeze with 2.6.32 a few days ago. Why am I seeing these messages, and how can I fix them?
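
    One direction worth testing (an assumption drawn from the kernel version jump, not a confirmed diagnosis): 2.6.32 enables GRO (generic receive offload) on many drivers, which merges received segments into packets larger than the on-wire MTU before they traverse netfilter, and such oversized aggregates can trip the TCPMSS length check on a forwarding box. Turning the offloads off on the LAN-facing NIC is a cheap experiment; eth0 as the interface name is an assumption:

        # let forwarded packets keep their on-wire size
        ethtool -K eth0 gro off
        ethtool -K eth0 lro off

    If the warnings persist, scoping the rule with -o ppp0 at least keeps LAN-to-LAN traffic away from the TCPMSS target.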

    Read the article

  • Debian: Should I add a VLAN interface into a bridge for KVM?

    - by javano
    I am setting up a Debian Squeeze box as a KVM host. I want to add multiple interfaces to each KVM guest, so I want them to be on different VLANs. After reading about this, I believe the best method is to add multiple logical VLAN (sub)interfaces to the physical NICs, then create a bridge adapter for each VLAN interface, and assign each bridge as a NIC for KVM guests. Does this make good sense, or madness? Do I have to use bridged interfaces with KVM like this? Can't I just add eth1.xx and eth1.yy to my interfaces config below and then configure those directly as bridged KVM guest NICs? If so, how should this look in the interfaces config file below?

        user@host:~$ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Management Interface
        auto eth0
        iface eth0 inet static
            address 172.22.0.31
            netmask 255.255.255.0
            gateway 172.22.0.1

        # Interface for guest VMs
        auto eth1

        # Guest1 : Use VLAN 117
        auto eth1.117
        iface eth1.117 inet manual

        # Set up br1 for guest 1, bridging with vlan 117
        auto br1.117
        iface br1.117 inet manual
            bridge_ports eth1.117
            bridge_stp off

        user@host:~$ uname -a
        Linux hostname 3.4.9 #1 SMP Wed Aug 22 19:08:46 BST 2012 x86_64 GNU/Linux

    UPDATE: I would really like it if someone could clarify the config for me, as I have also seen the above configured with the following syntax, and I don't see why one would be preferred over the other:

        # Interface for guest VMs
        auto eth1
        allow-hotplug eth1
        iface eth1 inet static

        # Vlan 117 for guest 1
        auto vlan117
        iface vlan117 inet static
            vlan_raw_device eth1

        # Guest 1 : NIC 1
        auto br1.117
        iface br1.117 inet manual
            bridge_ports vlan117
            bridge_stp off
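
    On the "do I need a bridge at all?" part, a sketch of the usual arrangement (interface names assumed): with standard tap networking a guest NIC must attach to a bridge, so to share VLAN 117 among one or more guests you put the subinterface into a bridge and hand the bridge to KVM; guests cannot attach to eth1.117 directly.

        # /etc/network/interfaces sketch: one bridge per VLAN, guests attach to the bridge
        auto eth1.117
        iface eth1.117 inet manual

        auto br117
        iface br117 inet manual
            bridge_ports eth1.117
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0

    Between the two syntaxes in the question, eth1.117 and vlan117 (with vlan_raw_device) produce the same kind of interface; the dotted form just encodes the raw device in the name, so the choice is largely a matter of taste.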

    Read the article

  • Debian DNSSEC - how to secure a domain?

    - by Daniel Marschall
    I have a beginner question about DNSSEC. I have a lot of experience with TLS and cryptography and would like to try out this new technology. I have googled a lot but haven't found information that is useful to me. I think one source of confusion when gathering information is that "Debian howto DNSSEC setup" can mean either "how to USE DNSSEC for resolving" OR "how to secure your domain with DNSSEC". I am looking for the second. I am running a Debian Squeeze server with root privileges which has a domain name ending in ".de" (a TLD that is already signed in the root zone). The network interface on this server uses the gateway IP (DNS resolver?) of the datacentre the server is running in. My domain is hosted at freedns.afraid.org, where I can add DNS RRs for my domain. They are currently NOT capable of adding DNSSEC RRs, but I am bugging them to support this soon. ;-) My simple question is: how do I set up DNSSEC on Debian? And whom do I have to ask? As far as I understand, all I have to do is run dnssec-keygen on my Debian server and then add the key to my DNS provider as a DNSSEC RR. (And change it every 30 days?) I have looked at http://www.isc.org/files/DNSSEC_in_6_minutes.pdf but it looks like you have to be the owner of a ZONE, so I don't think this applies to me. Who needs to sign my domain? My DNS provider, the registry (DeNIC), or can I do it myself? Any help is very appreciated!
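
    For orientation, a sketch of the signing side, assuming BIND's tools (Debian package bind9utils) and a local copy of the zone file; the key file names below are placeholders for what dnssec-keygen actually generates:

        # generate a zone-signing key and a key-signing key for the zone
        dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.de
        dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE example.de

        # sign the zone file; db.example.de.signed is what the authoritative
        # server must then serve
        dnssec-signzone -o example.de -k Kexample.de.+008+KKKKK db.example.de Kexample.de.+008+ZZZZZ

    The catch this sketch illustrates: signing happens wherever the zone is *served*, so if freedns.afraid.org hosts the zone, they have to serve the signed records, and the DS record that anchors the chain of trust has to reach DeNIC through the registrar. Running dnssec-keygen alone on a web server does not sign anything.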

    Read the article

  • Uninstall php5 installed from source

    - by diegomichel
    I tried to install php5 from source, and it worked... Then for some reason I needed to install the official packages, so I tried a make uninstall and, to my surprise, there is no such target... so I deleted all the installed files by hand. Then I installed the official Debian packages and it worked fine... until I needed to install the sqlite module, which gives me the following error:

        php --version
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/pdo_sqlite.so' - /usr/lib/php5/20090626/pdo_sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/sqlite.so' - /usr/lib/php5/20090626/sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP 5.3.1-5 with Suhosin-Patch (cli) (built: Feb 22 2010 22:46:05)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies

    So I remembered the manual install I did, and I think some old lib left over from it is causing the problem. The bad thing is that there is no make uninstall in the php5 source tree:

        php-5.2.13 > make uninstall
        make: *** No rule to make target `uninstall'.  Stop.

    I have tried reinstalling and purging all PHP-related packages via aptitude, without success. OS: Debian Squeeze.

        uname -a
        Linux desktop 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010 x86_64 GNU/Linux

    Any idea how to fix this?
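
    Two hedged leads, based only on the error text: the undefined symbol php_pdo_register_driver usually means pdo_sqlite.so is being loaded before (or without) the pdo extension itself, and a source-built module left behind in the packaged extension directory would be ABI-mismatched anyway. A sketch for hunting down files dpkg doesn't own:

        # list everything in the packaged extension dir that no Debian package claims
        for f in /usr/lib/php5/20090626/*.so; do
            dpkg -S "$f" >/dev/null 2>&1 || echo "orphan (likely from make install): $f"
        done

        # source builds default to /usr/local; packaged PHP never puts files there
        ls -lR /usr/local/lib/php /usr/local/include/php 2>/dev/null

    Removing the orphans and reinstalling php5-sqlite should restore a consistent set of extensions.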

    Read the article

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed:

        gitolite gitolite/gituser  string git
        gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT
        gitolite gitolite/gitdir   string /var/lib/git

    On installation:

        # debconf-set-selections /var/cache/debconf/gitolite.preseed
        # apt-get install gitolite
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          git-daemon-run gitweb
        The following NEW packages will be installed:
          gitolite
        0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
        Need to get 0 B/114 kB of archives.
        After this operation, 348 kB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package gitolite.
        (Reading database ... 24715 files and directories currently installed.)
        Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ...
        Setting up gitolite (1.5.4-2+squeeze1) ...
        adduser: The home dir must be an absolute path.
        dpkg: error processing gitolite (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         gitolite
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why? The preseed was extracted from a manually configured installation, per here, and works without issue on another machine.
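
    A debugging sketch (standard debconf tooling, nothing gitolite-specific): the adduser complaint suggests the gitdir answer reached the postinst empty or mangled, so the first step is to see what debconf actually stored and what the postinst receives:

        # confirm the stored answers (debconf-utils package)
        debconf-get-selections | grep ^gitolite

        # re-run the install with debconf tracing to see each question and answer
        DEBCONF_DEBUG=developer apt-get install gitolite

    Stray trailing whitespace or Windows line endings in the preseed file are classic causes of a value that looks right but fails an absolute-path check.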

    Read the article

  • gitweb- fatal: not a git repository

    - by Robert Mason
    So I have set up a simple server running Debian stable (Squeeze) and configured git. Using gitolite, I have all the basic functionality (clone/push/pull/commit) working. Installation of gitweb went without any issues. However, when I access gitweb, I get a gitweb screen without any repos listed.

        # tail -n 1 /var/log/apache2/error.log
        [DATE] [error] [client IP_ADDRESS] fatal: Not a git repository: '/var/lib/gitolite/repositories/testrepo.git'
        # cd /var/lib/gitolite/repositories/testrepo.git
        # ls
        branches  config  HEAD  hooks  info  objects  refs

    Here is what I see in /var/lib/gitolite/projects.list:

        testrepo.git

    And in /etc/gitweb.conf:

        # path to git projects (<project>.git)
        $projectroot = "/var/lib/gitolite/repositories";
        # directory to use for temp files
        $git_temp = "/tmp";
        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";
        # html text to include at home page
        $home_text = "indextext.html";
        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = "/var/lib/gitolite/projects.list";
        # stylesheet to use
        $stylesheet = "gitweb.css";
        # javascript code for gitweb
        $javascript = "gitweb.js";
        # logo to use
        $logo = "git-logo.png";
        # the 'favicon'
        $favicon = "git-favicon.png";

    What is missing?
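
    A likely culprit to check first (an assumption from the symptom, since the repo clearly exists): gitolite creates repositories readable only by its own user, so the webserver user gets "Not a git repository" when it cannot descend into the directory. A permissions sketch, with user and path names taken from the question:

        # let www-data traverse and read gitolite's repositories
        adduser www-data gitolite
        chmod -R g+rX /var/lib/gitolite/repositories

        # for repos created later, relax gitolite's umask in ~gitolite/.gitolite.rc
        # (supported by newer gitolite 1.x releases; verify against your version)
        $REPO_UMASK = 0027;

    After that, restarting Apache and reloading the gitweb page should show testrepo.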

    Read the article

  • Anyone else experiencing high rates of Linux server crashes today?

    - by Bron Gondwana
    Just today, Saturday June 30th - starting soon after the start of the day GMT - we've had a handful of blades in different datacentres, managed by different teams, all go dark: not responding to pings, screen blank. They're all running Debian Squeeze - with everything from the stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510, and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash I did get a screen dump from said:

        [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358
        [3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0

    Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger - and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. I just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres, bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines that crashed had been up for weeks or months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours running 3.2.21.
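
    Worth noting as a hypothesis (the question itself doesn't confirm it): 2012-06-30 was a leap-second day, ntpd appears in the lockup trace, and kernels of that era had known leap-second bugs. The widely circulated workaround at the time was to clear the kernel's pending leap-second state by hand:

        # sketch of the 2012 leap-second workaround; only relevant if the
        # crashes really are leap-second related
        /etc/init.d/ntp stop
        date -s "$(date)"     # re-setting the clock clears the pending leap flag
        /etc/init.d/ntp start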

    Read the article

  • Apache using mod_auth_kerb always asks for the password twice

    - by DrStalker
    (Debian Squeeze) I'm trying to set Apache up to use Kerberos authentication to allow AD users to log in. It is working, but it prompts the user twice for a username and password, and the first prompt is ignored (no matter what is put into it). Only the second prompt includes the AuthName string from the config (i.e. the first window is a generic username/password one; the second includes the title "Kerberos Login"). I'm not worried about integrated Windows authentication working at this stage; I just want users to be able to log in with their AD account so we don't need to set up a second repository of user accounts. How do I fix this and eliminate that first, useless prompt? The directives in the apache2.conf file:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms ONEVUE.COM.AU.LOCAL
            Krb5KeyTab /etc/krb5.keytab
            KrbServiceName HTTP/[email protected]
            require valid-user
        </Directory>

    krb5.conf:

        [libdefaults]
            default_realm = ONEVUE.COM.AU.LOCAL

        [realms]
            ONEVUE.COM.AU.LOCAL = {
                kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                master_kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                admin_server = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
                default_domain = ONEVUE.COM.AU.LOCAL
            }

        [login]
            krb4_convert = true
            krb4_get_tickets = false

    The access log when accessing the secured directory (note the two separate 401s):

        192.168.10.115 - - [24/Aug/2012:15:52:01 +1000] "GET /kerberos/ HTTP/1.1" 401 710 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - - [24/Aug/2012:15:52:06 +1000] "GET /kerberos/ HTTP/1.1" 401 680 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - [email protected] [24/Aug/2012:15:52:10 +1000] "GET /kerberos/ HTTP/1.1" 200 375 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"

    And one line in error.log:

        [Fri Aug 24 15:52:06 2012] [error] [client 192.168.0.115] gss_accept_sec_context(2) failed: An unsupported mechanism was requested (, Unknown error)
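
    A hedged reading of those logs: the error.log line shows the browser answering the Negotiate challenge with a mechanism the server can't accept (typically NTLM wrapped in Negotiate); that failed exchange produces the first, generic prompt, and only the Basic-style fallback (KrbMethodK5Passwd) succeeds. If the clients will always type a password rather than present Kerberos tickets, a sketch of the simpler configuration:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate Off
            KrbMethodK5Passwd On
            KrbAuthRealms ONEVUE.COM.AU.LOCAL
            Krb5KeyTab /etc/krb5.keytab
            require valid-user
        </Directory>

    Keeping Negotiate on only makes sense once the browsers are configured for it (e.g. domain-joined machines with the site in the trusted zone).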

    Read the article

  • Using OS X home directories from Linux

    - by Steffen
    I'm running an OS X (Snow Leopard) server with Open Directory, which is nothing more than a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OS X's LDAP server; this works fine already. What I struggle with is the way the home folders are specified in OS X. If I query the passwd config on one of my Linux machines, the entries imported from OS X look like this:

        myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash

    While those network home folders might be fine for OS X clients, I don't want those server-based paths on my Linux machines. I saw that there is an NFSHomeDirectory attribute in the OS X user inspector, but if I change this, the whole user home path gets changed. Since my users should be able to log in on both systems, OS X and Linux, this is not what I want. Does anyone have an idea how I must configure OS X to make my Linux machines use home folders like /net/myaccount while leaving the configuration for OS X clients untouched?
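
    One approach worth sketching (an assumption, not from the question): leave Open Directory alone and rewrite the home path on the Linux side instead, since the NSS LDAP client can remap attributes. With nss-pam-ldapd the mapping lives in /etc/nslcd.conf; expression mappings like the one below need a reasonably recent version, so check what Squeeze's nslcd actually supports before relying on it:

        # /etc/nslcd.conf sketch: ignore the OD home path, derive one locally
        map passwd homeDirectory "/net/$uid"

    Combined with pam_mkhomedir (or autofs on /net), the directory then actually exists at first login.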

    Read the article

  • recursive grep started at / hangs

    - by Martin
    I have used the following grep search pattern on multiple platforms:

        grep -r -I -D skip 'string_to_match' /

    For example on FreeBSD 8.0, FreeBSD 6.4 and Debian 6.0 (Squeeze). The command does a recursive search starting from the root directory, assumes that binary files do not contain 'string_to_match', and skips devices, sockets and named pipes. FreeBSD 8.0 and FreeBSD 6.4 use GNU grep version 2.5.1; Debian 6.0 uses GNU grep version 2.6.3. On FreeBSD 6.4, the last information printed to stderr was "grep: /dev/cuad0: Device busy". After this, grep just idles: according to "top -m io -o total" the I/O usage of grep is nonexistent. The same behavior occurs under FreeBSD 8.0, but there the last information sent to stderr is "grep: /tmp/.wine-0: Permission denied" on my installation. In the case of Debian, the last output to stderr is "grep: /proc/sysrq-trigger: Input/output error". If I check the I/O usage of the grep process under Debian, it is the following:

        root@Debian:~# iotop -bp 22439
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN    IO    COMMAND
        22439 be/4  root    0.00 B/s   0.00 B/s   0.00 %  0.00 %  grep -r -I -D skip 10.10.10.99 /
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN    IO    COMMAND
        22439 be/4  root    0.00 B/s   0.00 B/s   0.00 %  0.00 %  grep -r -I -D skip 10.10.10.99 /
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN    IO    COMMAND
        22439 be/4  root    0.00 B/s   0.00 B/s   0.00 %  0.00 %  grep -r -I -D skip 10.10.10.99 /
        ^Croot@Debian:~#

    What might cause this? Is there a way to view which file grep is currently processing in case lsof is not present? I'm able to use lsof under Debian, and it looks like the problematic file name there is "0xc6b2c230 file struct, ty=0, op=0xc0d34120". I'm not sure what this is. I'm not able to use lsof or fstat under FreeBSD. PS: I know I could use the find utility, but that is not the question.
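
    A sketch of the usual explanation and workaround (hedged, since the blocking file isn't identified in the question): -D skip only covers devices, FIFOs and sockets, but some kernel-provided files stat as regular files and still block reads indefinitely, so -D skip never kicks in for them. Excluding the pseudo-filesystems avoids the hang on Debian; the FreeBSD greps (2.5.1) are too old for --exclude-dir, so pruning with find is the fallback there:

        # GNU grep 2.5.3 or later (Debian Squeeze's 2.6.3 qualifies)
        grep -r -I -D skip --exclude-dir=dev --exclude-dir=proc --exclude-dir=sys \
            'string_to_match' /

        # portable variant: prune the troublesome trees, grep only regular files
        find / \( -path /dev -o -path /proc -o -path /sys \) -prune -o -type f -print0 \
            | xargs -0 grep -I 'string_to_match'

    On Linux, "ls -l /proc/<grep-pid>/fd" shows which file grep currently has open, without needing lsof.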

    Read the article

  • NGINX Configuration Error using Codex Example: Is This a Typo in Codex?

    - by jw60660
    I installed NGINX using this tutorial: C3M Digital NGINX Tutorial. But after reading this article on security issues with "cut and paste" configuration tutorials (Neal Poole's article regarding security and NGINX configuration), I decided to follow Poole's suggestion to use the configuration suggested in the WordPress codex: Codex on NGINX Configuration. I used the Codex configuration for a multisite installation using W3 Total Cache. When attempting to start NGINX I get an error saying that the /etc/nginx/nginx.conf test failed. The error message was:

        Restarting nginx: nginx: [emerg] unknown directive "//" in /etc/nginx/sites-enabled/teambrazil.com:18

    When I looked at my site-specific configuration at that path, I noticed the rewrite rule in the server block was:

        rewrite ^ $scheme://teambrazil.conf$request_uri redirect;

    That line in the Codex example was:

        rewrite ^ $scheme://mysite.conf$request_uri redirect;

    That looked like a mistake to me, and I changed my line to:

        rewrite ^ $scheme://teambrazil.com$request_uri redirect;

    I then attempted to restart NGINX but got the same error message. My question is: is that a typo in the Codex, and is there anything more I have to do aside from restarting NGINX after making this change? As suggested by both tutorials, I set up the directories /etc/nginx/sites-enabled and /etc/nginx/sites-available and created the appropriate symbolic links using:

        touch /etc/nginx/sites-available/teambrazil.com
        ln -s /etc/nginx/sites-available/teambrazil.com /etc/nginx/sites-enabled/teambrazil.com

    Is there something else I need to consider after making this correction? Or was it not an error in the first place? I'm pretty stuck here. BTW, I am using Debian Squeeze as the OS on Amerinoc's VPS. I'm just getting familiar with VPS administration and am pretty much a noob. Thanks very much; I'd appreciate any input.
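
    Two hedged observations that follow from the error text alone: nginx comments start with #, not //, so "unknown directive //" means line 18 of the enabled file literally begins with // (probably a C-style comment that came along with the paste), and the rewrite fix, while sensible, wouldn't clear that particular error. A quick way to see and test it:

        # show the exact line nginx is complaining about
        sed -n '18p' /etc/nginx/sites-enabled/teambrazil.com

        # validate the whole configuration without restarting
        nginx -t

    Once line 18 is deleted or re-commented with #, nginx -t should pass and the restart should go through.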

    Read the article

  • What is the peak theoretical WiFi G user density? [closed]

    - by Bigbio2002
    I've seen a few WiFi capacity planning questions, and this one is related, but hopefully different enough not to be closed. Also, this is related specifically to 802.11g, but a similar question could be asked for N. In order to squeeze more WiFi users into a space, the transmit power on the APs needs to be reduced and the APs squeezed closer together. My question is: how far can you practically take this before the network becomes unusable? There will come a point where the transmit power is so weak that nobody will actually be able to pick up a connection, or clients will constantly roam to and from APs spaced a few feet apart as they walk around. There are also only 3 non-overlapping channels available, which is a factor to consider. After determining the peak AP density, you can multiply by users-per-AP, which should be easier to find out. After factoring all of this in and running some back-of-the-envelope calculations, I'd like to be able to get a figure like "XX users per 10ft^2". This can be considered the physical limit of WiFi, and will keep people from asking about getting 3,000 people in a ballroom conference onto WiFi. Can anyone with WiFi experience chime in, or better yet, provide some calculations for a more accurate figure? Assumptions: Let's assume an ideal environment with no reflection (think of a big, square, open room, with the APs spaced out on a plane); APs are placed on the ceiling so humans won't absorb the waves, and the only interference is from the APs themselves and the devices. As for which devices specifically, that's irrelevant for the first part of the question (AP density, so only channel and transmit power should matter). User experience: Wikipedia states that Wireless G has about 22 Mbps maximum effective throughput, or about 2.75 MB/s. For the purpose of this question, anything below 100 KB/s per user can be deemed a poor user experience. As for roaming, I'll assume the user is standing in the same place, so hopefully that will be a non-issue.
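
    The throughput half of the calculation follows directly from the numbers given in the question (the coverage-area half still needs a propagation model, which the question leaves open):

        # back-of-envelope from the question's own figures
        echo $((2750 / 100))      # 22 Mbps ~= 2750 KB/s shared per AP -> 27 users per AP
        echo $((3 * 2750 / 100))  # 3 non-overlapping channels, co-located -> ~82 users per cell

    Everything past that - how small a cell can get before roaming chaos sets in - is the genuinely hard part.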

    Read the article

  • Hardware choice: ASUS Eee Pad Slider or ASUS Eee Pad Transformer for web development?

    - by JamesM
    I was just wondering which of the following tablets seems better to get. I am a web developer, always using Unix/Linux/BSD, and I want a tablet that has a keyboard.

        http://gdgt.com/asus/eee/pad/slider/
        http://gdgt.com/asus/eee/pad/transformer/
        http://www.tweaktown.com/news/18311/asus_eee_pad_slider_transformer_tablets_with_physical_keyboard/index.html

    I know both are similar, but I'm not sure which one I should get. The Slider seems very nice, but its keyboard is fixed to the tablet, unlike the Transformer's. P.S.: I'm going to use one of the above to showcase my programming work at school, as well as being a cheaper notebook than the $300 Windows 7 locked-down notebooks. By locked down, I mean we pay $300 for them and after 3 years we can do whatever we like with them; they are Lenovo ThinkPad mini-10s, and what they have installed is all you get - they don't let us install a different OS on them. As for the question posed on both of those links, I think the Transformer would be better, but that is only taking into account its being both a tablet and a notebook. What I really care about is power: which one is more powerful? It will be running kFreeBSD-Debian-Squeeze with the Linux Mint theme and several other packages. Though I'm not going to run Windows (which I feel is bloated), I still want power. To help keep the computer from slowing down, I will have a cron.d/hourly script cleaning out the cache memory.

    Read the article

  • How to make Synaptics touchpad work better on Linux?

    - by whitequark
    I have Debian Squeeze currently installed on a Samsung N250 netbook with a Synaptics touchpad. These touchpads are generally good, and everything works perfectly on Windows. The support on Linux is extremely poor, though. It has all the cool features like two-finger scrolling, but the cursor (or whatever replaces the cursor when scrolling) trembles awfully. It trembles when I just keep a finger on the touchpad, it shakes awfully if the finger is close to the top of the touchpad, and when I'm scrolling (no matter whether with two fingers or one), the page shakes a lot too. None of this behavior is observed on Windows XP with just the default drivers installed. Here's the Xorg version:

        $ Xorg -version
        X.Org X Server 1.7.7
        Release Date: 2010-05-04
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.32-5-686 i686 Debian
        Current Operating System: Linux mannaz 2.6.32-5-686 #1 SMP Fri Dec 10 16:12:40 UTC 2010 i686
        Kernel command line: BOOT_IMAGE=/vmlinuz-2.6.32-5-686 root=/dev/mapper/mannaz-root ro quiet splash
        Build Date: 02 December 2010 01:08:37AM
        xorg-server 2:1.7.7-10 (Julien Cristau <[email protected]>)
        Current version of pixman: 0.16.4

    and here is the synclient -l output: http://pastebin.com/Eqa6hGXP
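
    A tuning sketch, not a known-good fix (the values are starting points to experiment with, not calibrated numbers): jitter from light contact can often be damped by raising the driver's pressure thresholds so near-touches stop registering as motion, and synclient lets you test this live before persisting anything in xorg.conf:

        # require firmer contact before the pad reports movement
        synclient FingerLow=35 FingerHigh=45

        # if the driver version supports it, palm detection also helps near the pad edges
        synclient PalmDetect=1

    synclient -l (as in the pastebin) shows the current values to step from.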

    Read the article

  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian Squeeze amd64 box which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes about 4 files per second. I can greatly increase the speed by adding async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files do, i.e. generally work asynchronously, but honour fsync and O_SYNC?
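
    A sketch of why the numbers look like this (reasoning from NFS semantics, not from the poster's exact setup): creating a file is a metadata operation, and with a sync export the server must commit each CREATE to stable storage before replying, so small-file throughput collapses to roughly the disk's commit rate; large sequential writes stay fast because the data can be sent unstable and batched by COMMIT. With async the server acknowledges everything early - including COMMIT, which is what fsync turns into over NFS - so the protocol offers no built-in middle mode. The line being toggled looks like:

        # /etc/exports sketch (options besides sync/async are placeholders)
        /nfs4exports/mydir 176.9.116.102(rw,sync,no_subtree_check)
        # vs.
        /nfs4exports/mydir 176.9.116.102(rw,async,no_subtree_check)

    The usual compromise is to keep sync and make commits cheap (battery-backed write cache, or a fast filesystem journal) rather than to relax correctness.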

    Read the article

  • Tomcat OutOfMemory after switching JVM

    - by Mathias
    I have a Tomcat 6 server running on Debian Squeeze. There are 4 webapps running on it, and an in-JVM ActiveMQ server. It has been running for about a year with the same memory settings, with openjdk-6. Everything has worked dandily; no issues at all. Now, for various reasons, I need to try out Sun's JDK. So I installed Sun's JVM with a standard apt-get:

        apt-get install sun-java6-bin

    and switched using:

        update-java-alternatives -s java-6-sun

    However, when I start Tomcat, I get OutOfMemory - the server won't even start! If I switch back to OpenJDK, all works fine again. I haven't had any memory issues on this server before, so it feels strange that the server suddenly won't start with Sun's JDK. Does anybody have any clue as to why this might happen? Have I missed something? I naturally have set heap sizes etc. in Tomcat; I'm currently running with:

        -Xms256m -Xmx1024m

    which, as mentioned, works on OpenJDK but gives OutOfMemory on the Sun JDK... EDIT: the server is 64-bit; OpenJDK and Sun are both 1.6.0, both 64-bit JVMs.
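
    One hedged suspect: on Sun's HotSpot, classes and interned strings live in a separately sized permanent generation, and four webapps plus ActiveMQ can exhaust the default MaxPermSize at startup even though the 1 GB heap is barely touched. The exact OutOfMemoryError message would confirm it ("PermGen space" vs "Java heap space"). A sketch for the Tomcat environment:

        # give the permanent generation explicit room (the size is a guess to tune)
        CATALINA_OPTS="-Xms256m -Xmx1024m -XX:MaxPermSize=256m"

    If the message instead complains about native allocation, the cause lies elsewhere (thread stacks, direct buffers).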

    Read the article

  • Looking for a "light" compositing manager for GNOME

    - by detly
    I have an HP Pavilion DM3 (graphics is nVidia GeForce G105M), running Debian Squeeze with GNOME 2.30. My preference for a DE is GNOME + Metacity + Nautilus. I'd like to use Docky, but it requires compositing. So I'm looking for a relatively "light" compositing manager. I realise that "light" is ambiguous, but I basically want something that won't chew through my notebook's battery with CPU or GPU usage. I know that Metacity is capable of compositing, but as far as I'm aware it's still in testing. Some people report that it's smooth and lightweight; others claim that it eats up processor time. I've also seen references to a problem with nVidia, but no actual details. I'm not averse to Compiz, but I haven't used it before and I don't know what to expect in terms of "weight". And maybe there's something else I haven't heard of. So can anyone recommend anything? Or dispel my idea that Metacity is not the right tool for the job? (Originally posted on GNOME forums.)
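
    Since Metacity's compositor is the zero-install option, the cheapest experiment is simply to switch it on and watch CPU usage for a day; in GNOME 2.30 it's a single gconf key, and reverting is just as easy:

        # enable Metacity's built-in compositor (GNOME 2.x)
        gconftool-2 --type bool --set /apps/metacity/general/compositing_manager true

        # revert if it turns out to be heavy on this hardware
        gconftool-2 --type bool --set /apps/metacity/general/compositing_manager false

    Docky only needs an active compositor, not any particular one, so this is enough to test whether it runs.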

    Read the article

  • MySQL consuming all system memory on INSERT ... SELECT

    - by siete
    The mysql daemon is getting killed because Linux is running out of memory:

        Oct 24 07:41:23 <hostname> kernel: [82297.673701] Out of memory: kill process 13816 (mysqld) score 1839626 or a child

    There is a link with a workaround for this. It only happens when executing an INSERT ... SELECT query with a very large result set. MySQLTuner reports that the theoretical maximum memory is less than 8 GB, but top and munin show that usage grows beyond all available RAM and swap:

        [--] Total buffers: 560.0M global + 72.2M per thread (100 max threads)
        [OK] Maximum possible memory usage: 7.6G (43% of installed RAM)

    I have tried to tune some options, with no results. These are the relevant ones:

        skip-locking
        max_connections           = 100
        key_buffer_size           = 512M
        max_allowed_packet        = 32M
        table_open_cache          = 2000
        open_files_limit          = 3000
        sort_buffer_size          = 16M
        read_buffer_size          = 16M
        read_rnd_buffer_size      = 8M
        myisam_sort_buffer_size   = 64M
        thread_cache_size         = 4
        query_cache_size          = 16M
        query_cache_limit         = 2M
        thread_concurrency        = 4
        join_buffer_size          = 32M
        tmp_table_size            = 32M
        max_heap_table_size       = 32M
        query_cache_limit         = 8M
        bulk_insert_buffer_size   = 64M
        myisam_max_sort_file_size = 50GB
        myisam_mmap_size          = 10GB

    And here is a system summary:

        OS:    Linux Debian "Squeeze" 6.0.8 (upgraded yesterday)
        RAM:   18GB
        Swap:  18GB
        MySQL: 5.1.72-2 (official Debian release)

    At this moment, upgrading or changing the OS or MySQL version is not possible. Is there any option that can help that I've missed? Sorry for my English, and thank you in advance! Edit: I'm only using MyISAM tables, and cannot change to InnoDB.
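
    A hedged observation plus a workaround sketch: myisam_mmap_size = 10GB permits up to 10 GB of memory-mapped MyISAM data on top of the configured buffers - memory that MySQLTuner's 7.6 G estimate does not count, which alone could push the process toward the 18 GB ceiling under a huge INSERT ... SELECT. Independent of tuning, bounding the statement keeps the working set small; table and column names below are placeholders:

        -- copy in primary-key ranges instead of one gigantic INSERT ... SELECT
        INSERT INTO dst SELECT * FROM src WHERE id >=       0 AND id < 1000000;
        INSERT INTO dst SELECT * FROM src WHERE id >= 1000000 AND id < 2000000;
        -- ...continue until MAX(id), e.g. from a small driver script

    Memory use then stays bounded per statement, at the cost of a slightly longer total run.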

    Read the article

  • Postfix message ID originating process?

    - by Anders Braüner Nielsen
    Last night my Postfix mail server (Debian Squeeze with Dovecot, Roundcube, OpenDKIM and SpamAssassin enabled) started sending out spam from a single domain of mine, like this:

        $ cat mail.log | grep D6930B76EA9
        Jul 31 23:50:09 myserver postfix/pickup[28675]: D6930B76EA9: uid=65534 from=<[email protected]>
        Jul 31 23:50:09 myserver postfix/cleanup[27889]: D6930B76EA9: message-id=<[email protected]>
        Jul 31 23:50:09 myserver postfix/qmgr[7018]: D6930B76EA9: from=<[email protected]>, size=957, nrcpt=1 (queue active)
        Jul 31 23:50:09 myserver postfix/error[7819]: D6930B76EA9: to=<[email protected]>, relay=none, delay=0.03, delays=0.02/0/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta5.am0.yahoodns.net[66.196.118.33] while sending RCPT TO)

    The domain in question did not have any accounts enabled, only a catchall alias set through postfixadmin. Most emails were sent from a specific address I use frequently, but some were also sent from bogus addresses. None of the other virtual domains handled by Postfix were affected. How can I find out what process was feeding postfix/sendmail, or get more info on where the messages originated? As far as I can tell, PHP mail() wasn't used, and I've run several open relay tests. I did a little tinkering after the attack (removed winbind from the server and removed IPv6 addresses from main.cf) and the spam seems to have subsided, but I still have no idea how my server was suddenly sending out spam. Maybe I fixed it - maybe I didn't. Can anyone help me figure out how I was compromised? Anywhere else I should look? I've run Linux Malware Detect on recently changed files, but nothing was found.
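
    A forensics sketch based on one detail in the log: pickup with uid=65534 means the mail entered through the local sendmail binary as user "nobody", not over SMTP - so some local process (CGI, cron job, exploited script) invoked it. If PHP remains a suspect despite appearances, PHP 5.3 can stamp and log every mail() call; the directives below go in the relevant php.ini:

        ; record the script path for each mail() invocation
        mail.add_x_header = On
        mail.log = /var/log/php_mail.log

    Outside PHP, auditing which processes run as nobody (ps aux | awk '$1 == "nobody"') and checking web server access logs around 23:50 on Jul 31 are the next places to look.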

    Read the article

  • Debian Stable: Can't update kernel, libc won't update.

    - by pascal
    I use Debian stable (Squeeze) on a virtual host where I can't touch the kernel; it's stuck (and will be for some time, as support told me) at:

        Linux 2.6.18-028stab070.3 #1 SMP Wed Jul 21 18:33:27 MSD 2010 x86_64

    So when I try to update, several packages fail with FATAL: kernel too old, for example:

        Preparing to replace libgcc1 1:4.6.0-11 (using .../libgcc1_1%3a4.6.1-1_amd64.deb) ...
        Unpacking replacement libgcc1 ...
        Setting up libgcc1 (1:4.6.1-1) ...
        FATAL: kernel too old
        Segmentation fault
        dpkg: error processing libgcc1 (--configure):
         subprocess installed post-installation script returned error exit status 139

    and some version chaos ensued:

        The following packages have unmet dependencies:
         libc-dev-bin : Depends: libc6 (> 2.13) but 2.11.2-13 is installed
         libc6        : Depends: libc-bin (= 2.11.2-13) but 2.13-5 is installed
         libc6-dev    : Depends: libc6 (= 2.13-5) but 2.11.2-13 is installed
         libquadmath0 : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed
         libstdc++6   : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed
         locales      : Depends: glibc-2.13-1

    What should I do? I want to keep the system up to date, so I want to pin as few packages as possible, but I also don't want to have to compile anything manually. While trying to pin the status quo, I figured out where the error comes from: ldconfig segfaults. -v doesn't print anything more, so I can't tell what the actual problem is.

        # ldconfig
        FATAL: kernel too old
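
    A hedged reading of those versions: libgcc1 4.6.x and libc6 2.13 are not Squeeze packages; they come from testing/sid, and that newer glibc is built for a minimum kernel newer than the host's 2.6.18 (a common bind on OpenVZ virtual servers). A sketch of the recovery direction - force everything back to Squeeze and keep it there:

        # /etc/apt/preferences sketch: pin the whole system to squeeze
        Package: *
        Pin: release n=squeeze
        Pin-Priority: 1001

        # then let apt downgrade the strays (after removing any testing/sid
        # lines from sources.list)
        apt-get update && apt-get dist-upgrade

    A priority above 1000 is what allows apt to downgrade already-installed packages back to the pinned release.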

    Read the article

  • How to interrupt a software RAID resync?

    - by Adam5
    I want to interrupt a running resync operation on a Debian Squeeze software RAID. (This is the regular scheduled compare resync; the RAID array is still clean in such a case. Do not confuse this with a rebuild after a disk failed and was replaced.) How do I stop this scheduled resync operation while it is running? Another RAID array is "resync pending", because they all get checked on the same day (Sunday night) one after another. I want a complete stop of this Sunday-night resyncing. [Edit: sudo kill -9 1010 doesn't stop it; 1010 is the PID of the md2_resync process.] I would also like to know how I can control the interval between resyncs and the remaining time till the next one. [Edit 2: What I did for now was to make the resync go very slowly, so it no longer disturbs anything:

        sudo sysctl -w dev.raid.speed_limit_max=1000

    (taken from http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html). During the night I will set it back to a high value, so the resync can finish. This workaround is fine for most situations; nonetheless it would be interesting to know whether what I asked is possible. For example, it does not seem to be possible to grow an array while it is resyncing or while a resync is "pending".]
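
    The kernel does expose a direct control for this; a sketch (the md device name is from the question, the rest is standard md sysfs):

        # cancel the running check on md2; a queued check on another array can be
        # cancelled the same way once it starts
        echo idle | sudo tee /sys/block/md2/md/sync_action

        # the Sunday-night schedule comes from mdadm's cron job: see /etc/cron.d/mdadm,
        # which calls /usr/share/mdadm/checkarray - editing that line changes the interval

    kill -9 can't work here because md2_resync is a kernel thread; sync_action is the supported off switch.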

    Read the article

  • How can I make an encrypted email message into a .p7m file?

    - by Blacklight Shining
    This is a bit complicated, so I'll explain what I'm really trying to do here: I have a Debian server, and I want to automatically email myself certain logs every week. I'm going to use cron and a bash script to copy the logs into a tarball shortly after midnight every Monday. A bash script on my home computer will then download the tarball from the server, along with a file to be used as the body of the email, and call an AppleScript to make a new email message. This is where I'm stuck: I can't find a way to encrypt and sign the email using AppleScript and Apple's mail client. I've noticed that if I put in a delay before sending the message, Mail will automatically set it to be encrypted and signed (as it normally does when I compose a message myself). However, there's no way to be sure of this when the script runs - if something goes wrong there, the script will just blindly send the email unencrypted. My solution would be to somehow manually create a .p7m file from the tarball and message and attach it to the email the AppleScript creates. Then, when I receive it, Mail will treat it just like any other encrypted message with an attachment (right?). If there's a better way to do this, please let me know. ^^ (Ideally, everything would be done from the server, but there doesn't seem to be a way to send mail automatically without storing a password in plaintext.) (The server is running Debian Squeeze; my home computer is a Mac running OS X Lion.)
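
    For the "manually create a .p7m" step, openssl's smime subcommand can do the sign-then-encrypt dance from a shell script; certificate and key filenames below are placeholders, and the recipient certificate must be the one whose private key lives in the Mac's keychain:

        # sign the payload, then encrypt it to the recipient's certificate
        openssl smime -sign -in message.txt -signer me.crt -inkey me.key \
            -nodetach -out signed.p7m
        openssl smime -encrypt -aes256 -in signed.p7m -out message.p7m recipient.crt

    Whether Mail renders a hand-attached .p7m exactly like a natively encrypted message is worth testing first; in normal S/MIME mail the smime.p7m is the whole MIME body, not a regular attachment.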

    Read the article

  • Bacula virtual backup job doesn't run, no output?

    - by Zoredache
    I am trying to get virtual backups working, but when I try to run a virtual backup job, it appears to get created and then never actually runs. I have a full and a couple of incremental backups:

        status director
        JobId  Level  Files   Bytes    Status  Finished         Name
        ====================================================================
         1283  Full   10,565  1.963 G  OK      21-Dec-12 09:47  nms-Job
         1284  Incr      314  129.6 M  OK      21-Dec-12 09:49  nms-Job
         1285  Incr      230  147.2 M  OK      21-Dec-12 09:51  nms-Job
         1288  Incr      525  138.8 M  OK      21-Dec-12 11:25  nms-Job

    I attempt to start a job from bconsole like this:

        *run job=nms-Job level=VirtualFull
        Using Catalog "MySQL"
        Run Backup job
        JobName:  nms-Job
        Level:    VirtualFull
        Client:   nms-FileDaemon
        FileSet:  nms-FileSet
        Pool:     nms-pool (From Job resource)
        Storage:  File_d1 (From Pool resource)
        When:     2012-12-21 13:07:54
        Priority: 10
        OK to run? (yes/mod/no):
        Job queued. JobId=1291

    Then my new job just sits there, doing nothing. The JobStatus shows that the job was created, but it appears never to run. All the full and incremental backups terminate normally.

        *llist jobid=1291
        JobId: 1,291
        Job: nms-Job.2012-12-21_13.07.56_07
        Name: nms-Job
        PurgedFiles: 0
        Type: B
        Level: F
        ClientId: 4
        Name: nms-FileDaemon
        JobStatus: C
        SchedTime: 2012-12-21 13:07:54
        StartTime: 2012-12-21 13:07:56
        EndTime: 0000-00-00 00:00:00
        RealEndTime: 0000-00-00 00:00:00
        JobTDate: 1,356,124,076
        VolSessionId: 0
        VolSessionTime: 0
        JobFiles: 0
        JobErrors: 0
        JobMissingFiles: 0
        PoolId: 19
        PooLname: nms-pool
        PriorJobId: 0
        FileSetId: 11
        FileSet: nms-FileSet

    I am getting very frustrated that this isn't working, mostly because it isn't giving me any error logs or output at all. I submit the job, and as far as I can tell nothing happens. Is there some status or debugging level I can set to get useful information about why this isn't working? What can I do to make this work? I was originally running Bacula 5.0.2 on Debian Squeeze; out of frustration, I upgraded to the 5.2.6 in the backports repository, hoping that a new version might give me better results.
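
    Two hedged things to check, based on how VirtualFull is documented to work: the job reads the old Full+Incr volumes and writes a consolidated Full into the pool named by the originating pool's Next Pool directive, and it needs a second storage device because it reads and writes in the same run. A JobStatus of "C" (created, not yet running) with no output fits a job silently waiting on one of those. A bacula-dir.conf sketch (pool names other than nms-pool are placeholders):

        Pool {
          Name = nms-pool
          Pool Type = Backup
          Storage = File_d1
          Next Pool = nms-consolidated   # VirtualFull writes here
        }

        Pool {
          Name = nms-consolidated
          Pool Type = Backup
          Storage = File_d2              # a second device, so read and write differ
        }

    In bconsole, "status director" and "messages" right after queueing, plus "setdebug level=100 dir", are the usual ways to see what the job is blocked on.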

    Read the article

  • Relaying to tech "support" that the computer is actually broken

    - by Sion
    First some background: I have a Dell Inspiron 15R M050. It is still under the Dell limited warranty and the Best Buy extended warranty. I am currently dual-booting Debian Squeeze and Windows 7; the only reason I go into Windows is to play video games, specifically Steam games. Issue: When I play games in Windows, I can play for anywhere from 5 minutes to 2 hours before I suffer a hard lock. I cannot alt-tab, ctrl-alt-delete, ctrl-shift-escape or do anything for 2-3 minutes. After this hard-lock period everything runs fine, and I can continue the game for probably another hour at least before I suffer another lock. Games: Borderlands, Splinter Cell: Chaos Theory, StarCraft 2, Garry's Mod. What I have tried: running the diagnostic suite in the Dell BIOS, restoring the OEM Windows recovery partition on the HD, fresh-installing Windows 7 Professional, updating the BIOS, and calling tech support and having them run a software hardware-diagnostics suite. The question: From the research I have done, I think the cause might be a lack of thermal paste on the CPU. Would I be able to go to Best Buy and have them run a hardware-level diagnostic, and then have them tell Dell that there is a hardware issue? Or could it be a different problem?

    Read the article
