Search Results

Search found 3140 results on 126 pages for 'debian'.

Page 113/126

  • iptables configuration under Ubuntu

    - by aioobe
    I'm following a tutorial on setting up a DNS tunnel. I've run into the following instruction: Now you need to enable forwarding on this server. I use iptables to implement masquerading. There are many HOWTOs about this (a simple one, for example). On Debian, the configuration file for iptables is in /var/lib/iptables/active. The relevant bit is:

        *nat
        :PREROUTING ACCEPT [6:1596]
        :POSTROUTING ACCEPT [1:76]
        :OUTPUT ACCEPT [1:76]
        -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE
        COMMIT

    Restart iptables:

        /etc/init.d/iptables restart

    The problem is that I don't have any /var/lib/iptables/active. (I'm on Ubuntu.) How can I accomplish this? I suspect that I should just interact with the iptables command somehow, but I have no clue what to write. Best would probably be if I could put the commands in a script somehow, I suppose. (A side note: if I execute a few iptables commands, they won't be there forever, right? The rules will be discarded on reboot?)
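
    A minimal sketch of the Ubuntu-side equivalent (hedged: assumes no firewall manager such as ufw is in play, and the 10.0.0.0/8 source range is taken from the tutorial): apply the rules with the iptables command directly, then persist them with iptables-save, since rules added this way are indeed lost on reboot.

        # enable forwarding and add the masquerade rule
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE

        # save the current ruleset...
        iptables-save > /etc/iptables.rules

        # ...and reload it before the interface comes up at boot
        cat > /etc/network/if-pre-up.d/iptables <<'EOF'
        #!/bin/sh
        iptables-restore < /etc/iptables.rules
        EOF
        chmod +x /etc/network/if-pre-up.d/iptables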

    Read the article

  • OpenVZ vs KVM for Linux VMs

    - by Eliasdx
    Hardware: Intel® Core™ i7-920, 12 GB DDR3 RAM, 2 x 1500 GB SATA-II HDD (no software RAID, because the Proxmox developers don't support it and are sure you'll run into problems). Software: Proxmox VE with KVM and OpenVZ support, and Debian everywhere. I want to run multiple Linux VMs on this server: one for a firewall (I want to try pfSense), one for MySQL, one VM for nginx (my stuff) and ~2 VMs with nginx for other people's web sites. I don't think that pfSense will run in an OpenVZ environment, but it should run in KVM. The question is whether I should set up the other VMs using KVM or OpenVZ. In OpenVZ they should have less overhead for the OS itself, but I don't know about the performance. I heard that KVM is more stable but needs more RAM and CPU. I found this diagram showing an OpenVZ setup on the same hardware I'm using. This guy uses a separate VM for each and every website running on his server. I can't think of any advantage to using so many VMs.

    Read the article

  • When I try to access a website without www I get access denied.

    - by madphp
    I have an Apache web server on a Debian machine. I'm using Virtualmin to administer virtual hosts. I have two sites on this server right now; when I try to access one site without the www in the URL, I get an access denied. The other site is fine. The site with the problem is a CakePHP app and has the following .htaccess file in the public_html folder:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    Below are the directives for the problem domain:

        SuexecUserGroup "#1001" "#1001"
        ServerName mydomain.net
        ServerAlias www.mydomain.net
        ServerAlias webmail.mydomain.net
        ServerAlias admin.mydomain.net
        DocumentRoot /home/mydomain/public_html
        ErrorLog /var/log/virtualmin/mydomain.net_error_log
        CustomLog /var/log/virtualmin/mydomain.net_access_log combined
        ScriptAlias /cgi-bin/ /home/mydomain/cgi-bin/
        ScriptAlias /awstats/ /home/mydomain/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/mydomain/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/mydomain/cgi-bin>
            allow from all
        </Directory>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} =webmail.mydomain.net
        RewriteRule ^(.*) https://mydomain.net:20000/ [R]
        RewriteCond %{HTTP_HOST} =admin.mydomain.net
        RewriteRule ^(.*) https://mydomain.net:10000/ [R]
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 31
        <Files awstats.pl>
            AuthName "mydomain.net statistics"
            AuthType Basic
            AuthUserFile /home/mydomain/.awstats-htpasswd
            require valid-user
        </Files>
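
    Since the vhost above already names mydomain.net as its ServerName, one hedged thing to check (an assumption, not from the original thread) is whether the bare domain is actually being caught by a different vhost, for example a Virtualmin default:

        # show which virtual host Apache maps each server name to
        apache2ctl -S

        # request the site under each Host header and compare the results
        # (SERVER_IP is a placeholder for the machine's address)
        curl -sI -H 'Host: mydomain.net'     http://SERVER_IP/ | head -1
        curl -sI -H 'Host: www.mydomain.net' http://SERVER_IP/ | head -1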

    Read the article

  • Looking for a "light" compositing manager for GNOME

    - by detly
    I have an HP Pavilion DM3 (graphics is nVidia GeForce G105M), running Debian Squeeze with GNOME 2.30. My preference for DE is Gnome + Metacity + Nautilus. I'd like to use Docky, but it requires compositing. So I'm looking for a relatively "light" compositing manager. I realise that "light" is ambiguous, but I basically want something that won't chew through my notebook's batteries because of CPU or GPU usage. I know that Metacity is capable of compositing, but as far as I'm aware it's still testing. Some people report that it's smooth and lightweight, others claim that it eats up processor time. I've also seen references to a problem with nVidia, but no actual details. I'm not averse to Compiz, but I haven't used it before and I don't know what to expect in terms of "weight." And maybe there's something else I haven't heard of. So can anyone recommend anything? Or dispel my idea that Metacity is not the right tool for the job? (Originally posted on GNOME forums.)
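
    For what it's worth, a quick way to try both candidates (hedged: the gconf key and flags below are from memory for GNOME 2.x, so verify them on your system before relying on this):

        # enable Metacity's built-in compositor (GNOME 2 / gconf)
        gconftool-2 -s --type bool /apps/metacity/general/compositing_manager true

        # or try xcompmgr, a very small standalone compositor
        xcompmgr -c &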

    Read the article

  • How to interrupt software raid resync?

    - by Adam5
    I want to interrupt a running resync operation on a Debian Squeeze software RAID. (This is the regular scheduled compare resync. The RAID array is still clean in such a case. Do not confuse this with a rebuild after a disk failed and was replaced.) How do I stop this scheduled resync operation while it is running? Another RAID array is "resync pending", because they all get checked on the same day (Sunday night) one after another. I want a complete stop of this Sunday-night resyncing. [Edit: sudo kill -9 1010 doesn't stop it; 1010 is the PID of the md2_resync process] I would also like to know how I can control the intervals between resyncs and the remaining time till the next one. [Edit 2: What I did for now was to make the resync go very slowly, so it no longer disturbs anything: sudo sysctl -w dev.raid.speed_limit_max=1000 (taken from http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html). During the night I will set it back to a high value, so the resync can finish. This workaround is fine for most situations; nonetheless it would be interesting to know whether what I asked is possible. For example, it does not seem to be possible to grow an array while it is resyncing or a resync is "pending".]
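
    A sketch of what appears to work on Debian (hedged: flag names from memory; check /usr/share/mdadm/checkarray --help and the contents of /etc/cron.d/mdadm on your box):

        # cancel the running check on md2 (it is not a kill-able process)
        echo idle > /sys/block/md2/md/sync_action

        # Debian's scheduled check is driven by this cron job; edit it to
        # change when (or whether) the Sunday-night run happens
        cat /etc/cron.d/mdadm

        # the cron job calls the checkarray helper, which can also cancel
        # pending checks on all arrays
        /usr/share/mdadm/checkarray --cancel --all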

    Read the article

  • Exim: send every email with a predefined sender

    - by Gregory MOUSSAT
    We use Exim on our servers to send emails only from local automated users, such as root, cron, etc. We have to specify every possible user in /etc/email-addresses. For example:

        root: [email protected]
        cron: [email protected]
        backup: [email protected]

    This allows us to receive every email generated. The problem is that when we add a user for whatever reason (for example, some packages create a user on installation), we can forget to add this user to /etc/email-addresses. Most of the time it's not a problem, but this is not clean. And the overall method is not clean. We'd like to configure Exim to send every email with the same source address, i.e. every sent email comes from [email protected]. One way could be to use a wildcard or a regular expression in /etc/email-addresses, but this is not supported. I don't currently understand Exim well enough to figure out how to modify this one way or another. Ideally, Exim should look in /etc/email-addresses first, and if there is no match, use the predefined address. But this is very secondary. There are two places where this address is used:
    1. when Exim sends the MAIL FROM: command to the SMTP server
    2. inside the headers

    edit: The rewrite section is the original one from Debian (comments removed):

        begin rewrite

        .ifndef NO_EAA_REWRITE_REWRITE
        *@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        *@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                          {$value}fail}" Ffrs
        .endif
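
    A hedged sketch of the kind of catch-all rule that could be appended after the two lookup rules above (the flags mirror the existing Ffrs rules; test the behaviour with exim -brw before trusting it):

        # in the rewrite section, after the /etc/email-addresses lookups:
        # anything the file lookup did not rewrite gets the predefined sender
        *@+local_domains [email protected] Ffrs

    Since the lookup rules fail (and therefore leave the address untouched) for users missing from /etc/email-addresses, this would give exactly the "file first, fallback second" order described above.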

    Read the article

  • Postfix message ID originating process?

    - by Anders Braüner Nielsen
    Last night my Postfix mail server (Debian Squeeze with Dovecot, Roundcube, OpenDKIM and SpamAssassin enabled) started sending out spam from a single domain of mine, like this:

        $ cat mail.log | grep D6930B76EA9
        Jul 31 23:50:09 myserver postfix/pickup[28675]: D6930B76EA9: uid=65534 from=<[email protected]>
        Jul 31 23:50:09 myserver postfix/cleanup[27889]: D6930B76EA9: message-id=<[email protected]>
        Jul 31 23:50:09 myserver postfix/qmgr[7018]: D6930B76EA9: from=<[email protected]>, size=957, nrcpt=1 (queue active)
        Jul 31 23:50:09 myserver postfix/error[7819]: D6930B76EA9: to=<[email protected]>, relay=none, delay=0.03, delays=0.02/0/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta5.am0.yahoodns.net[66.196.118.33] while sending RCPT TO)

    The domain in question did not have any accounts enabled, only a catchall alias set through Postfixadmin; most emails were sent from a specific address I use frequently, but some were also sent from bogus addresses. None of the other virtual domains handled by Postfix were affected. How can I find out what process was feeding postfix/sendmail, or get more info on where the messages originated? As far as I can tell, PHP mail() wasn't used, and I've run several open relay tests. I did a little tinkering (removed winbind from the server and IPv6 addresses from main.cf) after the attack and it seems to have subsided, but I still have no idea how my server was suddenly sending out spam. Maybe I fixed it, maybe I didn't. Can anyone help figure out how I was compromised? Anywhere else I should look? I've run Linux Malware Detect on recently changed files, but nothing was found.
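
    The uid=65534 in the pickup line is the "nobody" user, which points at something local handing mail to the sendmail binary. A hedged sketch of a wrapper that logs each submitter before passing the message on (paths assume Debian's /usr/sbin/sendmail; this is a diagnostic hack, not a fix, and package upgrades may restore the original binary):

        #!/bin/sh
        # install as /usr/sbin/sendmail after moving the real binary aside:
        #   mv /usr/sbin/sendmail /usr/sbin/sendmail.real
        # logs the caller's uid, working directory and parent pid to syslog
        logger -t sendmail-wrap "uid=$(id -u) cwd=$(pwd) ppid=$PPID args=$*"
        exec /usr/sbin/sendmail.real "$@"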

    Read the article

  • iptables to block non-VPN traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program. We installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling. To achieve the goal of tunneling only a single program through the VPN, we start the program with sudo -u username -g groupname command and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets. Problem: if the VPN fails, there is a chance that the special to-be-tunneled program communicates over the normal eth0 interface. Desired solution: all marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that the above rules didn't work because the packets are first marked, then put into tun0 and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before, or if they are going to the gateway of my VPN provider. Does someone have any idea for a solution? Some config info:

        iptables -nL -v --line-numbers -t mangle

        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target   prot opt in  out source    destination
        1     591K   50M MARK     all  --  *   *   0.0.0.0/0 0.0.0.0/0  owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK all  --  *   *   0.0.0.0/0 0.0.0.0/0  owner GID match 1005 CONNMARK save

        iptables -nL -v --line-numbers -t nat

        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target   prot opt in  out  source    destination
        1       15  1052 SNAT     all  --  *   tun0 0.0.0.0/0 0.0.0.0/0  mark match 0x2a to:VPN_IP

        ip rule add from all fwmark 42 lookup 42

        ip route show table 42
        default via VPN_IP dev tun0
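
    A sketch of one way to express "marked traffic may only leave via tun0, except for packets headed to the VPN gateway itself" (hedged: untested against this exact setup; VPN_IP is the provider's endpoint as in the question, and this assumes the encapsulated OpenVPN packets themselves are unmarked, since the openvpn process does not run under groupname):

        # let marked packets reach the VPN endpoint (the tunnel transport itself)
        iptables -A OUTPUT -m mark --mark 42 -o eth0 -d VPN_IP -j ACCEPT
        # everything else that is marked must not leave through eth0
        iptables -A OUTPUT -m mark --mark 42 -o eth0 -j REJECT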

    Read the article

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>

    When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example:

        http://www.borngeek.com/nothere/
        GET /nothere/ HTTP/1.1
        Host: www.borngeek.com
        {...}
        HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

        {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was redirected to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc). I'd love to have them show up on the Apache side as 404s so that they get filtered out from the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules):

        ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance? Additional info: the OS is Debian 6.0.4, and the Apache version looks to be 2.2.22-3 (hosted on DreamHost). The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere).
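
    One hedged thing to rule out (an assumption, not from the original post): whether the log format records the original or the final status. Apache's %s logs the status of the initial request, while %>s logs the final status after internal redirects, which is where the 404 that WordPress sets would appear. The standard combined format already uses %>s:

        # typical definition; %>s is the piece that should capture the 404
        # WordPress sets after the internal redirect to index.php
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined

    If the host's custom log format uses %s instead, that alone would explain the 200s.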

    Read the article

  • Linux VLAN Bridge

    - by raspi
    I have a home network with VLANs: one for LAN, one for WLAN and one for the internet. I'd like to use bridging so that instead of configuring these same VLANs on every machine, each had its own VLAN ID and the bridges were LAN, WLAN and internet. I've tried it, but for some reason keep-alive/TTL seems to get broken, because SSH sessions etc. suddenly disconnect. We have this same setup working at my workplace for 4+ years with 100+ customers, but it's custom firewall/router hardware, so accessing it is impossible. I know that it runs Linux. So what are the Debian/Ubuntu default network settings doing wrong, or is it just a NIC driver/hardware problem? I've tried to mess around with TTL etc. settings without any luck. The bad stuff is happening in the bridge, because the current VLAN-only setup works fine. interfaces:

        auto lo
        iface lo inet loopback

        # The primary network interface
        allow-hotplug eth0
        allow-hotplug eth1
        iface eth0 inet static
        iface eth1 inet static

        auto vlan111
        auto vlan222
        auto vlan333
        auto vlan444
        auto br0
        auto br1
        auto br2

        # LAN
        iface vlan111 inet static
            vlan_raw_device eth0
        # WLAN
        iface vlan222 inet static
            vlan_raw_device eth0
        # ADSL Modem
        iface vlan333 inet static
            vlan_raw_device eth1
        # Internet
        iface vlan444 inet static
            vlan_raw_device eth0

        # LAN bridge
        iface br0 inet static
            address 192.168.0.1
            netmask 255.255.255.0
            bridge_ports eth0.111
            bridge_stp on

        # Internet bridge
        iface br1 inet static
            address x.x.x.x
            netmask x.x.x.x
            gateway x.x.x.x
            bridge_ports eth1.333 eth0.444
            bridge_stp on
            post-up iptables -t nat -A POSTROUTING -o br1 -j MASQUERADE
            pre-down iptables -t nat -D POSTROUTING -o br1 -j MASQUERADE

        # WLAN bridge
        iface br2 inet static
            address 192.168.1.1
            netmask 255.255.255.0
            bridge_ports eth0.222
            bridge_stp on

    Sysctl:

        net.ipv4.conf.default.forwarding=1
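
    A hedged thing to check (an assumption based on the symptoms, not something from the original post): on Debian/Ubuntu, bridged frames are passed through iptables by default via the bridge-netfilter hooks, which can interact badly with NAT rules and drop established sessions. Turning that off for a test is cheap:

        # see whether bridged traffic is currently run through iptables
        sysctl net.bridge.bridge-nf-call-iptables

        # disable it and see if the SSH disconnects stop
        sysctl -w net.bridge.bridge-nf-call-iptables=0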

    Read the article

  • Two servers, two domains, one IP. mod_proxy beginner

    - by Gutsav
    I run two virtual web servers (both running Apache2 on Debian). I have just one external IP, but two domains, and I want a domain going to each of the servers. I've understood that I need a reverse proxy, and I enabled both the mod_proxy and the mod_proxy_http modules on the "primary server". Do I need to enable anything on the "secondary server"? I also understood that I need to write some things in a virtual host file, but what? On the primary server, I have a virtual host file for one of the domains, and some for subdomains. I want domain1.tld to go to the primary server (port 80 is forwarded to it, so that works) and domain2.tld to go to the other server (internal IP 192.168.0.x). No ports need to be forwarded to it, right? So, what do I add, and in which virtual host file? Or a new one? Other questions suggest adding ProxyPass and ProxyPassReverse, but I'm lost anyway, and I just don't understand the Apache documentation. Thanks in advance
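
    A minimal sketch of the usual pattern (hedged: domain and internal IP are the placeholders from the question, and this assumes Apache 2.2-style name-based vhosts): a new vhost for domain2.tld on the primary server that forwards everything to the internal box. Nothing needs to be enabled on the secondary server; it just serves plain HTTP.

        # e.g. /etc/apache2/sites-available/domain2.tld
        # NameVirtualHost *:80 is often already set in ports.conf
        <VirtualHost *:80>
            ServerName domain2.tld
            ProxyPass        / http://192.168.0.x/
            ProxyPassReverse / http://192.168.0.x/
            # pass the original Host header through to the backend
            ProxyPreserveHost On
        </VirtualHost>

    Requests for domain1.tld keep matching the existing vhosts; only the domain2.tld name is proxied.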

    Read the article

  • Are there any critical reasons why one could not use Ubuntu as a server platform?

    - by Chiggsy
    We were using Lenny (well, Sid, really). We had to do that for development. I upgraded my server to Ubuntu 10.04 for a different project and noticed the packages. Wearing my developer hat, it's a no-brainer: everything we need is there. I'm the admin as well. We might need more than one "box" (running on a VPS for now). I do not want to build things that apt would put on for me. It's not hard, but I'm going to need that time. The Debian "box" has a bunch of stuff on it that'll have to be integrated properly, but I think we are going live in a distressingly short time. (Just found out.) I am aware of the reflexive answers to this question. What I would like to ask is: are there critical bugs or critical instabilities that would make one shy away from the Ubuntu server path? I could not find any bugs that would stop me, but perhaps there is something?

    Read the article

  • Hyper-V VSS writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script, the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next is just a robocopy and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, 2 things will happen on the guest machines: the Debian/Linux machine will go to a saved state and restore when done, all fine; the Windows guests will show "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies: they are missing files, users and updates that exist on the actual machines. I also tried putting the machines into a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.
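
    A hedged diagnostic (not from the original post): check the state of the Hyper-V writer on the host right after a run, since a writer stuck in a failed state can silently hand out stale snapshot data:

        vssadmin list writers

    Look for "Microsoft Hyper-V VSS Writer" reporting State: Stable with no last error before trusting the exposed copies.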

    Read the article

  • compile ntp without SSL

    - by Zulakis
    I need to deploy ntp to a very space-critical PXE imaging system. (Yes, each KB matters.) The footprint needs to be as small as possible, so I want to compile ntp without linking OpenSSL. According to the manual, this should be possible:

        If available, the OpenSSL library from http://www.openssl.org is used to
        support public key cryptography. The library must be built and installed
        prior to building NTP. The procedures for doing that are included in the
        OpenSSL documentation. The library is found during the normal NTP configure
        phase and the interface routines compiled automatically. Only the
        libcrypto.a library file and openssl header files are needed. If the
        library is not available or disabled, this step is not required.

    I already tried ./configure --without-openssl; however, this didn't help. This is my ldd output:

        ldd ntpd/ntpd
            linux-gate.so.1 => (0xb7706000)
            libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb76d5000)
            libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7582000)
            librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7578000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb741d000)
            /lib/ld-linux.so.2 (0xb7707000)
            libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7419000)
            libz.so.1 => /usr/lib/libz.so.1 (0xb7404000)
            libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb73eb000)

    The system I am compiling on is 32-bit Debian Lenny using OpenSSL 0.9.8g-15+lenny16. What is the correct configure option to compile ntp without OpenSSL?
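
    A hedged sketch (the exact flag name varies between ntp releases, so confirm against configure's own help output rather than trusting this):

        # list the crypto-related options this particular ntp tree accepts
        ./configure --help | grep -i -e crypto -e openssl

        # in several ntp 4.2.x releases the switch is --without-crypto
        ./configure --without-crypto
        make

        # verify nothing links libcrypto anymore
        ldd ntpd/ntpd | grep -i crypto || echo "no libcrypto linked"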

    Read the article

  • Bacula virtual backup job doesn't run, no output?

    - by Zoredache
    I am trying to get virtual backups working, but when I try to run a virtual backup job, it appears to get created but then never seems to actually run. I have a full and a couple of incremental backups. status director:

        JobId  Level   Files    Bytes    Status  Finished          Name
        ====================================================================
         1283  Full    10,565   1.963 G  OK      21-Dec-12 09:47   nms-Job
         1284  Incr       314   129.6 M  OK      21-Dec-12 09:49   nms-Job
         1285  Incr       230   147.2 M  OK      21-Dec-12 09:51   nms-Job
         1288  Incr       525   138.8 M  OK      21-Dec-12 11:25   nms-Job

    I attempt to start a job from bconsole like this:

        *run job=nms-Job level=VirtualFull
        Using Catalog "MySQL"
        Run Backup job
        JobName:  nms-Job
        Level:    VirtualFull
        Client:   nms-FileDaemon
        FileSet:  nms-FileSet
        Pool:     nms-pool (From Job resource)
        Storage:  File_d1 (From Pool resource)
        When:     2012-12-21 13:07:54
        Priority: 10
        OK to run? (yes/mod/no):
        Job queued. JobId=1291

    Then my new job just sits there, doing nothing. The JobStatus shows that the job was created, but it appears to never run. All the full and incremental backups terminate normally.

        *llist jobid=1291
        JobId: 1,291
        Job: nms-Job.2012-12-21_13.07.56_07
        Name: nms-Job
        PurgedFiles: 0
        Type: B
        Level: F
        ClientId: 4
        Name: nms-FileDaemon
        JobStatus: C
        SchedTime: 2012-12-21 13:07:54
        StartTime: 2012-12-21 13:07:56
        EndTime: 0000-00-00 00:00:00
        RealEndTime: 0000-00-00 00:00:00
        JobTDate: 1,356,124,076
        VolSessionId: 0
        VolSessionTime: 0
        JobFiles: 0
        JobErrors: 0
        JobMissingFiles: 0
        PoolId: 19
        PooLname: nms-pool
        PriorJobId: 0
        FileSetId: 11
        FileSet: nms-FileSet

    I am getting very frustrated that this isn't working, mostly because it isn't giving me any error logs or output at all. I submit the job, and as far as I can tell nothing happens. Is there some status or debugging level that I can set to get useful information about why this isn't working? What can I do to make this work? I was originally running Bacula 5.0.2 on Debian Squeeze; out of frustration, I upgraded to the 5.2.6 in the backports repository, hoping that a new version might give me better results.
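
    Two hedged things worth trying (assumptions based on general Bacula behaviour, not the original thread): a VirtualFull consolidation reads from one pool and writes to another, so the Pool resource generally needs a Next Pool directive; and bconsole can raise the director's debug level to show why a queued job never starts:

        # in bconsole: turn up director debugging, then read the messages
        *setdebug level=100 dir
        *messages

    and in bacula-dir.conf, something along these lines (nms-consolidated is a hypothetical pool name):

        Pool {
          Name = nms-pool
          # ... existing directives ...
          # VirtualFull jobs write the consolidated backup here
          Next Pool = nms-consolidated
        }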

    Read the article

  • ubuntu 10.04 + php + postfix

    - by mononym
    I have a server running: Ubuntu 10.04, PHP 5.3.5 (FPM) and Nginx. I have installed Postfix and set it to loopback-only (I only need to send). The problem is that it is not sending. If I issue (at the command line):

        echo "testing local delivery" | mail -s "test email to localhost" [email protected]

    I get the email no problem, but through PHP it does not arrive. When I send it via PHP, mail.log shows:

        Mar 28 10:15:04 host postfix/pickup[32102]: 435EF580D7: uid=0 from=<root>
        Mar 28 10:15:04 host postfix/cleanup[32229]: 435EF580D7: message-id=<20120328091504.435EF580D7@FQDN>
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: from=<root@FQDN>, size=1127, nrcpt=1 (queue active)
        Mar 28 10:15:04 host postfix/local[32230]: 435EF580D7: to=<root@FQDN>, orig_to=<root>, relay=local, delay=3.1, delays=3/0.01/0/0.09, dsn=2.0.0, status=sent (delivered to maildir)
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: removed

    Any help appreciated. My main.cf file:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = FQDN
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        #myorigin = $mydomain
        mydestination = FQDN, localhost.FQDN, , localhost
        relayhost = $mydomain
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        virtual_alias_maps = hash:/etc/postfix/virtual
        home_mailbox = mail/
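
    Note that the log above actually ends in "status=sent (delivered to maildir)", so that message was delivered locally rather than lost. A hedged quick test to see what PHP's mail() really produces (the recipient address is a placeholder):

        # send a test the same way PHP does, then watch the log
        php -r 'var_dump(mail("[email protected]", "php test", "body"));'
        tail -n 20 /var/log/mail.log

    If mail() returns true and the new queue ID again shows relay=local with a maildir delivery, the message is landing in the local user's mail/ directory (per home_mailbox) instead of being relayed out.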

    Read the article

  • [tcpdump] Proxy delegate refusing connection?

    - by simtris
    Hi guys, I'm a little disappointed! My aim was to build a VERY simple SMTP proxy under Debian to handle mail from a port (51234) and forward it to the standard port 25. I compiled and installed DeleGate, which can handle that easily. It's working very well like this:

        delegated SERVER="smtp://anotherSmtpServer:25" -P51234

    The strange thing is that it's working on my virtual test machine and on the dedicated server locally, but I can't manage to use it over the internet. I test it like this:

        telnet [mySrv] 51234

    Of course, no firewall, no hosts.deny, no inetd/xinetd; the delegated service is listening on the right port... 2 clues: the port answers over the internet with nmap as "51234/tcp open tcpwrapped"; and have a look at the following tcpdump:

        22:50:54.864398 IP [myIp].1699 > [mySrv].51234: S 2486749330:2486749330(0) win 65535
        22:50:54.864449 IP [mySrv].51234 > [myIp].1699: S 2486963525:2486963525(0) ack 2486749331 win 5840
        22:50:54.948169 IP [myIp].1699 > [mySrv].51234: . ack 1 win 64240
        22:50:54.965134 IP [mySrv].43554 > [myIp].auth: S 2485396968:2485396968(0) win 5840
        22:50:55.243128 IP [myIp] > [mySrv]: ICMP [myIp] tcp port auth unreachable, length 68
        22:50:55.249646 IP [mySrv].51234 > [myIp].1699: F 1:1(0) ack 1 win 46
        22:50:55.309853 IP [myIp].1699 > [mySrv].51234: . ack 2 win 64240
        22:50:55.310126 IP [myIp].1699 > [mySrv].51234: F 1:1(0) ack 2 win 64240
        22:50:55.310137 IP [mySrv].51234 > [myIp].1699: . ack 2 win 46

    The "auth" part seems suspect to me but doesn't ring a bell. I could certainly do with some help. Thx a lot!
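
    Reading the trace: the handshake to 51234 completes, then [mySrv] opens a connection back to [myIp] on port 113 (auth, i.e. ident); the client side refuses it with ICMP port unreachable, and immediately afterwards the proxy closes the session (the F from .51234). So the daemon appears to do an ident lookup on each client and drop the connection when that fails, which also matches nmap's "tcpwrapped" verdict. A hedged test from the client side (assumes you can run a listener there; nc flags vary between netcat variants):

        # on the client: answer on the ident port while testing,
        # then see whether the SMTP session survives
        nc -l -p 113 &
        telnet [mySrv] 51234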

    Read the article

  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got this folder which contains 15,000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, latest-gen i7, superfast SSD) it takes about 30 seconds (yes, for 7 megs!!!); the average transfer rate is 400 KBytes/second, which is so slow. I mean, my usual transfer rate is more like hundreds of MBytes per second!!! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 FS, which runs on the same SAN as all the Windows servers I've tested), it's nearly instantaneous!!! I'm pretty sure the size/number of the files may stress such a filesystem more than another, but such differences!? Why is that? Why is it so slow on the Windows boxes (more than 30 sec for 7 MB) and so fast on the Linux ones (one second or so)? (I mean, this was not a hardlink that I created; it was a true copy.) Is this normal behaviour or something unusual?
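
    Not an explanation of the underlying difference, but a hedged mitigation on Windows 7 / 2008 R2 (paths are placeholders): robocopy's multithreaded mode amortizes the per-file overhead that dominates with thousands of tiny files.

        robocopy C:\src\tinyimages C:\dst\tinyimages /E /MT:32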

    Read the article

  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I updated my mail server last night, from Debian Etch to Lenny. So far I've encountered a problem with my Postfix installation, mainly that I managed to break IMAP access somehow. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is:

        Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65]
        Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE
        Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login
        Feb 12 11:57:16 mail authdaemond: authmysql: trying this module
        Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1)
        Feb 12 11:57:16 mail authdaemond: password matches successfully
        Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>
        Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null>

    ...and then Thunderbird proceeds to complain that it can't log in / lost the connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've mainly been looking for a way to make the courier-imap logging more verbose, like can be seen for example here. Edit: Sorry about the mess; I found that I'd been funneling the log through grep imap, which naturally didn't display entries for authdaemond. The verbose logging configuration entry is found in /etc/courier/imapd under DEBUG_LOGIN=1 (set to 1 to enable verbose logging, set to 2 to also dump plaintext passwords to the logfile. Careful.)
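
    For reference, a hedged sketch of flipping that switch on (paths as given in the edit above; the init script name is Debian's):

        # /etc/courier/imapd: DEBUG_LOGIN=0 -> DEBUG_LOGIN=1
        sed -i 's/^DEBUG_LOGIN=.*/DEBUG_LOGIN=1/' /etc/courier/imapd
        /etc/init.d/courier-imap restart
        tail -f /var/log/mail.log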

    Read the article

  • Relaying to tech "support" that the computer is actually broken.

    - by Sion
    First, some background: I have a Dell Inspiron 15R M050; it is still under the Dell limited warranty and the Best Buy extended warranty. I am currently dual-booting Debian Squeeze and Windows 7; the only reason I go into Windows is to play video games, specifically Steam games. Issue: when I play my games in Windows, I can play for anywhere from 5 minutes to 2 hours before I suffer a hard lock. I cannot alt-tab, ctrl-alt-delete or ctrl-shift-escape; nothing works for 2-3 minutes. After this hard-lock period everything runs fine, and I can continue the game for probably another hour at least before I suffer another lock. Games: Borderlands, Splinter Cell: Chaos Theory, Starcraft 2, Garry's Mod. What I have tried: running the diagnostic suite in the Dell BIOS, restoring the OEM Windows recovery partition on the HD, fresh-installing Windows 7 Professional, updating the BIOS, and calling tech support and having them run a software hardware-diagnostics suite. The question: from the research that I have performed, I think it might be a lack of thermal paste on the CPU. Would I be able to go to Best Buy and have them run a hardware-level diagnostic, so they could then tell Dell that there is a hardware issue? Or would there be a different problem?

    Read the article

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant access to the logs for a PHP application? Let's assume that log files can be really large, like hundreds of megabytes. I have some ideas:

    - Write a shell script that would be run via sudo and tail the last 512 KB of the log into a separate file that can be read by the application. That's inefficient, because of forking a new process and having to read the data twice.
    - Add www-data to the adm group (which can read logs). That's insecure.
    - Start a PHP process via cron every minute to read the logs. That's not very good, because it doesn't allow real-time monitoring. Also, this script will run even when I'm not reading logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it).
    - Create a hardlink for all log files with lowered permissions. I guess that won't work, because logrotate could recreate the log files and they'd change inode numbers.
    - Start a separate nginx/Apache server under a privileged user that may read the logs.

    Maybe someone has a better solution?
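
    One more option, hedged (POSIX ACLs; assumes the acl package is installed and the filesystem supports ACLs): grant www-data read access to specific files without touching group membership.

        # give the web user read access to just the logs it should see
        setfacl -m u:www-data:r /var/log/syslog /var/log/apache2/error.log

        # verify the result
        getfacl /var/log/syslog

    One caveat: logrotate recreates the files, so the ACL has to be re-applied after rotation, e.g. from a postrotate hook in the relevant /etc/logrotate.d/ snippet.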

    Read the article

  • Can't Install Win2k8 On KVM - Classic 0x80070013 error

    - by javano
    I am trying to install Win2k8 Std as a KVM guest on Debian Squeeze. As you can see from these screenshots: no drives are detected (I have blanked out a 20GB image for testing) - screenshot1; I am using this driver CD - screenshot2; I have signed the Win7 driver (I assume this was the most appropriate one?) - screenshot3; I can now see an unpartitioned drive - screenshot4; but I can't create a new partition on it, getting the error code 0x80070013 - screenshot5. I have had this error code before, but only on a physical server. If I remember correctly, it was complaining because the disks were partitioned as GPT (it was a server that was being re-purposed), so repartitioning with an MS-DOS table fixed that. This is a blank disk image, though. What is wrong here, and how can I correct this? Thank you. UPDATE: I booted the VM with a GParted Live disk and formatted the volume with an MS-DOS partitioning scheme and a single 20GB NTFS filesystem. Now when I boot the Win2k8 CD and load my drivers, I get a different error. As you can see at the bottom of screenshot6: "Windows cannot be installed on this hard drive space. Windows must be installed to a partition formatted as NTFS". Clicking format produces the error (0x80004005) on the screen, so I think this is still a driver issue, because Windows can see the drive but not interact with it properly. Is that insane thinking?
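
    A hedged way to isolate the driver question: present the disk on an emulated IDE bus instead of virtio, which the Win2k8 installer supports out of the box (libvirt XML sketch; the image path and device names are assumptions for your setup):

        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/win2k8.img'/>
          <!-- ide instead of virtio: no driver CD needed during setup -->
          <target dev='hda' bus='ide'/>
        </disk>

    If the install succeeds on IDE, the virtio storage driver (not the image) is the problem, and the virtio driver can be added to the installed system afterwards.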

    Read the article

  • Is there any way to force my Linux box to always boot up with a self-assigned IP address?

    - by Jeremy Friesner
    This is perhaps an unusual request: I'm trying to get a Debian Linux box to always give itself a self-assigned IP address (i.e. 169.254.x.y) on boot. In particular, I want it to do that even when there is a DHCP server present on the LAN. That is, it should not request an IP address from the DHCP server. From what I can see in the "man interfaces" text, there is an option for "manual", and an option for "dhcp". Manual assignment won't do, since I need multiple boxes to work on the same LAN without requiring any manual configuration... and "dhcp" does what I want, but only if there is no DHCP server on the LAN. (A requirement is that the functionality of these boxes should not be affected by the presence or absence of a DHCP server). Is there a trick that I can use to get this behavior? EDIT: By "no manual configuration", I mean that I should be able to take this box (headless) to any LAN anywhere, plug in the Ethernet cable, and have it do its thing. I shouldn't have to ssh to the box and edit files to get it working each time it is moved to a different LAN.
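
    A hedged sketch of one way to do this on Debian (an assumption to verify: the avahi-autoipd package provides an ipv4ll method for ifupdown; check interfaces(5) and the package docs on your release):

        # /etc/network/interfaces
        auto eth0
        # link-local (169.254.0.0/16) address chosen at boot, no DHCP ever
        iface eth0 inet ipv4ll

    Since the address is negotiated by the link-local protocol itself, the same image could be plugged into any LAN, with or without a DHCP server, and behave identically.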

    Read the article

  • HTTP request hangs for exactly 150 seconds, then gives incomplete response. How do I find out why?

    - by Nathan
    I am hosting a WordPress blog and having a strange problem. When I connect to the server (http://71.65.199.125/ at the time of this writing), it displays the title correctly and half of a download bar, indicating it has received some of the page; then it hangs for exactly 150 seconds (I timed it twice), then it sends the rest of the page, but without the stylesheet. After that it hangs indefinitely, continuing to say "connecting..." without making any progress. If you have any clues as to what might be happening, or how I could print debug logs of PHP or something to see what it is looking for during that hang time, that would probably help. Recent changes I have made: switched WordPress themes (however, I did see it work once with the new theme); moved the server to another building, with an identical ISP and Linksys router forwarding setup; added a favicon.gif file to /var/www, but without linking to it from any of the WordPress pages. I have also had an unanticipated power interruption. System info: Ubuntu 9.04, Apache2, PHP 5, WordPress 2.9.2. Thank you
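
    A hedged first step for pinning down which phase eats the 150 seconds: curl can report per-phase timings, and fetching the stylesheet directly shows whether it is the slow resource (the theme directory name below is an assumption; substitute the real one):

        # where does the time go: DNS, connect, first byte, or body?
        curl -o /dev/null -s -w 'dns=%{time_namelookup} connect=%{time_connect} firstbyte=%{time_starttransfer} total=%{time_total}\n' http://71.65.199.125/

        # fetch the theme stylesheet directly
        curl -sI http://71.65.199.125/wp-content/themes/THEME/style.css | head -1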

    Read the article

  • Remote mouse pointer not visible in VNC

    - by aef
    I used VNC desktops as a kind of collaboration server, as a shared planning and pair-programming environment, for a long time. My latest iteration uses a KVM guest running Fedora 17 "Beefy Miracle", the Cinnamon desktop environment and an x11vnc server. The x11vnc server is automatically started with the desktop environment using the following command:

        x11vnc -localhost -many -shared -display :0 -bg

    My problem is that, depending on the VNC client, the mouse pointer of the remote system shown through VNC is not synchronized to my client. I really need this, so I can see what my partner is doing on the desktop. When using Vinagre 3.2.1 on Ubuntu Oneiric Ocelot (11.10) or Vinagre 2.3.0.3 on Debian Squeeze (6.0), if I don't have my local mouse pointer inside the VNC view, I cannot see the mouse pointer of the remote system, nor its movement. When using TightVNC on Windows 7, I can recognize a mouse pointer trace for very short amounts of time after moving the mouse, but it is not clearly visible. Using UltraVNC on Windows 7, the mouse pointer is clearly visible all the time. With GNOME 2 I never had any problems with remote pointer synchronization, using exactly the same clients. I suspect this could have something to do with Cinnamon's dependency on 3D acceleration. On the other hand, starting Cinnamon's fallback environment, Cinnamon 2D, doesn't change anything.
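
    A hedged workaround at the server end (flag names as I recall them from x11vnc's documentation; confirm with x11vnc -help): have x11vnc draw the cursor into the framebuffer itself instead of relying on the client's cursor-shape support:

        x11vnc -localhost -many -shared -display :0 -bg -nocursorshape -cursor most

    With -nocursorshape the pointer becomes part of the transmitted image, so every client shows it, at the cost of a little extra bandwidth around the cursor.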

    Read the article
