Search Results


  • Server freezes at XX:25

    - by Karevan
    We've ordered a 50 euro/month server from hetzner.de running Debian. The problem is that the server freezes at a random time of day and nothing appears in the log; only a hardware reboot helps. Part of the log file around one of the freezes:

        Aug 17 22:38:26 Debian-60-squeeze-64-minimal pure-ftpd: ([email protected]) [INFO] New connection from 95.211.120.220
        Aug 17 22:38:26 Debian-60-squeeze-64-minimal pure-ftpd: ([email protected]) [INFO] Logout.
        Aug 17 22:39:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22828]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
        Aug 17 23:09:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22835]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
        Aug 17 23:17:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22842]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 17 23:39:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22847]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: imklog 4.6.4, log source = /proc/kmsg started.
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="1229" x-info="http://www.rsyslog.com"] (re)start
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Initializing cgroup subsys cpuset
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Initializing cgroup subsys cpu
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Linux version 2.6.32-5-amd64 (Debian 2.6.32-45) ([email protected]) (gcc version 4.3.5 (Debian 4.3.5$
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro

    As you can see, the log only shows the subsequent boot; nothing is recorded at the moment of the freeze. Sadly, there is no way to look at the server's console right when it freezes, and the datacenter support staff are not very willing to help with that. The server was installed on 30 July; the freezes happened on 6 August at 0:25, 18 August at 2:27, 21 August at 1:25, and 26 August at 23:26. We decided that freezing around ??:25 wasn't a hardware fault and reinstalled the OS: on 31 August our admin backed up all files, reinstalled Debian, and restored the backup. But on 7 September the server went down again, at 5:05. We thought it might be related to "Anyone else experiencing high rates of Linux server crashes during a leap second day?" and turned ntp off. The server then went down twice more, on 21 September at 17:29 and on 24 September at 20:27. Every Linux admin I asked says the OS configuration is fine and it must be hardware, but nobody can explain why it always freezes at XX:25-30. Does anyone know anything related to this?
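
    Since nothing reaches the on-disk log before the freeze, one common next step is to ship kernel messages to a second machine over the network so the last messages survive the crash. Below is a minimal sketch using the stock netconsole module; the IP addresses, interface name, port numbers and MAC address are placeholders you would replace with a machine on the same network, so treat it as an assumption to adapt, not a recipe.

        # On a second machine, listen for the UDP messages (flag spelling differs
        # between netcat flavours: "nc -u -l 6666" or "nc -u -l -p 6666")
        nc -u -l -p 6666 | tee netconsole.log

        # On the freezing server, load netconsole pointed at that listener.
        # Format: netconsole=[src-port]@[src-ip]/[dev],[dst-port]@[dst-ip]/[dst-mac]
        modprobe netconsole netconsole=6665@192.0.2.10/eth0,6666@192.0.2.20/00:11:22:33:44:55

        # Raise the console log level so more kernel messages are forwarded
        dmesg -n 8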

    Read the article

  • Suggestions for sharing and using data between Ubuntu and Windows 7 dual boot

    - by wizjany
    Note: TL;DR, scroll to bottom for summary. I recently set up my computer for a dual boot between Ubuntu 9.10 and Windows 7. My current drive setup is as follows. | A1 | A2 | | B1 | B2 | B3 | A1: 100 mb, windows 7 "System Reserved" boot partition A2: 230 gb, data section, this one needs to be shared between the operating systems B1: 125 gb, windows 7 OS B2: 123 gb, ubuntu OS B3: 2gb, linux swap space Pretty much i want to have my documents, music, pictures, videos, etc accessible from both operating systems. My first attempt involved making the data (a2) partition NTFS, and moving my home folder from ubuntu to the data partition. However, as I read NTFS does not work nice with permission, and it messed up my home folder. My next idea is one of the following: 1) format the data partition to ext2/3/4 and move my home folder from linux there, and get a driver to read ext partitions in windows 7. The problem with that is that most of the ext drivers/software are not compatible with windows 7 or do not integrate with windows explorer (I really don't want to open a separate software window just to access my data, plus it's probably not compatible with other software.) http://www.fs-driver.org/ looks promising, but I'm not sure how it works with ext4 and windows 7 (not officially supported, when trying in vista compatibility mode, it tells me I need to format the ext drive to use it). My next idea, 2) keep the home folder in ubuntu where it is, but create symlinks for the Documents, Music, etc folders to an NTFS formatted Data (A2) partition, and add those locations to the windows 7 libraries. I'm not totally sure how the permissions would work out, but it should be fine since it's only the documents, music, etc and not the important config files in the rest of /home/user/. Correct me if I'm wrong. Currently, symlinks is my best idea, although i'm not sure how it will work. Any suggestions, additions to my ideas, links, pointers, whatever would be greatly appreciated. Even if it means i should reformat both my drives and repartition (2 250gb drives if you want to suggest a setup for that), I won't be too opposed if that's the best suggestion (I've gone through the format/install/format/reinstall process 5 times over the past 3 days, once more won't hurt me). TL;DR, summary: I have two hard drives. One is partitioned for Ubuntu and Windows 7, the second one I want accessible to both operating systems to store documents, music, pictures, videos, etc. Suggestions on how to set up the data drive please =) P.S. bonus if I can get an apache server document root folder working between the two OS's as well (permissions could become very complicated, so don't worry too much about that) P.P.S. Related question, but data viewing is one way: http://superuser.com/questions/84586/partition-scheme-and-size-for-dual-boot-windows-7-and-ubuntu-9-10-with-separate-p
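
    For the symlink approach (idea 2), the usual pattern is to mount the NTFS data partition read/write from Ubuntu with ntfs-3g and replace the home subfolders with links into it. A rough sketch follows; the device name /dev/sda2 and the mount point /data are assumptions to adjust to the actual layout.

        # Assumed /etc/fstab entry so the NTFS data partition mounts at boot:
        # /dev/sda2  /data  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0

        sudo mkdir -p /data
        sudo mount /data

        # Replace the home subfolders with symlinks into the shared partition
        for d in Documents Music Pictures Videos; do
            mkdir -p "/data/$d"
            mv "$HOME/$d" "$HOME/$d.bak" 2>/dev/null
            ln -s "/data/$d" "$HOME/$d"
        done

    Because only media folders are linked, the permission problems NTFS causes for dotfiles and config files in the rest of /home/user are avoided.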

    Read the article

  • how to notify a program of another program? dll? directory? path?

    - by Brady Trainor
    I am trying to experiment with GNUS email in Emacs, in Windows (EDIT: x64 bit). I've got it to work in Ubuntu, but struggling with it in Windows. From http://www.gnu.org/software/emacs/manual/html_mono/emacs-gnutls.html#Help-For-Users I read in second paragraph: This is a little bit trickier on the W32 (Windows) platform, but if you have the GnuTLS DLLs (available from http://sourceforge.net/projects/ezwinports/files/ thanks to Eli Zaretskii) in the same directory as Emacs, you should be OK. I have downloaded and unzipped the gnutls-3.0.9-w32-bin package, but am not sure what to do with it. I have tried putting it in Program Files (x86), which is "the same directory as Emacs". I have tried putting it in the emacs-24.3 folder. I consider merging all the folders in between the two, but am hesitant as that seems a difficult troubleshoot attempt compared to my knowledge on these matters. I think Emacs needs to somehow see the gnutls binaries and/or dlls. My knowledge is limited on this. I've also struggled to understand PATHs for sometime now, and am not sure if that approach is relevant here. FYI, the emacs directory contains folders labeled bin, etc, info, leim, lisp and site-lisp. The gnutls directory contains folder labeled bin, include, lib and share. Hmm, now I'm finding lots of links on adding paths. Still, I'm skeptical that I would only add gnutls.exe path, as it seems the dlls are needed. Some additional data for Ramhound's first comment I have been attempting the (require 'gnutls) route. This seems to be the most relevant parts in the log: Opening connection to imap.gmail.com via tls... gnutls.c: [1] (Emacs) GnuTLS library not found Opening TLS connection to `imap.gmail.com'... Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com'...failed Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com --protocols ssl3'...failed Opening TLS connection with `openssl s_client -connect imap.gmail.com:993 -no_ssl2 -ign_eof'...failed Opening TLS connection to `imap.gmail.com'...failed I am not sure what "in stallion" means. Emacs seems to have installed itself in program files (x86), so I assume it is 32 bit. I can try and figure out how to double check, but did not realize I would get such fast response time, and am headed out right now. I will try merging the files later tonight?

    Read the article

  • Subversion - Retrieval of mergeinfo unsupported

    - by jamesthomson
    Hi, I've recently updated my Subversion package on Debian Etch to 1.5.1 via a back-port. I've gone through what I believe are all the appropriate steps but cannot for the life of me get past the following error message when I try to merge: Retrieval of mergeinfo unsupported by '.' The '.' isn't important as I get the same message whether I'm SSH'd on to the server or using TortoiseSVN through Windows. I'll take you through what I did to upgrade and test step by step: Update of Subversion Added the following line to /etc/apt/sources.list: deb http://www.backports.org/debian etch-backports main contrib non-free and then ran apt-get -s -t etch-backports install subversion Checked the version of the subversion installation Done this by running svnadmin --version and got the following output: svnadmin, version 1.5.1 (r32289) compiled Dec 11 2008, 18:10:14 Checked the client too using svn --version and got the following svn, version 1.5.1 (r32289) compiled Dec 11 2008, 18:10:14 Ok, so all looking good so far. Now I just need to upgrade the repository. After plenty of research, the most foolproof way to do this seemed to be to dump the repository and then load it again. So here's what I did: svnadmin dump /var/svn/repo > repo.dump rm -aR /var/svn/repo/* svnadmin create /var/svn/repo svnadmin load < repo.dump All that seemed to work fine. I then checked to see if the repository had been upgraded by looking at the contents of /var/svn/repo/db/format which gave: 3 layout sharded 1000 Again this indicated a Subversion 1.5 repository so all looking good. Now I try and do a merge using the Subversion client in Debian: svn mergeinfo https://mysvn/repo . and I get the following error: svn: Retrieval of mergeinfo unsupported by '.' I get the same error message whether I'm using the Debian shell on the same server or if I'm connecting via TortoiseSVN and a Windows box. If I browse to the repository using my web browser, the version number at the bottom reads: Powered by Subversion version 1.4.2 (r22196). In case it helps, the created date on mod_dav_svn.so is 2009-08-06 18:29 I just cannot figure out why I'm getting this message so any help pointing me in the right direction would be greatly appreciated. All the forum and mailing list posts that I found relating to this error were solved by doing an svnadmin upgrade, though I have actually tried that and still no joy. Thanks in advance, James.
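
    The "Powered by Subversion version 1.4.2" footer suggests the command-line client and the repository were upgraded, but the mod_dav_svn module Apache serves is still the Etch 1.4.2 build, which predates mergeinfo support. A hedged sketch of checking that and pulling the matching module from the same backport (the module path and package name are the usual Debian ones, so verify them on your system):

        # Check which Subversion libraries the Apache module is linked against
        ldd /usr/lib/apache2/modules/mod_dav_svn.so | grep libsvn

        # Install the backported module so it matches the 1.5.1 client, then restart Apache
        apt-get -t etch-backports install libapache2-svn
        /etc/init.d/apache2 restart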

    Read the article

  • Installing Cygwin C and C++ compilers for NetBeans IDE 7.2

    - by user1294663
    I am very new to Cygwin, C, C++ and NetBeans IDE 7.2. My PC runs Microsoft Windows 7. I have read the documentation on how to install the Cygwin C/C++ compilers: http://netbeans.org/community/releases/72/cpp-setup-instructions.html#compilers I ran the Cygwin setup.exe that ships the most recent version of the Cygwin DLL, 1.7.16-1, but I was not sure exactly which packages to install when the installer prompted for a selection of packages to download and install. I want to install the Cygwin C and C++ compilers so that I can create C and C++ projects in NetBeans 7.2, so I selected the packages whose names contain gcc, g++, gdb and make and started the installation. The installation took a long time, so I stopped it after about 45 minutes or so. I browsed the installation folder and saw that some of the packages I selected were installed, and I noticed that some packages come as archives with a tar.gz extension. I added the folder path to the PATH variable in the Windows 7 environment variables window. I think this command works: C: cygcheck -c cygwin, but the rest do not work, I think: C: gcc --version, C: g++ --version, C: make --version, C: gdb --version. When I try to create a C/C++ project in NetBeans IDE 7.2, the IDE pops up a dialog saying that no C/C++ compilers were found. Have I made some mistake here, like installing the wrong packages or something else? Are there packages shown in the Cygwin setup.exe installer with exact names and versions that are compatible with NetBeans IDE 7.2? I am not too sure, because I don't think I really saw some of the required packages with exact names and versions. My question is: which exact packages do I install using the Cygwin setup.exe installer so that I can create C and C++ projects using NetBeans IDE 7.2, and what other steps do I have to take to ensure a complete, successful installation? Do I have to wait for all the selected packages to be installed? I would like to know the exact names and versions of the required packages (names and versions as displayed in the Cygwin setup.exe installer when prompted) needed for C and C++ programming with NetBeans IDE 7.2.
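
    As a rough guide, what NetBeans' C/C++ support looks for is simply the GNU compilers plus gdb and make, which in the Cygwin of that era were packaged roughly as gcc4-core, gcc4-g++, gdb and make (package names shift between Cygwin releases, so treat these as assumptions and confirm them in the installer's search box). Cygwin's setup.exe can also add them unattended with its command-line switches, sketched below; run it from the folder containing setup.exe.

        # -q = unattended mode, -P = comma-separated list of packages to add (assumed names)
        ./setup.exe -q -P gcc4-core,gcc4-g++,gdb,make

        # Afterwards, with C:\cygwin\bin on PATH, each tool should answer:
        gcc --version
        g++ --version
        make --version
        gdb --version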

    Read the article

  • Log - Server kernel: INFO: task httpd:000000 blocked for more than 120 seconds

    - by valter
    Almost every day my server crashes due to high server load, and even restarting Apache or MySQL can't solve the problem. I need to reboot the server, or it crashes again because of the high load. The system log records something like this when it crashes:

        Aug 11 18:33:53 server kernel: INFO: task httpd:20008 blocked for more than 120 seconds.
        Aug 11 18:33:53 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Aug 11 18:33:53 server kernel: httpd D ffffffff801538ac 0 20008 5816 20066 19809 (NOTLB)
        Aug 11 18:33:53 server kernel: ffff81025a299dc8 0000000000000082 ffff81033b4c0740 ffffffff80009a14
        Aug 11 18:33:53 server kernel: ffff8101063f8d80 0000000000000009 ffff8100b758f7e0 ffff8101c57187e0
        Aug 11 18:33:53 server kernel: 00009436d4100b6c 000000000001d50f ffff8100b758f9c8 000000083b531588
        Aug 11 18:33:53 server kernel: Call Trace:
        Aug 11 18:33:53 server kernel: [<ffffffff80009a14>] __link_path_walk+0x173/0xfb9
        Aug 11 18:33:53 server kernel: [<ffffffff8002cc16>] mntput_no_expire+0x19/0x89
        Aug 11 18:33:53 server kernel: [<ffffffff80063c4f>] __mutex_lock_slowpath+0x60/0x9b
        Aug 11 18:33:53 server kernel: [<ffffffff80023908>] __path_lookup_intent_open+0x56/0x97
        Aug 11 18:33:53 server kernel: [<ffffffff80063c99>] .text.lock.mutex+0xf/0x14
        Aug 11 18:33:53 server kernel: [<ffffffff8001b21f>] open_namei+0xea/0x712
        Aug 11 18:33:54 server kernel: [<ffffffff8002768a>] do_filp_open+0x1c/0x38
        Aug 11 18:33:54 server kernel: Firewall: *UDP_IN Blocked* IN=eth1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:30:48:9e:6e:99:08:00 SRC=208.43.135.158 DST=255.255.255.255 LEN=151 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=38354 DPT=6112 LEN=131
        Aug 11 18:33:54 server kernel: [<ffffffff8001a061>] do_sys_open+0x44/0xbe
        Aug 11 18:33:54 server kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0

    I googled a lot trying to find a solution, but it looks like the fix is to update the kernel or the disk driver, things that I don't know how to do. At this URL, http://bugs.centos.org/view.php?id=4515, a lot of people report similar problems, except that theirs are not related to httpd like mine. According to one member, one solution would be to add "elevator=noop" to /etc/grub.conf, as in this example:

        title CentOS (2.6.18-238.12.1.el5xen)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-238.12.1.el5xen ro root=/dev/VolGroup00/LogVol00 elevator=noop
            initrd /initrd-2.6.18-238.12.1.el5xen.img

    Would this really solve the problem? My disks are working in RAID. Can that cause a problem for my server? Is there any other solution?

    Read the article

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has a Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller. uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux The server is a file server running Nginx with the stub status module enabled, so I can see the current amount of connections. The problem present itself when I have a high number of simultaneous connections in a writing state. Usually around 350, at this very moment it's at 590 and the server is almost unusable and stuck at 230mbit/s. If I run stop and hit 1 to see CPU core usages I have all 4 cores with around 99% io wait, if I run iotop the nginx workers are the only processes producing any read load, currently at around 25MB/s. I have each of the workers bound to their own core. Initially I figured it was just the disks being bugged. But I've run fscheck and smartmontools checks and found no errors. I also ran an iozone test which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa Additionally, when the amount of connections are low I have no problem getting a good speed. If I wget over the local network it easily hits 60MB/sec. Right now I just tried putting a file in /dev/shm, then I symlinked a file from the public dir to it and used wget over the local network and only got 50KB/s. Also, if I try to cp /dev/shm/test /root/test it quickly copies around 740MB and then slows down HEAVILY. Again with iotop reporting 99% iowait. I'm not really sure how to go about figuring out what the problems are. It could be a natural disk limitation but then the file from /dev/shm ought to transfer so it seems there's a network limit, but that's fine when there's not many connections. Perhaps it's a TCP stack problem but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required then let me know and I'll try to get it. Thanks.
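
    When every core sits in iowait and throughput only collapses under many concurrent readers, the usual first suspects are seek-bound disks and too-small readahead on the array device. A short sketch of the first measurements that would separate those causes; the device name /dev/sda is an assumption, substitute the actual array device.

        # Per-device latency and utilisation: watch "await" and "%util" while the problem occurs
        iostat -x 1

        # Which processes are actually issuing the reads and writes
        iotop -o

        # Current readahead on the array device (512-byte sectors), then try a larger value
        blockdev --getra /dev/sda
        blockdev --setra 4096 /dev/sda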

    Read the article

  • Building NanoBSD inside a jail

    - by ptomli
    I'm trying to setup a jail to enable building a NanoBSD image. It's actually a jail on top of a NanoBSD install. The problem I have is that I'm unable to mount the md device in order to do the 'build image' part. Is it simply not possible to mount an md device inside a jail, or is there some other knob I need to twiddle? On the host /etc/rc.conf.local jail_enable="YES" jail_mount_enable="YES" jail_list="build" jail_set_hostname_allow="NO" jail_build_hostname="build.vm" jail_build_ip="192.168.0.100" jail_build_rootdir="/mnt/zpool0/jails/build/home" jail_build_devfs_enable="YES" jail_build_devfs_ruleset="devfsrules_jail_build" /etc/devfs.rules [devfsrules_jail_build=5] # nothing Inside the jail [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# sysctl security.jail security.jail.param.cpuset.id: 0 security.jail.param.host.hostid: 0 security.jail.param.host.hostuuid: 64 security.jail.param.host.domainname: 256 security.jail.param.host.hostname: 256 security.jail.param.children.max: 0 security.jail.param.children.cur: 0 security.jail.param.enforce_statfs: 0 security.jail.param.securelevel: 0 security.jail.param.path: 1024 security.jail.param.name: 256 security.jail.param.parent: 0 security.jail.param.jid: 0 security.jail.enforce_statfs: 1 security.jail.mount_allowed: 1 security.jail.chflags_allowed: 1 security.jail.allow_raw_sockets: 0 security.jail.sysvipc_allowed: 0 security.jail.socket_unixiproute_only: 1 security.jail.set_hostname_allowed: 0 security.jail.jail_max_af_ips: 255 security.jail.jailed: 1 [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# mdconfig -l md2 md0 md1 md0 and md1 are the ramdisks of the host. bsdlabel looks sensible [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# bsdlabel /dev/md2s1 # /dev/md2s1: 8 partitions: # size offset fstype [fsize bsize bps/cpg] a: 1012016 16 4.2BSD 0 0 0 c: 1012032 0 unused 0 0 # "raw" part, don't edit newfs runs ok [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# newfs -U /dev/md2s1a /dev/md2s1a: 494.1MB (1012016 sectors) block size 16384, fragment size 2048 using 4 cylinder groups of 123.55MB, 7907 blks, 15872 inodes. with soft updates super-block backups (for fsck -b #) at: 160, 253184, 506208, 759232 mount fails [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# mount /dev/md2s1a _.mnt/ mount: /dev/md2s1a : Operation not permitted UPDATE: One of my colleagues pointed out There are some file systems types that can't be securely mounted within a jail no matter what, like UFS, MSDOFS, EXTFS, XFS, REISERFS, NTFS, etc. because the user mounting it has access to raw storage and can corrupt it in a way that it will panic entire system. From http://www.mail-archive.com/[email protected]/msg160389.html So it seems that the standard nanobsd.sh won't run inside a jail while it uses the md device to build the image. One potential solution I'll try is to chroot from the host into the build jail, rather than jexec a shell.
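
    As the quoted mailing-list post explains, a jail cannot safely mount UFS on an md device, so the md-based stage of nanobsd.sh will not work inside the jail regardless of devfs rules or jail sysctls. The workaround mentioned at the end, entering the build environment as a chroot from the host instead of via jexec, could look roughly like the sketch below; the jail root path comes from the rc.conf above, and the nanobsd.sh path and config file are assumptions to adapt to your source tree.

        # On the host, outside any jail: enter the build environment as a chroot
        chroot /mnt/zpool0/jails/build/home /bin/sh

        # Inside the chroot, run the NanoBSD build as usual; md devices are now
        # created and mounted with the host's full privileges
        cd /usr/src/tools/tools/nanobsd
        sh nanobsd.sh -c /path/to/PROLIANT_MICROSERVER.conf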

    Read the article

  • Do glue records in non-circular dns-lookups speed up domain resolution or not?

    - by Joe Hopfgartner
    Doing a lookup for my domain on http://www.intodns.com/ I noticed these two messages. In the Parent section: "DNS Parent sent Glue: The parent nameserver g.gtld-servers.net is not sending out GLUE for every nameserver listed, meaning it is sending out your nameservers' host names without sending the A records of those nameservers. It's ok but you have to know that this will require an extra A lookup that can delay a little the connections to your site. This happens a lot if you have nameservers on a different TLD (domain.com for example with nameserver ns.domain.org.)" And in the NS section: "Glue for NS records INFO: GLUE was not sent when I asked your nameservers for your NS records. This is ok but you should know that in this case an extra A record lookup is required in order to get the IPs of your NS records. The nameservers without glue are: 109.230.225.96 84.201.40.52 You can fix this for example by adding A records to your nameservers for the zones listed above." I perfectly understand that the primary objective of glue records is to resolve circular dependencies. The classic use case: my domain is example.com and I want to use the nameserver ns1.example.com. This can never work on its own, because I cannot know the IP of ns1.example.com without fetching example.com, and to do that I need to ask ns1.example.com. To resolve this deadlock I add a glue record for ns1.example.com containing the IP address of the nameserver, so the lookup can complete. So this problem does not occur if the nameservers are in a different TLD than the domain I want to look up. However, to fetch the zone information from the nameservers I still need to know their IP addresses, right? And in order to know that, I need to look up the zone the nameservers are in from their respective nameservers, right? (Or rather, my ISP's resolver needs to do that in the background.) So that is an extra lookup that takes time. If I have glue records, the resolver knows the IP addresses right away without having to look them up, so this should speed up the resolution of my domain, shouldn't it? However my DNS zone provider (tecserver.at) replied that this would make no sense because: "we are not running ns1.ourdomain.com and ns2.ourdomain.com as authoritative NS for ourdomain.com. This would be the only sense for glue records. Tecserver has a glue record because the NS for tecserver.at are ns1.tecserver.at and ns2.tecserver.at. Therefore a glue record is needed for resolution."
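
    You can see exactly what the parent servers hand out, with or without glue, by querying a gTLD server directly and looking at the ADDITIONAL section of the referral. A small sketch (example.com is a placeholder for the real domain):

        # Ask a .com gTLD server for the delegation; glue, if present, shows up as
        # A/AAAA records in the ADDITIONAL section of the referral
        dig +norecurse @g.gtld-servers.net example.com NS

        # Compare total resolution time against a cold resolver cache
        dig example.com A | grep "Query time"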

    Read the article

  • Email server can send internal, but messages never arrive at external recipients

    - by Chase Florell
    I'm running MailEnable on my server, and have been for many years. Recently we had an attack on our server, and I was able to close the hole. Since then, our mail server doesn't seem to be sending mail out. If I send an email from myself to another account hosted on the server, the email arrives as expected. If I send an email from my gmail account to my business account, the email also arrives as expected The problem comes when I send from my business account to an external domain I tried the following Gmail.com Hotmail.com Shaw.ca When I send to any of the above The message leaves my client as expected, The logs appear to accept and forward on the message The SMTP outbound que is empty The message never arrives I have checked our domain with mxtoolbox.com senderbase.org And neither of them are reporting any problems with our domain. I have ensured that port 25 is open (along with the other standard ports) Here is one of the log entries from the SMTP connector 11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 220 mx1.example.com ESMTP MailEnable Service, Version: 6.81--6.81 ready at 11/05/13 12:10:00 0 0 11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 EHLO EHLO ASSP.nospam 250-mx1.example.com [127.0.0.1], this server offers 6 extensions 159 18 11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 EHLO EHLO ASSP.nospam 250-mx1.example.com [127.0.0.1], this server offers 6 extensions 159 18 11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH AUTH LOGIN 334 VXNlcm5hbWU6 18 12 11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH {blank} 334 UGFzc3dvcmQ6 18 26 [email protected] 11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH Y29sb25lbGZhY2U= 235 Authenticated 19 18 [email protected] 11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 MAIL MAIL FROM:<[email protected]> 250 Requested mail action okay, completed 43 31 [email protected] 11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 RCPT RCPT TO:<[email protected]> 250 Requested mail action okay, completed 43 35 [email protected] 11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 DATA DATA 354 Start mail input; end with <CRLF>.<CRLF> 46 6 [email protected] Here are the headers of the sent message X-Assp-Version: 1.7.5.7(1.0.07) on ASSP.nospam X-Assp-ID: ASSP.nospam 78601-04523 X-Assp-Intended-For: [email protected] X-Assp-Envelope-From: [email protected] Received: from [10.10.1.101] ([68.147.245.149] helo=[10.10.1.101]) with IPv4:587 by ASSP.nospam; 5 Nov 2013 12:10:00 -0700 From: Chase Florell <[email protected]> Content-Type: text/plain Content-Transfer-Encoding: 7bit Subject: Test Message Message-Id: <[email protected]> Date: Tue, 5 Nov 2013 12:10:18 -0700 To: Chase Florell <[email protected]> Mime-Version: 1.0 (Mac OS X Mail 7.0 \(1816\)) X-Mailer: Apple Mail (2.1816) . Where else can I check to see if there is something broken? What could cause a problem like this whereby the message appears to send, but never arrives, and never returns a bounce?
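
    Since local and inbound mail work but external messages neither arrive nor bounce, one early check is whether the server can still open outbound connections on port 25 at all; after an abuse incident, some providers or firewall rules start blocking outbound SMTP. A hedged sketch using Gmail's published MX as a test target:

        # Find gmail's MX hosts, then try to open an SMTP session to one of them
        nslookup -type=mx gmail.com
        telnet gmail-smtp-in.l.google.com 25

        # A "220 ... ESMTP" banner means outbound 25 is open; a timeout or
        # "connection refused" points at outbound filtering rather than MailEnable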

    Read the article

  • How Do I Properly Run OfflineIMAP in a Crontab

    - by alharaka
    Installed Fedora. # cat /etc/redhat_release | awk ' { print F "> " $0; print ""; }' Fedora release 14 (Laughlin) Installed offlineimap from yum, cuz I'm lazy these days. # yum info offlineimap | awk ' { print F "> " $0; print ""; }' Loaded plugins: langpacks, presto, refresh-packagekit Adding en_US to language list Installed Packages Name : offlineimap Arch : noarch Version : 6.2.0 Release : 2.fc14 Size : 611 k Repo : installed From repo : fedora Summary : Powerful IMAP/Maildir synchronization and reader support URL : http://software.complete.org/offlineimap/ License : GPLv2+ Description : OfflineIMAP is a tool to simplify your e-mail reading. With : OfflineIMAP, you can read the same mailbox from multiple : computers. You get a current copy of your messages on each : computer, and changes you make one place will be visible on all : other systems. For instance, you can delete a message on your home : computer, and it will appear deleted on your work computer as : well. OfflineIMAP is also useful if you want to use a mail reader : that does not have IMAP support, has poor IMAP support, or does : not provide disconnected operation. And, lo and behold, every time I run offlineimap and try to redirect output in a crontab, it does not work. Below is my .offlineimaprc. [general] ui = TTY.TTYUI accounts = Personal, Work maxsyncaccounts = 3 [Account Personal] localrepository = Local.Personal remoterepository = Remote.Personal [Account Work] localrepository = Local.Work remoterepository = Remote.Work [Repository Local.Personal] type = Maildir localfolders = ~/mail/gmail [Repository Local.Work] type = Maildir localfolders = ~/mail/companymail [Repository Remote.Personal] type = IMAP remotehost = imap.gmail.com remoteuser = [email protected] remotepass = password ssl = yes maxconnections = 4 # Otherwise "deleting" a message will just remove any labels and # retain the message in the All Mail folder. realdelete = no [Repository Remote.Work] type = IMAP remotehost = server.company.tld remoteuser = username remotepass = password ssl = yes maxconnections = 4 I have tried TTY.TTYUI, NonInteractive.Quiet and NonInteractive.Basic with different variations. With or without redirection, the crontab entries I try cause problems. $ crontab -l | awk ' { print F "> " $0; print ""; }' */5 * * * * offlineimap >> ~/mail/logs/offlineimap.log 2>&1 */5 * * * * offlineimap I always get the same damn error ERROR: No UIs were found usable!. What am I doing wrong!?
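
    The TTY.TTYUI interface needs a terminal, and cron does not provide one, which is what produces "ERROR: No UIs were found usable!". One workaround is to override the UI on the command line just for the cron run instead of editing .offlineimaprc; the exact UI name varies between OfflineIMAP versions, so the one below is an assumption to check against offlineimap --help.

        # m h dom mon dow command
        */5 * * * * offlineimap -o -u Noninteractive.Basic >> $HOME/mail/logs/offlineimap.log 2>&1

    Here -o runs a single sync pass (so overlapping cron invocations don't pile up) and -u selects a UI that never touches the terminal.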

    Read the article

  • Creating a USB stick for installing centos 6.x using DVD1 and DVD2 iso files

    - by user250563
    First, we create two partitions on a USB stick of, say, 16GB: the first partition is only about 1GB and the second is the rest of the available space. After we write the changes with "w", the USB stick has two partitions, so we now have sdb1 (1GB) and sdb2 (more than 14GB). Next we need to put filesystems on these partitions. Some say I should run these commands: mkfs.vfat -F 32 /dev/sdb1 and mkfs.ext3 /dev/sdb2, but some web pages recommend: mkfs.vfat -n BOOT /dev/sdb1 and mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2. Which is it? The DVDs are called CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso, so we make a directory, mkdir -p /mnt/dvd1, and mount the first one: mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1. I suppose we don't make a directory for DVD2 and don't have to mount it? At this point I do not know what should be done, but I think this step might be next: we make the USB stick bootable by finding the file named mbr.bin and writing it to the device via these commands: dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb and parted /dev/sdb set 1 boot on. In other words we dd it to "sdb", not "sdb1" or "sdb2", and then use parted to set the boot flag on for sdb. So far everything looks good? Here is the confusing part: how exactly do I move the ISO files to the USB drive? EVERYTHING BELOW IS A GUESS. At this point should I copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2, rename it to syslinux, and then rename the file isolinux.cfg inside that folder to syslinux.cfg? Then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto the USB's sdb2 partition, correct? Almost like a drag-and-drop kind of thing, or do they go into particular folders? CentOS' own web site has some instructions, but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey. I once got this working before but things got ruined; I have to do it again and this time take notes.
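
    For reference, a commonly described layout, assembled as a sketch from the steps above and not verified against a specific CentOS release, puts the bootloader files on the small FAT partition and the images/ directory plus both ISOs on the large partition. Device names and mount points below are assumptions.

        # Assumed: /dev/sdb1 is the FAT boot partition, /dev/sdb2 the data partition,
        # and DVD1 is already mounted at /mnt/dvd1
        mkdir -p /mnt/usb1 /mnt/usb2
        mount /dev/sdb1 /mnt/usb1
        mount /dev/sdb2 /mnt/usb2

        # Boot files: isolinux/ from the DVD becomes syslinux/ on the FAT partition
        cp -r /mnt/dvd1/isolinux /mnt/usb1/syslinux
        mv /mnt/usb1/syslinux/isolinux.cfg /mnt/usb1/syslinux/syslinux.cfg

        # Installer images and the ISOs themselves go on the large partition
        cp -r /mnt/dvd1/images /mnt/usb2/
        cp CentOS-6.4-x86_64-bin-DVD1.iso CentOS-6.4-x86_64-bin-DVD2.iso /mnt/usb2/

        # Install syslinux on the FAT partition, write the MBR, set the boot flag
        syslinux /dev/sdb1
        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on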

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile. { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands goes to a logfile, in case I need it later--- I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } logfile.log 2&1 || mailx -s "There was an error" stefanl@example.org The problem I run into is that STDERR loses context during I/O redirection. A '2&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2 error.log Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag. { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error. { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo} > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
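
    One way to get both behaviours, everything into the logfile and STDERR also on the screen, is bash process substitution: send STDOUT to the log, and route STDERR through tee, which appends it to the same log and writes it back to the real STDERR. A sketch (requires bash, not plain sh; the logfile and mail address follow the examples above):

        #!/bin/bash
        # STDOUT -> logfile; STDERR -> tee, which appends to the logfile and
        # echoes back to fd 2, so errors remain visible on the terminal
        { command1 && command2 && command3 ; } >> logfile.log 2> >(tee -a logfile.log >&2)

        # The group's exit status is still usable for error handling
        if [ $? -ne 0 ]; then
            mailx -s "There was an error" stefanl@example.org < logfile.log
        fi

    One caveat with this approach: because the tee runs asynchronously, interleaving between STDOUT and STDERR lines in the log is not strictly ordered, which is usually acceptable for build logs.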

    Read the article

  • Postfix: Relay access denied

    - by Joseph Silvashy
    When I telnet to my server thats running postfix and try to send an email: MAIL FROM:<[email protected]> #=> 250 2.1.0 Ok RCPT TO:<[email protected]> #=> 554 5.7.1 <[email protected]>: Relay access denied I couldn't really find the answer on the site or by looking at other users question/answers, I'm not sure where to start. Ideas? Update So basically looking at the docs: http://www.postfix.org/SMTPD_ACCESS_README.html (section: Getting selective with SMTP access restriction lists), I don't seem to have any of those directives in etc/postfix/main.cf like smtpd_client_restrictions = permit_mynetworks, reject or any of the other ones, so I'm quite confused. But really I'm going to have a rails app connect to the server and send the emails, so I'm not sure how to handle it. Here is what my config file looks like: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = rerecipe-utils alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = $myhostname, localhost.$mydomain, localhost, mail.rerecipe.com, rerecipe.com relayhost = mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all inet_protocols = all mynetworks = 127.0.0.0/8 204.232.207.0/24 10.177.64.0/19 [::1]/128 [fe80::%eth0]/64 [fe80::%eth1]/64 Something to note is that relayhost is blank, this is the default configuration file that was created when I installed Postfix, when testing to connect with openssl I get this: ~% openssl s_client -connect mail.myhostname.com:25 -starttls smtp CONNECTED(00000003) depth=0 /CN=myhostname verify error:num=18:self signed certificate verify return:1 depth=0 /CN=myhostname verify return:1 --- Certificate chain 0 s:/CN=myhostname i:/CN=myhostname --- Server certificate -----BEGIN CERTIFICATE----- MIIBqTCCARICCQDDxVr+420qvjANBgkqhkiG9w0BAQUFADAZMRcwFQYDVQQDEw5y ZXJlY2lwZS11dGlsczAeFw0xMDEwMTMwNjU1MTVaFw0yMDEwMTAwNjU1MTVaMBkx FzAVBgNVBAMTDnJlcmVjaXBlLXV0aWxzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB iQKBgQDODh2w4A1k0qiPNPhkrPj8sfkxpKPTk28AuZhgOEBYBLeHacTKNH0jXxPv P3TyhINijvvdDPzyuPJoTTliR2EHR/nL4DLhr5FzhV+PB4PsIFUER7arx+1sMjz6 5l/Ubu1ppMzW9U0IFNbaPm2AiiGBQRCQN8L0bLUjzVzwoSRMOQIDAQABMA0GCSqG SIb3DQEBBQUAA4GBALi2vvk9TGKJubXYJbU0PKmVmsfzFK35yLqr0keiDBhK2Leg 274sWxEH3ds8mUaRftuFlXb7RYAGNlVyTuMTY3CEcnqIsH7F2McCUTpjMzu/o1mZ O/B21CelKetBd1u79Gkrv2vWyN7Csft6uTx5NIGG2+pGi3r0gX2r0Hbu2K94 -----END CERTIFICATE----- subject=/CN=myhostname issuer=/CN=myhostname --- No client certificate CA names sent --- SSL handshake has read 1203 bytes and written 360 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : 
DHE-RSA-AES256-SHA Session-ID: 1AA4B8BFAAA85DA9ED4755194C50311670E57C35B8C51F9C2749936DA11918E4 Session-ID-ctx: Master-Key: 9B432F1DE9F3580DCC6208C76F96631DC5A4BC517BDBADD5F514414DCF34AC526C30687B96C5C4742E9583555A118232 Key-Arg : None Start Time: 1292985376 Timeout : 300 (sec) Verify return code: 18 (self signed certificate) --- 250 DSN Oddly enough when I try to send an email from the machine itself it does work: echo test | mail -s "test subject" [email protected]
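
    "Relay access denied" on RCPT TO for an external domain usually means the connection is treated as an untrusted client: it is neither in mynetworks nor SASL-authenticated, so Postfix refuses to relay. A hedged sketch of the two usual fixes, either explicitly trust the Rails app's address or allow SASL-authenticated relaying (the 10.x address is a placeholder for the app server):

        # Option 1: trust the app server's address (replace with the real one)
        postconf -e 'mynetworks = 127.0.0.0/8, 10.0.0.5/32'

        # Option 2: permit SASL-authenticated clients to relay
        postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination'

        # Reload and watch the log while repeating the telnet test
        postfix reload
        tail -f /var/log/mail.log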

    Read the article

  • Mercurial not receiving push

    - by Jeffrey04
    I have a mercurial web-frontend (hgwebdir.cgi) installed on a server, and an installation of nginx was installed in front of it as a reverse proxy to the web-frontend as my friend suggested. However, whenever a large changeset is pushed (via a script), it would fail. I found an issue ticket @google-code that describe similar problem, and there is a solution that says (#39) So the server side answer is: don't send the 401 back early. Be as slow/dumb as 'hg serve' and make the hg client send the bundle twice. How do I do that? My current nginx config location /repo/testdomain.com { rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi; } location /repo/testdomain.com/ { rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi; } location /repo/testdomain.com/hgwebdir.cgi { proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_buffering on; client_max_body_size 4096M; proxy_read_timeout 30000; proxy_send_timeout 30000; } From the access log we keep seeing 408 entries incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0" incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0" Is there anything else I can do on the server because solving it on the server side is preferable :/ Further Findings Bitbucket seems to have this solved ( Check liquidhg bitbucket project and the Diagnosis wiki page ) on the server side, can't find the config anywhere though :/ What happens next varies depending on your server. Some servers refuse the BODY, simplying closing the pipe from the client and causing Mercurial to fail. Some, like Apache (at least the way I configure it, and that could be part of the problem) and nginx (they way BitBucket.org configures it), accept the BODY, though it may take a few retries. Bottom line: if Mercurial doesn't fail the push, it sends the changeset data at least once to a server that has already told it it lacks credentials (more on this at Blame). Assuming Mercurial is still running, it resends the "unbundle" request and data, this time with authentication. Finally, Apache accepts the data successfully. Nginx, OTOH, at least under BitBucket's configuration, seems to reassemble the previous body (the one that lacked authentication) and somehow keep Mercurial from re-sending the whole body.

    Read the article

  • Ubuntu Server 12.04 CPU Load

    - by zertux
    I have a Server (2x Hexa-Core Xeon E5649 2.53GHz w/HT with 32GB RAM and 20000 GB Bandwidth) running Ubuntu Server 12.04 LTS. The server runs LAMP and serves one website only, the estimated number of users is to be ~ 15,000 at the same time. At the moment i have around 2000 users online each of them runs 50 MySQL queries (small values mostly select and insert) from the beginning until the end of the session. Server CPU Load is high at this number of connections while the RAM usage is almost 1GB out of 32GB its worth mentioning that the server was running very fast with no problems at all but am concerned about the load average. http://s12.postimage.org/z7hi6mz3h/photo.png top - 03:02:43 up 9 min, 2 users, load average: 50.83, 30.14, 12.83 Tasks: 432 total, 1 running, 430 sleeping, 0 stopped, 1 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 66.5%id, 33.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 32939992k total, 3111604k used, 29828388k free, 84108k buffers Swap: 2048280k total, 0k used, 2048280k free, 1621640k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2860 root 20 0 25820 2288 1420 S 3 0.0 0:11.18 htop 1182 root 20 0 0 0 0 D 2 0.0 0:01.46 kjournald 1935 mysql 20 0 12.3g 161m 7924 S 1 0.5 102:31.45 mysqld 11 root 20 0 0 0 0 S 0 0.0 0:00.38 kworker/0:1 1822 www-data 20 0 247m 25m 4188 D 0 0.1 0:01.81 apache2 2920 www-data 20 0 0 0 0 Z 0 0.0 0:01.20 apache2 <defunct> 2942 www-data 20 0 247m 23m 3056 D 0 0.1 0:00.20 apache2 3516 www-data 20 0 247m 23m 3028 D 0 0.1 0:00.06 apache2 3521 www-data 20 0 247m 23m 3020 D 0 0.1 0:00.09 apache2 3664 www-data 20 0 247m 23m 3132 D 0 0.1 0:00.09 apache2 3674 www-data 20 0 247m 23m 3252 D 0 0.1 0:00.06 apache2 3713 www-data 20 0 247m 23m 3040 D 0 0.1 0:00.09 apache2 1 root 20 0 24328 2284 1344 S 0 0.0 0:03.09 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0 0.0 0:00.01 ksoftirqd/0 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0 7 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 8 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1 9 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/1:0 root@server:~/codes# vmstat 1 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 19 0 0 29684012 86112 1689844 0 0 19 590 254 231 48 0 47 5 23 0 0 29704812 86128 1697672 0 0 4 320 11100 8121 77 1 22 0 33 0 0 29671044 86156 1705308 0 0 0 5440 13190 9140 95 1 4 0 33 3 0 29670088 86160 1706288 0 0 0 32932 12275 7297 99 0 1 0 35 0 0 29693456 86188 1710724 0 0 4 676 12701 7867 98 1 1 0 ^C I have not changed any of the default configurations that comes with Ubuntu. Is this load normal for such powerful server ? is there any optimization i can make to Apache/MySQL to minimize the load ? What do you recommend ?

    Read the article

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on stackoverflow, but it may be better suited for serverfault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy and I'm hoping someone might have the answer. I've scoured google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question: Nginx upstream subversion_hosts { server 127.0.0.1:80; } server { listen x.x.x.x:80; server_name hostname; access_log /srv/log/nginx/http.access_log main; error_log /srv/log/nginx/http.error_log info; # redirect all requests to https rewrite ^/(.*)$ https://hostname/$1 redirect; } # HTTPS server server { listen x.x.x.x:443; server_name hostname; passenger_enabled on; root /path/to/rails/root; access_log /srv/log/nginx/ssl.access_log main; error_log /srv/log/nginx/ssl.error_log info; ssl on; ssl_certificate server.crt; ssl_certificate_key server.key; add_header Front-End-Https on; location /svn { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; set $fixed_destination $http_destination; if ( $http_destination ~* ^https(.*)$ ) { set $fixed_destination http$1; } proxy_set_header Destination $fixed_destination; proxy_pass http://subversion_hosts; } } Apache Listen 127.0.0.1:80 <VirtualHost *:80> # in order to support COPY and MOVE, etc - over https (443), # ServerName _must_ be the same as the nginx servername # http://trac.edgewall.org/wiki/TracNginxRecipe ServerName hostname UseCanonicalName on <Location /svn> DAV svn SVNParentPath "/srv/svn" Order deny,allow Deny from all Satisfy any # Some config omitted ... </Location> ErrorLog /var/log/apache2/subversion_error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/subversion_access.log combined </VirtualHost> From what I could tell while researching this problem, the server name has to match on both the apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.

    Read the article

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I like to know how to use command-line to connect to a wired network in general for Ubuntu 8.10? In my case, I connect a cable to my laptop but it doesn't work with my WICD. So I like to try command-line method. Here is the ifconfig of my network adapters: $ ifconfig eth0 Link encap:Ethernet HWaddr 00:c0:9f:8d:23:74 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:19 Base address:0x1800 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4457 errors:0 dropped:0 overruns:0 frame:0 TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:493002 (493.0 KB) TX bytes:493002 (493.0 KB) wlan0 Link encap:Ethernet HWaddr 00:0e:9b:ab:56:19 UP BROADCAST NOTRAILERS PROMISC ALLMULTI MTU:576 Metric:1 RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0 TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:806027375 (806.0 MB) TX bytes:78834873 (78.8 MB) wlan0:avahi Link encap:Ethernet HWaddr 00:0e:9b:ab:56:19 inet addr:169.254.5.92 Bcast:169.254.255.255 Mask:255.255.0.0 UP BROADCAST NOTRAILERS PROMISC ALLMULTI MTU:576 Metric:1 wmaster0 Link encap:UNSPEC HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Thanks and regards! UPDATE: Tried what oyvindio suggested. Here is the failing message: $ sudo dhclient3 eth0 There is already a pid file /var/run/dhclient.pid with pid 18279 killed old client process, removed PID file Internet Systems Consortium DHCP Client V3.1.1 Copyright 2004-2008 Internet Systems Consortium. All rights reserved. For info, please visit http://www.isc.org/sw/dhcp/ mon0: unknown hardware address type 803 wmaster0: unknown hardware address type 801 mon0: unknown hardware address type 803 wmaster0: unknown hardware address type 801 Listening on LPF/eth0/00:c0:9f:8d:23:74 Sending on LPF/eth0/00:c0:9f:8d:23:74 Sending on Socket/fallback DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11 DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8 No DHCPOFFERS received. No working leases in persistent database - sleeping.
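
    Beyond dhclient, the generic command-line sequence is to bring the interface up, verify the physical link, and either retry DHCP or fall back to a static address; "No DHCPOFFERS received" with the interface up often points at a cable, switch-port or missing-link problem rather than at the client. A sketch with placeholder addresses for the LAN:

        # Bring the interface up and check for link; "Link detected: yes" rules out a dead cable/port
        sudo ifconfig eth0 up
        sudo ethtool eth0 | grep "Link detected"

        # DHCP attempt
        sudo dhclient eth0

        # Or a static fallback (addresses are placeholders for your network)
        sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0
        sudo route add default gw 192.168.1.1
        echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf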

    Read the article

  • Netcat file transfer problem

    - by thepurplepixel
    I have two custom scripts I just wrote to facilitate transferring files between my VPS and my home server. They are both written in bash (short & sweet): To send: #!/bin/bash SENDFILE=$1 PORT=$2 HOST='<my house>' HOSTIP=`host $HOST | grep "has address" | cut --delimiter=" " -f 4` echo Transferring file \"$SENDFILE\" to $HOST \($HOSTIP\). tar -c "$SENDFILE" | pv -c -N tar -i 0.5 | lzma -z -c -6 | pv -c -N lzma -i 0.5 | nc -q 1 $HOSTIP $PORT echo Done. To receive: #!/bin/bash SERVER='<myserver>' SERVERIP=`host $SERVER | grep "has address" | cut --delimiter=" " -f 4` PORT=$1 echo Receiving file from $SERVER \($SERVERIP\) on port $PORT. nc -l $PORT | pv -c -N netcat -i 0.5 | lzma -d -c | pv -c -N lzma -i 0.5 | tar -xf - echo Done. The problem is that, for a very quick second, I see something flash along the lines of "Connection Refused" (before pv overwrites it), and no file is ever transferred. The port is forwarded through my router, and nmap confirms it: ~$ sudo nmap -sU -PN -p55515 -v <my house> Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-21 18:10 EDT NSE: Loaded 0 scripts for scanning. Initiating Parallel DNS resolution of 1 host. at 18:10 Completed Parallel DNS resolution of 1 host. at 18:10, 0.00s elapsed Initiating UDP Scan at 18:10 Scanning 74.13.25.94 [1 port] Completed UDP Scan at 18:10, 2.02s elapsed (1 total ports) Host 74.13.25.94 is up. Interesting ports on 74.13.25.94: PORT STATE SERVICE 55515/udp open|filtered unknown Read data files from: /usr/share/nmap Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds Raw packets sent: 2 (56B) | Rcvd: 5 (260B) Also, running netcat normally doesn't work either: squircle@summit:~$ netcat <my house> 55515 <my house> [<my IP>] 55515 (?) : Connection refused Both boxes are Ubuntu Karmic (9.10). The receiver has no firewall, and outbound traffic on that port is allowed on the sender. I have no idea what to troubleshoot next. Any ideas? P.S.: Feel free to move this to SO/SF if you feel it would fit better there.
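
    One thing the nmap run above does not actually prove is that TCP port 55515 is reachable: -sU probes UDP only, and "open|filtered" is also what a silently dropped probe looks like, while nc without -u uses TCP. A small sketch to test the path with the same protocol the scripts use (host placeholder and port as in the scripts):

        # On the receiving side (home server): start a plain TCP listener first
        nc -l 55515

        # On the sending side (VPS): scan TCP, then try a trivial transfer
        nmap -sT -PN -p 55515 <my house>
        echo "hello" | nc -q 1 <my house> 55515

    If the TCP scan shows the port closed or filtered while the listener is running, the port forward or the router is the problem rather than the scripts.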

    Read the article

  • How do I configure a site in IIS 7 for SSL with a wildcard certificate?

    - by michielvoo
    We have an Windows 2008 server with IIS 7 to test sites we develop for our clients. Each site has a binding on a subdomain: clienta.example.com clientb.example.com clientc.example.com (* Using example.com to protect the innocent) For one of these sites we now have to test if it works over https. So I have created a wildcard certificate request with *.example.com as the common name. I have received the certificate (issued by PositiveSSL SA) and completed the request. The certificate is now installed in IIS. Now I have added an https binding to the second site with the following settings: type: https IP address: All Unassigned Port: 443 Host name: clientb.example.com SSL certificate: *.example.com Browsing the site over regular http works fine. When I try to browse the site over https I get the following errors (depending on the browser used): Chrome This webpage is not available Error 102 (net::ERR_CONNECTION_REFUSED): Unknown error. Firefox Unable to connect Firefox can't establish a connection to the server at clientb.example.com Firebug says Status: Aborted Internet Explorer Internet Explorer cannot display the webpage I have checked Failed Request Tracing, and according to the log the request was completed with status 200. I have run the SSL Diagnostics Tool with the following result: System time: Fri, 04 Mar 2011 14:04:35 GMT Connecting to 192.168.2.95:443 Connected Handshake: 115 bytes sent Handshake: 3877 bytes received Handshake: 326 bytes sent Handshake: 59 bytes received Handshake succeeded Verifying server certificate, it might take a while... Server certificate name: *.example.com Server certificate subject: OU=Domain Control Validated, OU=PositiveSSL Wildcard, CN=*.example.com Server certificate issuer: C=GB, S=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=PositiveSSL CA Server certificate validity: From 2-3-2011 1:00:00 To 2-3-2012 0:59:59 1:00:00 To 2-3-2012 0:59:59 HTTPS request: GET / HTTP/1.0 User-Agent: SSLDiag Accept:*/* HTTPS: 85 bytes of encrypted data sent HTTPS: 533 bytes of encrypted data received Status: HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found Content-Type: text/html; charset=us-ascii Server: Microsoft-HTTPAPI/2.0 Date: Fri, 04 Mar 2011 14:04:35 GMT Connection: close Content-Length: 315 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd"> <HTML><HEAD><TITLE>Not Found</TITLE> <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD> <BODY><h2>Not Found</h2> <hr><p>HTTP Error 404. The requested resource is not found.</p> </BODY></HTML> HTTPS: server disconnected Final handshake: 37 bytes sent successfully Q: What can I do to make this work?

    Read the article

  • how does openvpn decide which interface to get IP addrs from

    - by bkrupa
    Using ubuntu 10.04 on both ends. We have a client and server machine on the SAME network attempting to make a vpn connection. We use the config files from here and made minimal changes. The server and client start and seem to connect without any trouble. The server looks like: Wed Feb 23 22:13:22 2011 MULTI: multi_create_instance called Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Re-using SSL/TLS context Wed Feb 23 22:13:22 2011 192.168.1.55:47166 LZO compression initialized Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel MTU parms [ L:1574 D:138 EF:38 EB:0 ET:0 EL:0 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Local Options hash (VER=V4): 'f7df56b8' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Expected Remote Options hash (VER=V4): 'd79ca330' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 TLS: Initial packet from 192.168.1.55:47166, sid=69112e42 5458135b *...* Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Feb 23 22:13:22 2011 192.168.1.55:47166 [client1] Peer Connection Initiated with 192.168.1.55:47166 On the client side the connection looks like: Wed Feb 23 22:20:07 2011 [server] Peer Connection Initiated with [AF_INET]192.168.1.41:1194 Wed Feb 23 22:20:10 2011 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1) Wed Feb 23 22:20:10 2011 PUSH: Received control message: 'PUSH_REPLY,route-gateway 10.8.0.4,ping 10,ping-restart 120,ifconfig 10.8.0.50 255.255.255.0' ... Wed Feb 23 22:20:10 2011 /sbin/ifconfig tap0 10.8.0.50 netmask 255.255.255.0 mtu 1500 broadcast 10.8.0.255 Wed Feb 23 22:20:10 2011 Initialization Sequence Completed The openvpn server has been configured to assign ip addresses in the range 10.8.0.* and the client has been given 10.8.0.50. When I run the following nmap from the client: Starting Nmap 5.00 ( http://nmap.org ) at 2011-02-23 22:04 EST Host 10.8.0.50 is up (0.00047s latency). Nmap done: 256 IP addresses (1 host up) scanned in 30.34 seconds Host 192.168.1.1 is up (0.0025s latency). Host 192.168.1.18 is up (0.074s latency). Host 192.168.1.41 is up (0.0024s latency). Host 192.168.1.55 is up (0.00018s latency). Nmap done: 256 IP addresses (4 hosts up) scanned in 6.33 seconds If I run an nmap from the server on 10.8.0.* I get nothing. If the client has two interfaces (wireless and tap device) when you look for a certain ip address, how does it decide which interface to connect on?

    Read the article

  • FreeBSD jail with IPFW with loopback - unable to connect loopback interface

    - by khinester
    I am trying to configure a one IP jail with loopback interface, but I am unsure how to configure the IPFW rules to allow traffic to pass between the jail and the network card on the server. I have followed http://blog.burghardt.pl/2009/01/multiple-freebsd-jails-sharing-one-ip-address/ and https://forums.freebsd.org/viewtopic.php?&t=30063 but without success, here is what i have in my ipfw.rules # vim /usr/local/etc/ipfw.rules ext_if="igb0" jail_if="lo666" IP_PUB="192.168.0.2" IP_JAIL_WWW="10.6.6.6" NET_JAIL="10.6.6.0/24" IPF="ipfw -q add" ipfw -q -f flush #loopback $IPF 10 allow all from any to any via lo0 $IPF 20 deny all from any to 127.0.0.0/8 $IPF 30 deny all from 127.0.0.0/8 to any $IPF 40 deny tcp from any to any frag # statefull $IPF 50 check-state $IPF 60 allow tcp from any to any established $IPF 70 allow all from any to any out keep-state $IPF 80 allow icmp from any to any # open port ftp (20,21), ssh (22), mail (25) # ssh (22), , dns (53) etc $IPF 120 allow tcp from any to any 21 out $IPF 130 allow tcp from any to any 22 in $IPF 140 allow tcp from any to any 22 out $IPF 150 allow tcp from any to any 25 in $IPF 160 allow tcp from any to any 25 out $IPF 170 allow udp from any to any 53 in $IPF 175 allow tcp from any to any 53 in $IPF 180 allow udp from any to any 53 out $IPF 185 allow tcp from any to any 53 out # HTTP $IPF 300 skipto 63000 tcp from any to me http,https setup keep-state $IPF 300 skipto 63000 tcp from any to me http,https setup keep-state # deny and log everything $IPF 500 deny log all from any to any # NAT $IPF 63000 divert natd ip from any to any via $jail_if out $IPF 63000 divert natd ip from any to any via $jail_if in but when i create a jail as: # ezjail-admin create -f continental -c zfs node 10.6.6.7 /usr/jails/node/. /usr/jails/node/./etc /usr/jails/node/./etc/resolv.conf /usr/jails/node/./etc/ezjail.flavour.continental /usr/jails/node/./etc/rc.d /usr/jails/node/./etc/rc.conf 4 blocks find: /usr/jails/node/pkg/: No such file or directory Warning: IP 10.6.6.7 not configured on a local interface. Warning: Some services already seem to be listening on all IP, (including 10.6.6.7) This may cause some confusion, here they are: root syslogd 1203 6 udp6 *:514 *:* root syslogd 1203 7 udp4 *:514 *:* i get these warning and then when i go into the jail environment, i am unable to install any ports. any advice much appreciated.

    Read the article

  • DELL DRAC & Ubuntu VPN Connection

    - by Mikunos
    I am trying, so far without success, to connect to a DELL DRAC card through the Ubuntu VPN connection manager. This is the data I have:

        Protocol: PPTP
        SERVER IP PPTP: 1233.123.123.123
        DRAC IP: 192.168.10.25
        Subnet: 255.255.0.0
        User: myuser
        Pass: mypass

    Where do I need to enter these parameters? I have configured the PPTP connection using the graphical tool in Ubuntu 11.10 ... but in /var/log/syslog I get these messages:

        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> Starting VPN service 'pptp'...
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 18180
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN service 'pptp' appeared; activating connections
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN plugin state changed: 3
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN connection 'Connessione VPN 1' (Connect) reply received.
        Apr 15 11:33:15 shinet pppd[18182]: Plugin /usr/lib/pppd/2.4.5/nm-pptp-pppd-plugin.so loaded.
        Apr 15 11:33:15 shinet pppd[18182]: pppd 2.4.5 started by root, uid 0
        Apr 15 11:33:15 shinet pppd[18182]: Using interface ppp0
        Apr 15 11:33:15 shinet pppd[18182]: Connect: ppp0 <--> /dev/pts/1
        Apr 15 11:33:15 shinet NetworkManager[1035]: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Apr 15 11:33:15 shinet NetworkManager[1035]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
        Apr 15 11:33:15 shinet pptp[18185]: nm-pptp-service-18180 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
        Apr 15 11:33:46 shinet pppd[18182]: LCP: timeout sending Config-Requests
        Apr 15 11:33:46 shinet pppd[18182]: Connection terminated.
        Apr 15 11:33:46 shinet avahi-daemon[1081]: Withdrawing workstation service for ppp0.
        Apr 15 11:33:46 shinet NetworkManager[1035]: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Apr 15 11:33:46 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:46 shinet pppd[18182]: Modem hangup
        Apr 15 11:33:46 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:51 shinet pppd[18182]: Exit.
        Apr 15 11:33:51 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> VPN plugin state changed: 6
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> VPN plugin state change reason: 0
        Apr 15 11:33:51 shinet NetworkManager[1035]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS.
        Apr 15 11:33:57 shinet NetworkManager[1035]: <info> VPN service 'pptp' disappeared

    Thanks
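
    The "LCP: timeout sending Config-Requests" followed by "Modem hangup" in that log usually means the PPTP control channel came up but no GRE replies are making it back, or the server address is wrong. As a rough sketch (the server IP below is the placeholder from the question, and pptpsetup comes from the pptp-linux package), it can help to test reachability and the tunnel outside NetworkManager:

        # Is the PPTP control port reachable at all?
        nmap -p 1723 1233.123.123.123

        # Try the tunnel from the command line instead of NetworkManager
        sudo apt-get install pptp-linux
        sudo pptpsetup --create drac --server 1233.123.123.123 \
            --username myuser --password mypass --encrypt --start

        # Once ppp0 is up, route the DRAC subnet (255.255.0.0 per the data above) over the tunnel
        sudo ip route add 192.168.0.0/16 dev ppp0

    If the command-line tunnel also stalls at LCP, the problem is most likely GRE (IP protocol 47) being blocked between the client and the PPTP server, not the NetworkManager settings.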

    Read the article

  • How to start nginx via a different port (other than 80)

    - by Zhao Peng
    Hi, I am an nginx newbie. I tried to set it up on my server (running Ubuntu 4), which already has Apache running. After installing it with apt-get, I tried to start nginx and got a message like this:

        Starting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        configuration file /etc/nginx/nginx.conf test is successful
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)

    That makes sense, as Apache is using port 80. So, following some articles I found, I modified nginx.conf like so:

        server {
            listen 8080;
            location / {
                proxy_pass http://94.143.9.34:9500;
                proxy_set_header Host $host:8080;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Via "nginx";
            }

    After saving this and trying to start nginx again, I still get the same error as before. I cannot really find a related post about this; could someone shed some light? Thanks in advance :)

    =========================================================================

    I should post the whole conf here:

        user www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;

            server {
                listen 81;
                location / {
                    proxy_pass http://94.143.9.34:9500;
                    proxy_set_header Host $host:81;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Via "nginx";
                }
            }
        }

        mail {
            # See sample authentication script at:
            # http://wiki.nginx.org/NginxImapAuthenticateWithApachePhpScript
            auth_http localhost/auth.php;
            pop3_capabilities "TOP" "USER";
            imap_capabilities "IMAP4rev1" "UIDPLUS";

            server {
                listen localhost:110;
                protocol pop3;
                proxy on;
            }
            server {
                listen localhost:143;
                protocol imap;
                proxy on;
            }
        }

    Basically, I changed nothing except adding the server block.
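
    Note that the http block still contains include /etc/nginx/sites-enabled/*;, and on Debian/Ubuntu the default site shipped there normally carries its own listen 80; directive, which would keep producing the bind error no matter what the server block in nginx.conf says. A quick check, as a sketch (stock Debian/Ubuntu paths assumed):

        # Find every listen directive nginx will actually load
        grep -rn "listen" /etc/nginx/nginx.conf /etc/nginx/conf.d/ /etc/nginx/sites-enabled/

        # Change the offending vhost (often /etc/nginx/sites-enabled/default) to e.g. "listen 8080;",
        # then test the config and restart
        sudo nginx -t
        sudo service nginx restart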

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs on versioning here: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems an object attribute is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure whether duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly, to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
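
    On the first question: with versioning enabled, deleting an object that was uploaded once and never modified only inserts a delete marker, so the single original version stays recoverable until it is explicitly purged. A hedged sketch with the AWS CLI (bucket and key names are placeholders):

        # Turn on versioning; from then on, deletes only add a delete marker
        aws s3api put-bucket-versioning --bucket my-bucket \
            --versioning-configuration Status=Enabled

        # After an accidental delete, list versions and delete markers for the key
        aws s3api list-object-versions --bucket my-bucket --prefix path/to/file

        # Removing the delete marker (by its version id) restores the object
        aws s3api delete-object --bucket my-bucket --key path/to/file \
            --version-id DELETE_MARKER_VERSION_ID

    Pairing versioning with lifecycle rules on noncurrent versions could then cover retention without a separate monthly backup bucket, though the Glacier-copy and duplicity questions above would still stand.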

    Read the article

< Previous Page | 425 426 427 428 429 430 431 432 433 434 435 436  | Next Page >