Search Results

Search found 1598 results on 64 pages for 'warnings'.

Page 44 of 64

  • Postfix: Using Google Apps for SMTP errors

    - by Zed Said
    I am using postfix and need to send mail using the Google Apps SMTP server. I am getting errors after I thought I had set everything up correctly:

        May 11 09:50:57 zedsaid postfix/error[22214]: 00E009693FB: to=<[email protected]>, relay=none, delay=2466, delays=2462/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)
        May 11 09:50:57 zedsaid postfix/error[22213]: 0ACB36D1B94: to=<[email protected]>, relay=none, delay=2486, delays=2482/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)
        May 11 09:50:57 zedsaid postfix/error[22232]: 067379693D3: to=<[email protected]>, relay=none, delay=2421, delays=2417/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available)

    main.cf:

        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = zedsaid.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        #relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        delay_warning_time = 4h
        smtpd_recipient_limit = 16
        # how many errors before backing off.
        smtpd_soft_error_limit = 3
        # how many max errors before blocking it.
        smtpd_hard_error_limit = 12
        ## Gmail Relay
        relayhost = [smtp.gmail.com]:587
        smtp_use_tls = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_mechanism_filter = login
        smtp_tls_eccert_file =
        smtp_tls_eckey_file =
        smtp_use_tls = yes
        smtp_enforce_tls = no
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtpd_tls_received_header = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
        debug_peer_list = smtp.gmail.com
        debug_peer_level = 3

    What am I doing wrong?
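
    In Postfix terms, "no mechanism available" usually means the SMTP client found no SASL mechanism it was allowed to use - commonly because the Cyrus SASL plugin packages are missing, or because the smtp_sasl_mechanism_filter = login line above filters out what the client can actually offer. A minimal sketch of the usual checks, assuming a Debian/Ubuntu box; the address and password are placeholders:

        # install the SASL client mechanisms Postfix needs (Debian/Ubuntu package)
        apt-get install libsasl2-modules

        # allow PLAIN as well as LOGIN, or remove the filter entirely
        postconf -e 'smtp_sasl_mechanism_filter = plain, login'

        # credentials keyed exactly like relayhost, then rebuild the hash map
        echo '[smtp.gmail.com]:587 [email protected]:app-password' > /etc/postfix/sasl_passwd
        postmap /etc/postfix/sasl_passwd
        chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db

        postfix reload
        postqueue -f        # flush the deferred queue and watch mail.log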

    Read the article

  • Trying to set up catch-all mail forwarding on my Rackspace cloud server

    - by bob_cobb
    I'm running Ubuntu 12.04 (Precise Pangolin) and am trying to configure my server to catch all mail sent to it and forward it to my Gmail address. I've been trying lots of examples online, like editing my main.cf file, which looks like this:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = destiny
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = destiny, localhost.localdomain, localhost
        relayhost = smtp.sendgrid.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 51200000
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all

    In my /etc/postfix/virtual I have:

        @mydomain.com        [email protected]
        @myotherdomain.com   [email protected]

    This isn't working when I email [email protected] or [email protected]. So I got the recommendation to add the following to my /etc/aliases:

        postmaster:root
        root:[email protected]

    I restarted postfix and tried emailing [email protected] and [email protected], but it still won't send. Does anyone have any idea what I'm doing wrong here? I'd appreciate any help.
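
    For a Postfix catch-all, the domains generally have to be declared as virtual alias domains (and must not appear in mydestination), and the lookup file has to be compiled with postmap. A hedged sketch using the domain names from the question:

        # /etc/postfix/main.cf
        virtual_alias_domains = mydomain.com, myotherdomain.com
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/virtual -- one catch-all entry per domain
        @mydomain.com        [email protected]
        @myotherdomain.com   [email protected]

        # rebuild the map and reload
        postmap /etc/postfix/virtual
        service postfix reload

    If mail still does not arrive, /var/log/mail.log should show whether the message was rejected locally or refused by the relay.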

    Read the article

  • Windows Server 2008 (Web Server) Replication

    - by justjoshingyou
    We have a load-balanced environment with Windows Server 2008. What are some best practices for setting up replication across the web servers? Do I only want to replicate the web folders? How about replicating IIS changes - or do I need to make IIS changes on every server? I've never, ever set up replication, but I have worked with a web farm that used it before. Basically, I only know the basics of how it works, and am looking for any advice, guides, warnings, etc. on setting this up.

    For context, here is our environment for now: we have one prod server up and the second is nearly ready to go. We are using a cloud system and all machines are VMs. I am in the process of setting up the domain controller now (as I need one for DFS).

    Any ideas on the best way to go about setting up replication? Should we just put the prod server in from the start, or set it up using a test VM and our second server and then switch over later? I do not want to risk overwriting our prod server. Thanks!
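
    As a hedged note: IIS 7 has a built-in "Shared Configuration" feature for keeping IIS settings in sync across a farm, which can remove the need to repeat IIS changes on every server, and content can be staged with a plain mirror copy before DFS-R is trusted with production. A sketch of such a staging step - the server name and paths are assumptions, not the environment described above:

        rem mirror the web content from the current prod box to the new node for testing
        robocopy D:\inetpub\wwwroot \\WEB02\d$\inetpub\wwwroot /MIR /Z /R:2 /W:5 /LOG:C:\sync.log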

    Read the article

  • Microsoft SQL Server Management Studio causing system freeze

    - by CRoshanLG
    I'm experiencing very slow response from SSMS, and it causes other applications to slow down. In particular, Skype crashes a few seconds after opening SSMS, showing an error called "Disk I/O Error". I'm regularly using a few applications (Sublime Text, MS Word, Firefox, Outlook, Skype and one or two other apps) simultaneously. The system works fine when SSMS is not in use! But as soon as SSMS is opened, all the apps start to freeze (SSMS also responds very slowly). This problem has been there for about a week now (I haven't installed any apps or made any changes to the system during that time).

    System specifications:

        Processor: Core i3 (3.1 GHz)
        RAM: 4 GB
        OS: Windows 7 Professional (64 bit)
        Free space in C drive: ~100 GB
        MS SQL Server 2008 R2
        Microsoft SQL Server Management Studio version 10.50.1600.1

    I've tried to find a reason for this, but there is no helpful information on the web! There are some solutions suggested (in forums and in Skype support pages) for Skype's "Disk I/O Error", all of which I tried but none of which solved the problem. Has anyone faced the same scenario? (And, hopefully, knows a solution?)

    System log: I don't have much knowledge of interpreting the system log, but I think the Critical and Warning entries are not helpful. There are, however, lots of Error entries which might be useful. In source Kernel-General there are a few similar errors saying "An I/O operation initiated by the Registry failed unrecoverably. The Registry could not flush hive (file): <some file>". In source atapi there are also a few similar errors: "The driver detected a controller error on \Device\Ide\IdePort0." (all of these errors occurred on 'IdePort0'). In Application Error, there are several errors logged; both of the errors that occurred today are similar, and as they come from Ssms.exe I guess they are relevant to the cause of the problem. But as I said above, I can't understand what they mean!

    Read the article

  • Can Subject Alternative Name accommodate multiple virtual mail domains?

    - by Lawrence
    I am currently running a Postfix server with self-signed certificates serving one mail domain, mycompany.com; the mail server is mail.mycompany.com and so is the CN of the certificate. Now I need to add a new domain, mycompany.net, to the same server.

    Since the users already have the root of the old certificate, I'd like to reuse that. However, I'd like to issue a new certificate so users pointing the SMTP settings in Outlook/Thunderbird at mail.mycompany.net do not get warnings. If I understand correctly, if I issue a new certificate with CN=mail.mycompany.com and a subjectAltName=DNS:mail.mycompany.net and have Postfix serve this, the client will not complain either way about the CN not matching the target host name. Am I correct in this assumption, or am I misunderstanding the concept of Subject Alternative Name?

    Just to avoid that line of conversation: I do not want to have users on mycompany.net addresses use the mycompany.com server name, because I might (for non-technical reasons) have to split up into two different locations, and I want to produce an easily migratable setup.
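
    For what it's worth, that is how subjectAltName is generally understood to work: when SANs are present, most clients match the target host against the SAN list (and largely ignore the CN), so one certificate listing both names should satisfy Outlook/Thunderbird for either server name. A hedged sketch of issuing such a certificate from the existing root - file names are placeholders:

        # san.cnf -- extension file listing both mail host names
        [ v3_req ]
        subjectAltName = DNS:mail.mycompany.com, DNS:mail.mycompany.net

        # new key and CSR for the mail server
        openssl req -new -nodes -newkey rsa:2048 \
            -keyout mail.key -out mail.csr -subj "/CN=mail.mycompany.com"

        # sign it with the root the users already trust
        openssl x509 -req -in mail.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial \
            -days 730 -extfile san.cnf -extensions v3_req -out mail.crt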

    Read the article

  • Troubleshooting a Windows 7 PC that wouldn't sleep

    - by NPE
    I have a new Windows 7 PC that wouldn't sleep (not just automatically, but also when specifically told to). The screen goes black momentarily, but within two seconds the machine comes back as if nothing has happened.

    I tried powercfg energy. This produces some errors quoted at the bottom of this post, plus some warnings about timer resolution. There are no USB devices connected other than wireless keyboard + mouse (Logitech MK250); I tried unplugging them to no effect. The motherboard is Asus P7P55D-E. powercfg lastwake says "Wake History Count - 0", which I take to mean that it never actually went to sleep.

    I dual boot into Ubuntu, and was having exactly the same problem on the Linux side. That turned out to do with USB 3.0, which I've now disabled in the BIOS. This has solved the problem on the Ubuntu side of things, but made no difference to Windows 7. Any suggestions?

        USB Suspend: USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
            Device Name               Generic USB Hub
            Host Controller ID        PCI\VEN_8086&DEV_3B34
            Host Controller Location  PCI bus 0, device 29, function 0
            Device ID                 USB\VID_8087&PID_0020
            Port Path                 1

        USB Suspend: USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
            Device Name               USB Root Hub
            Host Controller ID        PCI\VEN_8086&DEV_3B34
            Host Controller Location  PCI bus 0, device 29, function 0
            Device ID                 USB\VID_8086&PID_3B34
            Port Path

        USB Suspend: USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
            Device Name               USB Composite Device
            Host Controller ID        PCI\VEN_8086&DEV_3B34
            Host Controller Location  PCI bus 0, device 29, function 0
            Device ID                 USB\VID_046D&PID_C52E
            Port Path                 1,8
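
    Two hedged checks from an elevated command prompt can narrow down what is blocking sleep or waking the machine; the powercfg switches below exist on Windows 7, while the device name in the last command is only an illustration and would come from the earlier output:

        rem list processes and drivers currently holding power requests
        powercfg -requests

        rem list devices that are allowed to wake the machine
        powercfg -devicequery wake_armed

        rem optionally stop a specific device from waking the machine
        powercfg -devicedisablewake "Logitech HID-compliant keyboard"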

    Read the article

  • Setting up a self-signed cert and CA [Plesk / Linux]

    - by microchasm
    I'm about ready to give up, do a clean wipe of this machine, and start over with ISPConfig or some other variant. I installed Plesk on this machine to help with some of the handiwork. It is the free version (single domain); I don't need it for much. It's nice, though, for setting up databases, email, etc.

    Anyway, I would like to set the machine up as a CA (which I can add to users' trusted roots to alleviate those warnings). It seems like Plesk does all it can to obfuscate where things are. Despite trying to find the conf files and the crt/pem/key files etc., I am (5 hours later) now left with a machine that won't even get to the SSL page. The browser sits there until a 'connection reset' error comes up. In error_log, I get messages saying the CN doesn't match the server name -- which it does. In ssl_error_log:

        [Thu May 13 16:02:14 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu May 13 16:12:19 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)

    Not very helpful. If anyone has any experience and/or recommendations (including other software), I'd be much obliged. NB: RHEL5; 1 domain, 3 subdomains; everything local only. Thanks.
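
    The ssl_error_log lines suggest the CA certificate itself ended up installed as the server certificate; Apache wants a separate server certificate that is merely signed by that CA. A hedged openssl sketch of the usual split - the file names and CN are illustrative, not Plesk's own paths:

        # the CA pair (ca.crt is what users import as a trusted root)
        openssl genrsa -out ca.key 2048
        openssl req -new -x509 -days 1825 -key ca.key -out ca.crt -subj "/CN=My Internal CA"

        # a server key and CSR whose CN matches the site name
        openssl genrsa -out server.key 2048
        openssl req -new -key server.key -out server.csr -subj "/CN=host.example.com"

        # the server certificate, signed by the CA (this is what the vhost should serve)
        openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
            -days 730 -out server.crt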

    Read the article

  • Windows XP: How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows installation's Documents and Settings folder, which only contains my original user and, within it, 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get an error, and when I try to delete Favorites I get a different error. I ran this in a cmd shell:

        attrib *.* -r -a -s -h /s

    ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files are in use, but both always say: No Files Locked.

    Update #1: I was able to rename the directory, which now gives me a warning before (trying to) delete; if I press Yes (or Yes to All), I still get an error.

    Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these:

        Deleting an index entry from index $0 of file 25.

    ...followed by:

        Deleting index entry cookies in index $I30 of file 37576.

    ...but I still get the first error dialog above when trying to delete.

    Update #3: Digging deeper, the 99 (from the error message) is the name of one of many directories located deep in here:

        C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\

    Inside each of those directories were files with names such as:

        2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx

    I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file + dir names were extremelyly long: the original directory is 194 characters and the filenames are 100+ characters, so together the length exceeds the 255-character limit, which is bad and would explain the error message I posted in Update #1.

    Partial solution: rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution, because this (empty) directory is still undeletable:

        C:\1\2\Favorites\Wien\What To Do..

    I'm guessing that because of the ".." at the end, Windows (Explorer and cmd) can't deal with it. Any ideas?
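
    For the leftover "What To Do.." directory, the trailing dots are exactly what Explorer and normal cmd paths cannot handle; the commonly cited workaround is the \\?\ prefix, which bypasses Win32 path normalization. A hedged one-liner using the path from the question:

        rd /s /q "\\?\C:\1\2\Favorites\Wien\What To Do.."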

    Read the article

  • Set proper rights for sshfs mountpoint so it can be shared with samba

    - by CS01
    I have a domain hoster that provides access via SSH. My platforms are:

        Gentoo 2.6.36-r5
        Windows (XP/Vista/7)

    I work on my Windows machine, and I use Gentoo to do all the magic Windows can't do. Therefore I use sshfs to mount the remote public directory for my domain to /mnt/mydomain.com. Authentication is done via keys, so lazy me doesn't have to type in my password every now and then.

    Since I do my coding on Windows, and I don't want to upload/download the changed files all the time, I want to access this /mnt/mydomain.com via a samba share. So I shared /mnt in samba; all mounts except mydomain.com are listed in my Windows Explorer. My theories are:

        1. sshfs does not set the mountpoint uid/gid to something that samba expects
        2. samba does not know that it has to include the uid/gid that /mnt/mydomain.com has been set to
        3. All of the above is wrong, and I don't know.

    Here are configs and output from the console; if you need anything else, just let me know. Also, there are no errors or warnings that I take notice of being relevant to this issue, but I might be wrong.

        gentoo ~ # ls -lah /mnt
        total 20K
        drwxr-xr-x  9 root  root  4.0K Mar 26 16:15 .
        drwxr-xr-x 18 root  root  4.0K Mar 26  2011 ..
        -rw-r--r--  1 root  root     0 Feb  1 16:12 .keep
        drwxr-xr-x  1 root  root     0 Mar 18 12:09 buffer
        drwxr-s--x  1 68591 68591 4.0K Feb 16 15:43 mydomain.com
        drwx------  2 root  root  4.0K Feb  1 16:12 cdrom
        drwx------  2 root  root  4.0K Feb  1 16:12 floppy
        drwxr-xr-x  1 root  root     0 Sep  1  2009 services
        drwxr-xr-x  1 root  root     0 Feb 10 15:08 www

    /etc/samba/smb.conf:

        [mnt]
        comment = Mount points
        writable = yes
        writeable = yes
        browseable = yes
        browsable = yes
        path = /mnt

    /etc/fstab:

        sshfs#[email protected]:/home/to/pub/dir/ /mnt/mydomain.com/ fuse comment=sshfs,noauto,users,exec,uid=0,gid=0,allow_other,reconnect,follow_symlinks,transform_symlinks,idmap=none,SSHOPT=HostBasedAuthentication 0 0

    For an easier read: [email protected] /home/to/pub/dir/ is mounted on /mnt/mydomain.com/ with options: comment=sshfs, noauto, users, exec, uid=0, gid=0, allow_other, reconnect, follow_symlinks, transform_symlinks, idmap=none, SSHOPT=HostBasedAuthentication.

    Help!
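
    A plausible reading of the ls output is that /mnt/mydomain.com is owned by the remote uid 68591 with no read/execute bits for "other", so whatever local user Samba impersonates cannot even traverse it. Two hedged options - the uid/gid and user name below are assumptions for illustration:

        # option 1: /etc/fstab -- present the sshfs mount as the local user Samba connects as
        #           (allow_other additionally needs user_allow_other enabled in /etc/fuse.conf)
        sshfs#[email protected]:/home/to/pub/dir/ /mnt/mydomain.com/ fuse \
            noauto,users,allow_other,reconnect,idmap=user,uid=1000,gid=1000 0 0

        # option 2: /etc/samba/smb.conf -- make Samba act as a fixed user on this share
        [mnt]
        comment = Mount points
        path = /mnt
        writable = yes
        browseable = yes
        force user = youruser
        force group = youruser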

    Read the article

  • Winamp playing sound but no video

    - by Greg Sansom
    I am having problems playing video in Winamp (the movie I am trying to play is an AVI - not sure if other formats work). I have installed the K-Lite Codec Pack, and the video does work in Winamp Classic. I can also play the video in Winamp on another machine (although I can't remember the exact configuration details of that machine - and I don't think they're relevant). There are a few symptoms:

        1. The content of the Video view is either empty, transparent, or displays rendering from other programs.

        2. Opening the Visualization view shows the following error:

            MILKDROP ERROR
            DirectX initialization failed (GetDeviceCaps). This means that no valid 3D-accelerated display adapter could be found on your computer. If you know this is not the case, it is possible that your graphics subsystem is unstable; please try rebooting your computer and then try to run the plugin again. Otherwise, please install a 3D-accelerated display adapter.

        3. Trying to open streams via the SHOUTcast TV plugin shows "Error opening video output", and the video does not open.

        4. Opening the file with WMC causes the following error (although the movie still plays):

            Error creating DX9 allocation presenter
            CreateDevice failed D3DERR_NOTAVAILABLE

    There are no warnings displayed in Device Manager, although the display adapter is the standard Windows one. Running DxDiag shows no problems (codec for video listed as XviD 1.1.2 Final). GSpot reports that codecs are installed.

    System specs:

        - Windows Server 2008 R2 Standard 64-bit, with latest updates
        - .NET 3.5.1 installed
        - Winamp v5.6.01 (latest version)
        - DirectX 11 (latest version)
        - K-Lite Codec Pack 7.0.0 (Full)
        - Machine is an HP DC7600 - full specs here

    Please comment if there is any more information which will help to diagnose the problem.

    Read the article

  • Thecus N5200, disk has dropped out of RAID5

    - by Anders Ekdahl
    We have a Thecus N5200 NAS here at work with five WD Caviar Black 2TB disks in a RAID5 array. Yesterday, disk 4 dropped out of the array, and in the NAS web interface there's a warning about the RAID array being "degraded". When I go into Storage - Disks, disks 1 and 4 have a warning next to them. When I click on the warnings, this information about the disks is displayed:

        Tray Number               4
        Model                     WD2001FASS-00W2B
        Power On Hours            2403 Hours
        Temperature Celsius       34
        Reallocated Sector Count  66
        Current Pending Sector    1447
        Raw Read Error Rate       61
        Seek Error Rate           0
        Hardware ECC Recovered    N/A

        Tray Number               1
        Model                     WD2001FASS-00W2B
        Power On Hours            2403 Hours
        Temperature Celsius       32
        Reallocated Sector Count  0
        Current Pending Sector    1465
        Raw Read Error Rate       0
        Seek Error Rate           0
        Hardware ECC Recovered    N/A

    I'm not really an expert on either disks or RAID arrays. Does this indicate that the fourth disk is damaged and needs to be replaced? And what about disk number one? It has a warning, but it's still in the array. Is it safe to add the fourth disk back into the array as a spare? I can't find any way to add it back as it was before.

    Read the article

  • NAS is intermittently inaccessible

    - by Natalie
    Model: QNAP TS-410 Turbo NAS
    Firmware version: 3.2.5 Build 0409T

    Issue: Each day, users connect to share folders on the NAS system and have read/write permissions for the share folders to which they need access. However, it often asks them for their log-in details and - when provided with right (or wrong) credentials for a user with read/write permissions - it denies them access.

    I've checked the logs and I keep seeing the following warnings:

        2011-11-23 16:26:29 System 127.0.0.1 localhost Re-launch process [rpc.mountd].
        2011-11-23 16:26:16 System 127.0.0.1 localhost Re-launch process [proftpd].
        2011-11-23 16:25:30 System 127.0.0.1 localhost Re-launch process [rpc.mountd].
        2011-11-23 16:25:15 System 127.0.0.1 localhost Re-launch process [proftpd].
        2011-11-23 16:24:33 System 127.0.0.1 localhost Re-launch process [rpc.mountd].
        2011-11-23 16:24:21 System 127.0.0.1 localhost Re-launch process [proftpd].
        2011-11-23 16:23:37 System 127.0.0.1 localhost Re-launch process [rpc.mountd].
        2011-11-23 16:23:25 System 127.0.0.1 localhost Re-launch process [proftpd].

    They seem to occur per minute but I am uncertain about whether or not they are relevant to this issue. The "Login failed" warning has also displayed in the system connection logs, which tells me when and which user was unable to log in, as shown below:

        2011-11-22 16:11:07 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail
        2011-11-22 16:11:07 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail
        2011-11-22 16:11:06 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail
        2011-11-22 13:46:14 administrator 192.168.0.yy --- HTTP Administration Login Fail
        2011-11-22 13:46:09 administrator 192.168.0.yy --- HTTP Administration Login Fail
        2011-11-21 15:17:22 user 192.168.0.zz computer-02 SAMBA --- Login Fail
        2011-11-21 15:17:18 user 192.168.0.zz computer-02 SAMBA --- Login Fail
        2011-11-21 15:17:17 user 192.168.0.zz computer-02 SAMBA --- Login Fail

    I've researched this on Google and the QNAP forums and have not come up with a resolution as yet.

    Read the article

  • ubuntu 10.04 + php + postfix

    - by mononym
    I have a server running:

        Ubuntu 10.04
        php 5.3.5 (fpm)
        Nginx

    I have installed postfix and set it to loopback-only (I only need to send). The problem is that it is not sending. If I issue (at the command line):

        echo "testing local delivery" | mail -s "test email to localhost" [email protected]

    I get the email no problem, but through PHP it does not arrive. When I send it via PHP, mail.log shows:

        Mar 28 10:15:04 host postfix/pickup[32102]: 435EF580D7: uid=0 from=<root>
        Mar 28 10:15:04 host postfix/cleanup[32229]: 435EF580D7: message-id=<20120328091504.435EF580D7@FQDN>
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: from=<root@FQDN>, size=1127, nrcpt=1 (queue active)
        Mar 28 10:15:04 host postfix/local[32230]: 435EF580D7: to=<root@FQDN>, orig_to=<root>, relay=local, delay=3.1, delays=3/0.01/0/0.09, dsn=2.0.0, status=sent (delivered to maildir)
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: removed

    Any help appreciated. My main.cf file:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = FQDN
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        #myorigin = $mydomain
        mydestination = FQDN, localhost.FQDN, , localhost
        relayhost = $mydomain
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        virtual_alias_maps = hash:/etc/postfix/virtual
        home_mailbox = mail/
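
    One hedged observation: the quoted log line actually shows a message addressed to root being accepted and delivered to the local maildir ("status=sent"), so Postfix itself is working; the open question is what recipient and sendmail binary PHP is handing the message to. Two quick checks - the php.ini path and test address below are assumptions:

        ; php.ini used by the FPM pool (path varies with a hand-built php-fpm)
        sendmail_path = /usr/sbin/sendmail -t -i

        <?php
        // mailtest.php -- run `php mailtest.php`, then watch /var/log/mail.log for the relay
        $ok = mail('[email protected]', 'php mail test', 'testing external delivery');
        var_dump($ok);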

    Read the article

  • Configuring SASL support in libmemcached

    - by John Keyes
    I'm trying to build libmemcached with SASL support on OS X Mountain Lion. I have built memcached (1.4.15) with SASL support:

        $ memcached -S -vv
        Initialized SASL.
        slab class   1: chunk size        96 perslab   10922
        ...
        slab class  42: chunk size   1048576 perslab       1
        <17 server listening (binary)
        <18 server listening (binary)
        <19 send buffer was 9216, now 3728270
        <20 send buffer was 9216, now 3728270
        <19 server listening (udp)
        <20 server listening (udp)
        ...

    I am trying to build libmemcached with SASL support too. I have tried the following:

        $ ./configure --prefix=/usr/local \
            --with-memcached-sasl=/usr/local/bin/memcached
        ...
        $ ./configure --prefix=/usr/local \
            --with-memcached-sasl="/usr/local/bin/memcached -S"
        ...

    But the resulting configuration summary is the same for both:

        Configuration summary for libmemcached version 1.0.11

          * Installation prefix:   /usr/local
          * System type:           apple-darwin12.2.0
          * Host CPU:              x86_64
          * C Compiler:            i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
          * C Flags:               -O2 -Werror -Wall -Wextra -std=c99 -Wbad-function-cast -Wmissing-prototypes -Wnested-externs -Woverride-init
          * C++ Compiler:          i686-apple-darwin11-llvm-g++-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
          * C++ Flags:             -O2 -Werror -Wall -Wextra -Wpragmas -D_FORTIFY_SOURCE=2 -Waddress -Wchar-subscripts -Wcomment -Wctor-dtor-privacy -Wfloat-equal -Wformat=2 -Wmissing-field-initializers -Wmissing-noreturn -Wnon-virtual-dtor -Wnormalized=id -Woverloaded-virtual -Wpointer-arith -Wredundant-decls -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wstrict-overflow=1 -Wswitch-enum -Wundef -Wunused-variable -Wwrite-strings -fwrapv -ggdb
          * CPP Flags:             -I/usr/local/include
          * Assertions enabled:    no
          * Debug enabled:         no
          * Warnings as failure:   no
          * SASL support:

    Am I doing something incorrectly? Thanks.
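
    A hedged note: --with-memcached-sasl appears to point the test suite at a SASL-enabled memcached binary rather than switching SASL support on in the library itself; the "SASL support" line in the summary is typically driven by a separate configure switch plus a usable libsasl2. The exact flag name should be confirmed against ./configure --help for 1.0.11:

        ./configure --help | grep -i sasl
        ./configure --prefix=/usr/local --enable-sasl \
            CPPFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib"
        make && make install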

    Read the article

  • Internal Code Signing: Key Distribution, or Certificate Server?

    - by Myrddin Emrys
    I should first note that we have nobody in IT with significant familiarity with self-signed certificates. We have a moderately sprawling network (one forest, many locations), and we are now rolling out internal code signing; until now users have run untrusted code, or we have even disabled(!) the warnings. Intranet applications, scripts, and sites will now be signed with self-signed certificates.

    I am aware of two obvious ways we can deploy this: distributing the keys directly via a group policy, or setting up a cert server. Can someone explain the trade-offs between these two methods?

        - How many certs before the group policy method is unwieldy?
        - Are they large enough that remote users will have issues?
        - Does the group policy method distribute duplicates on every login?
        - Is there a better method I am not aware of?

    I can find a lot of documentation on certificates and the various ways to create them, but I have not been able to find something that summarizes the difference between the distribution methods and what criteria make one or the other superior.
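
    One middle ground worth knowing about, as a hedged sketch: the signing root can be published into Active Directory once with certutil, after which domain-joined machines normally pick it up without per-certificate group policy objects; the same tool also covers the per-machine case. The certificate file name is a placeholder:

        rem publish the root into AD so domain members import it automatically
        certutil -dspublish -f CodeSigningRoot.cer RootCA

        rem or add it to the local machine's trusted roots (e.g. from a GPO startup script)
        certutil -addstore -f Root CodeSigningRoot.cer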

    Read the article

  • How to debug modsecurity_audit_log

    - by max87
    I was accessing www.example.com/RestAPI/index.php/tweets.json on my server. modsec_audit.log showed the following entry, but there are no related errors/warnings in modsec_debug.log. I can see the Internal Server Error is logged in example-error_log. How can I debug this Internal Server Error?

        --8560e90b-A--
        [21/Mar/2012:07:01:52 +0000] T2l84H8AAAEAAGxPZ@QAAAAG x.x.x.x 33101 x.x.x.x 80
        --8560e90b-B--
        GET /RestAPI/index.php/tweets.json HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Cookie: __utma=159129855.1463065063.1331789485.1331789485.1331789485.1; __utmz=159129855.1331789485.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); 8cb6a414cf5ec1919864de0e80bea4da=0es7dcu0p10cocfpferb2lddi0; 8926e4f3c475bb6fcacb409299f1bd27=53cf8c5e6bf78ea45096945377e6d609
        Connection: keep-alive
        Cache-Control: max-age=0
        --8560e90b-F--
        HTTP/1.0 500 Internal Server Error
        X-Powered-By: PHP/5.3.5
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=UTF-8
        --8560e90b-H--
        Apache-Handler: php5-script
        Stopwatch: 1332313312358005 130428 (- - -)
        Producer: ModSecurity for Apache/2.5.12 (http://www.modsecurity.org/); core ruleset/2.0.5.
        Server: Apache
        --8560e90b-Z--

    Read the article

  • MAMP Pro virtual hosts on Mountain Lion not being recognized

    - by user135242
    I'm running MAMP PRO 2.1.1 on OSX 10.8.1 and not having any luck getting Apache to recognize any hosts that have been set up in MAMP besides localhost. This only happens when trying to use port 80. When I switch to port 8888 everything runs fine. Mountain Lion's Apache has been disabled and I get no errors or warnings when starting MAMP and the servers. Any doc root I set for "localhost" runs without problem. Any other server name I define, however, results in "cannot connect" when viewing in Chrome (not to be confused with "cannot find" - the browser is in fact following /etc/hosts to 127.0.0.1, but MAMP's Apache is simply not responding) I was wondering if anyone else has run into this issue or knows how to solve it. I'm working on some WordPress development and it keeps wanting to redirect to the base URL (with no port reference) even during setup. I'm sure I could fix things from the WP side but I'd rather figure out what the root issue is with MAMP. Thanks in advance for any insight you can provide.
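
    Since port 8888 works and port 80 does not, the usual hedged suspects are another process already bound to port 80 or MAMP's Apache failing to start on the privileged port. A few quick checks (the log path is MAMP's default install location):

        sudo lsof -nP -iTCP:80 -sTCP:LISTEN     # who, if anyone, currently owns port 80
        sudo apachectl -S                       # confirm the built-in Apache really is out of the picture
        tail -f /Applications/MAMP/logs/apache_error.log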

    Read the article

  • How To Investigate/Restore MySQL Permissions? MySQL ERROR 1045 (28000): Access denied for user

    - by Recc
        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    This is on Debian. mysqld is supposedly listening on 3306; telnet to 3306 works. I also tried binding it specifically to localhost and then 127.0.0.1, which made no difference. However:

        # netstat -ln | grep mysql
        unix  2  [ ACC ]  STREAM  LISTENING  78993  /var/run/mysqld/mysqld.sock
        # mysql -P3306 -ptest
        ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    Things I've tried:

        - dpkg-reconfigure mysql-server-5.1 -- doesn't help
        - http://www.debian-administration.org/articles/442 -- doesn't help
        - This command (source):

            UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root';
            FLUSH PRIVILEGES;

          Doesn't help; in fact:

            Query OK, 0 rows affected (0.00 sec)
            Rows matched: 0  Changed: 0  Warnings: 0

    So might the user be deleted? Extremely unlikely, as all this started after a package update a colleague did and some separate services started screwing around, but my colleague said he removed the offenders.

    There's more: while # mysqld_safe --skip-grant-tables is running, one can access the data tables, but only with the valid passwords! So the users exist and some authentication takes place, hence the 0 rows affected above.

    Can the privilege tables be damaged somehow, and how can I recreate/restore them when my only way of getting a mysql console is to skip them? Can I be spared a reinstall of MySQL? Either way, I did get a dump of the DBs now that I could get in with the above mode.
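
    Since the server is reachable with --skip-grant-tables, it is possible to confirm whether a root@localhost row still exists before assuming the grant tables are damaged; "Rows matched: 0" above hints that this exact User/Host pair is gone. A hedged sketch to run inside that session:

        -- see which user/host rows actually exist
        SELECT User, Host, LENGTH(Password) FROM mysql.user;

        -- if root only exists for some other Host (e.g. the machine's hostname),
        -- reset it for every host it does have, then reload the grants
        UPDATE mysql.user SET Password = PASSWORD('MyNewPass') WHERE User = 'root';
        FLUSH PRIVILEGES;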

    Read the article

  • How to calibrate ASUS k52f battery on Ubuntu?

    - by cutalion
    I'm not sure if the problem is in software or if my battery is dying; I'll move my question to another forum if it's not a SO question.

    I have a problem with the battery on my laptop - an ASUS K52F. It shows incorrect information about capacity. When I unplug the charger it can work for some time, but then it will power off without any warning. Sometimes it will power off right after I unplug the charger. Here is some info I could get:

        > uname -a
        Linux alligator 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        > acpi -i
        Battery 0: Charging, 99%, 18:25:15 until charged
        Battery 0: design capacity 5235 mAh, last full capacity 69964 mAh = 100%

        > cat /sys/class/power_supply/BAT0/uevent
        POWER_SUPPLY_NAME=BAT0
        POWER_SUPPLY_STATUS=Charging
        POWER_SUPPLY_PRESENT=1
        POWER_SUPPLY_TECHNOLOGY=Li-ion
        POWER_SUPPLY_CYCLE_COUNT=0
        POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000
        POWER_SUPPLY_VOLTAGE_NOW=9246000
        POWER_SUPPLY_POWER_NOW=176000
        POWER_SUPPLY_ENERGY_FULL_DESIGN=**48400000**
        POWER_SUPPLY_ENERGY_FULL=**646822000**
        POWER_SUPPLY_ENERGY_NOW=**643588000**
        POWER_SUPPLY_MODEL_NAME=K52F-44
        POWER_SUPPLY_MANUFACTURER=ASUSTek
        POWER_SUPPLY_SERIAL_NUMBER=

    I noticed that POWER_SUPPLY_ENERGY_NOW and POWER_SUPPLY_ENERGY_FULL are greater than POWER_SUPPLY_ENERGY_FULL_DESIGN. I don't think that's ok :) I can run any additional commands.

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible:

        1. Set up a wildcard cert for *.domain.com.
        2. Whenever I need an image, generate a number based on a hash mod 5 of the filename, and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash will make any filename always use the same subdomain, so the browser should be able to cache the images.
        3. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img.

    I am looking for feedback about this plan. My concerns are:

        - Will I get warnings when my page has https:// links to multiple subdomains?
        - Is the dynamic virtualhost record I'm talking about even possible?
        - Considering the amount of processing this would require, is it likely to even produce any kind of overall benefit? I'm probably averaging a half-dozen images per page, with only half changing on each page refresh.

    Thanks in advance for your feedback.
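
    On the vhost side nothing dynamic is needed: a single virtual host can answer for all the image subdomains with a wildcard ServerAlias, and as long as every img host is covered by the *.domain.com certificate, cross-subdomain https:// links by themselves should not trigger browser warnings. A hedged Apache sketch with placeholder paths and certificate names:

        <VirtualHost *:443>
            ServerName  img1.domain.com
            ServerAlias img*.domain.com
            DocumentRoot /var/www/img

            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
        </VirtualHost>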

    Read the article

  • Power Outage Interrupted Upgrade from Windows Vista Ultimate to 7 Ultimate, Reverted to Vista, Now Vista is failing... What Next?

    - by tednewk
    I was in the midst of what seemed to be a successful upgrade from Vista Ultimate to 7 Ultimate when there was a brief blackout. The upgrade failed and Windows reverted back to Vista. Now Vista is very slow to boot, has problems waking back up from inactivity, and quickly loses its wireless connection. The wake-up problem manifests itself as the mouse clearly shown on a black screen, but with no access to the Desktop, Taskbar or Explorer. Even Ctrl-Alt-Delete doesn't seem to work: no task menu, no reboot. Hitting the reset button reboots the machine with the usual black-screen warnings offering Safe Mode.

    I tried to do a system restore to a point before the upgrade. That didn't seem to work. My guess is that my system is a mutant, with parts of Vista and parts of 7 crashing each other. I would like to avoid a clean install if at all possible, to avoid reinstalling other software. What should I try now? My thoughts are:

        1. Make a system back-up to lock the computer in place.
        2. Try a second 7 upgrade.
        3. If that appears to be working, make another back-up. If not, reload the back-up and try repairing Vista from DVD.
        4. If that appears to work, make another back-up, let the system stabilize for about a week, then try the 7 install again.
        5. If that doesn't work, are there any other options to try before settling for a clean install?

    Another complication: I am doing this by "remote control". I'm traveling for my job and I'll be talking my son through it over the phone. (Kind of like the "landing the 747" cliche from all the 70's adventure shows!) So is there a way of simplifying the steps? Thanks

    Read the article

  • Postfix connects to wrong relay?

    - by Eric
    I am trying to set up postfix on my Ubuntu server in order to send emails via my ISP's SMTP server. I seem to have missed something, because mail.log tells me:

        Jan 19 11:23:11 mediaserver postfix/smtp[5722]: CD73EA05B7: to=<[email protected]>, relay=new.mailia.net[85.183.240.20]:25, delay=6.2, delays=5.7/0.02/0.5/0, dsn=4.7.0, status=deferred (SASL authentication failed; server new.mailia.net[85.183.240.20] said: 535 5.7.0 Error: authentication failed: )

    The relay "new.mailia.net[85.183.240.20]:25" was not set up by me. I use "relayhost = smtp.alice.de". Why is postfix trying to connect to a different server? Here is my main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = mediaserver
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = mediaserver, localhost.localdomain, , localhost
        relayhost = smtp.alice.de
        mynetworks = 127.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        myorigin = /etc/mailname
        inet_protocols = all
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
        smtp_sasl_security_options = noanonymous

    Output of postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_size_limit = 0
        mydestination = mediaserver, localhost.localdomain, , localhost
        myhostname = mediaserver
        mynetworks = 127.0.0.0/8
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter =
        relayhost = smtp.alice.de
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_generic_maps = hash:/etc/postfix/generic
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_password
        smtp_sasl_security_options = noanonymous
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
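
    The relay choice has a documented explanation: when relayhost is written without brackets, Postfix resolves it via an MX lookup, and the MX for smtp.alice.de evidently points at new.mailia.net; square brackets suppress that lookup. The SASL map key then has to match the relayhost value exactly. A hedged sketch:

        # /etc/postfix/main.cf  (append :587 here and in the map key if the ISP wants the submission port)
        relayhost = [smtp.alice.de]

        # /etc/postfix/sasl_password -- key matches relayhost; credentials are placeholders
        [smtp.alice.de]    [email protected]:password

        postmap /etc/postfix/sasl_password
        postfix reload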

    Read the article

  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part I do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions:

        1. Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diff files will always be empty, and not worry about wasted resources?
        2. If I should keep excluding these files from the repo, is there a way that I can fix this option? I.e., add to my .hgrc file something that always appends an option like -I *.tex -I *.R to my 'hg add' commands?

    Thanks!
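
    Rather than remembering -X on every add, the static data files can be excluded once in the repository's .hgignore, which hg add and hg status both respect; files matched there are simply never scheduled for tracking, so the large-file RAM warning never comes up. A minimal sketch:

        # .hgignore at the repository root
        syntax: glob
        *.sqlite
        *.csv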

    Read the article

  • Configuring postfix with Gmail

    - by MultiformeIngegno
    This is what I did:

        sudo apt-get install postfix

    This is my /etc/postfix/main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        myhostname = tsXXX561.server.topcloud.it
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination =
        relayhost = [smtp.gmail.com]:587
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        default_transport = smtp
        relay_transport = smtp
        inet_protocols = all
        # SASL Settings
        smtp_use_tls=yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_tls_CAfile = /etc/postfix/cacert.pem

    Then I created the file /etc/mailname with my hostname as its content:

        tsXXX561.server.topcloud.it

    Then I created the file /etc/postfix/sasl_passwd:

        [smtp.gmail.com]:587 [email protected]:gmail_password

    Then:

        sudo postmap /etc/postfix/sasl/passwd
        sudo cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
        service postfix restart

    It still sends nothing... I'm on Ubuntu Server 12.04.
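
    One thing that stands out in the steps above, as a hedged observation: the credentials live in /etc/postfix/sasl_passwd (and main.cf points there), but postmap was run against /etc/postfix/sasl/passwd, so the hash map Postfix actually reads may never have been built. A quick re-check:

        sudo postmap /etc/postfix/sasl_passwd
        ls -l /etc/postfix/sasl_passwd.db          # should exist and be newer than sasl_passwd
        sudo chmod 600 /etc/postfix/sasl_passwd*
        sudo service postfix restart
        tail -f /var/log/mail.log                  # watch what Postfix does on the next send attempt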

    Read the article
