Search Results

Search found 9208 results on 369 pages for 'mail archive'.

Page 309/369 | < Previous Page | 305 306 307 308 309 310 311 312 313 314 315 316  | Next Page >

  • Set postfix to send email but not to receive them

    - by CodeShining
    I'm using Google Apps to handle personal email addresses for my domain name, and I set up the DNS as Google suggests. All works fine. Now, since I need an SMTP server to send emails from my e-commerce site, I installed Postfix on the server. It sends fine to any external address, but it won't send to addresses on my own domain. Let's say my domain is example.com and I set up Postfix with example.com: if I try to reset a password using [email protected], Postfix doesn't send the message and instead reports in mail.log:

        Sep 20 01:09:52 ip-10-54-26-162 postfix/pickup[6809]: B09A3415D8: uid=33 from=<www-data>
        Sep 20 01:09:52 ip-10-54-26-162 postfix/cleanup[6854]: B09A3415D8: message-id=<20120920010952.B09A3415D8@ip-10-54-26-162.eu-west-1.compute.internal>
        Sep 20 01:09:52 ip-10-54-26-162 postfix/qmgr[30978]: B09A3415D8: from=<[email protected]>, size=4234, nrcpt=1 (queue active)
        Sep 20 01:09:52 ip-10-54-26-162 postfix/local[6856]: B09A3415D8: to=<[email protected]>, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=5.1.1, status=bounced (unknown user: "myaccount")

    Of course it cannot find a local user "myaccount", since that account lives on Google Apps. How can I tell Postfix to hand the mail off instead of looking for a local user?
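    The usual cause is that Postfix lists the domain in mydestination and therefore delivers it locally instead of following the MX records. A minimal sketch of the fix in /etc/postfix/main.cf, assuming example.com should never be delivered on this box:

        # /etc/postfix/main.cf -- don't treat example.com as a local domain,
        # so mail for it is routed via DNS/MX (i.e. to Google Apps)
        mydestination = localhost.$mydomain, localhost

        # then reload the configuration:
        #   sudo postfix reload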

    Read the article

  • 530 5.7.1 Client was not authenticated Exchange 2010 for some computers within mask

    - by user1636309
    We have a classic problem with "Client not Authenticated", but with a specific twist. Our setup:

        We have an Exchange 2010 cluster, let's say EX01 and EX02; connections always go to smtp.acme.com and are switched through a load balancer.
        We have an application server, call it APP01, with clients connected to it.
        We need anonymous mail relay from both the clients and APP01.
        The Anonymous Users setting of the Exchange is DISABLED, but the specific computers (APP01, and clients by mask, let's say 192.168.2.*) are enabled.
        For internal relay, a "Send Connector" is created, and then the above IP addresses are added for the connector to allow computers, servers, or any other device such as a copy machine to use the Exchange server to relay email to recipients.

    The problem is that the relay works for APP01 and some clients, but not others (we get "Client not Authenticated"), all inside the same network and the same mask. This is basically what we do to test it outside of our application: http://smtp25.blogspot.sk/2009/04/530-571-client-was-not-authenticated.html So, I am looking for ideas: What could cause such strange behaviour? Where can I see a trace of what's going on on the Exchange side?
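    A hedged starting point for the tracing question: receive connectors write protocol logs once verbose logging is switched on, and comparing each connector's bindings against the failing client IPs often exposes the mismatch. A sketch in the Exchange Management Shell (the connector name "Relay" is illustrative):

        # Which connector answers which remote IP ranges, and with what auth?
        Get-ReceiveConnector | Format-List Name,Bindings,RemoteIPRanges,AuthMechanism,PermissionGroups

        # Turn on verbose protocol logging for the relay connector on both nodes
        Set-ReceiveConnector "EX01\Relay" -ProtocolLoggingLevel Verbose
        Set-ReceiveConnector "EX02\Relay" -ProtocolLoggingLevel Verbose
        # Logs land under ...\TransportRoles\Logs\ProtocolLog\SmtpReceive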

    Read the article

  • Best shortcut in Total Commander

    - by life-warrior
    So, what's your favourite TC shortcut or shortcut combination? Which ones do you use, and for what purpose? Among my most often used:

        Ctrl-Left (or Ctrl-Right) - open the archive or folder under the cursor in the opposite tab.
        Ctrl-Shift-Enter, Alt-F8, Ctrl-X - copy the full file path to the clipboard.
        Shift-F6, Shift-End (if needed), Ctrl-C - copy only the file name, without the path.
        Select files, Ctrl-M - multi-rename; for example, remove "DVDrip" from file names.
        Ctrl-\ - go to the root directory.
        Ctrl-D - go to the directory with the highlighted letter specified. For example, name a downloads directory "&Downloads" in favourites, and the letter after the ampersand will be highlighted.
        Alt-F7, feed to listbox, Ctrl-A, Mark (menu) - Save selection to file - creates a file listing everything inside the current directory, with full paths.
        Ctrl-[3-6] - sort files by name (3), extension (4), date (5), or size (6). For example: sort by name when you need movies and soundtracks with the same name but different extensions grouped together; by extension when you need to find EXEs in the Windows directory; by date when you need the latest file downloaded into your dir; by size when you need to delete the largest files to free space.

    Read the article

  • Keyboard's media keys are blocked by a program

    - by Mike Hanson
    I've got a Microsoft Natural Ergonomic Keyboard 4000. In addition to the regular keys, it's also got keys for Web/Home, Search, Mail, Favorites (5), Calculator, and Media functions (Mute, Volume Up/Down, and Play/Pause). Everything works most of the time, and the exception is rather odd. I use a programming system called Clarion. When that has focus, the Media keys don't work. (All the others still do.) I've also discovered that programs I create using Clarion also block the media keys (only when they have focus). This indicates that it's probably something in Clarion's Run-Time Library (RTL) that's causing the trouble. The keys will work if I click on a non-Clarion window before hitting the media key, but that's an undesirable hassle. The odd thing is that I have many colleagues with the same keyboard, and they have no problem. When I recently upgraded from Vista Professional to Win7 Ultimate, I noticed that various things "appear" differently. For example, on my old system, when I changed or muted the volume, the volume-bar visualization always appeared at the bottom right of the screen. Now it doesn't appear in certain programs, even when the keys work. This indicates an order of precedence for visual elements. I'm fairly certain a similar order of precedence exists for keyboard hooks. Depending on how the hooks are defined, and the order in which they're applied, it would seem that sometimes the IntelliType drivers don't see the media keystrokes. The Media keys probably behave differently from the rest of the "special" keys, because they are more of a standard across all keyboards, so perhaps they are handled by a different driver hooking mechanism. Does anyone have any suggestions of how I might fix this problem? Is there some way to change the order of hooks? Delay the loading of the IntelliType driver? Thanks in advance!
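    One way to test the hook-order theory is to install your own low-level keyboard hook and watch whether the media keystrokes still arrive while Clarion has focus. A minimal diagnostic sketch in Python (assumes Windows and the standard ctypes module; the virtual-key values are the documented VK_VOLUME_*/VK_MEDIA_* codes):

        import ctypes
        import ctypes.wintypes as wt

        user32 = ctypes.windll.user32
        kernel32 = ctypes.windll.kernel32

        WH_KEYBOARD_LL = 13
        WM_KEYDOWN = 0x0100
        MEDIA_KEYS = {0xAD: "VOLUME_MUTE", 0xAE: "VOLUME_DOWN",
                      0xAF: "VOLUME_UP", 0xB3: "MEDIA_PLAY_PAUSE"}

        # LRESULT CALLBACK LowLevelKeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
        HOOKPROC = ctypes.WINFUNCTYPE(ctypes.c_long, ctypes.c_int, wt.WPARAM, wt.LPARAM)

        def hook_proc(nCode, wParam, lParam):
            if nCode >= 0 and wParam == WM_KEYDOWN:
                # the first DWORD of KBDLLHOOKSTRUCT is the virtual-key code
                vk = ctypes.cast(ctypes.c_void_p(lParam), ctypes.POINTER(wt.DWORD))[0]
                if vk in MEDIA_KEYS:
                    print("media key seen:", MEDIA_KEYS[vk])
            return user32.CallNextHookEx(None, nCode, wParam, lParam)

        callback = HOOKPROC(hook_proc)
        hook = user32.SetWindowsHookExW(WH_KEYBOARD_LL, callback,
                                        kernel32.GetModuleHandleW(None), 0)

        msg = wt.MSG()  # message pump, required for a low-level hook to fire
        while user32.GetMessageW(ctypes.byref(msg), None, 0, 0) > 0:
            user32.TranslateMessage(ctypes.byref(msg))
            user32.DispatchMessageW(ctypes.byref(msg))

    If this script stops printing while a Clarion window has focus, the RTL really is swallowing the events before they reach other hooks; if it keeps printing, the problem is downstream in the IntelliType handling.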

    Read the article

  • Why is my Drupal Registration email considered spam by gmail? (headers included)

    - by Jasper
    I just created a Drupal website on a uni.cc subdomain that is brand-new as well (it has barely had the 24 hours to propagate). However, when signing up for a test account, the confirmation email was marked as spam by gmail. Below are the headers of the email, which may provide some clues.

        Delivered-To: *my_email*@gmail.com
        Received: by 10.213.20.84 with SMTP id e20cs81420ebb; Mon, 19 Apr 2010 08:07:33 -0700 (PDT)
        Received: by 10.115.65.19 with SMTP id s19mr3930949wak.203.1271689651710; Mon, 19 Apr 2010 08:07:31 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bat.unixbsd.info (bat.unixbsd.info [208.87.242.79]) by mx.google.com with ESMTP id 12si14637941iwn.9.2010.04.19.08.07.31; Mon, 19 Apr 2010 08:07:31 -0700 (PDT)
        Received-SPF: pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) client-ip=208.87.242.79;
        Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) [email protected]
        Received: from nobody by bat.unixbsd.info with local (Exim 4.69) (envelope-from <[email protected]>) id 1O3sZP-0004mH-Ra for *my_email*@gmail.com; Mon, 19 Apr 2010 08:07:32 -0700
        To: *my_email*@gmail.com
        Subject: Account details for Test at YuGiOh Rebirth
        MIME-Version: 1.0
        Content-Type: text/plain; charset=UTF-8; format=flowed; delsp=yes
        Content-Transfer-Encoding: 8Bit
        X-Mailer: Drupal
        Errors-To: info -A T- yugiohrebirth.uni.cc
        From: info -A T- yugiohrebirth.uni.cc
        Message-Id: <[email protected]>
        Date: Mon, 19 Apr 2010 08:07:31 -0700
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - bat.unixbsd.info
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [99 500] / [47 12]
        X-AntiAbuse: Sender Address Domain - bat.unixbsd.info
        X-Source:
        X-Source-Args: /usr/local/apache/bin/httpd -DSSL
        X-Source-Dir: gmh.ugtech.net:/public_html/YuGiOhRebirth
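    Two details in those headers commonly hurt gmail scoring, offered with hedging since the filter is opaque: the From/Errors-To values ("info -A T- yugiohrebirth.uni.cc") are not syntactically valid addresses, and the SPF "pass" is only a best-guess record for the shared host, not a record the site's own domain publishes. Quick checks from a shell, using standard dig invocations:

        # Does the sending domain publish its own SPF record?
        dig +short TXT yugiohrebirth.uni.cc
        # Does the server's IP reverse-resolve to something related to the domain?
        dig +short -x 208.87.242.79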

    Read the article

  • Autodiscover service seems to reply with User Principal Name instead of email address

    - by Jeff McJunkin
    After this latest round of Windows updates (on 1/11/11, in fact) my Exchange 2007 server of course rebooted. This may have had the side effect of making any changes I'd inadvertently made take effect. Since then, the Autodiscover service in Exchange 2007 from Outlook 2007 seems to reply with the User Principal Name ([email protected] instead of [email protected]). I'm specifically seeing this from within the "Test Email AutoConfiguration" tool in Outlook (the UPN appears in the first text box labeled "E-mail") and when creating a new profile in Outlook. If I disregard the UPN and instead fill in my email address, Autodiscover works as expected and I can connect without issue. I've confirmed using ADSI Edit that the SMTP email address is properly set for my users. I even went a bit crazy and set the UPN to the email address using ADSI Edit. I've re-installed the Client Access role on the server in question. Exchange server is Server 2008, 64-bit of course. Clients are mostly XP 32-bit, though the issue happens from a Windows 7 machine as well.
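    For narrowing down where the bad value comes from, Exchange 2007 ships a couple of relevant shell probes; a sketch (identity values are illustrative):

        # What the Autodiscover/Availability endpoints actually return for a mailbox
        Test-OutlookWebServices -Identity jeff@example.com | Format-List

        # Which URL clients are pointed at via the Service Connection Point in AD
        Get-ClientAccessServer | Format-List Identity,AutoDiscoverServiceInternalUri

    Comparing the XML Outlook shows in "Test Email AutoConfiguration" against these outputs should show whether the UPN is coming from the Autodiscover response itself or from the client-side lookup.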

    Read the article

  • Notebook Operating System with extreme support cycles/security updates

    - by leto
    Hello there, after reading the announcements about Mac OS X "Lion" and Apple's policy decisions, I've had enough. I'm a longtime Apple user, since 1992, and have always felt at home there, but I've been trying to switch to an alternative operating system for a year. I've also been working with Unix machines since 2001, so I'm looking at one of the free Unices or a Linux. Since I last looked at the desktop in 2002, *choke*, much has changed, it seems. So I'm lost once more in the war between desktop environments and software. To be honest: I don't care what its name is, I want to get my job done. Here's what I set as landmarks for an operating system/software to be considered:

        Has to be at least four years old
        Has to supply security updates for the current release for at least a year
        Production-quality stability for the whole desktop environment (!)
        No f****g commercial stuff that tends to supply me with privacy-invading App Store or Cloud space

    So far I'm running a MacBook from 2007, 4 GB memory, 250 GB disk, and I need:

        IMAPS for mail, used since 1995
        Web browser
        Shell
        Keeping current with updates/upgrades with no more than 5 minutes spent entering commands (makes it hard for OpenBSD ;-) )
        A desktop file manager would be nice, but is a bonus.

    What can you suggest as an operating system? The one with the longest support cycles and best chance of surviving the next 10 years will win a new user, who'll even send patches when needed :-) Greets

    Read the article

  • Recommendations for handling Directory Harvesting spam on Exchange 2003

    - by Aaron Alton
    Our Exchange server is getting slammed with anywhere between 450,000 and 700,000 spam messages per day. We receive about 1,700 legitimate messages in the same time frame. Roughly 75% of the spam is directory harvesting. We currently have GFI MailEssentials installed. To its credit, it's doing a very good job, but the sheer volume of spam we're receiving, and the number of connections our Exchange server is making, is preventing legitimate email from being delivered in a timely manner. GFI is set up to check for directory harvesting at the SMTP level, which I presume intercepts the mail before it hits the Exchange services, or goes through SMSE. This "module" is ordered at the top of the list, so (hopefully) dealing with the harvesting consumes a minimum of server resources and bandwidth. My question is: is there anything I can do to prevent our Exchange server's connection pool from being eaten up by these spam hosts? We had to limit the number of concurrent connections being made by Exchange, because it was consuming all of our bandwidth. Thanks, in advance.
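    One server-side lever worth knowing about, offered as a hedged suggestion rather than a GFI-specific fix: Exchange 2003 supports recipient filtering combined with SMTP tarpitting (documented in Microsoft KB 842851), which delays the 5.x.x responses harvesters depend on and makes the attack far less economical for the sender. The registry value looks like this (delay in seconds):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\SMTPSVC\Parameters" ^
            /v TarpitTime /t REG_DWORD /d 5
        net stop SMTPSVC && net start SMTPSVC

    Recipient filtering must be enabled as well, so invalid RCPT TO addresses are rejected at the SMTP stage rather than accepted and bounced later.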

    Read the article

  • Trouble getting latest version of Git

    - by TheMethod
    I am using Ubuntu 10.04 LTS. I'm looking at using git as source control for personal projects and GitHub as a remote repository. I was having trouble pushing a commit to my remote GitHub repo, getting the following error message:

        The requested URL returned error: 403 while accessing https://github.com/Jstall/helloworld.git/info/refs

    When I did some digging I found that the problem could be that I don't have the latest version of git. When I did a --version I found that I have version 1.7.0.4 locally. So I tried to update git using:

        sudo apt-get install git

    but I get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package git is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package git has no installation candidate

    I've tried running:

        sudo apt-get update

    and trying again, but it didn't seem to make a difference. I'm not sure if it's relevant, but I'm also getting a couple of 404s when I run update:

        Err http://wine.budgetdedicated.com edgy/main Packages  404 Not Found
        Fetched 4,117B in 0s (5,142B/s)
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/edgy/universe/binary-i386/Packages.gz  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://wine.budgetdedicated.com/apt/dists/edgy/main/binary-i386/Packages.gz  404 Not Found

    I'm not sure what I should try next. Could anyone suggest a course of action to get this resolved? Any advice would be appreciated. Thanks much!
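    A sketch of the likely fix, hedged because it depends on what is actually in sources.list: on 10.04 the package is named git-core (plain "git" only became the package name in later releases), and the dead edgy repository lines deserve removal regardless:

        # comment out the old 'edgy' lines in /etc/apt/sources.list and
        # /etc/apt/sources.list.d/*, then:
        sudo apt-get update
        sudo apt-get install git-core       # package name on Lucid

        # optional, for a git newer than 1.7.0.4:
        sudo add-apt-repository ppa:git-core/ppa   # needs python-software-properties
        sudo apt-get update && sudo apt-get install git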

    Read the article

  • Motion - takes snapshot without motion detected

    - by Emmanuel Brunet
    I've installed the standard motion 3.2.12 package on Debian 7.5. I would like to get snapshots ONLY when motion is detected, but it still saves a picture every second without any activity in front of the camera. I'm using a TENVIS JPT3815W IP camera. Here is my configuration file (motion.conf):

        setup_mode off
        target_dir /media/videos/log/webcam
        netcam_url http://webcam/snapshot.cgi
        netcam_tolerant_check on
        netcam_userpass admin:alpha1237
        # Output frames at 1 fps when no motion is detected and increase to the
        # rate given by webcam_maxrate when motion is detected (default: off)
        webcam_motion off
        output_all off
        # detection settings 1-255 default 32
        noise_level 50
        # Maximum framerate for webcam streams (default: 1)
        webcam_maxrate 25
        pre_capture 0
        framerate 25
        gap 30
        locate on
        mail [email protected]
        text_right "FRONT CAMERA %Y/%m/%d - %T"
        text_double on
        ffmpeg_cap_new on
        ffmpeg_cap_motion on
        ffmpeg_video_codec mpeg4
        output_motion off
        snapshot_interval 0
        # Quality of the jpeg (in percent) images produced (default: 50)
        quality 90
        # Restrict webcam connections to localhost only (default: on)
        webcam_localhost off
        # Limits the number of images per connection (default: 0 = unlimited)
        # Number can be defined by multiplying actual webcam rate by desired number of seconds
        # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
        webcam_limit 0

    The issue: when I start motion, images are stored in /media/videos/log/webcam nearly every second. I just want images when motion is detected, plus the corresponding video clip. Any idea where the configuration fails?
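    A hedged guess at the cause: many IP cameras burn a changing clock into every frame, and a timestamp that updates each second is itself "motion" to the detector. The usual cure is a mask excluding that region, plus requiring consecutive motion frames. A sketch (mask.pgm is a hypothetical image at the camera's resolution, white where motion should be analysed, black where ignored):

        # additions to motion.conf
        mask_file /etc/motion/mask.pgm    # black out the camera's on-image clock
        minimum_motion_frames 2           # need 2 consecutive motion frames before saving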

    Read the article

  • Mysqld increases the load on the CPU and drops after flush-tables

    - by mirage
    Please advise on this issue. The normal load on the CPU is 20-30% us + sy. After restoring the database files from the slave server (same version), a periodic problem began: mysql starts loading the CPU at 100% (us + sy grows proportionally), the queue grows, and everything slows down. But after mysqladmin flush-tables things are back to normal for a few hours. Dedicated Linux server running MySQL: 2 x E5506, 24 GB RAM, database size 50 GB.

        [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 33G (Tables: 1474)
        [--] Data in InnoDB tables: 1G (Tables: 4)
        [--] Data in MEMORY tables: 120K (Tables: 3)
        [--] Reads / Writes: 91% / 9%
        [--] Total buffers: 12.8M per thread and 7.1G global
        [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    The server handles 4000 - 5500 rps. Relevant my.cnf settings:

        key_buffer = 1536M
        max_allowed_packet = 2M
        table_cache = 4096
        sort_buffer_size = 409584
        read_buffer_size = 128K
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 500
        query_cache_size = 100M
        thread_concurrency = 24
        max_connections = 700
        tmp_table_size = 4096M
        join_buffer_size = 4M
        max_heap_table_size = 4096M
        query_cache_limit = 1M
        low_priority_updates = 1
        concurrent_insert = 2
        wait_timeout = 30
        server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_buffer_pool_size = 1536M
        innodb_log_buffer_size = 4M
        innodb_flush_log_at_trx_commit = 2

    How do I solve this problem?
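    A hedged observation plus a diagnostic sketch: tmp_table_size and max_heap_table_size of 4096M mean a single bad query can build multi-gigabyte in-memory temp tables per connection, and a 100M query cache under 4,000+ rps is a classic source of lock churn that flush-tables can temporarily mask. These standard status counters, watched while the CPU climbs, usually point at the culprit:

        SHOW GLOBAL STATUS LIKE 'Created_tmp%';      -- temp tables: memory vs on-disk
        SHOW GLOBAL STATUS LIKE 'Opened_tables';     -- table cache pressure
        SHOW GLOBAL STATUS LIKE 'Qcache%';           -- query cache churn / prunes
        SHOW FULL PROCESSLIST;                       -- what the busy threads are doing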

    Read the article

  • Ubuntu purple splash screen with blinking pixels?

    - by joxnas
    I had Ubuntu 9.10 and upgraded to 10.04 after solving some problems (a freeze at boot). Since then, I don't get the Ubuntu logo when I boot, but a purple screen with some blinking pixels. I didn't care much about it... but today my computer stayed at that screen far too long (normally it was just 1/4 second, but today it was like a minute). And it happened like 4 or 5 times in a row (only at the 5th time did I realise that it was not freezing up, it simply took more time). After a reboot it is again 1/4 second of purple screen, but I don't want this problem to return, so I want to get rid of the purple screen (I think it is an indicator of the problem). I have already installed the graphics drivers (going to System > Administration > Hardware Drivers), but it didn't solve anything. (I don't know if it is even related.) I searched on Google and found something old (2006) that I think may be related to my problem: http://ubuntuforums.org/archive/index.php/t-294692.html -- but I couldn't understand the conversation (I'm a Linux novice). Sorry for my horrible English. I would appreciate any help! My hardware: ATI Mobility Radeon HD 4650, P7450 2.13GHz Core 2 Duo.
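    One workaround that was widely suggested for 10.04, hedged because splash behaviour depends on the video driver: with the proprietary ATI/NVIDIA drivers, plymouth has no kernel framebuffer to draw on and degrades to exactly this kind of flat purple screen. Forcing a framebuffer into the initramfs often restores the logo:

        echo FRAMEBUFFER=y | sudo tee /etc/initramfs-tools/conf.d/splash
        sudo update-initramfs -u
        # reboot and check whether the plymouth logo is back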

    Read the article

  • SQL Server Offsite Backups

    - by Eric Maibach
    We have about 1 TB of SQL Server databases, and these databases generate about 200 GB of data changes each day. Up to this point we have been doing weekly full backups, daily diff backups, and hourly transaction log backups. The full and diff backups are backed up to tape and taken offsite each day. We have been trying to move away from tapes, and our IT department purchased a Barracuda Backup device that backs up data and then sends it offsite using our internet connection. I have been trying to get this to work for our SQL Server backups, and have run into a number of problems. I normally like to just use SQL Server to perform backups instead of trying to use an agent, so that is what I tried first. However, the Barracuda device was not able to dedup these files very well, so it ended up being too much data to try to send offsite and to archive. I then tried installing the Barracuda agent and using it to back up the SQL Server databases. The problem there is that on some of the database servers I also have files that need to be backed up, and I cannot find a way to create separate backup schedules for the file backups and the SQL Server backups. Barracuda only does full or transaction log backups. So if I want to do hourly transaction log backups I end up doing a file system backup every hour (which is not good), or if I only schedule the backups to run once a night I either have to do a full backup every night, or only do a transaction backup once a day. None of these scenarios are good options. My question is: how is everyone else getting their large SQL Server database backups offsite? Are you just using tape, or have you found an offsite backup device that works well? Is anybody else using Barracuda to back up their SQL Server databases? If you do, then how do you have it set up?
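    One hedged suggestion on the dedup side, since it often decides whether an appliance like this is viable at all: compressed (or encrypted) backup streams look like random data and dedup terribly, so if backup compression was enabled, plain uncompressed native backups written to a share the appliance ingests may fare much better. A sketch, with illustrative database name and path:

        BACKUP DATABASE [Sales]
            TO DISK = N'\\backuphost\sql\Sales_full.bak'
            WITH INIT, CHECKSUM;   -- no COMPRESSION: keeps the stream dedup-friendly
        BACKUP LOG [Sales]
            TO DISK = N'\\backuphost\sql\Sales_log.trn'
            WITH INIT, CHECKSUM;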

    Read the article

  • How to determine the Kerberos realm from an LDAP directory?

    - by tstm
    I have two Kerberos realms I can authenticate against. One of them I can control, and the other one is external from my point of view. I also have an internal user database in LDAP. Let's say the realms are INTERNAL.COM and EXTERNAL.COM. In LDAP I have user entries like this:

        1054 uid=testuser,ou=People,dc=tml,dc=hut,dc=fi
        shadowFlag: 0
        shadowMin: -1
        loginShell: /bin/bash
        shadowInactive: -1
        displayName: User Test
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        uidNumber: 1059
        shadowWarning: 14
        uid: testuser
        shadowMax: 99999
        gidNumber: 1024
        gecos: User Test
        sn: Test
        homeDirectory: /home/testuser
        mail: [email protected]
        givenName: User
        shadowLastChange: 15504
        shadowExpire: 15522
        cn: User.Test
        userPassword: {SASL}[email protected]

    What I would like to do, somehow, is to specify on a per-user basis which authentication server / realm the user is authenticated against. Configuring Kerberos to handle multiple realms is easy. But how do I configure other pieces, like PAM, to handle the fact that some users are from INTERNAL.COM and some from EXTERNAL.COM? There needs to be an LDAP lookup of some kind where the realm and the authentication name are fetched, and then the actual authentication itself. Is there a standardized way to add this information to LDAP, or to look it up? Are there some other workarounds for a multi-realm user base? I might be OK with a single-realm solution, too, as long as I can specify the user name - realm combination for each user separately.
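    A sketch of the multi-realm plumbing, assuming saslauthd (or pam_krb5) does the Kerberos legwork: the {SASL}user@REALM value already stored in userPassword carries the per-user realm, so krb5.conf mostly just needs to know how to reach both realms (the KDC names below are placeholders):

        [realms]
            INTERNAL.COM = {
                kdc = kdc.internal.com
            }
            EXTERNAL.COM = {
                kdc = kdc.external.com
            }

        [domain_realm]
            .internal.com = INTERNAL.COM
            .external.com = EXTERNAL.COM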

    Read the article

  • Understanding encryption Keys

    - by claws
    Hello, I'm really embarrassed to ask this question, but the fact is that I don't know anything about encryption. I always avoided it. I don't understand the concept of encryption keys (public key, private key, RSA key, DSA key, PGP key, SSH key & what not). I encounter these on a regular basis, but as I said, I always avoided them. Here are a few instances where I encountered them:

        Creating an account: "A public RSA or DSA key will be needed for an account. Send the key along with your desired account name to [email protected]." I really don't know what RSA/DSA are, or how to get their keys. Do I need to register somewhere for that?
        Mailing: I'm unable to recall exactly, but I've seen some mails with attachments like a signature, or the mail footer will have something called a PGP signature, etc. I really don't get the concept.
        Git version control: I created an account at assembla.com (for a private git repo) and it asked me to enter "SSH keys" in my profile. Where am I going to get these? Why do I need them? Isn't SSH related to remote login (like remote desktop or telnet)? How are these two SSHs related, and how do they differ?

    I don't know in how many more situations I'm going to encounter these things. I'm really confused and have no clue where to start & how to proceed to learn these things. Kindly point me in the correct direction. Note: I have absolutely zero interest in encryption-related topics, so there is no way I'm going to read a graduate-level book on this subject. I just want to clear up my concepts without going into much depth.
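    The practical answer to "where do I get a key" is that you generate a keypair yourself and hand out only the public half; there is no registry to sign up with. The usual commands (the e-mail in the comment is illustrative):

        ssh-keygen -t rsa -b 4096 -C "you@example.com"
        # -> private key ~/.ssh/id_rsa (keep secret), public key ~/.ssh/id_rsa.pub
        #    the .pub file is what you paste into assembla/GitHub or mail to an admin

        gpg --gen-key
        # -> an OpenPGP keypair; the private half signs mail, producing the
        #    "PGP signature" blocks you have seen in footers and attachments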

    Read the article

  • DNS propagation delay or bad configuration?

    - by Javier Martinez
    I have been waiting for DNS propagation for almost 24 hours. I'm not impatient, but I want to know if I configured my zone well or whether there is an error in it. I think it is good, because if I use my own DNS server as my secondary resolver, I can resolve and look up the host fine.

        ;
        ; BIND data file for mydomain.net
        ;
        $TTL 86400
        @       IN      SOA     mydomain.net. mydomain.net. (
                                20120629        ; Serial
                                10800           ; Refresh 3 hours
                                3600            ; Retry 1 hour
                                604800          ; Expire 1 week
                                86400 )         ; Negative Cache TTL
        ;
        @       IN      NS      ns1
        @       IN      NS      ns2
                IN      MX      10 mail
        ns1     IN      A       5.39.X.Y
        ns2     IN      A       5.39.X.Z

    There are no errors in /var/syslog about the bind daemon. Is everything correct? Do I only need to wait up to 48 hours for full DNS propagation? My nslookup from a remote machine, using the bind host as nameserver:

        $ nslookup mydomain.net
        Server:   bind-host-ip
        Address:  bind-host-ip#53

        Name:     mydomain.net
        Address:  domain-ip
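    Two things stand out in that zone, offered as hedged observations rather than the cause of the delay: the second SOA field (RNAME) is meant to be the zone contact written as a dotted mailbox, not the domain name repeated, and the MX points at mail, which has no A record in the zone shown. A sketch of the corrected lines:

        @       IN SOA  ns1.mydomain.net. admin.mydomain.net. (
                        20120630 ; Serial - bump it on every change
                        ... )
        mail    IN A    5.39.X.Y    ; the MX target needs an address record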

    Read the article

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ GB gzipped files, between Linux servers via ssh. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform & load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp then archive as it consumes them. So, my requirements & desires:

        It is never used to refresh updated files - only to deliver new files.
        Because it's delivering files to downstream processes, it needs to rename the file once done so that a partial file doesn't get picked up.
        In order to simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory.
        If the transfer fails (network down, file system full, permissions, file locked, etc.), it should retry periodically - and never fail in a non-recoverable way, or in a way that sends the file twice or never sends it.
        Should be able to copy files to 2+ destinations.
        Should have a consolidated log so that it's easy to find problems.
        Should have an optional checksum feature.

    Any recommendations? Can Unison do this well?
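    Whatever tool wins, the delivery contract it must honour is easy to state in shell terms. A sketch of the write-hidden-then-rename pattern with hypothetical hosts and paths (a rename on the destination filesystem is atomic, so the downstream daemon never sees a partial file):

        for f in /data/outbox/*.gz; do
            base=$(basename "$f")
            rsync -a "$f" "transform01:/data/inbox/.incoming.$base" &&
              ssh transform01 "mv /data/inbox/.incoming.$base /data/inbox/$base" &&
              mv "$f" /data/outbox/sent/    # keep the source copy out of the scan path
        done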

    Read the article

  • Outlook VBA script - find and replace text with image

    - by user2530616
    I have an e-commerce store. When I get a sale, I receive an order confirmation email which contains the name of the product sold. When the email comes through, I would like to run a script that replaces the product name, e.g. "red widget", with a picture of that product. Is that possible? I have found similar code that replaces text (a set of numbers in this case) with a link, but I need it to replace the text with a picture instead.

        Option Explicit
        Sub InsertHyperLink(MyMail As MailItem)
            Dim body As String, re As Object, match As Variant
            body = MyMail.body
            Set re = CreateObject("vbscript.regexp")
            re.Pattern = "#[0-9][0-9][0-9][0-9][0-9][0-9]"
            For Each match In re.Execute(body)
                body = Replace(body, match.Value, _
                    "http://example.com/bug.html?id=" & Right(match.Value, 6), _
                    1, -1, vbTextCompare)
            Next
            MyMail.body = body
            MyMail.Save
        End Sub

    Example mail:

        Order Confirmation
        Thanks for shopping with us today!
        ------------------------------------------------------
        Order Number: 2209
        Date Ordered: Friday 28 June, 2013
        Products
        ------------------------------------------------------
        1 x red widget = $5.00
        ------------------------------------------------------
        Total: $0.00
        Delivery Address
        xxx

    Search text: "red widget". Replacement picture: redwidget.jpg
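    An untested sketch of one approach: a plain-text body cannot hold an image, so the script has to switch the item to HTML and splice in an <img> tag (the image URL pattern here is hypothetical):

        Sub InsertProductImage(MyMail As MailItem)
            Dim html As String
            MyMail.BodyFormat = olFormatHTML        ' convert the plain-text item to HTML
            html = MyMail.HTMLBody
            html = Replace(html, "red widget", _
                "<img src=""http://example.com/images/redwidget.jpg"" alt=""red widget"">", _
                1, -1, vbTextCompare)
            MyMail.HTMLBody = html
            MyMail.Save
        End Sub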

    Read the article

  • Plesk: Spamassassin ignores emails to redirected accounts

    - by Mat
    When I set up email redirects within Plesk 9.5, SpamAssassin ignores all emails sent to the redirected address and only scans emails sent directly to the address which has a mailbox.

    Steps to reproduce: Set up two mail accounts: [email protected] as a proper email account with a mailbox, and [email protected] with all emails redirected to [email protected]. (It doesn't make a difference if [email protected] has a mailbox enabled or not.)

        [email protected] -> [email protected]

    Set up the spam filter on both accounts. I set mine to delete spam right away, but you can just keep the default ("mark as spam"). Now, when you send an email to [email protected], it will have SpamAssassin's tags in the email header, but when you send emails to [email protected], they will end up in the same mailbox, will have no SpamAssassin tags in the header, and will not have been scanned.

    Other notes: I am using Plesk 9.5.4 on Ubuntu 8.04 LTS with the default Qmail. I've observed this bug since Plesk 8, but I can't stand it any more and would appreciate any hack or fix.

    Read the article

  • SMTP server closes connection unexpectedly

    - by janin
    I'm writing a python program to send emails, and when trying to send to yopmail, hotmail and some other hosts, the connection gets closed by the server without a message. I tried connecting directly with netcat and the same thing happens. Here's what the exchange looks like:

        $ nc smtp.yopmail.com 25
        220 mx.yopmail.com ESMTP ***
        ehlo mx.myhost.com
        250 SIZE 2048000
        mail FROM:<[email protected]>
        250 OK
        rcpt TO:<[email protected]>

    The connection is just closed abruptly at this point. On other hosts, like my ISP's, everything goes fine. I've checked the blacklists, but my IP is not listed. Any idea what's going on?

    Edit: My IP is not listed in any blacklist. I own myhost.com, but I don't have an SPF record. I'll add one and update this post when the record has propagated.

    Edit 2: With the SPF record added, the email is now accepted, and Hotmail adds an "Authentication-Results: hotmail.com; sender-id=pass" header to the email. However, it gets classified as spam, but I guess that's another matter. Thanks for your help.
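    For reference, the kind of record that resolved this is a single TXT entry on the sending domain; a sketch with a placeholder address (substitute the server's real outbound IP):

        myhost.com.   IN TXT   "v=spf1 a mx ip4:203.0.113.10 ~all"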

    Read the article

  • IIS SMTP server (Installed on local server) in parallel to Google Apps

    - by shaharru
    I am currently using the free version of Google Apps for hosting my email. It works great for my official mail; my email on Google is [email protected]. In addition, I'm sending out high-volume mail (registrations, forgotten passwords, newsletters, etc.) from the website (www.mydomain.com) using the IIS SMTP service installed on my Windows machine. These emails are sent from [email protected]. My problem is that when I send email from the website using IIS SMTP to a mail address [email protected], I don't receive the email in Google Apps. (I only receive these emails if I install a POP service on the server with the [email protected] mailbox.) It seems that IIS SMTP is ignoring the domain's MX records and just delivers these emails to my local server. Here are my DNS records for mydomain.com:

        mydomain.com  A    82.80.200.20  3600s
        mydomain.com  TXT  v=spf1 ip4: 82.80.200.20 a mx ptr include:aspmx.googlemail.com ~all
        mydomain.com  MX   preference: 10  exchange: aspmx2.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx3.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx4.googlemail.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx5.googlemail.com  3600s
        mydomain.com  MX   preference: 1   exchange: aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 5   exchange: alt1.aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 5   exchange: alt2.aspmx.l.google.com  3600s

    Please help! Thanks.

    Read the article

  • Share Firefox/Thunderbird data between W7 and Linux Mint 12 in dual boot computer

    - by Albert
    I've just set up my laptop (which previously ran only W7) with a dual boot so I can run Linux Mint 12 as well. I have a "Data" partition (apart from the required partitions for W7 and Linux) where I store pretty much everything that isn't a software installation (music, videos, project files, etc.). I seem to be able to access that NTFS partition totally fine from Mint (like I've always done with W7), which is cool because I can access all that stuff regardless of which OS I'm using. I would like to know if it's possible (and how) to go one step further and share program data between the two OSes. One example would be my Firefox and Thunderbird data. For example, in Firefox, share my bookmarks (and if I could share history, autocomplete and all that stuff, that would be awesome). In Thunderbird, share my mail and configuration, seeing the same inbox, folders, message rules, etc. So if I receive/send an email in W7 and later switch to Mint, I can see that email as if it had been received/sent from Mint, and vice versa. Is this even possible? Or am I asking for too much convenience? If it's possible, any clues on how to set it all up?
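    It is possible, and the usual trick is to keep one profile on the shared Data partition and point both OS installs at it via profiles.ini (paths below are hypothetical; Thunderbird shown, Firefox is analogous):

        # Linux:   ~/.thunderbird/profiles.ini
        # Windows: %APPDATA%\Thunderbird\profiles.ini (same folder, Windows-style path)
        [General]
        StartWithLastProfile=1

        [Profile0]
        Name=shared
        IsRelative=0
        Path=/media/Data/Profiles/thunderbird
        Default=1

    Mixed-OS profile sharing has rough edges (absolute paths stored inside the profile, binary extensions), so back up the profile before pointing the second OS at it.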

    Read the article

  • fail2ban regex working but no action being taken

    - by fpghost
    I have the following snippet of fail2ban configuration on an Ubuntu 13.10 server:

        # jail.conf
        [apache-getphp]
        enabled  = true
        port     = http,https
        filter   = apache-getphp
        action   = iptables-multiport[name=apache-getphp, port="http,https", protocol=tcp]
                   mail-whois[name=apache-getphp, dest=root]
        logpath  = /srv/apache/log/access.log
        maxretry = 1

        # filter.d/apache-getphp.conf
        [Definition]
        failregex = ^<HOST> - - (?:\[[^]]*\] )+\"(GET|POST) /(?i)(PMA|phptest|phpmyadmin|myadmin|mysql|mysqladmin|sqladmin|mypma|admin|xampp|mysqldb|mydb|db|pmadb|phpmyadmin1|phpmyadmin2|cgi-bin)
        ignoreregex =

    I know the regex is good, because if I run the test command on my access.log:

        fail2ban-regex /srv/apache/log/access.log /etc/fail2ban/filter.d/apache-getphp.conf

    I get a SUCCESS result with multiple hits, and in my log I see entries like:

        187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 301 585 "-" "-"
        187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 301 593 "-" "-"

    Secondly, I know email is configured correctly, as each time I service fail2ban restart I get an email for each of the filters stopping/starting. However, despite all this, no action seems to be taken when one of these requests comes in: no email with whois, and no entries in iptables. What could possibly be preventing fail2ban from taking action? (Everything looks in order in fail2ban-client -d, and I can see the chains have loaded with iptables -L.)
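    A hedged place to start: fail2ban only acts on log lines it considers recent, so a timezone or clock mismatch between the log's timestamps (+0100 here) and the server clock silently discards every hit as older than findtime. The jail's own counters make this visible:

        fail2ban-client status apache-getphp   # 'Total failed' stuck at 0 => lines are being ignored
        date; tail -n 1 /srv/apache/log/access.log   # compare server clock vs newest log timestamp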

    Read the article

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: the private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by: giltgroupe.bounce.ed10.net
        signed-by: giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with Postfix. If I init the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments as follows:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM; it works perfectly. However, DK doesn't seem to work. I have no problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on making it possible to sign emails from multiple domains with a specific chosen domain's key?
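    A guess worth testing, hedged because dk-filter's key-list matching is thinly documented: the left-hand side of each dk-keys.conf entry is a pattern matched against the sender, so per-domain entries in the style of dkim-filter's KeyList may let one daemon handle both domains (paths are illustrative):

        *@x.com:/var/dk-filter/x.com/mail
        *@y.com:/var/dk-filter/y.com/mail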

    Read the article

  • CLI package to replace Plesk

    - by dotancohen
    Myself and another programmer are tasked with maintaining a few webservers. I prefer CLI tools; she prefers Plesk. However, I am adamant about not installing Plesk, for quite a few reasons. I have written a small Python script for adding new domains, and now I am about to add the ability to configure email addresses while abstracting the details of Postfix from her. Before I go down that route, I have googled to see if anything already exists, and am surprised to have come up with nothing! Are there any mature, stable "control panels" or "server admin" tools like Plesk, but accessed via the CLI over SSH? I am looking for the following features:

        Add / remove / configure domains served by Apache.
        Add / remove / configure email boxes and mail groups.
        Add / remove MySQL databases and users, and assign users to databases.
        Provide basic monitoring of "server health": memory usage, disk usage, CPU usage, bandwidth usage.
        Possibly set up SFTP accounts so that only specific FTP users can access specific /var/www/someSite/ directories.

    Note that I was unsure whether this question is on-topic for ServerFault. As per the ServerFault about page (there seems to be no more FAQ), it meets two of the "ask about" criteria and none of the "don't ask about" ones, with the possible exception of being opinion-based. Therefore, to keep things on-topic, I would like to know about the available applications; we should stay factual and less opinionated. Thank you!

    Read the article
