Search Results

Search found 15535 results on 622 pages for 'mat keep'.

  • How do I permanently delete e-mail messages in the sendmail queue and keep them from coming back?

    - by Steven Oxley
    I have a pretty annoying problem here. I have been testing an application and have created some test e-mails to bogus e-mail addresses (not to mention that my server isn't really set up to send e-mail anyway). Of course, sendmail is not able to send these messages and they have been getting stuck in the sendmail queue. I want to manually delete the messages that have been building up in the queue instead of waiting the 5 days that sendmail usually takes to stop retrying. I am using Ubuntu 10.04, and /var/spool/mqueue/ is the directory in which, according to every how-to I have read, the queued e-mails are kept. When I delete the files in this directory, sendmail stops trying to process the e-mails until what appears to be a cron script runs and re-populates this directory with the messages I don't want sent. Here are some lines from my syslog:

      Jun 2 17:35:19 sajo-laptop sm-mta[9367]: o530SlbK009365: to=, ctladdr= (33/33), delay=00:06:27, xdelay=00:06:22, mailer=esmtp, pri=120418, relay=e.mx.mail.yahoo.com. [67.195.168.230], dsn=4.0.0, stat=Deferred: Connection timed out with e.mx.mail.yahoo.com.
      Jun 2 17:35:48 sajo-laptop sm-mta[9149]: o4VHn3cw003597: to=, ctladdr= (33/33), delay=2+06:46:45, xdelay=00:34:12, mailer=esmtp, pri=3540649, relay=mx2.hotmail.com. [65.54.188.94], dsn=4.0.0, stat=Deferred: Connection timed out with mx2.hotmail.com.
      Jun 2 17:39:02 sajo-laptop CRON[9510]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm)
      Jun 2 17:39:43 sajo-laptop sm-mta[9372]: o52LHK4s007585: to=, ctladdr= (33/33), delay=03:22:18, xdelay=00:06:28, mailer=esmtp, pri=1470404, relay=c.mx.mail.yahoo.com. [206.190.54.127], dsn=4.0.0, stat=Deferred: Connection timed out with c.mx.mail.yahoo.com.
      Jun 2 17:39:50 sajo-laptop sm-mta[9149]: o51I8ieV004377: to=, ctladdr= (33/33), delay=1+06:31:06, xdelay=00:03:57, mailer=esmtp, pri=6601668, relay=alt4.gmail-smtp-in.l.google.com. [74.125.79.114], dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com.
      Jun 2 17:40:01 sajo-laptop CRON[9523]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)

    Does anyone know how I can get rid of these messages permanently? As a side note, I'd also like to know if there is a way to set up sendmail to "fake" sending e-mail. Is there?
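    The cron line above (sendmail cron-msp) suggests the client-submission (MSP) queue is re-injecting the deleted messages. A minimal sketch of a purge covering both queues, assuming the usual Debian/Ubuntu sendmail layout where the MSP queue lives in /var/spool/mqueue-client (an assumption, not confirmed in the thread):

        # stop sendmail so nothing is re-queued mid-purge
        sudo service sendmail stop
        # purge the MTA queue and the client submission (MSP) queue
        sudo rm -f /var/spool/mqueue/*
        sudo rm -f /var/spool/mqueue-client/*
        sudo service sendmail start
        # verify the queue is now empty
        mailq
        # side note: a throwaway SMTP sink for "fake" sending (Python 2.x)
        python -m smtpd -n -c DebuggingServer localhost:1025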

  • Why Does My iMac Keep Setting the Screen Brightness to 'Full'?

    - by TomB
    I have a 2-week-old 24" iMac running Mac OS X 10.6. It is the primary monitor, with the menu bar at the top of the display. I have an external monitor as well, a 19" Viewsonic LCD. The LCD is set to the left side of the iMac, rotated 90 degrees CCW, and has the Dock along the far left edge. When I restart, the screen brightness on the iMac reverts to full; the Viewsonic LCD retains the setting I have for it. I am using a Mini DVI to DVI cable for the external display. I even tried setting my Huey Pro to do automatic screen adjustment based on ambient lighting, but the iMac still goes to stun with a reboot. I am sure it is something dumb I have overlooked.

  • How to keep time on resumed KVM guest with libvirt?

    - by Hristo Hristov
    On my host I am using libvirt and a KVM guest. When the host is shutting down, libvirt suspends the guest. When the host is starting up, libvirt resumes the guest. The problem is, if the guest is suspended and resumed after 24 hours for example, then the guest time is 24 hours in the past. I thought that maybe the problem is with the clocksource, but it is set to "kvm-clock" already:

      $ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
      kvm-clock tsc hpet acpi_pm
      $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
      kvm-clock
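    A minimal sketch of one way to fix the clock at resume time (an assumption, not from the thread): with the qemu-guest-agent installed and running in the guest, and a libvirt recent enough to ship virsh domtime, the host can push its own time into the guest right after resuming ("myguest" is a placeholder domain name):

        # step the guest clock to match the host after a resume
        virsh domtime --sync myguest

    On older stacks, running ntpdate or restarting the NTP service inside the guest after resume achieves the same effect.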

  • How can I keep code formatted as in the original source when I paste it into vim?

    - by SpawnST
    When I copy some code from a webpage and paste it into vim, it becomes a staircase-like mess, roughly:

      xxxxxx
          xxxxxx
              xxxxxx
                  xxxxxxxxxx

    Since it is mangled so regularly, I think there may be something wrong with my .vimrc, which is as follows:

      set number
      set nocompatible
      set nowritebackup
      set noswapfile
      syntax on
      filetype indent on
      filetype plugin on
      filetype on
      set background=light
      set autoindent
      set smartindent
      set tabstop=4
      set shiftwidth=4
      set showmatch
      set guioptions=T
      set fileencodings=utf-8,prc
      set ruler
      set incsearch
      map gs :%s
      set t_Co=256
      :colorscheme evening
      filetype plugin indent on

    I usually write Python in vim. Any help would be appreciated.
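    A minimal sketch of the usual fix (an assumption, not confirmed by the thread): the staircase comes from autoindent/smartindent re-indenting every pasted line, and vim's 'paste' option turns that off while pasting. Added to the .vimrc above:

        " press F2 before pasting and F2 again afterwards
        set pastetoggle=<F2>

    A one-off alternative is :set paste before the paste and :set nopaste after.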

  • How do I keep multiple copies of Outlook in sync when using RPC over HTTP?

    - by Don
    I use Outlook 2007 at work with our Exchange 2003 server. I just set up my home system with Outlook 2007 so that I could use RPC over HTTP to access Exchange without having to use a VPN. It works fine. I can get mail, send mail, etc. What it doesn't seem to be doing is staying in sync. For example, I read a few messages at home, moved them into different folders from the Inbox, etc. That all seemed fine. When I log in to my work machine and look at the copy of Outlook there, the mail is still unread and nothing has been moved. Am I missing something simple here? I would have to assume that my home machine should be telling Exchange where these messages belong and that they've been read. Both machines are running Windows 7, if that matters. Ideas?

  • vim: how do I keep 10 lines visible when scrolling to EOF with CTRL-F?

    - by Gaston
    Hello! I am used to vi, not vim. What I find annoying in vim is that when you are scrolling with CTRL-F and reach EOF, vim scrolls down to the very last line and puts this line at the top of your screen, and you can't see the lines above. You must scroll up a little bit so you can see the context. All this happens with CTRL-F only, not with j or the down cursor key. In vi, you scroll down (with CTRL-F), but when you reach EOF it still shows you, say, 15 lines and then the typical ~. I am using PuTTY for remote access. How can I configure vim to behave like vi in this case? Hope you understand the question. Thank you! Gaston.
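    A minimal sketch of the usual answer (an assumption, not confirmed by the thread): vim's 'scrolloff' option keeps a margin of context lines visible around the cursor, which also stops CTRL-F from parking the last line at the top of the screen:

        " keep at least 10 lines visible above and below the cursor
        set scrolloff=10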

  • How can I keep websites from knowing where I live?

    - by D Connors
    This question is about practicality, not security. I live in Brazil and, apparently, every single website I visit knows about it. Usually that's ok, but there are quite a few sites that don't make use of that information adequately. For instance: Bing keeps thinking that Brazilian pages are way more relevant to me than American ones (which they're not). Google.com always redirects me to google.com.br. Microsoft automatically sends me to horribly translated support pages in Portuguese (which would just be easier to read in English). These are just a few examples. Usually it's stuff I can live with (or work around), but some of them are just plain irritating. I have geolocation disabled in Firefox, so I guess they're either getting this information from my IP or from Windows itself (which I bought here). Is there a way to avoid this? Either tell them nothing or make them think I live somewhere else? Thanks

  • Why does iChat Server keep connecting to proxy.eu.jabber.org?

    - by Tom Hamming
    I have OS X Server 10.6.5 running on a new Mac Mini (server model), serving several functions among which is iChat Server (iChat and Pidgin on Windows as clients). In the iChat log in Server Admin, I kept seeing entries about connecting to proxy.eu.jabber.org. It's for our office network and I wasn't excited about external access to it, so I disabled server-to-server XMPP federation and now the connections just time out. But why is it doing that in the first place? Sample log entry:

      (datetime) (servername) jabberd/resolver[portnum]: [xmpp-server._tcp.proxy.eu.jabber.org resolved to 208.68.163.220:5269 (300 seconds to live)

    then:

      sending dialback auth request for route '(full server hostname)/proxy.eu.jabber.org'

    A couple of minutes later, it comes back with:

      dialback for outgoing route '(full server hostname)/proxy.eu.jabber.org' timed out

  • Notepad++: is there a way to force it to keep the autoindented-whitespace type?

    - by daVe
    I wonder if we can force Notepad++ to respect the previous whitespace character when it autoindents a new line:

      list[CR][LF]
      ····item1[CR][LF]
      ····item2[CR][LF]
      --->|

    (a Notepad++ screenshot recreation showing hidden characters, because I don't have enough reputation to post images, sorry xP) If I am indenting with tabs, I want a tab when Notepad++ does an autoindent. But if I am indenting with spaces, I do want spaces.

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they are causing permission problems for all other clients. Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors. The details of this behavior seem to also be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory will be created with drwxr-xr-x permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group which should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server:

      [global]
      create mask = 0666
      directory mask = 0777

      [share]
      force directory mode = 0775
      force create mode = 0660

    I was under the impression that these settings should make sure that directories are at least created with rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder will have the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client. I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone to do. This is the complete smb.conf:

      [global]
      encrypt passwords = yes
      dns proxy = no
      strict locking = no
      read raw = yes
      write raw = yes
      oplocks = yes
      max xmit = 65535
      deadtime = 15
      display charset = LOCALE
      max log size = 10
      syslog only = yes
      syslog = 1
      load printers = no
      printing = bsd
      printcap name = /dev/null
      disable spoolss = yes
      smb passwd file = /var/etc/private/smbpasswd
      private dir = /var/etc/private
      getwd cache = yes
      guest account = nobody
      map to guest = Bad Password
      obey pam restrictions = Yes
      # NOTE: read smb.conf.
      directory name cache size = 0
      max protocol = SMB2
      netbios name = freenas
      workgroup = COMPANY
      server string = FreeNAS Server
      store dos attributes = yes
      hostname lookups = yes
      security = user
      passdb backend = ldapsam:ldap://ldap.company.local
      ldap admin dn = cn=admin,dc=company,dc=local
      ldap suffix = dc=company,dc=local
      ldap user suffix = ou=Users
      ldap group suffix = ou=Groups
      ldap machine suffix = ou=Computers
      ldap ssl = off
      ldap replication sleep = 1000
      ldap passwd sync = yes
      #ldap debug level = 1
      #ldap debug threshold = 1
      ldapsam:trusted = yes
      idmap uid = 10000-39999
      idmap gid = 10000-39999
      create mask = 0666
      directory mask = 0777
      client ntlmv2 auth = yes
      dos charset = CP437
      unix charset = UTF-8
      log level = 1

      [share]
      path = /mnt/zfs0
      printable = no
      veto files = /.snap/.windows/.zfs/
      writeable = yes
      browseable = yes
      inherit owner = no
      inherit permissions = no
      vfs objects = zfsacl
      guest ok = no
      inherit acls = Yes
      map archive = No
      map readonly = no
      nfs4:mode = special
      nfs4:acedup = merge
      nfs4:chown = yes
      hide dot files
      force directory mode = 0775
      force create mode = 0660
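    A minimal sketch of one server-side workaround (an assumption, not from the thread): since the share sits on ZFS with NFSv4 ACLs (vfs objects = zfsacl), an inherited group ACE on the dataset can make group write access stick regardless of the mode bits the OSX client sets afterwards. Assuming FreeBSD-style setfacl on FreeNAS, and "staff" as a placeholder group name:

        # grant the group modify rights, inherited by new files (f) and dirs (d)
        setfacl -m g:staff:modify_set:fd:allow /mnt/zfs0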

  • Can dd-wrt or tomato keep track of GB usage per billing period per device?

    - by Sam Hasler
    I want to be able to track the usage of each device connecting to our router so we can split up the ISP bill by usage. Can dd-wrt or tomato provide the stats I'd need to do this? Update: After a bit of googling I'm aware of a much better answer than the current one. However, I suspect there are probably more answers out there for other firmwares, so in the interests of getting a more diverse set of answers—and, I'll admit, because I'm getting tired of reading through obtuse firmware documentation—I've put up a bounty. If the only answer added is the one I've found I'll be happy to accept it for the bounty; otherwise I'll add it and accept it myself. But I'm hoping for an even better answer, or at least some options for other firmwares, as from looking around I've seen a few other people have asked for this and there doesn't appear to be a definitive answer. Let's make this it! Go lazywebs! (Sorry. I've always wanted to say that.)

  • Programs keep waiting for external disk to spin up - how to ignore disk?

    - by Andrew J. Brehm
    Like many Mac users I have an external Firewire disk hooked up to my Mac to be used by Time Machine. This works very well, backup-wise. The problem is that very often when I use a Mac application and try to open a file, the file selection dialogue window hangs until the external disk has spun up. I never ever want to open a file on the external disk. Sometimes this happens even when I just want to save a file I already saved (i.e. type something and press meta-s). Is there anything I can do about this?
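    A minimal sketch of one workaround (an assumption, not from the thread): rather than making the dialogs ignore the disk, stop OS X from spinning idle disks down in the first place, at the cost of some power and noise:

        # 0 disables disk sleep entirely, for all power profiles
        sudo pmset -a disksleep 0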

  • How do I keep a bridge enabled on a bonded interface?

    - by jlawer
    I'm working on setting up a pair of CentOS 6.3 servers that will run a couple of KVM VMs and have come across a problem setting up a bridge on a bond. I am using mode 4 (802.3ad) bonding on a pair of stacked Dell PowerConnect 5524 switches connecting to R320 servers. There are 2 links (1 to each switch) that form a Link Aggregation Group (802.3ad / LACP bonding). On top of the bond I have VLAN tagging. I've verified this is a problem on multiple other bonding modes, so it isn't just a mode 4 issue. I am testing what happens when 1 link is dropped (i.e. a switch dies, a cable breaks, etc.). If I don't have a bridge (for KVM), everything works fine; failover happens as expected. If I have the bridge enabled, it works fine until failover (unplugging a cable). When failover happens, /var/log/messages shows the slave link going down, followed within a second by: kernel: br1: port 1(bond0.8) entering disabled state. The thing is, /proc/net/bonding/bond0 shows the link is up as expected (simply with only 1 slave instead of 2). If I plug the cable back in, it recovers and brings the bridge back to an enabled state. I actually have tested this while a ping is occurring, and if the timing is right a packet will actually leave the system after the link is lost, but before the disabled message occurs. This disabled state I assumed was STP, but I have disabled STP on the bridge configuration and this issue still occurs. brctl showstp br1 still shows the link as disabled when it is running without a slave. I also switched between the NICs in the server (I have 2x Broadcom and 4x Intel). It doesn't matter which configuration I have. Does anyone know of a way to force the bridge to stay enabled, or why it's detecting the bond as disabled when it isn't?

  • Is it possible to rewrite some query strings to HTTPS and keep everything else on HTTP?

    - by Matt
    I'm rewriting query strings to pretty URIs; for example, index.php?q=/en/contact becomes /en/contact, and all works nicely:

      # httpd.conf
      # HANDLE THE QUERY
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

    Is it even possible to rewrite single queries to force https and force everything else onto http? I've tried many different approaches that typically end in infinite loops. I could write a plugin to do this in PHP, but figured it would be more efficient to handle this in the server conf. I'd be grateful for any advice. EDIT: To clarify, I'd like to be able to rewrite the non-SSL http://example.com/index.php?q=/en/contact to the SSL-enabled https://example.com/en/contact, and have every query that is not /en/contact get written to http://example.com/...
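    A minimal sketch of one common pattern (an assumption, untested against the poster's full config): redirect on scheme first, before the front-controller rule above, so each request lands on the right protocol and the loop never starts:

        # force HTTPS for the contact page only
        RewriteCond %{HTTPS} off
        RewriteRule ^/en/contact$ https://%{HTTP_HOST}/en/contact [R=301,L]
        # force everything else back to plain HTTP
        RewriteCond %{HTTPS} on
        RewriteRule !^/en/contact$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    If HTTP and HTTPS are served from separate VirtualHosts, each block needs the rule that applies to it.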

  • EC2 instances keep becoming inaccessible via SSH; can I use an elastic load balancer to check SSH connectivity?

    - by Rick
    This is mainly an issue for my development EC2 server, as my instance keeps becoming inaccessible via SSH. It happened yesterday, so I killed that one and started a new one, and it happened again later today. The server still works and my web application is accessible in a web browser, but whenever I try to connect via SSH I get a "Permission denied (publickey)" error message in my terminal. I am 100% sure I am doing nothing wrong, as I can create a new instance of the exact same AMI (it's a personal custom AMI), change absolutely nothing, including using the same .pem key, and then am able to SSH into that new instance using the exact same command as before (just changing the IP address). I understand that EC2 can have issues, but having this happen every day seems a bit odd. I am using an m2.xlarge instance, so I don't know if these tend to be unstable; in the past I have used a small instance and had it running for months with no problems, which is why I find this so odd. I am looking into load balancing, but it seems the only "health" checks they offer are for HTTP or TCP, so I'm not sure if I can make it monitor for SSH connectivity. This is important for development, as I may make 1-2 new pushes of an application a day and use SSH to do this. I have a designer that needs to have the app always accessible as he works with the front-end files to test output with the live application. Anyways, any advice / info is appreciated.
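    On the side question: a plain TCP health check on port 22 does effectively monitor SSH reachability, since it verifies that sshd completes a TCP handshake. A minimal sketch using the ELB command-line tools of that era (load balancer name and thresholds are illustrative):

        elb-configure-healthcheck my-load-balancer --target "TCP:22" \
            --interval 30 --timeout 5 --healthy-threshold 2 --unhealthy-threshold 2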

  • How to configure sudoers to always keep the LD_LIBRARY_PATH environment variable?

    - by Yanick Girouard
    No matter what I try, it seems that the LD_LIBRARY_PATH environment variable is not kept after I run a command with sudo. The only way I managed to have it stick, is to prefix my sudo command with LD_LIBRARY_PATH=/the/path whenever I call it from the command-line, but I would like to not have to do this every time. It seems the env_keep option ignores this variable, and so does the exempt_group option. My %group currently has ALL=(ALL) NOPASSWD:ALL as its access in sudoers. I would like this specific environment variable to be kept for any command I run. How can I do this? My server is running Red Hat Enterprise Linux 5.7.
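    A likely explanation (my reading, not from the thread): sudo is setuid, and the dynamic linker scrubs LD_LIBRARY_PATH from the environment of setuid binaries before sudo's env_keep logic ever runs, so no sudoers option can preserve it. A minimal sketch of the usual workaround:

        # re-inject the variable on every invocation via a shell alias;
        # the shell expands it into "sudo LD_LIBRARY_PATH=... cmd", which
        # sudo treats as an environment assignment for the command
        alias sudo='sudo LD_LIBRARY_PATH="$LD_LIBRARY_PATH"'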

  • How to keep the Varnish cache populated after the backend is down for an extended period?

    - by Nicholas Tolley Cottrell
    We have Varnish 3.0.2 running on Amazon's Linux and it works great. We have a ttl of 48 hours for most content pages and much longer for images, PDFs etc. This weekend we've taken the backend down for some maintenance, so I upped the ttl to 5 days earlier in the week. I had assumed that anything in cache would continue to be served for up to 5 days, but much to our disappointment we checked varnishstat this morning and the cache was almost completely empty and varnish was serving "page not found" messages. I know that this is not what Varnish is designed to do, but why would it reset its cache when the backend is down? And how can I prevent it for next time?
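    Two things worth noting (my reading, not confirmed in the thread): raising the TTL only affects objects fetched after the change, since objects already in cache keep the TTL they were stored with; and the default malloc/file storage is not persistent, so any Varnish restart during the maintenance empties the cache. The usual tool for surviving backend outages is grace mode. A minimal VCL 3.x sketch with illustrative values (req.backend.healthy assumes a health probe is defined on the backend):

        sub vcl_fetch {
            # keep objects around for up to 5 days past their TTL
            set beresp.grace = 5d;
        }
        sub vcl_recv {
            if (req.backend.healthy) {
                set req.grace = 30s;    # serve slightly stale while revalidating
            } else {
                set req.grace = 5d;     # backend down: serve stale for days
            }
        }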

  • How can I keep gnu screen from becoming unresponsive after losing my SSH connection?

    - by Mikey
    I use a VPN tunnel to connect to my work network and then SSH to connect to my work PC running cygwin. Once logged in I can attach to a screen session and everything works great. Now, after a while, I walk away from my computer and sooner or later, the VPN tunnel times out. The SSH connection on each end eventually times out and then I eventually come back to my computer to do some work. Theoretically, this should be a simple matter of just restarting the VPN, reconnecting via SSH, and then running "screen -r -d". However apparently when the sshd daemon times out on the cygwin PC, it leaves the screen session in some kind of hung state. I can reproduce a similar hung state by clicking the close box on a cygwin bash shell window while it's running a screen session. Is there any way to get the screen session to recover once this has happened, so that I don't lose anything?
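    A minimal sketch of two things that often help here (assumptions, not from the thread): reap the dead connection quickly so the stale attachment doesn't linger, and power-detach when reattaching. "work-pc" is a placeholder host alias:

        # ~/.ssh/config on the connecting side (values illustrative)
        Host work-pc
            ServerAliveInterval 30
            ServerAliveCountMax 3

        # when reattaching: power-detach any stale attachment first
        screen -D -RR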

  • Does anyone know why rsync would keep sending the files over and over again?

    - by beagleguy
    I'm trying to use rsync to back up some files, about half a TB. It's now in a state where it keeps sending the same files every time it runs. For example:

      rsync -av /data/source/* user@host:/data/dest
      sending incremental file list
      source/file1.txt
      source/file2.txt

    I then verify those files are copied over... then the next time it runs it does the same thing:

      rsync -av /data/source/* user@host:/data/dest
      sending incremental file list
      source/file1.txt
      source/file2.txt

    Any idea why it's getting stuck on these files? I've tried to wipe the whole dest directory out and start over, but no luck. Thanks,
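    A minimal first diagnostic step (an assumption, not from the thread): rsync's --itemize-changes flag prints why it decided to resend each file; common culprits are timestamps or ownership the destination cannot preserve:

        # -i shows an attribute string per file; e.g. ">f..t......" means the
        # file is being resent because only its modification time differs
        rsync -avi --dry-run /data/source/* user@host:/data/dest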

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front so I am not told not to do this:

    - The machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network).
    - Everyone who has the ability to set up a man-in-the-middle attack already has root on the machine.
    - The machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react); I am only trying to make my machine nicer to use.

    I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine and blowing away the old key and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate this. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from a machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
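    A minimal sketch of the usual semi-automated approach (an assumption, reasonable only under the threat model listed above): drop the stale entry, then pin whatever key the freshly reinstalled machine currently presents. "testbox01" is a placeholder hostname:

        # remove the old entry for the host, then fetch and trust the new key
        ssh-keygen -R testbox01
        ssh-keyscan -t rsa testbox01 >> ~/.ssh/known_hosts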

  • How to keep subtree removal (`rm -rf`) from starving other processes for Disk I/O?

    - by David Eyk
    We have a very large (multi-GB) Nginx cache directory for a busy site, which we occasionally need to clear all at once. I've solved this in the past by moving the cache folder to a new path, making a new cache folder at the old path, and then rm -rfing the old cache folder. Lately, however, when I need to clear the cache on a busy morning, the I/O from rm -rf is starving my server processes of disk access, as both Nginx and the server it fronts for are read-intensive. I can watch the load average climb while the CPUs sit idle and rm -rf takes 98-99% of Disk IO in iotop. I've tried ionice -c 3 when invoking rm, but it seems to have no appreciable effect on the observed behavior. Is there any way to tame rm -rf to share the disk more? Do I need to use a different technique that will take its cues from ionice? Update: The filesystem in question is an AWS EC2 instance store (the primary disk is EBS). The /etc/fstab entry looks like this: /dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
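    One plausible reason ionice -c 3 changed nothing (my reading, not confirmed in the thread): the idle class is honored only by the CFQ I/O scheduler, and EC2 instance-store volumes often default to deadline or noop. A minimal sketch, assuming /dev/xvdb as in the fstab above and a hypothetical old-cache path:

        # see which scheduler the disk is using (the bracketed one is active)
        cat /sys/block/xvdb/queue/scheduler
        # switch it to cfq, then retry the idle-priority delete
        echo cfq | sudo tee /sys/block/xvdb/queue/scheduler
        sudo ionice -c 3 rm -rf /mnt/old-cache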
