Search Results

Search found 6517 results on 261 pages for 'reverse dns'.


  • 553-Message filtered - HELO Name issue?

    - by g18c
    I am having major issues sending from my SBS2011 machine to Message Labs: server-13.tower-134.messagelabs.com #553-Message filtered. Refer to the Troubleshooting page at 553-http://www.symanteccloud.com/troubleshooting for more 553 information. (#5.7.1) ## I have changed the IP and hostnames from the below. I am not on any IP or domain blacklists. I have set up SPF (which includes Mailchimp servers): v=spf1 mx a ip4:95.74.157.22/32 a:remote.mydomain.com include:servers.mcsv.net ~all I am sure I have set up my HELO names correctly under the Exchange Management Console. Sending a test email from the SBS server and looking at the header shows the following: X-Orig-To: [email protected] X-Originating-Ip: [95.74.157.22] Received: from [95.74.157.22] ([95.74.157.22:52194] helo=remote.mydomain.com) by smtp50.gate.ord1a.rsapps.net (envelope-from <[email protected]>) (ecelerity 2.2.3.49 r(42060/42061)) with ESMTP id 11/90-10010-E529C835; Mon, 02 Jun 2014 11:04:09 -0400 Received: from MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef]) by MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef%10]) with mapi id 14.01.0438.000; Mon, 2 Jun 2014 19:03:56 +0400 Is the main HELO name there OK, and do I need to worry about the second Received block where MYSBSSVR.mydomain.local is mentioned? I have asked the ISP to set the reverse DNS for my IP to remote.mydomain.com but they have instead put remote.MYDOMAIN.com - would this difference in case cause HELO lookups to classify this as not matching? Anything else I can do to find out why I am being filtered?
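
    A quick sanity check (using the sanitized IP and hostname from the question) that the HELO name, forward DNS, and reverse DNS all agree:

        # reverse lookup of the sending IP -- should print remote.mydomain.com.
        dig +short -x 95.74.157.22
        # forward lookup of the HELO name -- should print 95.74.157.22
        dig +short A remote.mydomain.com

    For what it's worth, DNS names are compared case-insensitively, so remote.MYDOMAIN.com and remote.mydomain.com are equivalent to any correctly implemented check.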


  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with an Apache in the background (on the same server). All works great, but I can't open one page. I get the error "The request or reply is too large." My cache.log contains: 2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large 2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav' 2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header 2010/12/09 15:29:03| ctx: exit level 0 In my squid.conf I disabled the request and reply limits, without success: reply_body_max_size 0 allow all request_body_max_size 0 Does someone know why that doesn't work? Thank you very much. Squid version: Squid Cache: Version 2.7.STABLE3 configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
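
    The two directives in the question cap message bodies; the log complains about a header, which Squid limits separately. A hedged squid.conf sketch (directive names from the Squid 2.x documentation, where the defaults are 20 KB):

        request_header_max_size 64 KB
        reply_header_max_size 64 KB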


  • Running a service with a user from a different domain not working

    - by EWood
    I've been stuck on this for a while, not sure what permission I'm missing. I've got domain A and domain B; A trusts B, but B does not trust A. I'm trying to run a service in domain A with a user account from domain B and I keep getting Access is Denied. I'm using the FQDN after the username and the password is correct. The user account from domain B is a local administrator on the domain A server, and it has the "log on locally" and "log on as a service" permissions. Must. Get. This. Working. Update: I found something interesting in the logs I must have missed. This ought to get me pointed in the right direction. Event ID: 40961 - LsaSrv : The Security System could not establish a secured connection with the server ldap/{server fqdn/fqdn@fqdn} No authentication protocol was available. I've found a few fixes for 40961 but nothing has worked so far. I've verified reverse lookup zones. nslookup resolves the correct DC properly. Still workin' at it. Update: In response to Evan: I ran " runas /env /user:ftp_user@fqdn "notepad" ", entered the user's password, and notepad came up. It seems to work successfully. This issue is now resolved. The problem is visible in the screenshot. Windows tries to use the UPN for the user account if you dig your user out of AD with the Browse button. This fails every time, even with the right user and password. Simply using the SAM format (Domain\User) works. So simple, yet so annoying. Can't believe I missed this. Thanks to everyone who helped.
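
    For reference, the working SAM-format account can also be set from the command line; a sketch with a hypothetical service name (the spaces after obj= and password= are required by sc.exe):

        sc config "MyService" obj= "DOMAINB\ftp_user" password= "secret"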


  • smtpd_helo_restrictions = ..., reject_unknown_helo_hostname occasionally rejects mail I care about, how to handle?

    - by lkraav
    I have configured my postfix as follows: smtpd_helo_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unknown_helo_hostname This is working well, because most spambots don't seem to have correct reverse lookups. But every once in a while I run into mail I care about getting rejected, because the source server's admin doesn't care about configuring his server correctly. For example, here the server introduces itself as "srv1.xbmc.org", which has no DNS record and fails my basic check. Jan 6 04:42:36 mail postfix/smtpd[660]: connect from xbmc.org[205.251.128.242] Jan 6 04:42:37 mail postfix/smtpd[660]: NOQUEUE: reject: RCPT from xbmc.org[205.251.128.242]: 450 4.7.1 <srv1.xbmc.org>: Helo command rejected: Host not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<srv1.xbmc.org> I have tried to contact the server admin several times, but there is no response. What is the optimal way to handle this from my side? Is adding these "special" hosts to mynetworks = my only option? Is perhaps my whole smtpd_helo_restrictions setup wrong in some significant way?
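
    One common middle ground is a per-host HELO whitelist consulted before the reject, rather than widening mynetworks =. A sketch:

        # main.cf
        smtpd_helo_restrictions =
            permit_sasl_authenticated,
            permit_mynetworks,
            check_helo_access hash:/etc/postfix/helo_access,
            reject_unknown_helo_hostname

        # /etc/postfix/helo_access
        srv1.xbmc.org    OK

        # then build the map and reload:
        postmap /etc/postfix/helo_access && postfix reload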


  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already have the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet and use a load balancer/reverse proxy to pick up requests and send them to the private subnet's cluster of servers. My question then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or Pound? I like the second option just for the sake of having an instance I can log into and check, as well as taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you guys have an opinion on this as well: would using the second option allow me to have only one public IP address and be able to route SSH connections through port numbers to the respective instances? Thanks in advance!
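
    For the custom option, the core of such a proxy is small; a minimal nginx sketch (all names and private-subnet addresses here are made up):

        upstream web_backend {
            server 10.0.1.10:80;    # Apache2 instances in the private subnet
            server 10.0.1.11:80;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://web_backend;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }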


  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope some of you folks can lend me a hand: much appreciated. Background info: the server is Debian 6.0, ext3, with Apache2/SSL and Nginx at the front as reverse proxy. I need to provide sftp access to the Apache root directory (/var/www), making sure that the sftp user is chrooted to that path with RWX permissions, all without modifying any default permission in /var/www. drwxr-xr-x 9 root root 4096 Nov 4 22:46 www Inside /var/www -rw-r----- 1 www-data www-data 177 Mar 11 2012 file1 drwxr-x--- 6 www-data www-data 4096 Sep 10 2012 dir1 drwxr-xr-x 7 www-data www-data 4096 Sep 28 2012 dir2 -rw------- 1 root root 19 Apr 6 2012 file2 -rw------- 1 root root 3548528 Sep 28 2012 file3 drwxr-x--- 6 www-data www-data 4096 Aug 22 00:11 dir3 drwxr-x--- 5 www-data www-data 4096 Jul 15 2012 dir4 drwxr-x--- 2 www-data www-data 536576 Nov 24 2012 dir5 drwxr-x--- 2 www-data www-data 4096 Nov 5 00:00 dir6 drwxr-x--- 2 www-data www-data 4096 Nov 4 13:24 dir7 What I have tried: created a new group secureftp; created a new sftp user, joined to the secureftp and www-data groups, with a nologin shell and homedir /; edited sshd_config with Subsystem sftp internal-sftp AllowTcpForwarding no Match Group <secureftp> ChrootDirectory /var/www ForceCommand internal-sftp I can log in with the sftp user and list files, but no write action is allowed. The sftp user is in the www-data group, but the group bits in /var/www only grant read/read+x, so... it doesn't work. I've also tried ACLs, but as I apply ACL RWX permissions for the sftp user to /var/www (dirs and files recursively), it changes the displayed Unix group permissions as well (the group bits become the ACL mask), which is what I don't want. What can I do here? I was thinking I could enable the user www-data to log in via sftp, so that it would be able to modify the files/dirs that www-data owns in /var/www, but for some reason I think this would be a stupid move security-wise.
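
    One way to square "RWX for the sftp user" with "no permission changes under /var/www" is to chroot into a bind-mounted view with remapped ownership, leaving the real tree untouched. A hedged sketch assuming the bindfs package (paths and the mount point are made up):

        mkdir -p /srv/sftp/www
        # present /var/www as if owned by the sftp user and group:
        bindfs -u sftpuser -g secureftp /var/www /srv/sftp/www
        # then in sshd_config, chroot to the root-owned /srv/sftp:
        #   Match Group secureftp
        #       ChrootDirectory /srv/sftp
        #       ForceCommand internal-sftp

    bindfs can also force the ownership of newly created files via its --create-for-user/--create-for-group options; the details are in the man page.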


  • SSD firmware, Linux: updating a large batch of drives

    - by wryfi
    I was recently hit with a fatal firmware bug that affected dozens of Crucial SSDs deployed in my datacenter. Many of the affected machines use LSI or other proprietary SAS controllers, which Crucial's bootable ISO does not recognize. None of the affected machines has a Windows license. The story is roughly similar for other SSD mfrs, including Samsung and Intel. To resolve this issue, I was forced to stop each machine, remove the affected SSD, remove the SSD from its hotswap caddy, install it temporarily into my ThinkPad, flash the firmware, reverse, rinse, repeat. It took the better part of a day to get through all the affected devices. I am looking for hardware, software, and/or purchasing strategies to ease this pain, as SSD firmware bugs seem inevitable, and our SSD footprint is growing. My first thought is to get a laptop with eSATA and one of these cables (http://www.newegg.com/Product/Product.aspx?Item=N82E16812311004). That should at least make it so I don't have to remove the drives from their caddies. Surely others have run into this. Any novel solutions?
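
    Not a fix for the flashing itself, but for auditing a fleet, something like this sketch (smartmontools assumed) identifies which drives still need attention; behind some LSI controllers smartctl needs a -d type such as megaraid,N:

        for d in /dev/sd?; do
            printf '%s: ' "$d"
            smartctl -i "$d" | grep -i 'firmware'
        done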


  • What are the frequencies of current in computers' external peripheral cables and internal buses?

    - by Tim
    From Wikipedia, three different cases of current frequency are discussed, along with the types of cable that are suitable for each: Ordinary electrical cables suffice to carry low frequency AC, such as mains power, which reverses direction 100 to 120 times per second (cycling 50 to 60 times per second). However, they cannot be used to carry currents in the radio frequency range or higher, which reverse direction millions to billions of times per second, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the power from reaching the destination. Transmission lines use specialized construction such as precise conductor dimensions and spacing, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. Types of transmission line include ladder line, coaxial cable, dielectric slabs, stripline, optical fiber, and waveguides. The higher the frequency, the shorter are the waves in a transmission medium. Transmission lines must be used when the frequency is high enough that the wavelength of the waves begins to approach the length of the cable used. To conduct energy at frequencies above the radio range, such as millimeter waves, infrared, and light, the waves become much smaller than the dimensions of the structures used to guide them, so transmission line techniques become inadequate and the methods of optics are used. I wonder what the frequencies are for the currents in computers' external peripheral cables, such as Ethernet and USB cables, and in computers' internal buses. Are the cables also made specially for those frequencies? Thanks!
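
    As a rough worked example of the wavelength-versus-cable-length criterion: 100BASE-TX Ethernet signals at a 125 MHz symbol rate, and twisted pair has a velocity factor of roughly 0.66, so

        \lambda = \frac{v}{f} \approx \frac{0.66 \times 3 \times 10^{8}\,\mathrm{m/s}}{125 \times 10^{6}\,\mathrm{Hz}} \approx 1.6\,\mathrm{m}

    which is the same order as an ordinary patch cable. That is why Ethernet and USB cabling is built as controlled-impedance transmission line (100 ohm twisted pairs for Ethernet, a 90 ohm differential pair for USB) rather than as plain wire.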


  • How to disable SSH local port forwarding?

    - by SCO
    I have a server running Ubuntu and the OpenSSH daemon; let's call it S1. I use this server from client machines (let's call one of them C1) to create an SSH reverse tunnel using remote port forwarding, e.g.: ssh -R 1234:localhost:23 login@S1 On S1, I use the default sshd_config file. From what I can see, anyone having the right credentials {login,pwd} on S1 can log into S1 and do either remote or local port forwarding. Such credentials could be a certificate in the future, so in my understanding anyone grabbing the certificate can log into S1 from anywhere else (not necessarily C1) and hence create local port forwardings. To me, allowing local port forwarding is too dangerous, since it allows the creation of a kind of public proxy. I'm looking for a way to disable only -L forwardings. I tried the following, but this disables both local and remote forwarding: AllowTcpForwarding No I also tried the following; this will only allow -L to SX:1. It's better than nothing, but still not what I need, which is a "none" option. PermitOpen SX:1 So I'm wondering if there is a way to forbid all local port forwards, i.e. to write something like: PermitOpen none:none Is the following a nice idea? PermitOpen localhost:1
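
    Newer OpenSSH releases (6.2 and later, if I remember correctly) make this direct: AllowTcpForwarding grew local and remote arguments, and PermitOpen accepts the literal word none. A sketch for sshd_config, with a hypothetical group name (check man sshd_config on S1 for your version):

        Match Group tunnelusers
            AllowTcpForwarding remote    # -R works, -L is refused
            PermitOpen none              # belt and braces against -L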


  • How do I uncompress vmlinuz to vmlinux?

    - by Lord Loh.
    I have already tried uncompress, gzip, and all the other solutions that come up as Google results, and these have not worked for me. To get just the image, search for the gzip signature, 1f 8b 08 00: > od -A d -t x1 vmlinuz | grep '1f 8b 08 00' 0024576 24 26 27 00 ae 21 16 00 1f 8b 08 00 7f 2f 6b 45 The signature starts 8 bytes into that line, so the image begins at 24576+8 => 24584. Then just copy the image from that point and decompress it: > dd if=vmlinuz bs=1 skip=24584 | zcat > vmlinux 1450414+0 records in 1450414+0 records out 1450414 bytes (1.5 MB) copied, 6.78127 s, 214 kB/s I got these instructions verbatim from a forum online: http://www.codeguru.com/forum/showthread.php?t=415186 This process does not work for me; it ends up giving errors stating "file not found" for 0024576 and all the subsequent numbers. How do I proceed with extracting vmlinux from vmlinuz? Thank you. EDITED: This is a reverse engineering question. I have no access to the distro to install any RPM or recompile. I start with nothing but vmlinuz.
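
    The od pipeline is fragile: if the four signature bytes straddle one of od's 16-byte output lines, grep finds nothing, and offsets copied verbatim from a forum will not match your image. A sketch that searches the raw bytes instead (note that newer kernels may be compressed with LZMA, XZ, or other algorithms, whose magic bytes differ):

        # -a: treat binary as text, -b: print byte offset, -o: print only the match
        off=$(grep -a -b -o $'\x1f\x8b\x08' vmlinuz | head -n1 | cut -d: -f1)
        dd if=vmlinuz bs=1 skip="$off" | zcat > vmlinux 2>/dev/null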


  • Apache Proxy and Rewrite, append directory to URL

    - by ionFish
    I have a backend server running on http://10.0.2.20/ on the local network, which serves a tree similar to this: / (root) | |_user1/ | |_www/ | |_private/ | |_user2/ |_www/ |_private/ (etc.) Accessing http://10.0.2.20/user1/ shows, of course, the 'www' and 'private' folders, and it is proxied through a public server using Apache's reverse proxy. I'd like the following to happen: http://public-proxy-server/user1/ actually shows the content from http://10.0.2.20/user1/www/ without indicating it in the URL (/private/ would not be accessible via the public proxy server). The key here is for it to be dynamic, so all requests to http://public-proxy-server/*/ should show content from http://10.0.2.20/*/www/. Again, the proxy currently works fine; below is the config: (On the public server) <VirtualHost *:80> ServerName www.domain.com ProxyRequests Off ProxyPreserveHost On ProxyVia full ProxyPass / http://10.0.2.20/ ProxyPassReverse / http://10.0.2.20/ </VirtualHost> (On the backend server) <VirtualHost *:80> ... #this directory contains folders 'user1' and 'user2' DocumentRoot /var/www/ ... </VirtualHost>
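
    ProxyPassMatch (available in Apache 2.2.5 and later) can splice the /www/ component in dynamically; an untested sketch for the public server:

        ProxyPassMatch ^/([^/]+)/(.*)$ http://10.0.2.20/$1/www/$2
        ProxyPassReverse / http://10.0.2.20/

    Note that ProxyPassReverse will not strip /www/ out of redirects the backend generates, so Location headers containing the internal path may need extra ProxyPassReverse lines or mod_headers fix-ups.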


  • Single m0n0wall - Two LAN Subnets - How To Setup

    - by SnAzBaZ
    I have two LAN subnets that I need to link together: 192.168.4.0/24 and 192.168.5.0/24. There is a m0n0wall running on 192.168.4.1. Its LAN connection goes out to our network switch, and its WAN port goes out to our ADSL modem; WAN is connected via PPPoE. The 192.168.4.0 subnet contains all of our office workstations. The 192.168.5.0 subnet contains development servers and test machines that need to obtain internet access and be "managed" by computers on the 192.168.4.0 subnet, but need to be on their own subnet as well. I have a DrayTek 2820N configured on 192.168.5.1 with its WAN2 port configured as 192.168.4.25 and a default gateway of 192.168.4.1. Machines on the 5.0 subnet can connect to the internet via the m0n0wall just fine. I configured a static route on the m0n0wall LAN interface: network 192.168.5.0/24, gateway 192.168.4.25. Machines on the 5.0 subnet can ping machines on the 4.0 network, but the reverse does not work. I configured a new firewall rule on the m0n0wall that allows any traffic on the LAN interface with a source IP of 192.168.4.25. The DrayTek firewall is currently configured to pass all traffic regardless. When I try to ping a machine in the 5.0 subnet from 4.0, I see this in my m0n0wall log: BLOCK 14:45:27.888157 LAN 192.168.4.25 192.168.4.37, type echoreply/0 ICMP So the reply is being sent from the 5.0 subnet but is not being allowed to reach my workstation because the firewall is blocking it. Why is the firewall blocking it? I hope the explanation of my network is clear; please ask if you require further clarification. Thank you.


  • Two servers, two domains, one IP. mod_proxy beginner

    - by Gutsav
    I run two virtual web servers (both running Apache2 on Debian). I have just one external IP but two domains, and I want one domain going to each of the servers. I've understood that I need a reverse proxy, and I enabled both the mod_proxy and mod_proxy_http modules on the "primary server". Do I need to enable anything on the "secondary server"? I also understood that I need to write some things in a virtual host file, but what? On the primary server, I have a virtual host file for one of the domains, and some for subdomains. I want domain1.tld to go to the primary server (port 80 is forwarded to it, so that works) and domain2.tld to go to the other server (internal IP 192.168.0.x). No ports need to be forwarded to it, right? So, what do I add, and in which virtual host file? Or a new one? Other questions suggest adding ProxyPass and ProxyPassReverse, but I'm lost anyway, and I just don't understand the Apache documentation. Thanks in advance
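
    Since Apache picks the virtual host by the Host header, both names can share the single forwarded port 80, and nothing needs enabling on the secondary server. A hedged sketch of the two vhosts on the primary (the DocumentRoot is made up, and 192.168.0.x stands in for the real internal IP):

        # (on Apache 2.2, NameVirtualHost *:80 may also be needed)
        <VirtualHost *:80>
            ServerName domain1.tld
            DocumentRoot /var/www/domain1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName domain2.tld
            ProxyPreserveHost On
            ProxyPass / http://192.168.0.x/
            ProxyPassReverse / http://192.168.0.x/
        </VirtualHost>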


  • Problem Uninstalling Microsoft Internet Explorer

    - by Roger F. Gay
    On Windows Vista Home Premium (x64) I am trying to uninstall Microsoft Internet Explorer. The procedures explained all over the web involve going through the control panel to Programs and Features. If MSIE is listed there, then uninstall in the usual way. If it is not listed there, click Turn Windows features on and off and deactivate it there. But Internet Explorer is not listed in either place. Background: I initiated some process in MSIE a couple of months ago that caused all web pages to no longer save login information or remain logged in when requested. As you can tell from the way I described that, I don't remember what it was and have no way to simply reverse it. I had a few problems with .NET Framework as well. So, I've uninstalled all browsers except MSIE and uninstalled .NET Framework. I've reinstalled .NET Framework and all other browsers. I have not been able to uninstall MSIE. Have Tried: I tried installing over the existing installation, but auto-update must be keeping it nicely up to date. The attempt simply produced an information window telling me that my current version is more up-to-date than the new version I tried to install.


  • Nginx - Serve blank page on "Bad Gateway" error

    - by TheLittleCheeseburger
    Hello all. I want to use Nginx as a simple reverse proxy, but if the server behind Nginx is down I just want to display a blank page. For some reason this configuration isn't displaying a blank page on error 502, and I can't figure out why. Thanks for your help! user www-data; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; use epoll; # multi_accept on; } http { keepalive_timeout 65; proxy_read_timeout 200; upstream tornado { server 127.0.0.1:8001; } server { listen 80; server_name www.something.com; location / { error_page 502 = @blank; proxy_pass http://tornado; } location @blank { index index.html; root /web/blank; } } }
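
    A hedged tweak to the location blocks: error_page only intercepts errors the upstream itself returns if proxy_intercept_errors is on (an nginx-generated 502 for a dead backend is covered either way), and @blank can answer directly instead of serving files (return with a body needs a reasonably recent nginx; otherwise keep the empty index.html approach):

        location / {
            proxy_pass http://tornado;
            proxy_intercept_errors on;
            error_page 502 503 504 = @blank;
        }
        location @blank {
            return 200 "";
        }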


  • What are the IR codes the new Apple Remote (alu) uses?

    - by index
    I would like to clone the new Apple Remote (infrared, second generation, aluminium), just for fun, with a microcontroller. Most codes of the previous model can be found in the LIRC remote control database (all except the key combinations menu+<< and menu+play, which unpair, change the ID, and pair the remote). I also don't know which bit encodes the battery status. It uses a modified 32-bit NEC protocol (reverse the LIRC codes bytewise). But the new Apple remote uses two additional codes for the play and the new select button. I don't have a Mac, so I can't brute-force test codes either ;-) So if someone possesses such a remote and the ability to record those two new buttons and the three combinations, I'd really appreciate it. If you can't run LIRC (or it gets confused by the new codes) and you don't have an oscilloscope or logic analyser, maybe you could hook up a photodiode to your sound input and record the codes with Audacity? Just hit record, hit each button and combo a few times, hit stop, upload the uncompressed WAV file to a sharing site, done. That'd be great!


  • Changing a set-cookie header using mod_rewrite/mod_proxy

    - by olrehm
    I have a bunch of CGI scripts which are served using HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the attribute 'Secure', so that it can only be sent via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP. When a response comes in from my CGI script with a secure cookie, it is not being passed on via HTTP (after all, that is what that attribute is for). I need, however, an exception to this rule. Is it possible to use mod_rewrite/mod_proxy or something similar to change the Set-Cookie header in the response coming from my CGI script and remove the Secure, such that the cookie can be passed back to the user over the unsafe HTTP connection? I understand that this defeats the purpose of the Secure in the first place, but I need this as a temporary workaround. I have searched the web and found how to add a Set-Cookie header using mod_rewrite, and I have also found how to retrieve the value of a cookie coming from the client in a Cookie header. What I have not yet found is how to edit the Set-Cookie header received in the response of a script I am proxying for. Is that possible? How would I do that? Ole
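
    mod_headers can rewrite response headers on their way back through mod_proxy, which covers exactly this case. A hedged sketch for the proxy vhost (Header edit exists in Apache 2.2.4 and later):

        # strip the Secure flag from cookies before they reach the client
        Header edit Set-Cookie "(?i)^(.*);\s*Secure(.*)$" "$1$2"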


  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here, as this is my first foray into EC2. Here's how I have it configured currently: OS: Windows 2003; SQL Server Express 2005; web content stored on an EBS volume (E drive); database data stored on an EBS volume (E drive); database backups to the "C drive", then copied off to S3; Elastic IP address attached to the production instance. Now, when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should: start up a new temporary instance; detach the EBS volume from the production instance; detach the IP address from the production instance; attach the IP address to the temporary instance; attach the EBS volume to the temporary instance; create an AMI from the production instance; after the production instance restarts, reverse the attach/detach steps to put it back in production. Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve the reliability and operations?
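
    For what it's worth, a snapshot of an in-use EBS volume is only crash-consistent, so quiescing SQL Server (or at least freezing writes) first is the safe order. A loose sketch of the detach/reattach steps with the current AWS CLI (the 2010-era ec2-api-tools spelled these differently; the IDs are placeholders):

        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-AMI backup"
        aws ec2 detach-volume --volume-id vol-0123456789abcdef0
        aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
            --instance-id i-0123456789abcdef0 --device xvdf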


  • Synergy: Cannot send media keys from Linux to Mac

    - by CraftyThumber
    I have a Linux Synergy server (Si-Linux) serving just one Mac client, a MacBook Pro UK (SiBook-Pro.local). On my Linux server I am using a USB Apple keyboard with the exact layout of the laptop's keyboard (the compact UK aluminium keyboard). I would like to send the media keys to the Mac client at all times, and I have attempted the following in my synergy.conf: keystroke(AudioPlay) = keystroke(AudioPlay,SiBook-Pro.local) This did not seem to work, so I ran both the server and client as foreground processes with debugging enabled and observed the following: Server log: DEBUG1: activate actions DEBUG1: hotkey: keyDown(AudioPlay,SiBook-Pro.local) DEBUG1: onKeyDown id=57523 mask=0x0000 button=0x0000 DEBUG1: send key down to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000 DEBUG1: deactivate actions DEBUG1: hotkey: keyUp(AudioPlay,SiBook-Pro.local) DEBUG1: onKeyUp id=57523 mask=0x0000 button=0x0000 DEBUG1: send key up to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000 Client log: DEBUG1: recv key down id=0x0000e0b3, mask=0x0000, button=0x0000 DEBUG1: mapKey e0b3 (57523) with mask 0000, start state: 0000 DEBUG1: key e0b3 is not on keyboard DEBUG1: recv key up id=0x0000e0b3, mask=0x0000, button=0x0000 DEBUG1: recv enter, 1279,386 5 2000 As you can see, the client claims the received key is not on the keyboard. I don't understand, since it is the same key as on the MacBook's keyboard. I tried to reverse the client-server config to see if I could capture the key being sent if I pressed the Play button on the MacBook, but the key doesn't seem to even make it to Synergy. Almost all key presses get logged, but the media keys seem to bypass the logs and just execute their function locally. E.g. I press play on the MacBook (with the MacBook as the server): the key plays music on the MacBook, and the key is not logged to the debug log.


  • How to prevent asymmetric routing with multiple eBGP routers?

    - by Andy Shinn
    I have two routers announcing a /22 subnet to different providers (one provider connects to each of the two routers). I have split the /22 into two /23s, announcing one /23 on each of the routers plus the /22 (the providers will take the more specific route). This allows me to fail over and keep traffic inside each /23 going in and out via the same provider. What are other ways in which I could announce just the /22 with both routers and have packets from servers on the network behind the routers go back out the same router they came in through? EDIT: The main problem I come across, which end users and clients complain about the most, is that the least-hop route is sometimes not the "optimal" route. In my case, I know that provider B may have better latency to nation X. But when packets come in from provider B, they may go out provider A or provider B. The reverse is also true: if I send a packet to nation X out provider A, even though it may have more hops back, the packet will likely come in from provider B (which may have higher latency, packet loss, etc. to this nation).
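
    If the routers are Linux boxes (vendor gear has policy-based-routing equivalents), the usual way to force "the reply leaves the way it came" is to mark connections by ingress interface and route by that mark; a hedged sketch with made-up interface names, table numbers, and gateway:

        # remember which link a new connection arrived on
        iptables -t mangle -A PREROUTING -i eth0 -m conntrack --ctstate NEW -j CONNMARK --set-mark 1
        # stamp the mark back onto reply packets coming from the LAN side
        iptables -t mangle -A PREROUTING -i eth2 -j CONNMARK --restore-mark
        # marked traffic always leaves via provider A
        ip rule add fwmark 1 table 101
        ip route add default via 192.0.2.1 dev eth0 table 101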


  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128 GB SSD. I am dealing with large datasets (~20 GB) that are loaded and processed weekly, each stored in a separate DB for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2 TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2 TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead "moved onto the slow physical device". The time taken to run my queries on a spinning disk would be less than that taken to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it merges at the directory level (from what I understand), so I'm still stuck with multiple directories. Any help appreciated; thanks in advance.
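
    MySQL has no transparent multi-datadir merge, but since every database is a directory under datadir, a per-database symlink gets the "slow archive device" behaviour. A hedged sketch (stop mysqld first; with InnoDB this requires innodb_file_per_table, and on Debian/Ubuntu the AppArmor profile must allow the new path; the database name is made up):

        /etc/init.d/mysql stop
        mv /var/lib/mysql/dataset_2012_01 /mnt/bigdisk/mysql/
        ln -s /mnt/bigdisk/mysql/dataset_2012_01 /var/lib/mysql/dataset_2012_01
        /etc/init.d/mysql start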


  • iTunes Home Sharing only works one way between 2 Windows XP PC's on the same LAN

    - by scunliffe
    Both PCs have the latest iTunes installed. PC (A) can "see" that there is a shared library "B library", but attempts to connect to it return this error message: The shared library "{Username}'s Library" is not responding (-3259) Check that any firewall software running on either the shared computer or this computer has been set to allow communication on port 3689. However, the reverse works fine: PC (B) can "see" shared library "A library" and can access all its content. Notes: Both PCs have Home Sharing enabled (turned off/on several times to verify). Both PCs have Windows Firewall turned on, but in the Exceptions tab iTunes is allowed, and port 3689 is also added as a firewall exception (just in case). Both iTunes accounts have been "authorized" on both PCs. Both PCs connect via LAN through a D-Link DIR-615 router. In the advanced application rules, iTunes has also been added to allow traffic on port 3689 unhindered. Is there any other magical setting/configuration option that I should be aware of and set in order to get this to work? I could care less about sharing apps etc.; I just want the music sharing to work. Update: Solved! It turns out there were multiple accounts set up on PC (B). One of the accounts had the checkbox checked under the Windows Firewall "On" option which states "No exceptions"; thus, even though iTunes was added to the exception list on the main user account, this other account was blocking access.


  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance): It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free). It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC) It must work in all directions (changes made on /any/ machine should be pushed to the others) All data sent must be encrypted (I have ssh keypairs set up already) It must work even when not all machines are available (changes should be pushed to a machine when it comes back online) It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted) This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted. It must be immediate (using an event-based system, not a periodic one like cron) It must be flexible (if more machines are added, I should be able to incorporate them easily) I also have some preferences that I would like to be fulfilled, but do not have to be: It should notify me somehow if there are conflicts or other errors. It should recognize symbolic and hard links and create corresponding ones. It should allow me to create a list of exceptions (subdirectories which will not be synced at all). It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding. If you have a solution that meets some, but not all of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately. What I tried, and why it didn't work: rsync and lsyncd do not support bidirectional synchronization csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X
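
    The reverse-port-forwarding idea from that last point, as a one-liner run from a laptop behind NAT (port numbers are arbitrary):

        ssh -N -R 2222:localhost:22 user@debian-server
        # the laptop is then reachable from the server as:
        #   ssh -p 2222 user@localhost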


  • Sending mails via Mutt and Gmail: Duplicates

    - by Chris
    I'm trying to set up mutt with Gmail for the first time. It seems to work pretty well; however, when I send a mail from mutt it appears twice in Gmail's Sent folder. (I assume it's also sent twice - I'm trying to validate that.) My configuration (stripped of coloring): # A basic .muttrc for use with Gmail # Change the following six lines to match your Gmail account details set imap_user = "XX" set smtp_url = "[email protected]@smtp.gmail.com:587/" set from = "XX" set realname = "XX" # Change the following line to a different editor you prefer. set editor = "vim" # Basic config, you can leave this as is set folder = "imaps://imap.gmail.com:993" set spoolfile = "+INBOX" set imap_check_subscribed set hostname = gmail.com set mail_check = 120 set timeout = 300 set imap_keepalive = 300 set postponed = "+[Gmail]/Drafts" set record = "+[Gmail]/Sent Mail" set header_cache=~/.mutt/cache/headers set message_cachedir=~/.mutt/cache/bodies set certificate_file=~/.mutt/certificates set move = no set include set sort = 'threads' set sort_aux = 'reverse-last-date-received' set auto_tag = yes hdr_order Date From To Cc auto_view text/html bind editor <Tab> complete-query bind editor ^T complete bind editor <space> noop # Gmail-style keyboard shortcuts macro index,pager y "<enter-command>unset trash\n <delete-message>" "Gmail archive message" macro index,pager d "<enter-command>set trash=\"imaps://imap.googlemail.com/[Gmail]/Bin\"\n <delete-message>" "Gmail delete message" macro index,pager gl "<change-folder>" macro index,pager gi "<change-folder>=INBOX<enter>" "Go to inbox" macro index,pager ga "<change-folder>=[Gmail]/All Mail<enter>" "Go to all mail" macro index,pager gs "<change-folder>=[Gmail]/Starred<enter>" "Go to starred messages" macro index,pager gd "<change-folder>=[Gmail]/Drafts<enter>" "Go to drafts" macro index,pager gt "<change-folder>=[Gmail]/Sent Mail<enter>" "Go to sent mail" #Don't prompt on exit set quit=yes ## ================= #Color definitions ## ================= set pgp_autosign
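
    Gmail's SMTP server files its own copy of every outgoing message in Sent Mail, so the copy mutt saves via set record makes the duplicate. A sketch of either fix for the muttrc:

        set copy = no                # let Gmail keep the only copy
        # or keep a local record instead of the IMAP Sent folder:
        # set record = "~/sent"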


  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my domain controllers and time service. On my backup DC, I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a time server? My PDC should be the time server, and I have gone through setting up the PDC as the time server. I was not around for the original setup of the time server with the old PDC and backup DC, but I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old PDC. Is it possible that the backup DC was set up as the time server and still thinks it's supposed to be giving out time to everyone? The registry on the PDC has Type = NTP; the registry on the backup has Type = NT5DS. Results of w32tm /monitor: Getting AD DC list for default domain... Analyzing:delayoffset from DC1.local..com Stratum: 4 delayoffset from DC1.local..com Stratum: 3 Warning: Reverse name resolution is best effort. It may not be correct since RefID field in time packets differs across NTP implementations and may not be using IP addresses. DC2.local..com[192.168.1.8:123]: ICMP: 1ms NTP: -0.6349491s RefID: DC1.local..com [192.168.1.9] DC1.local..com *** PDC ***[192.168.1.9:123]: ICMP: 0ms NTP: +0.0000000s RefID: wwwco1test12.microsoft.com [65.55.21.20]
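
    Assuming the backup DC really was once configured as an NTP source, a hedged sketch to put both machines back in the standard arrangement (run from an elevated prompt):

        REM backup DC: follow the domain hierarchy (NT5DS) again
        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time

        REM PDC emulator: sync from an external source and advertise as reliable
        w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:yes /update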

