Search Results

Search found 41497 results on 1660 pages for 'fault'.


  • Reverse ssh tunneling with Tomato

    - by Deivuh
    Since my ISP restricts some incoming connections, I can't access my home PC remotely. What I'm trying to do is open a reverse SSH connection from my home router (running Tomato firmware) to the office computer, so that I can get back in from the office through that open connection. From the router I run:

        ssh -R 12345:localhost:22 oUser@office

    and then leave top running to keep the connection alive. From my office I run:

        ssh hUser@localhost -p 12345

    but I get the following with verbose on:

        OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to localhost [::1] port 19999.
        debug1: Connection established.
        debug1: identity file /home/oUser/.ssh/id_rsa type -1
        debug1: identity file /home/oUer/.ssh/id_rsa-cert type -1
        debug1: identity file /home/oUser/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: identity file /home/oUser/.ssh/id_dsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    Password-based remote access is enabled in Tomato's configuration, so I should be able to log in without having my public key in *authorized_keys*, but I have tried adding it as well and the result is the same. Doing the same thing against my home computer instead of the router works perfectly. Am I doing something wrong? Thanks in advance.
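    A more robust way to keep the reverse tunnel up than leaving top running is to let the SSH client send keepalives itself. A minimal sketch, assuming the router's client is OpenSSH (Tomato may ship dropbear instead, whose dbclient uses -K <seconds> for keepalives rather than the -o options):

        # from the router: no remote command (-N), keepalives every 60 s,
        # give up after 3 missed replies so a dead tunnel can be restarted
        ssh -N -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
            -R 12345:localhost:22 oUser@office

    Wrapping that line in a small loop (or using autossh where available) restarts the tunnel automatically if the office side drops it.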


  • ERR_INCOMPLETE_CHUNKED_ENCODING apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load, and the browser console reports net::ERR_INCOMPLETE_CHUNKED_ENCODING. Also, I can sometimes see some of the images load for a little while (1 second at most) and then they disappear.

    .htaccess:

        RewriteEngine On

        # Serve the favicon file from img folder
        RewriteCond %{REQUEST_URI} ^/favicon.ico$
        RewriteRule ^(.*)$ /img/$1 [NC,L]

        # Redirect HTTP traffic to WWW subdomain
        RewriteCond %{HTTPS} off [NC]
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        # Redirect HTTPS traffic to WWW subdomain
        RewriteCond %{HTTPS} on [NC]
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L]

        # Auto Versioning rules
        RewriteCond %{REQUEST_FILENAME} !-s
        RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L]

        # Default Zend rewrite rules
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    VHost:

        <VirtualHost *:80>
            ServerAdmin admin@localhost
            ServerName localhost
            DocumentRoot /home/mihai/ARTD/www/public/website
            # Omit this in production environment
            SetEnv APPLICATION_ENV local

            <Directory /home/mihai/ARTD/www/public/website >
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                #Order deny,allow
                #Allow from all
                Require all granted
            </Directory>

            <IfModule mod_php5.c>
                php_value memory_limit 128M
                php_value upload_max_filesize 20M
                php_value post_max_size 20M
            </IfModule>

            ErrorLog /var/log/apache2/ARTD-error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/ARTD-access.log combined
        </VirtualHost>

        <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin admin@localhost
            ServerName localhost
            DocumentRoot /home/mihai/ARTD/www/public/website
            # Omit this in production environment
            SetEnv APPLICATION_ENV local

            <Directory /home/mihai/ARTD/www/public/website >
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                #Order deny,allow
                #Allow from all
                Require all granted
            </Directory>

            <IfModule mod_php5.c>
                php_value memory_limit 128M
                php_value upload_max_filesize 20M
                php_value post_max_size 20M
            </IfModule>

            ErrorLog /var/log/apache2/ARTD-ssl-error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/ARTD.log combined

            # SSL Engine Switch: Enable/Disable SSL for this virtual host.
            SSLEngine on

            # A self-signed (snakeoil) certificate can be created by installing the ssl-cert
            # package. See /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
            # If both key and certificate are stored in the same file, only the
            # SSLCertificateFile directive is needed.
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            # Server Certificate Chain: Point SSLCertificateChainFile at a file containing the
            # concatenation of PEM encoded CA certificates which form the certificate chain for
            # the server certificate. Alternatively the referenced file can be the same as
            # SSLCertificateFile when the CA certificates are directly appended to the server
            # certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

            # Certificate Authority (CA): Set the CA certificate verification path where to find
            # CA certificates for client authentication or alternatively one huge file containing
            # all of them (file must be PEM encoded). Note: Inside SSLCACertificatePath you need
            # hash symlinks to point to the certificate files. Use the provided Makefile to update
            # the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

            # Certificate Revocation Lists (CRL): Set the CA revocation path where to find CA CRLs
            # for client authentication or alternatively one huge file containing all of them
            # (file must be PEM encoded). Note: Inside SSLCARevocationPath you need hash symlinks
            # to point to the certificate files. Use the provided Makefile to update the hash
            # symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

            # Client Authentication (Type): Client certificate verification type and depth. Types
            # are none, optional, require and optional_no_ca. Depth is a number which specifies
            # how deeply to verify the certificate issuer chain before deciding the certificate
            # is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth 10

            # Access Control: With SSLRequire you can do per-directory access control based on
            # arbitrary complex boolean expressions containing server variable checks and other
            # lookup directives. The syntax is a mixture between C and Perl. See the mod_ssl
            # documentation for more details.
            #<Location />
            #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            #            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            #            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            #            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            #            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
            #           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>

            # SSL Engine Options: Set various options for the SSL engine.
            # o FakeBasicAuth: Translate the client X.509 into a Basic Authorisation. This means
            #   that the standard Auth/DBMAuth methods can be used for access control. The user
            #   name is the `one line' version of the client's X.509 certificate. Note that no
            #   password is obtained from the user. Every entry in the user file needs this
            #   password: `xxj31ZMTZzkVA'.
            # o ExportCertData: This exports two additional environment variables: SSL_CLIENT_CERT
            #   and SSL_SERVER_CERT. These contain the PEM-encoded certificates of the server
            #   (always existing) and the client (only existing when client authentication is
            #   used). This can be used to import the certificates into CGI scripts.
            # o StdEnvVars: This exports the standard SSL/TLS related `SSL_*' environment
            #   variables. Per default this exportation is switched off for performance reasons,
            #   because the extraction step is an expensive operation and is usually useless for
            #   serving static content. So one usually enables the exportation for CGI and SSI
            #   requests only.
            # o StrictRequire: This denies access when "SSLRequireSSL" or "SSLRequire" applied
            #   even under a "Satisfy any" situation, i.e. when it applies access is denied and
            #   no other module can change it.
            # o OptRenegotiate: This enables optimized SSL connection renegotiation handling when
            #   SSL directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            #<FilesMatch "\.(cgi|shtml|phtml|php)$">
            #    SSLOptions +StdEnvVars
            #</FilesMatch>

            # SSL Protocol Adjustments: The safe and default but still SSL/TLS standard compliant
            # shutdown approach is that mod_ssl sends the close notify alert but doesn't wait for
            # the close notify alert from client. When you need a different shutdown approach you
            # can use one of the following variables:
            # o ssl-unclean-shutdown: This forces an unclean shutdown when the connection is
            #   closed, i.e. no SSL close notify alert is send or allowed to received. This
            #   violates the SSL/TLS standard but is needed for some brain-dead browsers. Use
            #   this when you receive I/O errors because of the standard approach where mod_ssl
            #   sends the close notify alert.
            # o ssl-accurate-shutdown: This forces an accurate shutdown when the connection is
            #   closed, i.e. a SSL close notify alert is send and mod_ssl waits for the close
            #   notify alert of the client. This is 100% SSL/TLS standard compliant, but in
            #   practice often causes hanging connections with brain-dead browsers. Use this only
            #   for browsers where you know that their SSL implementation works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP keep-alive
            # facility, so you usually additionally want to disable keep-alive for those clients,
            # too. Use variable "nokeepalive" for this. Similarly, one has to force some clients
            # to use HTTP/1.0 to workaround their broken HTTP/1.1 implementation. Use variables
            # "downgrade-1.0" and "force-response-1.0" for this.
            #BrowserMatch ".*MSIE.*" \
            #    nokeepalive ssl-unclean-shutdown \
            #    downgrade-1.0 force-response-1.0
        </VirtualHost>
        </IfModule>

    Logs:

        Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection)
        127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"
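    A first step that may help narrow this down (a diagnostic sketch, not a confirmed fix for this particular setup): fetch one of the failing images directly with curl and compare what arrives against the Content-Length header, once normally and once with compression refused, while watching the Apache error logs.

        # request an image over HTTPS, ignoring the self-signed cert, and show headers
        curl -vk -o /tmp/test.png https://localhost/img/header/top-nav-separator.png

        # same request but refuse compressed transfer encodings
        curl -vk -H "Accept-Encoding: identity" -o /tmp/test2.png https://localhost/img/header/top-nav-separator.png

        # watch for truncation / filter errors while repeating the requests
        tail -f /var/log/apache2/ARTD-error.log /var/log/apache2/ARTD-ssl-error.log

    If the identity request is complete but the default one is truncated, the problem is likely in an output filter such as mod_deflate, and temporarily disabling it for images (for example with "SetEnv no-gzip 1" in the <Directory> block) is a cheap way to confirm that.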


  • Can't upgrade Windows Server 2012 Essentials to Windows Server 2012 R2 Essentials

    - by Magnus
    So the RTM of Windows Server 2012 R2 Essentials was released today. I immediately tried to do an in-place upgrade of an existing RTM 2012 Essentials installation to RTM 2012 R2 Essentials, but I get:

        Windows Server 2012 Essentials cannot be upgraded to Windows Server 2012 R2 Essentials. You can choose to install a new copy of Windows Server 2012 R2 Essentials instead, but this is different from an upgrade, and does not keep your files, settings, and programs. You'll need to reinstall any programs using the original installation discs or files. To save your files before installing Windows, back them up to an external location such as a CD, DVD, or external hard drive. To install a new copy of Windows Server 2012 R2 Essentials, click the Back button in the upper left-hand corner, and select "Custom (advanced)".

    This seems a bit odd; is this really the case? I can't find anything in the official documentation stating that this upgrade isn't possible.


  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook on a central git repository (set up with gitolite) that triggers a git pull on a staging server. It seems to work properly, but it throws a "command not found" error when it runs. I am trying to track down the source of the error, but have not had any luck; running the same commands manually does not produce an error. The error changes depending on what was done in the commit being pushed. For instance, if a 'git rm' was committed and pushed, the error message is "remote: hooks/post-receive: line 16: Removed: command not found", and if a 'git add' was committed and pushed, it is "remote: hooks/post-receive: line 16: Merge: command not found". In either case the git pull on the staging server works correctly despite the error message. Here is the post-receive script:

        #!/bin/bash
        #
        # This script is triggered by a push to the local git repository. It will
        # ssh into a remote server and perform a git pull.
        #
        # The SSH_USER must be able to log into the remote server with a
        # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
        #
        # The command to actually perform the pull request on the remote server comes
        # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
        # by the ssh login.

        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

        echo "Done!"

    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is:

        command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)

    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:

        ben@tamarack:~/thejibe/testing/web$ git rm ./testing
        rm 'testing'
        ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
        [master bb96e13] Remove testing file
         1 files changed, 0 insertions(+), 5 deletions(-)
         delete mode 100644 testing
        ben@tamarack:~/thejibe/testing/web$ git push
        Counting objects: 3, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (2/2), 221 bytes, done.
        Total 2 (delta 1), reused 0 (delta 0)
        remote: From [email protected]:testing
        remote:    aa72ad9..bb96e13  master -> origin/master
        remote: hooks/post-receive: line 16: Removed: command not found   # The error msg
        remote: Done!
        To [email protected]:testing
           aa72ad9..bb96e13  master -> master
        ben@tamarack:~/thejibe/testing/web$

    As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has run successfully, but there's still that nagging error message. Any suggestions on where to look for the source of the error would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
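    The backticks around the ssh command are a likely culprit here (an observation about the script as posted, not a confirmed diagnosis): `ssh ...` runs the remote git pull, captures its stdout, and then tries to execute the first word of that output ("Removed ..." or "Merge ...") as a new shell command, which is exactly the "command not found" pattern in the log. A minimal sketch of line 16 without command substitution:

        # run the remote pull and show its output instead of re-executing it
        ssh "$SSH_USER@$REMOTE_HOST"

        # or, if the output should be captured for logging:
        output=$(ssh "$SSH_USER@$REMOTE_HOST")
        echo "$output"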


  • IIS application pool failed to respond to ping

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I have been investigating a problem that occurred on a Windows 2003 server a few days ago. There are about 15 app pools, and within a few minutes they all produced the error below in the system log:

        A process serving application pool 'Pool 31x' failed to respond to a ping. The process id was '7144'.

    The pools were then restarted automatically, but timed out during startup, leaving all sites down. My question is: what would cause a "ping timeout" in all of the app pools at around the same time, and why would they then start up too slowly? The app in each pool is a WCMS which uses the .NET 1.1 framework. It connects to a remote DB but is otherwise independent of other machines.
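    If the pools need more breathing room while the underlying cause is investigated, IIS 6 lets you relax the health-check and startup limits per application pool. A sketch using adsutil.vbs (the values, and the exact quoting of a pool name containing a space, are assumptions to verify against the metabase documentation):

        cd C:\Inetpub\AdminScripts
        rem allow 120 s for the worker process to answer a ping, and 5 minutes to start up
        cscript adsutil.vbs SET "W3SVC/AppPools/Pool 31x/PingResponseTime" 120
        cscript adsutil.vbs SET "W3SVC/AppPools/Pool 31x/StartupTimeLimit" 300

    That only treats the symptom; a simultaneous stall across 15 pools usually points at a shared resource (the remote DB, disk, or memory pressure) rather than at the pools themselves.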


  • URL Rewriting on GoDaddy Virtual Server

    - by Aristotle
    I migrated a Kohana2 application from a shared-hosting environment over to a virtual dedicated server. After this migration, I can't seem to get my .htaccess file working again. I apologize up front, but over the years I have never experienced so much frustration with anything else as I do with the dreaded .htaccess file. Presently I have my project installed immediately within a directory in my public folder:

        /var/html/www/info.php (general information about server)
        /var/html/www/logo.jpg (some flat file)
        /var/html/www/somesite.com/[kohana site exists here]

    So my .htaccess file is within that directory, and has the following contents:

        # Turn on URL rewriting
        RewriteEngine On

        # Installation directory
        RewriteBase /somesite.com/

        # Protect application and system files from being viewed
        # This is only necessary when these files are inside the webserver document root
        RewriteRule ^(application|modules|system) - [R=404,L]

        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # Rewrite all other URLs to index.php/URL
        RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

        # Alternatively, if the rewrite rule above does not work try this instead:
        #RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

    This doesn't work. The initial controller is loaded, since index.php is called up implicitly when nothing else is in the URL. But if I try to load some other, non-default controller, the site fails. If I put index.php back in the URL, the calls to other controllers work just fine. I'm really at my wits' end and would appreciate some direction here.
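    Since other controllers only fail when index.php is left out of the URL, the rewrite rules are probably never being consulted at all. Two quick checks worth running on the new server (a sketch; the config paths are assumptions for a typical CentOS-style layout):

        # is mod_rewrite actually loaded?
        httpd -M 2>/dev/null | grep rewrite    # or: apachectl -M | grep rewrite

        # does the vhost / directory block allow .htaccess overrides?
        grep -R "AllowOverride" /etc/httpd/conf /etc/httpd/conf.d 2>/dev/null

    If AllowOverride is None for /var/html/www, the .htaccess file is silently ignored, which matches these symptoms; setting it to All (or at least FileInfo) for that directory and restarting Apache is the usual fix.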


  • What backup utilities are available for Trixbox CE?

    - by Tim Post
    I'm running Trixbox CE (v2.6.2.3) and I need to make a full system backup of all configurations, recordings, databases, etc in a manner that can be easily restored on a brand new installation of the same version of Trixbox CE. I started to write something myself using rsync, however I feel like I'm probably missing some things and needlessly copying other things. What are my options?
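    In case it helps as a starting point, here is a minimal rsync/mysqldump sketch covering the locations a stock Asterisk/FreePBX-based system such as Trixbox normally keeps its state in (the paths, credentials, and backup host are assumptions to verify against your install):

        #!/bin/bash
        # dump the FreePBX/CDR databases
        mysqldump --all-databases > /tmp/trixbox-mysql.sql

        # copy config, voicemail/recordings, dialplan, web GUI and the DB dump off-box
        rsync -avz /etc/asterisk /var/lib/asterisk /var/spool/asterisk \
                   /var/www/html /tmp/trixbox-mysql.sql \
                   backupuser@backuphost:/backups/trixbox/

    Restoring onto a fresh install of the same Trixbox version is then a matter of loading the SQL dump and rsyncing the directories back before restarting Asterisk.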


  • apache2 mod_deflate static content

    - by rizen
    I have a server serving up a JS file a few million times a day using Apache 2. Some of my users would like the JS to be gzipped. Does anyone know how Apache 2's mod_deflate handles compression of static files? Will it compress the JS for each request (in which case I'd be worried about CPU load)? If it does, is there a way to pre-compress the JS files so Apache wouldn't have to do this for every request?
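    For what it's worth, mod_deflate compresses on the fly for every request and does not cache the result, so pre-compressing is a reasonable idea for a file served millions of times a day. A common pattern (a sketch to adapt, not tested against this exact setup) is to keep a .gz copy next to the original and let mod_rewrite serve it to clients that accept gzip:

        # pre-compress once, at deploy time
        gzip -9 -c script.js > script.js.gz

    and in the Apache config or .htaccess:

        RewriteEngine On
        RewriteCond %{HTTP:Accept-Encoding} gzip
        RewriteCond %{REQUEST_FILENAME}.gz -f
        RewriteRule ^(.*)\.js$ $1.js.gz [L]

        <FilesMatch "\.js\.gz$">
            ForceType application/javascript
            Header set Content-Encoding gzip
        </FilesMatch>

    The Header directive needs mod_headers; clients that do not send Accept-Encoding: gzip still get the uncompressed file.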


  • Setting up a network e-mail server

    - by Jason
    Hello, My boss just asked me to buy a new server for our office network. I know next to nothing about servers and networking, so I need someone to point me in the right direction. He said he wants this to be our e-mail server with a network login. I have no idea how to set up an e-mail server, especially one that sends/receives e-mail using our domain name. We use a terrible piece of order/inventory software called Mail Order Manager (MOM). Our computers currently connect to the MOM database through a networked drive. My boss would like to move away from this peer-to-peer MOM setup. The software publisher offers a SQL version of MOM, but it's way overpriced. Is there a better way to connect to these databases without using the SQL version? Finally, the server needs to be running Windows. Does this question make sense, is it possible, and can someone help me get started? Thanks!


  • Sendmail not working from local PHP when headers not specified

    - by Dan
    Hi. I'm having trouble getting my local XAMPP server to send emails via my remote SMTP server. In PHP, if I put:

        $headers = "From: [email protected]\r\n";
        mail('[email protected]', 'test', '', $headers);

    then this works. However, if I don't specify the header, i.e.:

        mail('[email protected]', 'test', '');

    then this fails. The sendmail.log file says:

        smtpstatus=554 smtpmsg='554 Message refused.' errormsg='the server did not accept the mail' exitcode=EX_UNAVAILABLE

    I've tried changing the sendmail command in my php.ini to:

        sendmail_path = "C:/xampp/sendmail/sendmail.exe -t -f [email protected]"

    but this doesn't work either. Thanks for any help with this, Dan. P.S. this is on Windows.
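    One avenue worth trying (a sketch; whether it applies depends on the fake-sendmail build XAMPP ships): when no From header is passed, the wrapper has no sender to offer, and many SMTP servers answer exactly this kind of 554. Both of these give it a default sender:

        ; php.ini - used by mail() as the default From on Windows
        sendmail_from = [email protected]

        ; C:\xampp\sendmail\sendmail.ini - force a sender at the wrapper level
        force_sender = [email protected]

    The address shown is just the placeholder from the question; substitute a sender your SMTP server actually accepts.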


  • Determine nginx reverse-proxy load limits

    - by Aaron
    Hi all: I have an nginx server (CentOS 5.3, Linux) that I'm using as a reverse-proxy load balancer in front of 8 Ruby on Rails application servers. As the load on these servers increases, I'm beginning to wonder at what point the nginx server will become a bottleneck. The CPUs are hardly used, but that's to be expected. Memory seems fine. There is no I/O to speak of. So is my only limitation bandwidth on the NICs? Currently, according to some Cacti graphs, the server is hitting around 700 Kbps (5-minute average) on each NIC during high load, which I would think is still pretty low. Or will the limit be sockets or some other resource in the operating system? Thanks for any thoughts and insights. Aaron
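    When the proxy itself does become the constraint, it usually shows up as connection or file-descriptor limits long before CPU. A few nginx settings worth checking (a sketch with illustrative numbers, not tuned values for this site):

        worker_processes  2;
        worker_rlimit_nofile  65535;    # raise the per-worker FD ceiling

        events {
            worker_connections  8192;   # each proxied request uses 2 connections (client + upstream)
        }

    On the OS side, ephemeral ports and file limits (sysctl net.ipv4.ip_local_port_range, fs.file-max) tend to be the next ceilings after NIC bandwidth.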


  • openvpn TCP/UDP slow SSH/SMB performance

    - by Petr Latal
    I have a question about the strange behavior of my OpenVPN configuration on Debian lenny. I have two server configs, one proto tcp-server based and one proto udp based. ISP bandwidth is 7 Mbit/7 Mbit. When I use proto tcp-server, my download rate through the server is fine at around 6.4 Mbit/s, but the upload rate is about 3 Mbit/s. When I use proto udp, the download rate is around 3 Mbit/s and the upload rate around 6.4 Mbit/s. I have tried adjusting the MTU, mssfix, and the cipher on/off on server and client configs to bring the rates in line, but without success.

    Here is the TCP based SERVER config:

        mode server
        tls-server
        port 1194
        proto tcp-server
        dev tap0
        ifconfig 11.10.15.1 255.255.255.0
        ifconfig-pool 11.10.15.2 11.10.15.20 255.255.255.0
        push "route 192.168.1.0 255.255.255.0"
        push "dhcp-option DNS 192.168.1.200"
        push "route-gateway 11.10.15.1"
        push "dhcp-option WINS 192.168.1.200"
        route-up /etc/openvpn/routeup.sh
        duplicate-cn
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        mssfix
        cipher BF-CBC

    Here is the UDP based SERVER config:

        port 1194
        proto udp
        dev tun0
        local xx.xx.xx.xx
        server 11.10.15.0 255.255.255.0
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        duplicate-cn
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        tun-mtu 1500
        mssfix 1212
        client-to-client
        ifconfig-pool-persist ipp.txt

    Here is the TCP/UDP based Windows CLIENT config:

        remote xx.xx.xx.xx
        --socket-flags TCP_NODELAY
        tls-client
        port 1194
        proto tcp-client
        #proto udp
        dev tap
        #dev tun
        pull
        ca ca.crt
        cert latis.crt
        key latis.key
        mute 0
        comp-lzo adaptive
        verb 3
        resolv-retry infinite
        nobind
        persist-key
        auth-user-pass
        auth-nocache
        script-security 2
        mssfix
        cipher BF-CBC
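    Two knobs that often influence exactly this kind of asymmetry on a ~7 Mbit link are the UDP socket buffers and the fragment/mssfix pairing (a sketch of values to experiment with, not measurements from this setup), applied to both ends of the UDP instance:

        # set on both the server and the Windows client config of the UDP instance
        sndbuf 393216
        rcvbuf 393216
        fragment 1300
        mssfix 1300

    fragment is only valid with proto udp and the same value has to appear on both sides. Comparing a plain iperf run over TCP and UDP outside the tunnel first also helps separate ISP traffic shaping from OpenVPN behaviour.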


  • Route additional network through Sonicwall site-to-site VPN

    - by Brandon
    I have a SonicWALL site-to-site VPN. At one of the sites there is another, Cisco-based VPN to a third site, and I need to route traffic for that Cisco VPN through the site-to-site tunnel from the other SonicWALL site.

        Site A - 10.10.0.0/16 network
        Site B - 192.168.1.0/24, SonicWALL; a Cisco VPN device at 192.168.1.226 routes the 10.10.0.0 network to Site A
        Site C - 192.168.2.0/24, SonicWALL

    The Site A-B VPN is working and the Site B-C VPN is working. I need Site C to send 10.10.0.0 traffic over the VPN to Site B and then out through the Cisco device.


  • 5.5.0 smtp;554 transaction failed spam message not queued

    - by Miguel
    Some users are trying to send email to certain domains using Exchange Server 2003, but the message is always rejected and the following is shown:

        5.5.0 smtp;554 Transaction Failed Spam Message not queued

    The IP is not on a blacklist (checked using http://whatismyipaddress.com/blacklist-check; it is clean, not listed). The emails were checked using smtpdiag ("a troubleshooting tool designed to work directly on a Windows server with IIS/SMTP service enabled or with Exchange Server installed") and the connection on port 25 is OK. Also, an nslookup with set type=ptr shows (names and IPs changed, "" means I typed something):

        C:\Documents and Settings\administrator>nslookup
        Default Server: publicdns.isp.net
        Address: 10.10.10.10

        > server publicdns.isp.net
        Default Server: publicdns.isp.net
        Address: 10.10.10.10

        > set type=ptr
        > mydomain.com
        Server: publicdns.isp.net
        Address: 10.10.10.10

        mydomain.com
            primary name server = publicdns.isp.net
            responsible mail addr = root.isp.net
            serial = 2011061301
            refresh = 10800 (3 hours)
            retry = 3600 (1 hour)
            expire = 604800 (7 days)
            default TTL = 86400 (1 day)

        > 20.21.22.23
        Server: publicdns.isp.net
        Address: 10.10.10.10

        23.22.21.20.in-addr.arpa    name = mail.mydomain.com
        20.21.in-addr.arpa          nameserver = publicdns.isp.net
        20.21.in-addr.arpa          nameserver = publicdns2.isp.net
        publicdns2.isp.net          internet address = 10.10.10.11
        publicdns.isp.net           internet address = 10.10.10.10

        Server: publicdns.isp.net
        Address: 10.10.10.10

        23.22.21.20.in-addr.arpa    name = mail.mydomain.com
        20.21.in-addr.arpa          nameserver = publicdns.isp.net
        20.21.in-addr.arpa          nameserver = publicdns2.isp.net
        publicdns2.isp.net          internet address = 10.10.10.11
        publicdns.isp.net           internet address = 10.10.10.10

        > set type=mx
        > mydomain.com
        Server: publicdns.isp.net
        Address: 10.10.10.10

        mydomain.com          MX preference = 10, mail exchanger = mail.mydomain.com
        mydomain.com          nameserver = publicdns.isp.net
        mydomain.com          nameserver = publicdns2.isp.net
        mail.mydomain.com     internet address = 20.21.22.23
        publicdns2.isp.net    internet address = 10.10.10.11
        publicdns.isp.net     internet address = 10.10.10.10

        > set type=a
        > mydomain.com
        Server: publicdns.isp.net
        Address: 10.10.10.10

        Nombre: mydomain.com
        Address: 20.21.22.23

    When I test the SPF record with http://www.mxtoolbox.com it shows:

        TXT   mydomain.com   24 hrs   v=spf1 a mx ptr ip4:20.21.22.23 mx:mail.mydomain.com -all

    Any clues as to what's happening here?
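    For completeness, the same forward/reverse/SPF checks can be scripted from any outside host with dig, which makes it easy to re-test after DNS changes (a sketch using the placeholder names from the question):

        dig +short mx mydomain.com        # expect: 10 mail.mydomain.com.
        dig +short a mail.mydomain.com    # expect: 20.21.22.23
        dig +short -x 20.21.22.23         # expect: mail.mydomain.com.
        dig +short txt mydomain.com       # expect the v=spf1 record

    If all four line up from the outside world as well, the 554 is more likely being decided by the recipients' content or reputation filtering than by DNS, and the bounce details or the receiving postmasters are the next place to look.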


  • Sending Adobe PDF attachments from Adobe Reader (in Outlook 2003) takes too long

    - by White Island
    I have a customer who is using Outlook 2003 (Microsoft Online Services) and Adobe reader 9+. When they send a PDF from Adobe reader to Outlook (via the Send as attachment to e-mail feature in Adobe), it freezes for 30 seconds to 5 minutes before the new e-mail pops up with the PDF attachment. I'm pretty sure the issue is on the Outlook side of things, as I've tried Adobe reader 8 and Foxit Reader with the same results (Windows XP/7 doesn't seem to make a difference, either). I tried Outlook in safe mode on the first (Win7) machine I was working on, and the e-mail attachment worked a lot faster, but when I tried to replicate the results on another machine, one wouldn't go into safe mode, the other didn't seem to show a difference. In an effort to fix the problem in Outlook normal mode, I tried disabling all add-ins, Com add-in (Office Communicator is the only one), reading pane, Word 2003 as e-mail editor... but none of these seemed to address the issue. Does anyone have any other ideas? I need to get this resolved as soon as possible, and it doesn't seem practical to make them run in safe mode. :P


  • Implement QoS/Bandwidth Management or Upgrade Bandwidth?

    - by Michael
    A question that I'm faced with currently. Here's my setup:

        Cisco ASA 5510
        15 Mbps Internet connection @ $1350/month

    The bandwidth was originally meant for 35-45 people, but we've grown quite quickly to roughly 60-65 people. Needless to say, when I check the bandwidth logs it's almost always spiked at 15 Mbps. I did use Wireshark to do some poking around and see what was hogging our bandwidth, but with everything running through CDNs and cloud services it proved difficult to get a good grasp of where it was going.

    So the question is: do I ONLY implement bandwidth management through the ASA, OR upgrade the Internet connection to 50 Mbps ($1600/month) and then implement bandwidth management through the ASA? Any suggestions on how to segment the 15 Mbps connection if we decide to go with the bandwidth-management-only solution? Thanks.

    UPDATE 1: Installed PRTG and used packet content to monitor the traffic. As I suspected, still pretty vague. My top connections include the following:

        a204-2-160-16.deploy.akamaitechnologies.com
        ec2-50-16-212-159.compute-1.amazonaws.com
        a204-2-160-48.deploy.akamaitechnologies.com
        a72-247-247-133.deploy.akamaitechnologies.com
        mediaserver-sv5-t1-1.pandora.com

    Other than the Pandora destination, this doesn't tell me much about how to properly control the bandwidth. Any thoughts or suggestions? Thanks. M
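    If the decision is to shape on the ASA alone, the usual building blocks are the Modular Policy Framework's police action (hard caps per traffic class) or traffic shaping on the outside interface. A rough sketch for capping a class of streaming traffic (the names, ACL entry, and the 2 Mbps figure are illustrative assumptions, not a recommendation for this network):

        access-list STREAMING extended permit tcp any any eq 443
        class-map STREAMING-CLASS
          match access-list STREAMING
        policy-map OUTSIDE-POLICY
          class STREAMING-CLASS
            police output 2000000 4000
        service-policy OUTSIDE-POLICY interface outside

    In practice, defining the class is the hard part with CDN/cloud traffic, which is why per-user fair-use limits (or simply buying more bandwidth) often end up being simpler than per-application policing on the ASA.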


  • setup Zyxel USG 20W as L2TP VPN Server

    - by Massimo
    I have a ZyWALL USG 20W (wireless disabled) behind a router supplied by the ISP. All ports (both TCP and UDP) on the ISP router are forwarded to the 20W. I'm trying to configure an L2TP VPN to be used by Windows XP / 7 machines with the native Microsoft client. This was working before with a different firewall, so I'm pretty sure all the required packets are reaching the 20W. I followed a tutorial from the Italian Zyxel website, but I cannot get the VPN to work. It never gets past phase 2, and I see the following in the log:

        [ID]: Tunnel [Default_L2TP_VPN_Connection] Phase 2 local policy mismatch

    Phase 1 goes fine. In Windows the error is always 788. This happens regardless of the proposals I set in the phase 1 and phase 2 settings. What should I check? Is there any way to get more detailed diagnostic info ("policy mismatch" is too generic)? Thanks a lot to whoever may help. Massimo.


  • Change EXT3 stride and stripe-width settings post-install on CentOS 5.3

    - by Justin Ellison
    Is there a way to change the stride and stripe-width options on an ext3 file system under CentOS/RHEL 5.3? There's no way to specify them via anaconda during installation that I saw, and while I see the -E option to tune2fs available under Ubuntu, I don't see it in the manpage on CentOS. I did try the -E flag on CentOS anyway, and it is rejected as an unknown option. Does anyone have a way to do this short of reinstallation?
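    For reference, on an e2fsprogs build that does support extended options (newer than the stock CentOS 5.3 one, e.g. built from source or run from a rescue environment), the change is a single command; the values come from the RAID geometry (stride = chunk size / block size, stripe-width = stride x number of data disks), and the exact option spelling (stripe-width vs stripe_width) varies slightly between e2fsprogs versions:

        # 64 KiB chunks, 4 KiB blocks, 4 data disks  ->  stride 16, stripe-width 64
        tune2fs -E stride=16,stripe-width=64 /dev/sdb1

    These options only steer the block allocator, so applying them after the fact mainly benefits data written from then on; existing files are not rewritten.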


  • PHP on Server 2003: Enabling PHP_APC.DLL creates Error

    - by james.527.
    When trying to enable php_apc.dll, PHP crashes with "invalid access to memory location". Any ideas? I am running 5.2.12 in ISAPI mode. Here is what I have enabled, extension-wise:

        ;extension=php_bz2.dll
        ;extension=php_curl.dll
        ;extension=php_dba.dll
        ;extension=php_dbase.dll
        ;extension=php_exif.dll
        ;extension=php_fdf.dll
        extension=php_gd2.dll
        ;extension=php_gettext.dll
        ;extension=php_gmp.dll
        ;extension=php_ifx.dll
        ;extension=php_imap.dll
        ;extension=php_interbase.dll
        extension=php_ldap.dll
        ;extension=php_mbstring.dll
        extension=php_mcrypt.dll
        ;extension=php_mhash.dll
        ;extension=php_mime_magic.dll
        ;extension=php_ming.dll
        ;extension=php_msql.dll
        ;extension=php_mssql.dll
        extension=php_mysql.dll
        extension=php_mysqli.dll
        ;extension=php_oci8.dll
        ;extension=php_openssl.dll
        extension=php_pdo.dll
        ;extension=php_pdo_firebird.dll
        ;extension=php_pdo_mssql.dll
        extension=php_pdo_mysql.dll
        ;extension=php_pdo_oci.dll
        ;extension=php_pdo_oci8.dll
        ;extension=php_pdo_odbc.dll
        ;extension=php_pdo_pgsql.dll
        extension=php_pdo_sqlite.dll
        ;extension=php_pgsql.dll
        ;extension=php_pspell.dll
        ;extension=php_shmop.dll
        ;extension=php_snmp.dll
        ;extension=php_soap.dll
        ;extension=php_sockets.dll
        extension=php_sqlite.dll
        ;extension=php_sybase_ct.dll
        ;extension=php_tidy.dll
        ;extension=php_xmlrpc.dll
        ;extension=php_xsl.dll
        extension=php_zip.dll
        extension=php_win32service.dll

    Ultimately I'm trying to create a PHP-based uploader with a progress bar, and from what I understand APC is the only way to do this. If anyone could help me stabilize php_apc, or knows of another upload-with-progress-bar method, it would be appreciated.
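    Two notes that may help, offered as assumptions to verify rather than a known fix: an "invalid access to memory location" crash on load is the classic symptom of a php_apc.dll built for a different PHP branch or thread-safety mode than the running interpreter, so the first thing to check is that the DLL is a thread-safe (VC6) build matching PHP 5.2.x, since ISAPI requires the thread-safe build. Once APC loads, the upload-progress feature also has to be switched on explicitly:

        ; php.ini
        extension=php_apc.dll
        apc.enabled=1
        apc.rfc1867=1        ; populate upload-progress data for file uploads

    The progress is then read from userland with apc_fetch() on the key submitted via the hidden APC_UPLOAD_PROGRESS form field.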


  • Route specific HTTP requests through pfSense OpenVPN

    - by DennisQ
    Hi, to start, I have very little knowledge of routes, iptables, etc. That said, here's what I'm trying to accomplish and where I think I'm stumped.

    Problem: We have an external website which we recently firewalled so it only accepts traffic from our office IP addresses. This works well at the office, but not for remote access through VPN, as we don't route all traffic through OpenVPN, and I would rather avoid forcing everyone to route all traffic through the VPN just to accommodate this one site.

    Environment: The main router box is running pfSense. Em0 is the internal interface, Em1 the external. The internal net is 10.23.x and the VPN is 10.0.8.0/24.

    I believe what I need to do is add a route to the VPN server config to send all traffic for that IP over the VPN tunnel. I think that part is working, but I don't get a response back, so I'm assuming I need some NAT config on the VPN server to route the response back over the tunnel. What I've found so far suggests the following, but since this is a pfSense box on FreeBSD, I can't run iptables, etc.:

        # make sure IP forwarding is enabled
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # set up NAT back out
        iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o em0 -j MASQUERADE

    Am I on the right path, and if so, how do I accomplish this through the pfSense UI or the FreeBSD CLI? Thanks!
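    The Linux commands above do translate to pfSense concepts, just through different mechanisms (a sketch of the idea; the website IP is a placeholder): push a host route for the firewalled site's address to the VPN clients, then NAT the tunnel subnet so replies come back through pfSense.

        ; OpenVPN server "Advanced configuration" box in the pfSense GUI
        push "route 203.0.113.80 255.255.255.255";

    Packet forwarding is already enabled on a pfSense box (it is a router), and the iptables MASQUERADE line corresponds to an Outbound NAT rule (Firewall > NAT > Outbound, manual mode) with source 10.0.8.0/24 translated to the interface address, rather than anything run from the CLI.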


  • Force Capistrano to ask for password

    - by Moshe Katz
    I am deploying with Capistrano to a new server and having the following issue. Currently, I cannot add an SSH key to the server to log in with, so I must use password authentication. However, I do have a key for another server saved in my local user account's .ssh directory. Here is the error I get when I try to log in:

        C:\Web\CampMaRabu>cap deploy:setup
          * executing `deploy:setup'
          * executing "mkdir -p /home2/webapp1 /home2/webapp1/releases /home2/webapp1/shared /home2/webapp1/shared/system /home2/webapp1/shared/log /home2/webapp1/shared/pids"
            servers: ["myserver.example.com"]
        connection failed for: myserver.example.com (OpenSSL::PKey::PKeyError: not a public key "C:/Users/MyAccount/.ssh/id_rsa.pub")

    How can I get Capistrano to ignore the existence of the key I have and let me log in with a password instead? I tried adding set :password, "myp@ssw0rd" to deploy.rb and it didn't help.
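    A sketch of deploy.rb settings that usually makes Capistrano 2 fall back to password prompts instead of choking on the stray key (the option names are standard Net::SSH options, but treat the exact combination as something to experiment with):

        # deploy.rb
        set :user, "deployuser"                  # placeholder login
        default_run_options[:pty] = true         # let the remote side prompt interactively
        ssh_options[:keys] = []                  # do not offer ~/.ssh/id_rsa at all
        ssh_options[:auth_methods] = %w(password keyboard-interactive)

    With :keys emptied, Net::SSH never tries to parse C:/Users/MyAccount/.ssh/id_rsa.pub, which is what the OpenSSL::PKey::PKeyError is complaining about; set :password can then stay in deploy.rb, or be left out so Capistrano asks at run time.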


  • Ubuntu 10.04 preseed unattended install results in faulty partition table

    - by joschi
    I'm currently trying to set up an unattended installation of Ubuntu 10.04 (Lucid Lynx) through preseeding, but whenever I try to create a custom partition scheme, the Debian installer (which Ubuntu uses) produces a faulty partition table. I've taken the partition scheme described in the example preseed file:

        d-i partman-auto/expert_recipe string                      \
              boot-root ::                                         \
                      40 50 100 ext3                               \
                              $primary{ } $bootable{ }             \
                              method{ format } format{ }           \
                              use_filesystem{ } filesystem{ ext3 } \
                              mountpoint{ /boot }                  \
                      .                                            \
                      500 10000 1000000000 ext3                    \
                              method{ format } format{ }           \
                              use_filesystem{ } filesystem{ ext3 } \
                              mountpoint{ / }                      \
                      .                                            \
                      64 512 300% linux-swap                       \
                              method{ swap } format{ }             \
                      .

    Unfortunately it also produces an incorrect partition table on the disk. The installation process itself works and the installed system eventually boots and runs, as far as I can tell. But fdisk and cfdisk are still complaining:

        # fdisk -l /dev/sda

        Disk /dev/sda: 17.2 GB, 17179869184 bytes
        255 heads, 63 sectors/track, 2088 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000a1cdd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1           5       37888   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2               5        2089    16736257    5  Extended
        /dev/sda5               5        2013    16121856   83  Linux
        /dev/sda6            2013        2089      613376   82  Linux swap / Solaris

    cfdisk even refuses to start at all:

        # cfdisk /dev/sda
        FATAL ERROR: Bad primary partition 1: Partition ends in the final partial cylinder

    parted, on the other hand, does not complain about the cylinder boundary of /dev/sda1:

        # parted /dev/sda p
        Model: VMware Virtual disk (scsi)
        Disk /dev/sda: 17.2GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  39.8MB  38.8MB  primary   ext4            boot
         2      40.9MB  17.2GB  17.1GB  extended
         5      40.9MB  16.5GB  16.5GB  logical   ext4
         6      16.6GB  17.2GB  628MB   logical   linux-swap(v1)

    Since the installed system is working, it shouldn't be a big problem, but I'm afraid that this will mean trouble in the future.
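    One way to see whether this is more than a cosmetic disagreement between tools (a check, not a fix): print the table in sectors and confirm each partition starts on the 1 MiB (2048-sector) boundaries that the installer's parted-based partitioner uses, which older cylinder-based fdisk versions misreport as "does not end on cylinder boundary".

        parted /dev/sda unit s print
        fdisk -lu /dev/sda      # -u lists in sectors instead of cylinders

    If the sector boundaries line up, the warning is harmless, and newer fdisk releases (which dropped cylinder-based alignment checks) will not show it.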


  • postfix Mail filters not running behind a controlled environment

    - by Ashish
    Hi, I have deployed a Postfix server for receiving email. On it I have configured the Sender ID + SPF milter, following http://www.postfix.org/MILTER_README.html. The command I used is:

        ./sid-filter -u postfix -p inet:10027@localhost -l

    These are my settings in main.cf:

        # Milter support for smtpd mail
        smtpd_milters = inet:localhost:10027, inet:localhost:10028
        # Milters for non-SMTP mail.
        non_smtpd_milters = inet:localhost:10027, inet:localhost:10028
        milter_default_action = reject
        # Postfix >= 2.6
        #milter_protocol = 6
        # 2.3 <= Postfix <= 2.5
        milter_protocol = 2

    Now I have this observation: one Postfix set up on AWS (CentOS 5.5) works fine and receives mail on the defined MX record. A similar Postfix (set up as above) behind one of the corporate firewalls cannot receive any mail and logs errors like the following:

        connect from xxxxxx.austin.hp.com[xx.xxx.96.198]
        May 25 13:20:02 g2t0385g postfix/smtpd[11733]: C11F9B0194: client=xxxxxxx.austin.hp.com[15.217.96.198]
        May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: message-id=
        May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: milter-reject: END-OF-MESSAGE from xxxxxx.austin.hp.com[xx.xxx.96.198]: 5.7.1 Command rejected; from= to= proto=ESMTP helo=

    Here the sid-filter is what's causing problems. Any idea what I am doing wrong? Please help. Thanks in advance, Ashish Sharma
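    Since the 5.7.1 is a verdict coming back from the milter at end-of-message, a useful first split is to confirm whether the firewalled box can perform the DNS lookups that Sender ID/SPF checking depends on (a common casualty behind corporate firewalls), and whether mail flows with that milter temporarily taken out of the path. A sketch of the main.cf changes for such a test, offered as a debugging step rather than a fix:

        # main.cf - temporarily drop the Sender ID milter from the chain
        smtpd_milters = inet:localhost:10028
        non_smtpd_milters = inet:localhost:10028

        # and/or be forgiving when a milter is unreachable (note: this does not
        # override a deliberate reject verdict returned by a working milter)
        milter_default_action = accept

    Running something like "host -t txt senderdomain.example" from the firewalled box (a hypothetical domain, substitute a real sender) is a quick check of whether SPF lookups can get out at all.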

