Search Results

Search found 12705 results on 509 pages for 'ip routing'.

Page 458 of 509

  • Configuring a MySQL 5.1 Instance on Windows 7 Professional x64 Fails

    - by Thomas Owens
    I'm trying to set up my laptops to function as mobile development environments. Installing the software on my Linux machine and getting it configured was fairly straightforward; however, I'm having trouble getting MySQL 5.1 Server installed and configured on Windows 7 Professional 64-bit. I'm currently using the Windows MSI installer for the complete MySQL 5.1 system (as opposed to the Essentials installer that is also available). I've tried to install using both the 32-bit and 64-bit versions of MySQL 5.1 - the same events occur in both. The server, the Instance Configuration Wizard and Workbench all appear to be installed just fine.

    When I open the Instance Configuration Wizard, I select Detailed Configuration. On the next screen I select Development Environment, then Multifunctional Database on the screen after that. I leave the InnoDB settings unchanged. I select Manual Setting with 5 concurrent connections. I enable TCP/IP Networking on port 3306 and Enable Strict Mode. I select the Standard Character Set. I check the boxes for Install as a Windows Service (and provide the name "MySQL") and Include the Bin Directory in Windows PATH. On the next screen I set my root user name and password; I do not enable root access from remote machines, and I do not create an anonymous account.

    On the final screen of the wizard, when I click "Execute", the first two tasks (Prepare Configuration and Write Configuration File) complete. However, when it reaches Start Service, the wizard hangs and becomes unresponsive ("Not Responding" appears in the title bar and in Task Manager). I would really like to be able to use both my Windows and Linux laptops as full-blown mobile development environments, but I can't do that without being able to run MySQL. Has anyone encountered this problem before? What options do I have to correct it?
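    Since the wizard stalls at the Start Service step, it can help to take the wizard out of the picture and try to start the service by hand, which at least surfaces a real error message. A rough diagnostic sketch from an elevated command prompt, assuming the default service name "MySQL" and the default 5.1 data directory (adjust the paths to your install):

      :: Was the Windows service actually created, and what state is it in?
      sc query MySQL

      :: Try to start it manually; an error code here is more useful than a hung wizard
      net start MySQL

      :: If it fails, check the MySQL error log in the data directory
      type "C:\ProgramData\MySQL\MySQL Server 5.1\Data\%COMPUTERNAME%.err"

    If the wizard has been run before, a stale service or a leftover data directory from the earlier attempt is a common cause of this hang; stopping the service, running "sc delete MySQL", clearing the old data directory and re-running the wizard is the usual cleanup.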

  • Nginx configuration leads to endless redirect loop

    - by brianthecoder
    So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2. Here's the SSL config:

      server {
        listen 443 default ssl;
        server_name <redacted>.com www.<redacted>.com;
        root /home/app/<redacted>/public;
        passenger_enabled on;
        rails_env production;

        ssl_certificate /home/app/ssl/<redacted>.com.pem;
        ssl_certificate_key /home/app/ssl/<redacted>.key;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X_FORWARDED_PROTO https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_redirect off;
        proxy_max_temp_file_size 0;

        location /blog {
          rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
        }

        location ~* \.(js|css|jpg|jpeg|gif|png)$ {
          if (-f $request_filename) {
            expires max;
            break;
          }
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root html;
        }
      }

    Here's the non-SSL config:

      server {
        listen 80;
        server_name <redacted>.com www.<redacted>.com;
        root /home/app/<redacted>/public;
        passenger_enabled on;
        rails_env production;

        location /blog {
          rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
        }

        location ~* \.(js|css|jpg|jpeg|gif|png)$ {
          if (-f $request_filename) {
            expires max;
            break;
          }
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root html;
        }
      }

    Let me know if there's any additional info I can give to help diagnose the issue.
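    One way to narrow this down is to work out who is issuing the redirect, nginx itself or the Rails app running under Passenger. A quick sketch with curl (the hostname is a placeholder for the redacted domain):

      # Fetch an HTTPS page and look only at the headers; if Location points back
      # at the same https:// URL, the redirect is coming from the application.
      curl -k -I https://www.example.com/some/ssl-only/page

      # Follow a few hops to see the whole redirect chain
      curl -k -s -o /dev/null -D - -L --max-redirs 5 https://www.example.com/some/ssl-only/page

    If the application is the one redirecting, the usual suspect with Rails behind an SSL-terminating front end is that it never sees a forwarded-protocol header and therefore keeps insisting on HTTPS; note that the proxy_set_header lines above only affect proxied requests, not requests served in-process by Passenger, so the value has to reach the app through Passenger's own mechanism.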

  • Unable to connect to server after a certain amount of time

    - by Troy
    I am a business FiOS subscriber with 5 static IPs. I have the following network setup:

      - Verizon-provided ONT
      - D-Link switch
      - Dell server running Ubuntu 12.04 with iptables enabled and a static IP address

    The makes/models of the hardware are:

      - FiOS ONT: Alcatel-Lucent I-211M-H ONT
      - Switch: D-Link Web Smart Switch DES-1228P
      - Server: Dell Optiplex 755 (Ubuntu 12.04 Server)

    I have iptables running on the server with the http, https and ssh ports open. I can connect to a website on the server from an external computer, but after a certain amount of time (minutes to hours), I can no longer connect. All I have to do to re-enable connectivity is connect to the server via SSH from a computer INSIDE the network. I don't have to actually log in, I just have to establish a connection. I can then access the website externally again.

    I did some googling and it seems some of Verizon's equipment had an ARP bug where the ARP entries would expire after a certain time period, but those issues all seem to be from back in 2009-2010. I know the switch has an 'auto learning MAC address' feature, but I'm not sure whether that could be the problem or not. Does anyone have any ideas or advice on how I can troubleshoot this? Thanks!
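    The symptom (external access comes back as soon as any connection is made from inside) fits an upstream ARP or MAC-table entry for the server aging out while the server sits quiet. A crude but useful test, assuming 203.0.113.1 stands in for the FiOS gateway address, is to keep the entry warm from the server itself:

      # /etc/cron.d/arp-keepalive  (hypothetical file, runs as root every minute)
      # Ping the upstream gateway so the ONT/switch never ages out the server's entry.
      * * * * * root ping -c 1 -W 2 203.0.113.1 > /dev/null 2>&1

    If the drop-outs stop while this is in place, the problem is squarely in the ONT or switch aging out the entry rather than in iptables on the server, and you can then decide whether to keep the keepalive or chase firmware/settings on the Verizon gear.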

  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will be relaying mail for all of our other servers in the network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems - systems that my own server will inevitably interact with at some point. Hence, my questions:

      - What things should be kept in mind and double-checked when setting up an email server?
      - What resources are available for checking whether my email server is set up correctly?

    I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: "you should have X and Y in your set-up, because when talking to server software Z, it typically tries to weed out open relays by checking for these."

    Some things I've discovered myself:

      - Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup of the peer IP address when receiving. Matching the reverse lookup against a follow-up forward lookup is probably used to weed out open relays run through malware on home networks.
      - Make sure the user in the From address exists. The From address is easily spoofed. A receiving mail server may try to contact the mail server of the From domain and check whether the From user actually exists.
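    The DNS items above are easy to verify from the outside before any real mail flows. A quick sketch with dig, using mail.example.org and 203.0.113.10 as placeholders for the server's name and public address:

      # Forward lookup: the name the server announces should resolve to its public IP
      dig +short A mail.example.org

      # Reverse lookup: the PTR for that IP should point back at the same name
      dig +short -x 203.0.113.10

      # MX and TXT (SPF) records for the domain you send mail on behalf of
      dig +short MX example.org
      dig +short TXT example.org

    It is also worth doing your own open-relay test from a host outside the network: connect to port 25 and try to relay a message addressed to a third-party domain; the server should refuse it before someone else's scanner finds out that it doesn't.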

  • Web Hosting: Any web host that supports more than 50,000 files?

    - by Devner
    Hi all, for my PHP and MySQL based application, I am trying to buy website hosting from a host that does not put a limit on the number of files I carry in my hosting account. Almost all of the hosts have a common limit of 50,000 files (some of them call it 50,000 nodes); the rest (to the extent of my search) are not even close. I have gone through the various websites, Googled a lot of information and spoken with the customer service of the hosting companies, and they said that they have a limit of 50,000 files and that's why they call it the LIMIT.

    Now, my application is a kind of social networking website where people can upload various files of varying size. So if 50,000 users were to join the website and upload 1 file each, the limit of 50,000 would be reached very easily, and my 50,001st customer would start facing file upload problems (and so would my account). So I would like to know if there are any website hosting services that do NOT impose such restrictions. In summary, I need the following:

      - No maximum file limit (more than 50,000 files in the account).
      - No maximum file upload limit in the server settings (10MB, 12MB, 15MB, 20MB, etc.).
      - Ability to upload files of various types (zip, flv, jpg, png, etc.).
      - Ability to stream audio and video (live audio and video not necessary).
      - Access to .htaccess.
      - Access to php.ini, my.cnf or my.ini (this would be a plus).
      - Supports SSL.
      - Provides dedicated hosting (and a dedicated IP) as well.
      - Monthly payments without contracts are a plus.

    If you know of any such website hosting services, please post a reply (a link will be appreciated). Thank you.

  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days; the website is a Rails app using thin as the web server. Restarting Nginx does not seem to help. Below is the Nginx config (under sites-enabled):

      upstream domain1 {
        least_conn;
        server 127.0.0.1:3009;
        server 127.0.0.1:3010;
        server 127.0.0.1:3011;
      }

      server {
        listen 80; # default_server;
        server_name xyz.com *.xyz.com;
        client_max_body_size 5M;
        access_log /home/ubuntu/www/xyz/current/log/access.log;
        root /home/ubuntu/www/xyz/current/public/;
        index index.html;

        location / {
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
          proxy_redirect off;
          proxy_read_timeout 150;
          if (!-f $request_filename) {
            proxy_pass http://domain1;
            break;
          }
        }
      }
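    A 499 means the client gave up waiting and a 502 means nginx got no usable answer from an upstream, so the first thing to establish is whether the three thin workers are still alive and responsive once the problem starts. A rough check from the box itself (ports taken from the upstream block above; the error-log path assumes the Ubuntu default):

      # Are the thin workers still listening?
      sudo netstat -tlnp | grep -E ':(3009|3010|3011)'

      # Do they answer on their own, bypassing nginx?
      curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:3009/

      # What does nginx itself report while the 502s are happening?
      tail -f /var/log/nginx/error.log

    If the workers are up but answering slowly, memory growth in the Rails processes over a few days is a common culprit and a periodic thin restart masks it; if they are gone entirely, the 502s are simply nginx finding nobody home on those ports.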

  • Postgresql connection refused

    - by Jonathan
    I'm trying to remotely connect to my PostgreSQL database. I have two virtual machines set up, both running Ubuntu 14.04. I am trying to connect from the first VM to the second VM using:

      psql -h 10.0.1.23 -U postgres -d postgres

    But I receive the error:

      could not connect to server: Connection refused
      Is the server running on host "10.0.1.23" and accepting TCP/IP connections on port 5432?

    I have changed pg_hba.conf and added:

      host    all    all    10.0.1.64/24    md5
      host    all    all    *               md5
      host    all    all    0.0.0.0/0       md5

    and changed postgresql.conf to:

      listen_address=" * "

    in an attempt to allow all incoming connections. I have also tried to change the firewall settings, but I am unsure of whether or not the ports are properly listening for the connection.

    Edit: output of netstat -an | grep -E '^tcp[^6].*LISTEN':

      tcp   0   0 127.0.1.1:53     0.0.0.0:*   LISTEN
      tcp   0   0 0.0.0.0:22       0.0.0.0:*   LISTEN
      tcp   0   0 127.0.0.1:631    0.0.0.0:*   LISTEN
      tcp   0   0 0.0.0.0:23       0.0.0.0:*   LISTEN
      tcp   0   0 127.0.0.1:5432   0.0.0.0:*   LISTEN
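    The netstat output already points at the cause: Postgres is still bound only to 127.0.0.1:5432, so the listen change never took effect. The parameter is listen_addresses (plural), the value should be '*' with no surrounding spaces, and the server needs a restart rather than a reload for it to apply; also, '*' is not a valid address field in pg_hba.conf, a CIDR rule is. A sketch of the fix, assuming the stock PostgreSQL 9.3 packages on Ubuntu 14.04:

      # /etc/postgresql/9.3/main/postgresql.conf
      #     listen_addresses = '*'
      #
      # /etc/postgresql/9.3/main/pg_hba.conf (one CIDR rule covering both VMs)
      #     host    all    all    10.0.1.0/24    md5

      sudo service postgresql restart

      # Confirm it is now listening on all interfaces, and open the port if ufw is active
      netstat -an | grep 5432
      sudo ufw allow 5432/tcp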

  • How to play WAV file through Network Paging Interface

    - by BGM
    In our building we have a Viking ZPI-4 paging interface for our intercom. The interface receives data from our Asterisk phone system via a Cisco SPA112 port adapter, which has its own IP address on the network and converts digital into analog. Asterisk plays the "5" tone and then allows the user's voice to go out over the connection.

    Now, what I want to do is play a WAV file over this Viking paging device using the Cisco port adapter. I know how to get Asterisk to do it, but I want to do this without Asterisk. I want some kind of program that can talk to the Cisco port adapter and then transmit the WAV file to the Viking paging device. What kind of program do I need to get or make?

    I found this link if it helps anyone with ideas. I also found this information, but I'm not sure how to apply it. I also found this, but it involves an Arduino; I already have the digital-to-analog converter, and the Viking will handle sending sound over the paging speakers. I just need to know how to send the WAV file to the Viking via the port adapter. So far, I know my WAV file should be formatted as 8-bit mono, and I need to send the "5" tone to open the Viking pager's channel.

    [update] I am trying to figure out whether I can use VLC to stream to the IP address of the port adapter. So far I'm not having success with that, and I don't even know if it will work. Windows Media Player has a streaming option too. I am thinking that since the Cisco port adapter thinks it is a sort of phone, the only way this can be done is via SIP.
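    On the file-format side, getting the WAV into the 8 kHz, 8-bit mono shape that analog telephony gear expects is the easy half; a sketch with ffmpeg (file names are placeholders):

      # Convert an arbitrary WAV to 8 kHz, mono, 8-bit mu-law,
      # the usual format for telephony/paging hardware.
      ffmpeg -i announcement.wav -ar 8000 -ac 1 -acodec pcm_mulaw page-ready.wav

    Getting the audio to the SPA112 is the harder half: because the adapter behaves as a SIP endpoint rather than a raw audio receiver, pointing VLC or Windows Media Player at its IP address is unlikely to work. The audio generally has to arrive inside a SIP call, so the Asterisk-free options boil down to a softphone or a small scripted SIP client that places the call, sends the "5" digit and then plays the file.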

  • STOP 0x7b booting from iSCSI

    - by Michael
    Hi, I have a Windows 2008 SBS server running. It boots off iSCSI. That setup worked for months until yesterday: I intended to reboot and got a STOP 0x0000007b INACCESSIBLE_BOOT_DEVICE, and I have no idea why. My setup hasn't changed - no new controller, no new or changed iSCSI targets, no new network card or IP address changes. I had all Windows updates on it. What I've tried so far:

      - Last known good configuration: same STOP.
      - Allow unsigned drivers: same STOP.
      - Safe mode (all variants): same STOP.
      - Mounting the target from a client: works, and the filesystem check is fine.

    I booted off the SBS DVD, but in the computer repair options my target doesn't appear. When I choose Setup, the target does appear. So, how can I diagnose what's going wrong? Any helpful tools? Any hints? Thanks in advance, Michael

  • Host Name Resolution - ISA 2006 - VPN PPTP

    - by Brian Lee Jackson
    We are running an ISA 2006 server and the PPTP VPN connection works fine. Clients are able to connect to the internet and access Outlook, CRM, etc. The problem we are encountering is that host name resolution is not working. For example, when connected via VPN I can't ping any box other than the VPN server by its host name, and nslookup also fails; I can ping everything fine via IP address. But the clients need to be able to access their "mapped" drives over the VPN, and those are all mapped by host name.

    I recently took over this position and it sounds like this used to work. What would be the best place to check first? I haven't had much exposure to ISA and have been reading up a bit on installation procedures, etc. DNS is hosted and running on our domain controller, as well as WINS; it isn't on the ISA box. Is there a firewall policy that perhaps got removed? What is usually required for host name resolution to pass through? Any help would be appreciated, thanks!
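    Since everything works by IP but nothing works by name, the first thing worth confirming is which DNS and WINS servers a connected VPN client is actually being handed, and whether queries aimed directly at the domain controller succeed over the tunnel. A quick sketch from a connected client (fileserver01 and 10.0.0.10 are placeholders for an internal host and the DC's address):

      :: What DNS/WINS servers did the PPTP adapter receive?
      ipconfig /all

      :: Does a lookup work when pointed explicitly at the internal DNS server?
      nslookup fileserver01 10.0.0.10

      :: Does the name resolve at all with the client's current settings?
      ping fileserver01

    If the explicit nslookup works but the plain one doesn't, the VPN clients simply aren't receiving the internal DNS/WINS servers; in ISA 2006 that is governed by the name-resolution settings ISA assigns to VPN clients (from DHCP or a static list) rather than by a firewall rule, so that is the screen to check before hunting for a missing policy.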

  • Restrict Computer or Users from Internet but allow access to intranet and Windows Update / ePO?

    - by MoSiAc
    This may be impossible, but I've been asked to try and find out; so far nothing I have found looks workable. I need to restrict specific machines or user accounts from regular internet access while still letting them reach the intranet portion of our network. I do not have Active Directory control, nor does anyone at my local workplace (it's controlled by corporate in a different state). I tried going through IPsec and doing this per local machine, but that component seems to have been removed from the images installed on these machines, so that is out.

    So far the only other option I can think of is assigning the machines a specific IP address and removing their gateway, but the machines still need to be able to receive the updates that are pushed to them through ePO and LanDesk. I would really like to do this at the user level, because then if I need to do tech work on the machine and need internet access I can get to it, while a "special" user could log in and not be able to get to anything.
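    If Windows Firewall is present on these machines and you have local admin rights, outbound filtering can approximate this without touching AD. A rough per-machine sketch, assuming 10.0.0.0/8 stands in for the intranet range and 10.1.2.3 for the ePO/LanDesk server (substitute your real addresses):

      :: Allow outbound traffic to the intranet and to the management server
      netsh advfirewall firewall add rule name="Allow intranet" dir=out action=allow remoteip=10.0.0.0/8
      netsh advfirewall firewall add rule name="Allow ePO server" dir=out action=allow remoteip=10.1.2.3

      :: Then make the default outbound action "block" so everything else is cut off
      netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound

    This is per-machine rather than per-user, and if the machines pull Windows updates straight from Microsoft rather than from an internal WSUS/ePO source, those would need extra allow rules. Per-user filtering realistically needs either an authenticating proxy or Group Policy you don't control, which is why the machine-level approach tends to be the practical fallback.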

  • Last (I think and hope) problem configuring an SSL certificate with Apache and VirtualHosts

    - by user65567
    Finally I set up apache2 to use a single certificate for all subdomains. [...]

      # Go ahead and accept connections for these vhosts
      # from non-SNI clients
      SSLStrictSNIVHostCheck off

      # Apache setup which will listen for and accept SSL connections on port 443.
      Listen 443

      # Listen for virtual host requests on all IP addresses
      NameVirtualHost *:443

      # Because this virtual host is defined first, it will
      # be used as the default if the hostname is not received
      # in the SSL handshake, e.g. if the browser doesn't support SNI.
      <VirtualHost *:443>
        ServerName domain.localhost
        DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
        <Directory "/Users/<my_user_name>/Sites/domain/public">
          Order allow,deny
          Allow from all
        </Directory>
        # SSL Configuration
        SSLEngine on
        ...
      </VirtualHost>

      <VirtualHost *:443>
        ServerName subdomain1.domain.localhost
        DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
        <Directory "/Users/<my_user_name>/Sites/subdomain1/public">
          Order allow,deny
          Allow from all
        </Directory>
        # SSL Configuration
        SSLEngine on
        ...
      </VirtualHost>

      <VirtualHost *:443>
        ServerName subdomain2.domain.localhost
        DocumentRoot "/Users/<my_user_name>/Sites/subdomain2/public"
        <Directory "/Users/<my_user_name>/Sites/subdomain2/public">
          Order allow,deny
          Allow from all
        </Directory>
        # SSL Configuration
        SSLEngine on
        ...
      </VirtualHost>

    So, for example, I can correctly access:

      https://subdomain1.domain.localhost
      https://subdomain2.domain.localhost
      ...

    What I have problems with now is accessing:

      http://subdomain1.domain.localhost
      http://subdomain2.domain.localhost
      ...

    Since I'm on Mac OS, when I access the http:// versions I get a default "Your website." page instead of an error. Why does this happen?
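    Apache is doing exactly what this configuration asks: only *:443 virtual hosts exist for these names, so a plain-HTTP request on port 80 falls through to whatever default site is configured for port 80, which is where the "Your website." page you're seeing comes from. Two quick checks make that visible (hostnames as used above):

      # List every vhost Apache has parsed, grouped by port; if nothing is
      # listed for *:80 with these ServerNames, that's the explanation.
      apachectl -S

      # See which vhost/document root actually answers a plain-HTTP request
      curl -I http://subdomain1.domain.localhost/

    The usual fix is either a matching set of <VirtualHost *:80> blocks with the same ServerName/DocumentRoot values, or a single catch-all port-80 vhost that redirects everything to https://.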

  • How do you get AWS VPC EC2 instances to be able to see the AWS APIs?

    - by Peter Mounce
    We're spinning up infrastructure inside an AWS VPC via CloudFormation. We're using auto-scaling groups to bring up the VPC EC2 instances (so we don't bring up instances directly; the ASGs manage that). Inside a VPC, EC2 instances only have a private IP; they cannot see the outside world without further work. When these instances spin up, we have some bootstrap tasks that require talking to the various AWS APIs. We also have some ongoing tasks that require AWS API traffic. How are you tackling this apparent chicken-and-egg problem? We've read about:

      - NAT instances - but we don't like this so much because it's another layer to our stack.
      - Assigning Elastic IPs to each VPC instance that needs to talk - but a) they all do, b) since we're using ASGs we don't know which instances to assign EIPs to at provision time, and c) we'd need to set up something to monitor those ASGs and assign EIPs when instances are terminated and replaced.
      - Spinning up an instance (actually a load-balanced pair, probably spanning AZs) to act as an AWS API proxy for all API traffic.

    I guess I'm wondering whether there's some kind of back door we can open that allows our VPC EC2 instances access to the AWS API endpoints, but nothing else, for a cheap, low-complexity setup that doesn't add another network-hop layer to our infrastructure for serving requests.
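    For what it's worth, if the NAT-instance option ends up being the least-bad choice, the wiring is small; a sketch with the aws CLI (IDs are placeholders, and this assumes a NAT/proxy instance already running in a public subnet):

      # Source/destination checking must be off for an instance that forwards traffic
      aws ec2 modify-instance-attribute --instance-id i-0abc123 --no-source-dest-check

      # Point the private subnets' route table at it for everything outside the VPC
      aws ec2 create-route --route-table-id rtb-0def456 \
          --destination-cidr-block 0.0.0.0/0 --instance-id i-0abc123

    That gives every ASG-managed instance a path to the AWS API endpoints without per-instance EIPs or any monitoring of scaling events; the extra hop only applies to traffic following that default route, so it doesn't have to sit in the path of the requests the instances are serving.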

  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/

    For dev/testing, we don't need any acceleration or load-balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution, but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as:

      iptables -F
      iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT
      iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
      iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443
      iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
      iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting '
      iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting '
      iptables-save > /etc/iptables.rules

    (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there are security implications to HTTPS proxying/caching; all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received...

    EDIT: Included full iptables config script.
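    Two things are usually missing from a setup like this: the kernel isn't forwarding packets at all, and forwarded traffic goes through the FORWARD chain, which nothing here allows (the INPUT/OUTPUT rules only cover traffic to and from the Varnish box itself). A sketch of the missing pieces, keeping 10.0.0.241 as the IIS host:

      # Enable routing (persist with net.ipv4.ip_forward=1 in /etc/sysctl.conf)
      sysctl -w net.ipv4.ip_forward=1

      # Rewrite the destination of incoming 443 traffic to the IIS box
      iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.241:443

      # Let the forwarded traffic through in both directions
      iptables -A FORWARD -p tcp -d 10.0.0.241 --dport 443 -j ACCEPT
      iptables -A FORWARD -p tcp -s 10.0.0.241 --sport 443 -j ACCEPT

      # Make the replies come back through this box
      iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE

    The MASQUERADE rule means IIS will log the Varnish box's address rather than the real client IP; if that matters, drop it and instead have the IIS box route its replies back via the Linux box (e.g. by using it as its default gateway).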

  • Nokia E75 Mail for Exchange

    - by Sebastian
    Hi, I have an SBS 2003 server running Exchange Server 2003 SP2. My OWA has a GoDaddy certificate, valid for 3 years to come, installed; HTTPS works fine for OWA. The certificate has also been copied onto the Nokia E75.

    I am trying to synchronize my Nokia E75 via Mail for Exchange with my mail account on the Exchange server. These are the steps I use:

      - Menu > Email > New > Start
      - Select Internet Gateway
      - Then I enter the details: [email protected]
      - I select company email > Mail for Exchange
      - In the domain field I enter: mydomain
      - In the username/password fields I enter: myusername / mypassword
      - In the server field I enter: mail.mydomain.com (which DNS resolves to the server's IP address)
      - For secure access I select: Internet / Secure / 443

    Note: port 443 has been opened on my SBS box and forwarded to the Exchange server. In IIS, under Default Web Site properties > Directory Security > Secure communications > Edit, "Require Secure Channel (SSL)" is enabled.

    However, when I try to sync my phone I get the following error: "Mail for Exch permissions illegal. Check permission configuration." The phone log gives the following information: "Username or Password Illegal. Correct Username and/or Password in the profile options." I've tried speaking with the phone service support but they cannot identify the problem. Any help will be much appreciated.

  • Using Computer name in URL causes issues when connecting to Web Services

    - by AWinters
    The set of applications I work on all access the same 8 or so web services that we have. These services and applications all reside on the same box and all use the computer name when trying to connect to the web services. For example: if I have a web service called MapDataService and an application that accesses it, the application would access it via the URL http://COMPUTERNAME/MapDataService/MapDataService.asmx.

    This works in most of the applications that access the web service. However, we have several applications that, when using the computer name in the URL, will not get data returned from the service (actually, a 503 is returned). In order to get them to work, the IP address of the system needs to be used in place of COMPUTERNAME. This strikes me as very odd considering that, as I mentioned before, all applications and services are on the same box and most other applications use COMPUTERNAME with no issues. Can someone give me some insight into what could be causing this? We have no access to IIS logs, and what logs we did get (this is on a customer site) are not very useful.
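    Since the same service answers by IP but not by name for only some callers, it's worth checking how those name-based requests are being resolved and whether they are taking a detour through a proxy, since a 503 is a classic proxy response. A quick sketch to run on the affected box (COMPUTERNAME as used above; assumes curl or a similar client is available):

      :: Does the machine name resolve to the address you expect?
      nslookup COMPUTERNAME

      :: Is a machine-wide WinHTTP proxy configured that name-based URLs would be sent through?
      netsh winhttp show proxy

      :: Compare the two forms of the request directly from the box
      curl -I http://COMPUTERNAME/MapDataService/MapDataService.asmx
      curl -I http://127.0.0.1/MapDataService/MapDataService.asmx

    If the headers of the failing response identify a proxy rather than IIS, the affected applications are most likely picking up proxy settings (and not bypassing the local machine name) that the working applications ignore.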

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. These are the steps I was planning to perform:

      1. Shut down both DCs.
      2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenter servers by IP address instead of hostname).
      3. Power them up at the new datacentre.
      4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.

    The reason I tried to use Veeam FastSCP rather than VMware Standalone Converter is that it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup and Replication.

    It sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow it registered only 1KB/s as opposed to the normal 1MB/s (or more). When that attempt to transfer failed, I tried to cold-clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this: VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences - the moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to vCenter again.

    How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.
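    Whichever copy method ends up working, it is worth verifying AD health as soon as both DCs are powered up in the new datacentre and before anything else changes. A short post-migration check from either DC, using the standard in-box tools:

      :: Summarise replication status across all DCs
      repadmin /replsummary

      :: Show inbound replication partners and any errors
      repadmin /showrepl

      :: General DC health, including DNS and advertising tests
      dcdiag /v

    Shutting both DCs down for the move (rather than copying them while running) is also what protects you from USN rollback, since neither copy can diverge from the other while the originals are off.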

  • Intermittent FTP login issues (Microsoft IIS FTP Service)

    - by JaggenSWE
    I've got a somewhat weird problem which I'm not sure how to troubleshoot. We have an FTP server running on a Windows Server 2003 machine using the IIS FTP service; it's for our clients and is configured with IP restrictions. Now ONE of the clients has started complaining that they can't log in to the server from time to time. This is just ONE of 10+ clients, which makes me think it's a problem on their side. Just to be on the safe side I had a peek into the FTP logs and found something strange.

    Whenever they succeed in logging in, this is what I find in the logs:

      nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191747]USER, userxxx, -,
      nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 230, 0, [191747]PASS, -, -,

    However, when the login fails I see the following events:

      nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191739]USER, userxxx, -,
      nnn.nnn.nnn.70, -, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 530, 1326, [191739]PASS, -, -,

    In the PASS event of the successful login, the server clearly knows that the PASS belongs to "userxxx", but when it fails the user in the PASS event is just "-". Does anyone have any ideas about this? Any help would be appreciated. :) //JaggenSWE
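    One detail in the failing PASS line is worth decoding: the 530 response is followed by a Win32 status of 1326, which Windows can translate for you:

      :: 1326 is ERROR_LOGON_FAILURE - unknown user name or bad password
      net helpmsg 1326

    So from the server's point of view these are genuine bad-credential attempts rather than an IIS or IP-restriction problem, which points back at the client: a saved password that is occasionally sent stale or mistyped by their FTP client, or a script on their side retrying with old credentials.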

  • Windows 7 caches FTP credentials?

    - by Martin Booka Weser
    On my remote machine I have IIS 7.5 (Windows Server 2008) and have set up an FTP site with IIS Manager authentication. I then enabled Active Directory user isolation and isolated my users to physical folders according to their names. So far, so good: I can connect with FTP clients from everywhere with the different test accounts that I previously set up in IIS Manager authentication, and every user lands in their own folder.

    When I then tested with Windows 7 as a client, I did the following: Explorer > Computer > right-click > Add a network location > the IP of my remote machine > user1 > password1. Perfect - it works. I now want to connect as user2, so I deleted this network location and set up a new connection, but with user2 (or even anonymous) instead. Now the strange thing: Windows doesn't even ask me for a password again. It just connects me to the folder of user1.

    I have already disabled FTP caching in IIS and I disabled the user1 account in IIS Manager authentication! Still, if I set up a network connection with this Windows 7 machine it connects to the folder of user1, no matter which username I use (anonymous, administrator, user2, ...). And if I connect with other FTP clients or other computers it all works perfectly. So I assume that this one Windows machine somehow caches the credentials... But then, why does IIS still accept these credentials even though I disabled the user1 account??? Thanks.
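    Windows does hang on to FTP credentials in a couple of places that are independent of IIS: the connection Explorer keeps open for the host, and entries saved in Credential Manager. A sketch of clearing both on the Windows 7 client (the target name will vary; use whatever cmdkey lists for your server's address):

      :: List stored credentials and look for an entry matching the server's IP or name
      cmdkey /list

      :: Delete the stale entry (replace the target with the one shown by /list)
      cmdkey /delete:ftp://203.0.113.5

      :: Drop any connections Explorer is still holding open
      net use * /delete

    Explorer also tends to reuse an existing FTP session to the same host, so closing every Explorer window (or logging off) between tests with different users helps rule out plain session reuse, which may also explain why disabling the user1 account later doesn't immediately shut out a session that was authenticated while it was still enabled.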

  • Configure Nginx to render static files and rewrite file extension or proxy_pass

    - by Pardoner
    I've set up Nginx to serve all my static files and proxy_pass everything else to a Node.js server. It's working fine, but I'm having difficulty rewriting the URLs so that the .html file extension is removed. Here's the config:

      upstream my_upstream {
        server 127.0.0.1:8000;
        keepalive 64;
      }

      server {
        listen 80;
        server_name staging.mysite.com;
        root /var/www/staging.mysite.org/public;
        access_log /var/logs/staging.mysite.org.access.log;
        error_log /var/logs/staging.mysite.org.error.log;

        location ~ ^/(images/|javascript/|css/|robots.txt|humans.txt|favicon.ico) {
          rewrite (.*)\.html $1 permanent;
          try_files $uri.html $uri/ /index.html;
          access_log off;
          expires max;
        }

        location / {
          proxy_redirect off;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_set_header Host $http_host;
          proxy_set_header X-NginX-Proxy true;
          proxy_set_header Connection "";
          proxy_http_version 1.1;
          proxy_cache one;
          proxy_cache_key sfs$request_uri$scheme;
          proxy_pass http://my_upstream;
        }
      }
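    As written, the rewrite and try_files lines sit inside the location that only matches images/javascript/css and a few fixed files, so requests for .html pages never reach them; anything else falls through to location / and is proxied untouched. A quick way to confirm which block is answering (hostname as above, assuming curl is available on the box):

      # Extensionless URL: matched by "location /", proxied straight to the upstream
      curl -I http://staging.mysite.com/about

      # .html URL: also matched by "location /", so no redirect is ever issued
      curl -I http://staging.mysite.com/about.html

    The usual shape of the fix is to do the .html handling in a location that actually matches page URLs, for example a root location that tries $uri and $uri.html on disk and only falls back to the proxied upstream when neither exists, rather than inside the static-assets block.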

  • 4.4.1 Timeout at 10-minute intervals with SMTP on batch email jobs

    - by TEEKAY
    I am running a job that uses SMTP and can run in excess of an hour, emailing the entire time. It's not my code but a workflow-based app, so I just get a form to configure the mail server, subject, message, etc., and can't see its implementation; I do know it is .NET and uses SmtpClient. I have been seeing 4.4.1 timeouts every 10 minutes, reported by the application as the response from the server. The number of emails sent in those 10-minute sessions varies, between 100 and 150, which leads me to ask about the 10-minute timeout specifically.

    I have found that there are several Exchange properties (though I don't know what version they are running) that set timeout limits (http://technet.microsoft.com/en-us/library/bb232205%28v=exchg.150%29.aspx). Would the values of ConnectionInactivityTimeOut and ConnectionTimeout be what is controlling these timeouts? And finally, would Exchange treat the constant connections it keeps receiving from the same source as one continuous connection, and time it out every 10 minutes? I am using the mail server's static IP. Thanks if anyone can shed any light on my problem.

    EDIT - It is my belief that the library is just keeping the connections around and isn't wrapped in any cleanup code or using statement. That said, I still haven't made any progress on this issue in the last year and just requeue the failed ones as I see them.
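    Both of those values live on the Exchange receive connector, and whoever administers the Exchange side can read them in seconds; a sketch from the Exchange Management Shell:

      # Show the per-connector limits that govern how long one SMTP session may last
      Get-ReceiveConnector | Format-Table Name,ConnectionTimeout,ConnectionInactivityTimeout

    ConnectionTimeout is the cap on the total lifetime of a single connection (commonly 10 minutes by default), so if the sending library really is holding one SMTP session open for the whole hour-long job, the server would drop it with 4.4.1 on that schedule regardless of activity. Either raising the value with Set-ReceiveConnector or having the job open a fresh connection per batch would be the usual way around it.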

  • Can Internet data be used by malware when the PC is off?

    - by Val
    I have noticed over the last month that my off-peak data allowance has been used at a rate of approx 350MB per hour. This has meant that I have gone over my quota and been slowed down by my ISP to 256k. There is no one in the house using the connection at that time (2am-8am are my ISP's off-peak hours). My PC and other wireless devices (an iPad and an iPhone) are turned off. I have changed the wireless password on my modem 3 times and it is now 30 digits long, so I don't think someone else is using my wireless access between 2am and 8am.

    It has been suggested by my ISP that I may have malware/spyware on my computer. Sorry for my ignorance, but can malware still run if the PC is off? I did look at my modem's log and followed an IP address to a service called Amazon Simple Storage Service. Could this company possibly be the culprit? I am not too tech-savvy, so any assistance is appreciated. I have run a barrage of spyware-cleaning software, e.g. Malwarebytes, Spybot, etc. Cheers, Val

  • What Windows service binds a NIC to the network?

    - by Bigbio2002
    I have a server that takes several minutes for the NIC to bind itself to the network upon startup (it has a statically configured IP). This causes DNS, WINS and Intersite Messaging to fail to start, since they're dependent on a network connection. I'm still attempting to find a root cause for this (I've done firmware updates and checked for any odd drivers/services, with no luck so far), but in the meantime I want to adjust the load order of services to ensure that the NIC binds first before these services attempt to start. The only question is: which service is it? The server is running Server 2008 R2 and only has one NIC installed.

    (On a side note, there are two other small but odd problems occurring with the server. The server had the issue described in KB2298620, which I've fixed. The other problem occurs in Windows Server Backup: no events appear in the upper portion of the window, despite the fact that backups are running in the background, and whenever I attempt to modify the backup schedule it gives me the error "Not enough storage is available to process this command" and appears to fail, when in fact it actually succeeds. These may be separate issues, but something tells me that some of them might share a common root cause.)
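    Rather than a global "load order", the practical lever in Windows is the per-service dependency list, which you can inspect and extend with sc.exe. A hedged sketch (service and dependency names should be verified on the box first, and note that depend= replaces the existing list rather than appending to it):

      :: Show the current dependencies of, say, the DNS Server service
      sc qc DNS

      :: Example only: make DNS wait for TCP/IP, the network store interface and
      :: Network Location Awareness, in addition to whatever it already listed
      sc config DNS depend= Tcpip/nsi/NlaSvc

    That said, a statically addressed NIC taking minutes to come up usually points at the driver, at spanning tree on the switch port (portfast disabled), or at a duplicate-address/negotiation stall, so the dependency change is a workaround while the real cause is found.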

  • OpenVPN-based VPN server on same system it's "protecting": feasible?

    - by Johnny Utahh
    Scenario: a hosted machine (typically a VPS) serving a wiki, svn, git, forums, email lists (e.g. GNU Mailman), Bugzilla, etc., privately to fewer than 20 people; people not on the team are not allowed access. I'm seeking VPN-restricted access to said server. I have good user experience with OpenVPN-based servers/clients, but have yet to administer such a system server-side; otherwise I'm an experienced Linux sysadmin. Target system: Ubuntu, probably 12.04.

    I'm looking to put an OpenVPN process on the above server to "protect" all the above-mentioned services, enabling only OpenVPN-authorized clients/processes to access them. (I can easily acquire additional IP addresses as needed for this setup.)

    Option: if absolutely needed, I can employ an additional, dedicated "VPN server" VPS simply to act as my VPN front end, but I'd prefer to have all server processes (the VPN server plus the other server apps) running on the same machine if possible. I will consider it further if a dedicated-VPN-machine setup 1. is easier to install/administer, 2. gives a better/easier end-user experience, and/or 3. makes the system significantly more secure.

    Is any of the above feasible? The main intention is to create a VPN from purely hosted resources, and not spend all the effort to make a non-VPN, secure site - which typically means "SSL wrapping" plus all the continual webserver-application-update management. Let the VPN server deal with access security, and spend less time pushing said security "down" into the other apps/Apache.
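    This is entirely feasible on a single box: OpenVPN listens on the public address (typically UDP 1194) and everything else is reachable only over the tunnel, enforced by the firewall rather than by each application. A rough sketch of the firewall side, assuming the default tun0 device and a 10.8.0.0/24 VPN subnet:

      # Always allow loopback, the VPN port itself, and SSH while testing
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -p udp --dport 1194 -j ACCEPT
      iptables -A INPUT -p tcp --dport 22 -j ACCEPT

      # The protected services (wiki, svn, git, lists, Bugzilla, ...) only via the tunnel
      iptables -A INPUT -i tun0 -s 10.8.0.0/24 -j ACCEPT

      # Replies to outbound traffic back in, everything else dropped
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -j DROP

    With that in place the web apps can bind wherever they like and still be unreachable from the internet, so a separate front-end VPS mostly buys isolation of the VPN daemon itself rather than anything functional.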

  • SQL Server Subscriber Migration

    - by SuperCoolMoss
    We currently have one-way transactional replication from a SQL Server 2005 OLTP publisher/distributor to two subscribers (one SQL 2005 and the other SQL 2008 R2). Replication security is via the SQL Agents' domain service account (the same account is used on all boxes). The SQL 2008 R2 subscriber is used for BI purposes and hosts a database that has a subset of the production publisher database's tables, with different security and indexes.

    We need to migrate this BI subscriber to a newer box with more performant hardware. The plan is as follows:

      1. Stop replicating to the BI box (continue replicating to the other subscriber).
      2. Back up all databases on the BI box (including the system databases).
      3. Restore all databases (including master, in single-user mode) to the new BI box (which already has SQL Server 2008 R2 installed).
      4. Take the old BI box off the network and shut it down.
      5. Rename and re-IP the new BI box to be the same as the old box.
      6. Switch replication back on.

    Are there any flaws in this approach?
