Search Results

Search found 21644 results on 866 pages for 'connection speed'.

  • Socket options for a tcp server with 3G clients & frequent disconnections

    - by Joel
    I have a TCP server, written in Java, sending and receiving many short messages, from 500 bytes to 100 KB long. It's a chess game and chat server, to make it simple. The server is running Debian 6. Half of the clients are connecting from 3G networks, and half over standard DSL. A portion of the 3G clients lose connection pretty often. The error I get on the server and on the client socket is Connection reset. I have come across this page in the Oracle documentation: socketOpt. I am wondering what I could tune there to lower the number of disconnections from 3G clients. I don't care about ping or transfer rate, just about the TCP disconnections. I am not skilled enough to understand the impact of each setting, but I sort of understood that the TCP window was important, although I don't know exactly how. So I'm asking if anyone here has an idea? Thanks if you can help.
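
    A possible starting point on the server side (a sketch only; the thresholds are guesses, not tuned values): 3G carriers drop idle NAT mappings aggressively, so enabling keepalives on the accepted sockets (socket.setKeepAlive(true) in Java) and shortening the kernel's keepalive timers on the Debian box can keep those mappings alive and surface dead peers sooner.

      # /etc/sysctl.d/tcp-keepalive.conf -- assumed values, adjust to taste
      net.ipv4.tcp_keepalive_time = 60      # first probe after 60s idle
      net.ipv4.tcp_keepalive_intvl = 10     # then one probe every 10s
      net.ipv4.tcp_keepalive_probes = 6     # give up after 6 unanswered probes

      # apply without rebooting
      sysctl -p /etc/sysctl.d/tcp-keepalive.conf

    These kernel timers only take effect on sockets that actually have SO_KEEPALIVE enabled, hence the setKeepAlive() note above.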

  • Mac OS X L2TP VPN won't connect

    - by smokris
    I'm running Mac OS X Server 10.6, providing an L2TP VPN service. The VPN works just fine when connecting from all computers except one --- this one computer stays at the "Connecting..." stage for a while, then says "The L2TP-VPN server did not respond". In the console, I see this:
      6/7/10 10:48:07 AM pppd[341] pppd 2.4.2 (Apple version 412.0.10) started by jdoe, uid 503
      6/7/10 10:48:07 AM pppd[341] L2TP connecting to server 'foo.bar.baz.edu' (256.256.256.256)...
      6/7/10 10:48:07 AM pppd[341] IPSec connection started
      6/7/10 10:48:07 AM racoon[342] Connecting.
      6/7/10 10:48:07 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 1).
      6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 2).
      6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 3).
      6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 4).
      6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 5).
      6/7/10 10:48:11 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
      6/7/10 10:48:14 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
      6/7/10 10:48:17 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit).
    ...and the "retransmit" messages continue until the error message pops up. So far I've unsuccessfully tried: rebooting; deleting the VPN profile and recreating it; verifying the client's internet connection (it is able to reach the VPN server); connecting through several different networks (in case a router was blocking VPN packets); disabling the Mac OS X Firewall on the client; making sure that the VPN settings exactly match those of other working computers; and running software update (the client is on 10.6.3). Any ideas?
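
    Phase 1 stalling after message 5 usually means the client's reply never reaches the server, so one way to narrow it down is to watch for that client's IKE/NAT-T traffic on the server while it tries to connect (interface name and client address below are placeholders):

      sudo tcpdump -n -i en0 'host 203.0.113.25 and (udp port 500 or udp port 4500 or ip proto 50)'

    If message 6 never shows up, something between that one client and the server is eating UDP 500/4500 or ESP; if it does arrive, the problem is more likely on the server's racoon side.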

  • Drop database on DB2 9.5 - SQL1035N The database is currently in use

    - by Tommy
    I never got this working on the first try, but now I can't seem to do it at all. There is a connection pool somewhere using the database, so trying to drop the database while an application is using it should give this error. The problem is that there are no connections to the database when I issue these commands:
      db2 connect to mydatabase
      db2 quiesce database immediate force connections
      db2 connect reset
      db2 drop database mydatabase
    This always gives:
      SQL1035N The database is currently in use. SQLSTATE=57019
    Running this command shows no connections/applications:
      db2 list applications
    I can even deactivate the database, but still can't drop it:
      db2 => deactivate database mydatabase
      DB20000I The DEACTIVATE DATABASE command completed successfully.
      db2 => drop database mydatabase
      SQL1035N The database is currently in use. SQLSTATE=57019
      db2 =>
    Anyone got any clues? I'm running the cmd windows as the local administrator (Windows 2008) and this is also the admin for DB2. The connection-pool user cannot connect during the quiesce state.
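
    If something re-attaches between the quiesce and the drop (an instance attachment or an implicit activation still counts as "in use" even when LIST APPLICATIONS is empty), a heavier-handed sequence sometimes gets past SQL1035N. A sketch, assuming the whole instance can be stopped briefly:

      rem end the CLP's own back-end connection, then bounce the instance
      db2 force application all
      db2 terminate
      db2stop force
      db2start
      db2 drop database mydatabase

    db2 terminate closes the command window's own back-end connection, and db2stop force kills anything still attached to the instance, so after a clean db2start the drop has nothing left to collide with.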

  • Multiple cable adapter setup not working - VGA to smartphone. All cables tested and work

    - by Christopher Rucinski
    Issue: The pictured overhead projector setup does not work. #1 - #2 - #3 - Phone. All cables are tested and work! The issue is the HDMI connection between cable #2 and #3. With all other cables, the screen is automatically displayed onto the projector screen; no extra work needed. With the pictured setup, the smartphone screen is not displayed onto the projector screen. What is the issue with the HDMI connection?
    Background: We recently had to do presentations at work (a school), but the administration only provided VGA for hooking up to the projector, most likely because of cost. Anyway, several teachers have brand new Samsung Series 9 ultrabooks (or similar) -- you know, the ones without VGA support -- so I bought an adapter for those ultrabooks (cable #5 in the picture below). However, both my coworker and I have been wanting to just display our phone screens on the projector, which I knew would require some extra work.
    What I have:
      - VGA cable to projector (cables go through the wall)
      - HDMI to VGA cable (for laptops)
      - MHL adapter (for 11-pin microUSB phones)
      - microHDMI to VGA cable (for ultrabooks)
      - 11-pin to 5-pin microUSB adapter (for older 5-pin microUSB phones)
    Equipment:
      - Projectors: 1 projector with VGA and HDMI input (the issue is that coworkers forget to switch sources); 1 projector with VGA-only input
      - Laptops: 2 new Samsung ultrabooks without VGA or HDMI support; 1 ultrabook with VGA and HDMI support; several other laptops with at least VGA support
      - Mobile: 1 tablet with 11-pin microUSB; at least 1 new phone with 11-pin microUSB; at least 1 old phone with 5-pin microUSB
    Tested:
      - VGA cable (#1) to laptop: Good
      - VGA cable (#1) to HDMI adapter (#2) to laptop: Good
      - VGA cable (#1) to microHDMI adapter (#5) to laptop: Good
      - Projector to HDMI cable (not shown) to MHL adapter (#3) to Galaxy Note 3 smartphone: Good
      - VGA cable (#1) to HDMI adapter (#2) to MHL adapter (#3) to Galaxy Note 3 smartphone: Does not work!!
    Extra note: The 11-pin MHL adapter will not fit inside the 11-pin to 5-pin microUSB adapter (which would be needed so that older phones could be displayed on the screen).

  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need to do is have a permanent link to a printer, normally only accessible through Terminal Services (Printer Redirect), to allow Sage Line 50 layouts to see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session? Although the printer is accessible each time a user connects to the Sage Server via Terminal Services, it is given a different session number and therefore the Sage Layout sees it as a different printer. History behind question: Users using Terminal Services connecting to a Sage Server on a different site Using Sage Line 50 v 15 on that Server Users want to print invoices (sage layouts) locally Sage Server cannot see the users local printers, to get around this user uses the Print redirect features of Terminal Services The individual reports can be edited to point to a specific printer by default. This means the user just has to select an invoice and click print, then select the layout/report wanted and it auto prints that invoice to the default printer specified. The problem occurs because the layouts are edited to point to the users local printer "Ricoh 1018d (session#)", note the "(session#)" as this is the users local printer being redirected through the terminal services session. Users are able to print using the sage layouts once the default printer is setup within the layout and saved, but as soon as the users disconnects from the Terminal Services session and then reconnect in the morning go to print, it has lost the connection to that printer. I understand why its failed, because that the printer is on a per session basis and the layout would not be able to hold on to the connection from a previous session. Thanks in advance for any assistance...

  • Linux/Unix MTA with the smartest queue?

    - by threecheeseopera
    I am looking for an MTA that will allow me (a script, really) to proactively manage its send queue in response to status codes returned by the remote servers I am delivering to. Basically, for each mail sent I would like to be able to react to the SMTP reply code returned by the remote server, e.g. '250 OK', or to any error conditions like connection timeouts. Additionally, I would like to be able to manage the send queue moving forward based on this information, e.g. 'example.com has timed out the last 5 connection attempts, so no longer queue mail for recipients @example.com'. I am currently using Postfix and Perl to parse its logs for this information, but I am playing a game of catch-up that is prone to errors (out-of-order log entries etc.) and it's starting to get messy (some real ugly regexes ;). I really don't want to reinvent the wheel and use some language's SMTP library; I would prefer to use a proven/fast/reliable MTA. I am however open to suggestions if what I need just isn't possible. Thanks for your help!
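
    Short of writing a custom MTA, one middle ground is to keep Postfix but stop tailing the log: read the deferred queue directly and park a misbehaving domain on the hold transport so nothing new goes out to it. A rough sketch (the queue parsing is only a heuristic, and the threshold logic and example.com are placeholders):

      # rough count of queued recipients per destination domain
      postqueue -p | awk 'NF==1 && index($1,"@") { split($1,a,"@"); print a[2] }' | sort | uniq -c | sort -rn

      # stop delivering new mail to a domain that keeps timing out
      echo "example.com    hold:" >> /etc/postfix/transport
      postmap /etc/postfix/transport
      postconf -e transport_maps=hash:/etc/postfix/transport
      postfix reload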

  • Unable to login to Amazon EC2 compute server

    - by MasterGaurav
    I am unable to login to the EC2 server. Here's the log of the connection-attempt: $ ssh -v -i ec2-key-incoleg-x002.pem [email protected] OpenSSH_5.6p1, OpenSSL 0.9.8p 16 Nov 2010 debug1: Reading configuration data /home/gvaish/.ssh/config debug1: Applying options for * debug1: Connecting to ec2-50-16-0-207.compute-1.amazonaws.com [50.16.0.207] port 22. debug1: Connection established. debug1: identity file ec2-key-incoleg-x002.pem type -1 debug1: identity file ec2-key-incoleg-x002.pem-cert type -1 debug1: identity file /home/gvaish/.ssh/id_rsa type -1 debug1: identity file /home/gvaish/.ssh/id_rsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'ec2-50-16-0-207.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/gvaish/.ssh/known_hosts:8 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: ec2-key-incoleg-x002.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: Trying private key: /home/gvaish/.ssh/id_rsa debug1: No more authentication methods to try. Permission denied (publickey). What can be the possible reason? How do I fix the issue?
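
    The debug output shows the key being offered and rejected, so it is usually one of: the wrong default username for the AMI, the wrong key pair attached to the instance, or key file permissions. A quick checklist (usernames depend on the AMI):

      chmod 400 ec2-key-incoleg-x002.pem
      # Amazon Linux AMIs expect ec2-user, Ubuntu AMIs expect ubuntu:
      ssh -i ec2-key-incoleg-x002.pem ec2-user@ec2-50-16-0-207.compute-1.amazonaws.com
      ssh -i ec2-key-incoleg-x002.pem ubuntu@ec2-50-16-0-207.compute-1.amazonaws.com

    If neither works, check in the AWS console which key pair the instance was actually launched with; a key created after launch does not end up in the instance's authorized_keys by itself.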

  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables, and would like to learn it in more detail. I'm hoping that my learning experience can also be useful. The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing http connection, eg some large download. Under such conditions, it can take 10 seconds or more to load a basic webpage; sometimes the connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for up and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to cause the router to crash, and is rumoured to not actually work on any linux system. I'd like to be able to troubleshoot this issue and hopefully improve the settings that dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters I am not sure what HTB actually specifies: is this a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol the way that it already supposed to, and in addition I'd like to have it drop the priority of connections which have a high total byte count, say over 400KB. Also tips on utilities that can be run under dd-wrt to get more info on what's going on in there are appreciated. I've tried to get iftop to work but there were issues running curses. I'm leaning towards replacing dd-wrt with openwrt; comments on this strategy are also welcome. I suspect that I would be well advised to get a second router as a standin before trying that. It may be worth noting that my total bandwidth is pretty limited (256Kbit/s).
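
    On the "drop the priority of long transfers" part: dd-wrt's HTB setup is ultimately tc classes fed by iptables packet marks, so one approach is to re-mark any connection once it crosses a byte threshold and let that mark map to the existing bulk class. A sketch only -- it assumes the kernel build includes the connbytes and CONNMARK modules and that mark 3 is (or is made to be) a low-priority class in the existing QoS chains:

      # mark connections that have transferred more than 400 KB in either direction
      iptables -t mangle -A POSTROUTING -m connbytes --connbytes 409600: \
          --connbytes-dir both --connbytes-mode bytes -j MARK --set-mark 3
      # remember the mark on the connection so later packets inherit it
      iptables -t mangle -A POSTROUTING -j CONNMARK --save-mark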

  • Ubuntu: apt-get command error

    - by Wibowo Margito
    I work with Ubuntu 10.04 every day. Until a few days ago, when I ran a command like sudo apt-get install ... it worked fine, with no errors, and I was also able to open websites in my browser with no proxy. But today I get an error: every time I run the command, the connection is redirected to an IP in my local network. I can see it in the terminal window. A few days ago I tried to connect to the internet through that IP by SSH tunneling, but I forget what I did and there is no way back. This is the output in the terminal:
      deo@deo-laptop:~$ sudo apt-get update
      [sudo] password for deo:
      Err http://cx.archive.ubuntu.com lucid Release.gpg
      [ Could not connect to 10.7.7.15:3128 (10.7.7.15). - connect (110: Connection timed out)
      Err http://cx.archive.ubuntu.com/ubuntu/ lucid/main Translation-en_US
      Unable to connect to 10.7.7.15:3128:
    10.7.7.15 is an address in my local network. Somebody please help me :)
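
    That 10.7.7.15:3128 looks like a proxy setting left over from the SSH-tunnel experiment rather than anything on the network itself. Places worth checking and cleaning up (the usual locations on 10.04):

      grep -ri proxy /etc/apt/apt.conf /etc/apt/apt.conf.d/ 2>/dev/null
      grep -i proxy /etc/environment ~/.bashrc
      env | grep -i proxy

      # remove or comment out any Acquire::http::Proxy "..." line found above, then:
      unset http_proxy https_proxy
      sudo apt-get update

    The GNOME proxy settings (System -> Preferences -> Network Proxy) are worth a look too, since they can export http_proxy for new sessions.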

  • .htaccess error "not allowed here" for all instructions

    - by andres descalzo
    I am using Debian Lenny and Apache 2. I changed the default .htaccess file with: AllowOverride AuthConfig But I always get the error message not allowed here when putting any instructions in the .htaccess file. EDIT: file default: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/ <Directory /> Options FollowSymLinks Order allow,deny Allow from all AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks Includes #AllowOverride All #AllowOverride Indexes AuthConfig Limit FileInfo AllowOverride AuthConfig Order allow,deny Allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> .htaccess: #Options +FollowSymlinks # Prevent Directoy listing Options -Indexes # Prevent Direct Access to files <FilesMatch "\.(tpl|ini)"> Order deny,allow Deny from all </FilesMatch> # SEO URL Settings RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA] PHP info: apache2handler Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch Apache API Version = 20051115 Server Administrator = webmaster@localhost Hostname:Port = hw-linux.homework:80 User/Group = www-data(33)/33 Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100 Timeouts = Connection: 300 - Keep-Alive: 15 Virtual Server = Yes Server Root = /etc/apache2 Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
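
    "not allowed here" means Apache recognises the directive but the matching AllowOverride class is missing: Options -Indexes needs the Options class, the FilesMatch/Deny block needs Limit, and the mod_rewrite lines need FileInfo. A minimal change to the /var/www/ block (or simply use AllowOverride All, as the commented-out line already suggests):

      <Directory /var/www/>
          Options Indexes FollowSymLinks Includes
          # AuthConfig alone does not cover this .htaccess;
          # add the classes its directives fall under:
          AllowOverride AuthConfig FileInfo Limit Options
          Order allow,deny
          Allow from all
      </Directory>

    followed by: sudo /etc/init.d/apache2 reload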

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side-by-side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers for no conflict, etc. Testing started well: MX7 over regular and SSL still worked and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, FireFox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this: Secure Connection Failed An error occurred during a connection to localhost:9101. Peer reports it experienced an internal error. (Error code: ssl_error_internal_error_alert) I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml: <service class="jrun.servlet.http.SSLService" name="SSLService"> <attribute name="enabled">true</attribute>` <attribute name="interface">*</attribute> <attribute name="port">9101</attribute> <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute> <attribute name="keyStorePassword">*deleted*</attribute> <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute> <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute> <attribute name="deactivated">false</attribute> <attribute name="bindAddress">*</attribute> <attribute name="clientAuth">false</attribute> </service> Anyone here know of any issues re setting up SSL and CF9? Anyone had success with it? Dave
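
    One JRun-specific thing worth ruling out (an assumption, not a confirmed diagnosis): keytool generates a DSA key pair by default, and some clients abort the handshake on DSA-only certificates with exactly this kind of internal_error alert. Regenerating the keystore with an RSA key is a cheap test; the alias, path and validity below are placeholders -- point -keystore at whatever keyStore is set to in jrun.xml:

      keytool -genkey -alias cf9ssl -keyalg RSA -keysize 2048 -validity 365 \
          -keystore /opt/jrun4/lib/mykey

    Then make sure keyStorePassword in SERVER-INF/jrun.xml matches the new store and restart the CF9 instance.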

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible or a good idea at all to deploy a Java enterprise web application to a Cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle few hundred users with long but neither CPU nor memory intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud so people are working on enabling such a deployment, on the other hand it doesn't seem to be mature yet and I'm not sure that the cloud is ready for this kind of applications, which differs from the typical cloud-based applications like Twitter. Would you recommend to deploy it to the cloud? What are the pros and cons? The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to a RDBMS and fetches information about Parts from the company's inventory system via a web service. Aside of external users it's used also by few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product and should be reasonably scalable and available though these qualities are only of a medium importance at this stage. I've proposed an architecture consisting of a firewall (FW) and load balancer supporting sticky sessions and https (in the Cloud this would be replaced with EC2's Elastic Load Balancing service and FW on the app. servers, in a physical architecture the load-balancer would be a HW), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't loose his/her long built product) and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance and provide good scalability as long as a single RDBMS can keep with the load, which should be OK for quite a while because most of the operations are done in the memory using a stateful bean and only occasionally stored or retrieved from the DB and the amount of data is low too. A problematic part could be the dependency on the remote inventory system webservice but with good caching of its outputs in the application it should be OK too. Unfortunately I've only vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for few hundred users needs. My rough and mostly unfounded estimate based on actual Amazon offerings is that 1.7GB and a single, 2-core "modern CPU" with speed around 2.5GHz (the High-CPU Medium Instance) should be sufficient for any of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large instance (64b, 7.5GB RAM, 2 cores at 1GHz) So my question is whether such a deployment to the cloud is technically and financially feasible or whether dedicated/VPS servers would be a better option and whether there are some real-world experiences with something similar. Thank you very much! 
/Jakub Holy PS: I've found the JBoss EAP in a Cloud Case Study that shows that it is possible to deploy a real-world Java EE application to the EC2 cloud but unfortunately there're no details regarding topology, instance types, or anything :-(

  • Authenticating Active Directory Users to Mac OS X Mavericks Server L2TP VPN Service

    - by dean
    We have a Windows Server 2012 Active Directory Infrastructure that consists of two domain controllers. Bound to the Active Directory Domain is a Mac OS X Mavericks Server 10.9.3. The server runs Profile Manager and VPN Services. My Active Directory users are able to authenticate to the Profile Manager, but not the VPN. I have found several threads on other forums of other users reporting similar issues, here is just one of many references: https://discussions.apple.com/thread/5174619 It appears as though the issue is related to a CHAP authentication failure. Can anyone suggest what next troubleshooting steps I might take? Is there a way to liberalize the authentication mechanism to include MSCHAP? Here is an excerpt of the transaction from the logs. Please note the domain has been changed to example.com. Jun 6 15:25:03 profile-manager.example.com vpnd[10317]: Incoming call... Address given to client = 192.168.55.217 Jun 6 15:25:03 profile-manager.example.com pppd[10677]: publish_entry SCDSet() failed: Success! Jun 6 15:25:03 --- last message repeated 2 times --- Jun 6 15:25:03 profile-manager.example.com pppd[10677]: pppd 2.4.2 (Apple version 727.90.1) started by root, uid 0 Jun 6 15:25:03 profile-manager.example.com pppd[10677]: L2TP incoming call in progress from '108.46.112.181'... Jun 6 15:25:03 profile-manager.example.com racoon[257]: pfkey DELETE received: ESP 192.168.55.12[4500]->108.46.112.181[4500] spi=25137226(0x17f904a) Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP connection established. Jun 6 15:25:04 profile-manager kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 0) Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connect: ppp0 <--> socket[34:18] Jun 6 15:25:04 profile-manager.example.com pppd[10677]: CHAP peer authentication failed for alex Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connection terminated. Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnecting... Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnected Jun 6 15:25:04 profile-manager.example.com vpnd[10317]: --> Client with address = 192.168.55.217 has hung up

  • What do "Unknown SSAP" and "Unknown DSAP" mean in tcpdump?

    - by lacker
    While trying to fix a problem with intermittently losing internet connection on a machine with a wireless connection to a router, I ran tcpdump and noticed packets with "Unknown SSAP" and "Unknown DSAP" errors coming at a rate of a few per second. 20:27:21.703178 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 171 20:27:21.724726 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 104 20:27:21.746449 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe4 Information, send seq 0, rcv seq 16, Flags [Response], length 88 20:27:21.970963 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe8 Information, send seq 0, rcv seq 16, Flags [Response], length 76 20:27:22.016565 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 88 20:27:22.038471 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 171 What does the "Unknown SSAP" and "Unknown DSAP" mean, and does it indicate a problem?
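
    Those frames are raw 802.2 LLC traffic, not IP -- tcpdump prints "Unknown SSAP/DSAP" because the source/destination service access point bytes don't match any protocol it recognises, which is common with vendor-specific access-point or driver chatter. They are usually noise rather than the cause of the dropouts; to confirm they're unrelated, capture only IP traffic while the connection drops (interface name assumed):

      sudo tcpdump -n -i wlan0 'ip or ip6 or arp'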

  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session the application runs successfully, but when I schedule it to run under the same user/security context outside of the desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even connected to a domain. I cannot find anything in any firewall or security configuration which is preventing this, but maybe I have overlooked something; can anyone be of any assistance?
      System.Net.WebException: Unable to connect to the remote server --- System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond X.X.X.X:443
        at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
        at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
      --- End of inner exception stack trace ---
        at System.Net.HttpWebRequest.GetResponse()
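
    One difference between an interactive session and a scheduled task is proxy handling: if that server can only reach X.X.X.X:443 through a proxy configured in Internet Explorer, the per-user WinINET setting may not be picked up outside the desktop session, and connections simply time out. This is a guess worth checking rather than a confirmed diagnosis; on Server 2003 the machine-wide WinHTTP proxy can be inspected and set with proxycfg:

      rem show the current WinHTTP proxy configuration
      proxycfg
      rem copy the current user's IE proxy settings into the WinHTTP setting
      proxycfg -u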

  • Change which server mailbox is associated with in Exchange 2007

    - by tacos_tacos_tacos
    I have restored and mounted an EDB file onto a new Exchange 2007 Server. However, the old server is still online and although all the mailboxes I need are in the newly-mounted database, in Exchange 2007 System Manager it still shows that the mailbox is associated with the old server. If I try to "Move" the database it actually tries to copy the files from the old server to the new server, which is not necessary because they are already there - and produces and error about the mailbox on the destination already existing. How can I simply tell Exchange (AD?) to use the new server to find the mailbox rather than the old? Edit: I did the restore by taking the old server offline (turning off all Exchange services), copying EDB file to the new server, restoring it with eseutil, and mounting it to the new server. I did it this way in part because I didn't know a better way and in part because I couldn't use move-mailbox as the source location had a horrible Internet connection (which is why Exchange is being moved to the new location). I had to copy the EDB from the old server to a hard disk, go somewhere with a better Internet connection, upload the EDB to the new server.
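
    Exchange 2007 has a switch for exactly this case: Move-Mailbox -ConfigurationOnly rewrites the Active Directory attributes (homeMDB and friends) to point at the database on the new server without copying any mailbox data, which is the supported way to re-home mailboxes after a database has been restored elsewhere. A sketch from the Exchange Management Shell -- the database identities below are assumptions, adjust to the real server/storage group/database names:

      Get-Mailbox -Database "OLDSERVER\First Storage Group\Mailbox Database" |
          Move-Mailbox -ConfigurationOnly -TargetDatabase "NEWSERVER\First Storage Group\Mailbox Database"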

  • Steps after installing vCenter Server?

    - by goober
    I'm working with:
      - two new ESX servers that I'm configuring
      - a new Server 2008 R2 machine that I'm using for vCenter
    I took the following steps:
      1. Installed the hypervisor on the 2 ESX machines.
      2. Checked their setup/connectivity (appears to be fine; can ping, etc.).
      3. Installed vCenter Server on the Win2k8R2 box. This included the install of a SQL Express database (we're a small shop). FYI, I changed some of the ports (443 -> 8443, 80 -> 8080, etc.).
      4. Installed vCenter Web Client Server on the Win2k8R2 box.
    Problems:
      - My vSphere Client on my desktop fails to connect. Part of this is that it asks me for a username and password, but I don't recall specifying one when I set up the install. I receive the error "vSphere Client could not connect to [machinename]. An unknown connection error occurred. (The request failed because of a connection failure. (Unable to connect to the remote server))". I have also tried to use local machine admin credentials, including the format machinename\localuseracct, and my domain credentials, which are an admin for that box. I have also checked and the service is running. I also tried to connect via the vSphere Client locally installed on the server; it translates "localhost" to the correct name but gives the same error.
      - I cannot register the vCenter server from the vCenter Web Client Server. I'm not sure if this is necessary, as they're both on the same machine, but it seems like the logical next step. I also receive a "failed to connect" error in this case.
    FYI, both the vCenter server and the vCenter Web Client Server are installed on the same Win2k8R2 server. What am I missing here? What is the best way to test in this case?

  • How do I SSH tunnel using PuTTY or SecureCRT through gateway/proxy to development server?

    - by DAE51D
    We have some unix boxes setup in a way that to get to the development box via ssh, you have to ssh into a 'user@jumpoff' box first. There is no direct connection allowed on 'dev' via ssh from anywhere but 'jumpoff'. Furthermore, only key exchange is allowed on both servers. And you always login to the development box as 'build@dev'. It's painful to always do that hopping. I know this can be done with SOCKS or a Tunnel or something... I have setup a FreeBSD VM and I can get things to work awesome using unix ssh tools. Basically all I do is make sure my vm's ~/.ssh/id_rsa.pub key is on both jumpoff and dev and use this ~/.ssh/config file: # Development Server Host ext-dev # this must be a resolvable name for "dev" from Jumpoff Hostname 1.2.3.4 User build IdentityFile ~/.ssh/id_rsa # The Jumpoff Server Host ext Hostname 1.1.1.1 User daevid Port 22 IdentityFile ~/.ssh/id_rsa # This must come below all of the above Host ext-* ProxyCommand ssh ext nc $(echo '%h'|cut -d- -f2-) 22 Then I just simply type "ssh ext-dev" and I'm in like Flynn. The problem is I can't get this same thing to work using either PuTTY or SecureCRT -- and to be honest I've not found any tutorials that really walk me through it. I see many on setting up some kind of proxy tunnel for Firefox, but it doesn't seem to be the same concept. I've been messing with various trial and error most all day and nothing has worked (obviously) and I'm at the end of my ssh knowledge and Google searching. I found this link which seemed to be perfect, but it doesn't work for me. The "Master" connects fine, but the "client" portion doesn't connect. It tells me, the remote system refused the connection. http://www.vandyke.com/support/tips/socksproxy.html I've got the VM, PuTTY and SecureCRT all using the same public/private key pairs to make things consistent and easier to debug. Does anyone have a straight up example of how to do this in Windows?
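
    PuTTY can reproduce the ProxyCommand trick using plink as a "local proxy command": the saved session points at dev, and plink makes the hop through jumpoff and forwards the raw TCP stream with -nc. Roughly, assuming Pageant is holding the key (host names are the ones from the question):

      In the PuTTY session for "dev" (Host Name 1.2.3.4, port 22):
        Connection -> Proxy -> Proxy type: Local
        Telnet command, or local proxy command:
          plink.exe -agent daevid@1.1.1.1 -nc %host:%port

    %host and %port are PuTTY's own placeholders for the session's target. In SecureCRT the rough equivalent is defining the jumpoff session as a firewall (Global Options -> Firewall, type Session) and selecting it in the dev session's connection settings, though the exact menus vary by version.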

  • Windows 2008 R2 forgets static IP configuration after reboot

    - by Andrew
    I've got an issue where a Windows 2008 R2 Standard (SP1) server loses its static IP configuration upon a reboot. It's a sysprep'd image. The following steps reproduces the problem: Using the SAC, set the IP using 'i' Use the Win32 EnableStatic() method to set an IP (and then SetGateways()) through PowerShell Reboot The machine boots up with the following configuration: Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : [...] Autoconfiguration IPv4 Address. . : 169.254.152.31 (incorrect) Subnet Mask . . . . . . . . . . . : 255.255.0.0 (incorrect, was set to /24) Default Gateway . . . . . . . . . : 1.1.1.1 (correct) Occasionally, the gateway is also incorrect (0.0.0.0) The images have a script that runs 'netsh int ip reset' after sysprep finishes (before the reboot), so it appears that does not solve the issue. (the problem also happens without this step) After the reboot, using 'i' on the SAC resolves the issue permanently. (But I'd like to know the root cause as having to run 'i' again isn't ideal)
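
    Until the root cause turns up, the SAC "i" step can at least be scripted so the machine comes back with the right address after first boot from the image. A sketch -- adapter name, addresses and DNS server are placeholders:

      netsh interface ipv4 set address name="Local Area Connection" source=static ^
          address=10.0.0.20 mask=255.255.255.0 gateway=1.1.1.1 gwmetric=1
      netsh interface ipv4 set dnsservers name="Local Area Connection" source=static ^
          address=10.0.0.2 register=primary

    If netsh reports the interface as unknown, the sysprep'd image may be carrying a ghost NIC that still owns the old configuration, which would also explain the reverting settings.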

  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS and not a networking expert but curious to know more about it. I created a VPC with a public subnet only. Then i created an EC2 instance using an Ubuntu 14.04 64-bit pv AMI image (ami-e84d8480) as well generating the key pair needed to connect to it through ssh. I followed amazon's instructions to connect to an EC2 instance via ssh which did not work. Here is my attempted input and debug log: Running on OS X 10.9.4 user$ ssh -vvv -i key.pem [email protected] OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 102: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22. debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out To attempt to resolve the issue: I enabled the SSH port. Tried different usernames other than ubuntu, like ec2-user and root. Initially set an inbound ssh rule in the security group to connect to only my ip address. When that did not work, i changed it to allow any ip to connect. But those actions did not fix the problem. Here are my guesses as to what i am missing in getting the EC2 instance connection to work. My etc/ssh_config file may be preventing the connection from taking place. I may have missed an important networking detail when setting up the VPC. I do not have a public ip address specified for the instance. I am connecting through the private ip address. My questions for the community: Am i going about it the wrong way connecting to the instance through the private ip address? if so, do i need to specify a public ip address for it to connect or some other method?
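
    If the instance really only has a private address, nothing outside the VPC can reach port 22 regardless of the security group: a public subnet needs an attached internet gateway, a 0.0.0.0/0 route, and a public or Elastic IP on the instance. With the AWS CLI that looks roughly like this (all IDs are placeholders):

      aws ec2 allocate-address --domain vpc
      aws ec2 associate-address --instance-id i-0a1b2c3d --allocation-id eipalloc-0a1b2c3d
      # the subnet's route table must also send internet traffic to the attached internet gateway
      aws ec2 create-route --route-table-id rtb-0a1b2c3d --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0a1b2c3d

    After that, ssh to the Elastic IP or the public DNS name, not the private address.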

  • Apache 403 after configuring varnish

    - by w0rldart
    I just don't know where else to look and what else to do. I keep getting a 403 error on all my vhosts after setting up Varnish 3.0.
    Apache log:
      [error] [client 127.0.0.1] client denied by server configuration: /etc/apache2/htdocs
    Headers:
      http://domain.com/
      GET / HTTP/1.1
      Host: domain.com
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: en-US,en;q=0.5
      Accept-Encoding: gzip, deflate
      DNT: 1
      Connection: keep-alive
      Cookie: __utma=106762181.277908140.1348005089.1354040972.1354058508.6; __utmz=106762181.1348005089.1.1.utmcsr=OTHERDOMAIN.com|utmccn=(referral)|utmcmd=referral|utmcct=/galerias/cocinas
      Cache-Control: max-age=0

      HTTP/1.1 403 Forbidden
      Vary: Accept-Encoding
      Content-Encoding: gzip
      Content-Type: text/html; charset=iso-8859-1
      X-Cacheable: YES
      Content-Length: 223
      Accept-Ranges: bytes
      Date: Sat, 01 Dec 2012 20:35:14 GMT
      X-Varnish: 1030961813 1030961811
      Age: 26
      Via: 1.1 varnish
      Connection: keep-alive
      X-Cache: HIT
    /etc/default/varnish:
      DAEMON_OPTS="-a ip.ip.ip.ip:80 \
                   -T localhost:6082 \
                   -f /etc/varnish/main.domain.vcl \
                   -S /etc/varnish/secret \
                   -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"
      #-s malloc,256m"
    My vcl file: http://pastebin.com/axJ57kD8
    So, any ideas what I could be missing?
    Update: Just so you know, the ports are:
      NameVirtualHost *:8000
      Listen 8000
    and
      <VirtualHost 205.13.12.12:8000>
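
    One thing the Apache log points at: the request is being served from the default DocumentRoot (/etc/apache2/htdocs), which normally means no <VirtualHost> matched -- and here NameVirtualHost is declared for *:8000 while the vhost itself is bound to 205.13.12.12:8000. Making the two agree is worth trying before anything else (a sketch; the DocumentRoot is a placeholder, and keep Listen 8000 as-is if the VCL backend points at another address):

      # /etc/apache2/ports.conf
      NameVirtualHost *:8000
      Listen 127.0.0.1:8000        # only Varnish on the same box needs to reach the backend

      # vhost
      <VirtualHost *:8000>
          ServerName domain.com
          DocumentRoot /var/www/domain.com
      </VirtualHost>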

  • IPTables: NAT multiple IPs to one public IP

    - by Kaemmelot
    I'm looking for a way to NAT two or more inner IPs (in my case Xen doms) to one outer IP. I tried to use:
      iptables -t nat -A PREROUTING -d 123.123.123.123 -j DNAT --to 1.2.3.4 --to 1.2.3.7
      iptables -t nat -A POSTROUTING -s 1.2.3.4 -j SNAT --to 123.123.123.123
      iptables -t nat -A POSTROUTING -s 1.2.3.7 -j SNAT --to 123.123.123.123
    And got an error:
      iptables v1.4.14: DNAT: Multiple --to-destination not supported
      Try `iptables -h' or 'iptables --help' for more information.
    I found this in the manpage: Later kernels (>= 2.6.11-rc1) don't have the ability to NAT to multiple ranges anymore. So my question is: why is it not possible anymore, and is there a workaround? Maybe I should use another method I don't know yet?
    EDIT: The idea is to use the system like a router, so I have one address but multiple users behind it. The problem is I don't know which connection refers to which user (for example 1.2.3.4). But I know they all have different ports open for incoming traffic. So my solution (for DNAT) would be to NAT all incoming connections to all users and filter all unused ports, so the connection goes to one single user. For outgoing traffic I would use:
      iptables -A FORWARD -i eth0 -d 1.2.3.4 -m state --state ESTABLISHED,RELATED -j ACCEPT
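
    Since the kernel no longer spreads one DNAT over several --to addresses, the usual workaround for "one public IP, many guests" is exactly what the EDIT describes: split incoming traffic by destination port and SNAT everything on the way out. A sketch (the ports and the guest subnet are examples):

      # incoming: one forwarded port (or port range) per domU
      iptables -t nat -A PREROUTING -d 123.123.123.123 -p tcp --dport 2204 -j DNAT --to-destination 1.2.3.4:22
      iptables -t nat -A PREROUTING -d 123.123.123.123 -p tcp --dport 2207 -j DNAT --to-destination 1.2.3.7:22
      # outgoing: rewrite all guest traffic to the single public address
      iptables -t nat -A POSTROUTING -s 1.2.3.0/24 -o eth0 -j SNAT --to-source 123.123.123.123

    The FORWARD chain still needs to accept the DNATed connections plus the ESTABLISHED,RELATED return traffic, as in the last rule of the question.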

  • Login failed for user 'XXX' on the mirrored sql server

    - by hp17
    We have 4 web servers that host our ASP.NET (3.5) application. Randomly, we get error messages like:
      1) "Login failed for user 'userid'"
      2) "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"
    We are running SQL Server 2005 and have a principal and a mirror db (sync). When these exceptions are thrown, I look at the SQL error logs on the mirror db and notice the failed login messages in there. The principal db is running fine and the other web apps are working great. This will happen for maybe 10 min, then the app pool recycles and it starts hitting the principal db again. Is there a configuration I have incorrect? My theory is that our principal db is forwarding the request to the mirror, but that should never happen. Any help??

  • Having trouble setting up my router

    - by indyK1ng
    I just moved into my apartment and the Internet connection is working. It's Comcast in case that matters. Anyway, I'm having trouble setting up my wireless router (Netgear WNR2000) to work with it. Are there any settings that I could be missing? I currently have it set up to use a static IP address and I found the DNS servers I'm supposed to use and the Internet light is green, but I can't get out to the Internet. When I am trying, I'm connecting to an Ethernet port on the back of my router. Is there a setting I'm missing or a setting that I have set wrong? I used the automatic set up wizard to learn that it's a static IP address. Any help would be appreciated. I am currently only able to use my Linux machine, so please make any help in Linux commands. Yes, I can connect to the Internet if I connect to the modem directly and I've been using the web interface when I'm connected to the router, so I suppose I can ping the router. My router detected the connection as using a static IP address, so I connected to the modem directly and figured out what my IP address, gateway, and mask were as well as DNS servers.

  • Delay NTP Initialisation, Cisco 877W, IOS 12.4(24)T1

    - by Mike Insch
    I have a Cisco 877W which I'm using for my home ADSL connection (and as a refresher in Cisco IOS). I've got a working config in-place with my PPPoA connection coming online correctly, and VLANs and other settings configured as I want them, but I can't crack the NTP configuration. For NTP, I have the following defined ntp server 0.uk.pool.ntp.org source Dialer0 ntp server 1.uk.pool.ntp.org source Dialer0 ntp server 2.uk.pool.ntp.org source Dialer0 ntp server 3.uk.pool.ntp.org source Dialer0 This setup works fine when issued in Global Configuration Mode when the Dialer0 interface (ATM0.1) is up. The configuration fails at startup though: Translating "1.uk.pool.ntp.org"...domain server (208.67.222.222) (208.67.220.220) ntp server 1.uk.pool.ntp.org source Dialer0 ^ % Invalid input detected at "^" marker. This is repeated for the other servers defined. Obviously the DNS lookup for the server(s) fails because the DNS servers cannot be accessed because the external interface is not yet online. Is there a way to delay the NTP configuration until afte the Dialer0 interface is fully initialised? Can the NTP commands be triggered by the Line Protocol on the Dialer0 interface transitioning to the up state? Alternatively, can the NTP commands be delayed for 5 minutes after the router has finished initialising? Any advice, or pointers to useful documentation or examples gratefully received ...
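
    Two ways around the DNS chicken-and-egg: pin the ntp server statements to IP addresses (the pool hostnames just resolve to ordinary NTP servers), or have an EEM applet re-issue the hostname-based commands once Dialer0 comes up. A rough sketch of the latter, assuming EEM applets are available in this 12.4(24)T1 feature set (only two of the four servers shown):

      event manager applet NTP-AFTER-DIALER
       event syslog pattern "Line protocol on Interface Dialer0, changed state to up"
       action 1.0 cli command "enable"
       action 1.1 cli command "configure terminal"
       action 1.2 cli command "ntp server 0.uk.pool.ntp.org source Dialer0"
       action 1.3 cli command "ntp server 1.uk.pool.ntp.org source Dialer0"
       action 1.4 cli command "end"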
