Search Results

Search found 21250 results on 850 pages for 'client certificates'.

Page 252/850 | < Previous Page | 248 249 250 251 252 253 254 255 256 257 258 259  | Next Page >

  • Rebooting access point via SSH with pexpect... hangs. Any ideas?

    - by MiniQuark
    When I want to reboot my D-Link DWL-3200-AP access point from my bash shell, I connect to the AP using ssh and I just type reboot in the CLI interface. After about 30 seconds, the AP is rebooted: # ssh [email protected] [email protected]'s password: ******** Welcome to Wireless SSH Console!! ['help' or '?' to see commands] Wireless Driver Rev 4.0.0.167 D-Link Access Point wlan1 -> reboot Sound's great? Well unfortunately the ssh client process never exits, for some reason (maybe the AP kills the ssh server a bit too fast, I don't know). My ssh client process is completely blocked (even if I wait for several minutes, nothing happens). I always have to wait for the AP to reboot, then open another shell, find the ssh client process ID (using ps aux | grep ssh) then kill the ssh process using kill <pid>. That's quite annoying. So I decided to write a python script to reboot the AP. The script connects to the AP's CLI interface via ssh, using python-pexpect, and it tries to launch the "reboot" command. Here's what the script looks like: #!/usr/bin/python # usage: python reboot_ap.py {host} {user} {password} import pexpect import sys import time command = "ssh %(user)s@%(host)s"%{"user":sys.argv[2], "host":sys.argv[1]} session = pexpect.spawn(command, timeout=30) # start ssh process response = session.expect(r"password:") # wait for password prompt session.sendline(sys.argv[3]) # send password session.expect(" -> ") # wait for D-Link CLI prompt session.sendline("reboot") # send the reboot command time.sleep(60) # make sure the reboot has time to actually take place session.close(force=True) # kill the ssh process The script connects properly to the AP (I tried running some other commands than reboot, they work fine), it sends the reboot command, waits for one minute, then kills the ssh process. The problem is: this time, the AP never reboots! I have no idea why. Any solution, anyone?
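
    One possible workaround sketch, before reaching for pexpect at all: let the OpenSSH client detect the dead connection itself, or bound the whole interactive session; the address, username and timings below are placeholders.

        # Let ssh notice the dead connection once the AP stops answering
        # keepalives (standard OpenSSH options):
        ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=3 admin@192.0.2.1

        # Or cap the whole interactive session, password prompt included,
        # with GNU coreutils' timeout:
        timeout --signal=KILL 60 ssh admin@192.0.2.1

    Within the pexpect script, a similar hedge is to expect pexpect.EOF (or pexpect.TIMEOUT) with a short timeout right after sendline("reboot"), rather than sleeping a fixed 60 seconds, so the script can tell whether the connection was ever torn down.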

    Read the article

  • Windows 2008 R2 remote desktop - Double Login

    - by Zulgrib
    After an Active Directory failure, RDP connections started to ask for credentials twice (once in the local RDP program, a second time on the remote logon screen). I already looked at "Windows 2008 R2 RDS - Double Login"; the solution provided there doesn't work for me. The server is standalone, without AD/DNS services, and the RDS role isn't installed. I tried every security setting on RDP-Tcp (RDP, Negotiate, SSL). The logon option is set to "Use credentials from the client". Both the Windows client and the server use RDP 7.1. The fPromptForPassword registry values are set to 0. Local Computer Policy\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Security\Always prompt for password upon connection is set to "Disabled". Why am I sure the problem comes from the server and not the client? The problem affected a third-party RDP program on Android too (previously it went straight to "preparing desktop", on both MS RDP and the third-party program). No backups are available (else the Active Directory problem wouldn't be a failure, just a loss of time). I am wondering if a rule linked to RDP got changed after the AD install+uninstall, but I'm unable to find where. While this is not a critical problem, it is very annoying. I don't know if more information is needed; if so, and if you are patient enough, please tell me what is missing and I'll edit this post to add it.

    Read the article

  • puppetca never returns anything

    - by mrisher
    Hi: I'm trying to configure Puppet on Ubuntu, and strangely I am never able to generate a certificate because my server never shows any pending certificate requests. Put differently, on the server I am running puppetmasterd and on the client I am able to connect to the server, but the client continues printing notice: Did not receive certificate warning: peer certificate won't be verified in this SSL session and yet the server never sees the request mrisher@lab2$ puppetca --list [nothing shows up] mrisher@lab2$ puppetca --sign clientname.domain.com clientname.domain.com err: Could not call sign: Could not find certificate request for clientname.domain.com Edit: There was a suggestion that autosign was happening, but that does not seem to be it. There is no autosign.conf file, and when I run puppetmasterd --no-daemonize -d -v I receive the following output: info: Could not find certificate for 'clientname.domain.com' every time the client says notice: Did not receive certificate I checked the certs on the server and there don't seem to be any: mrisher@lab2:~$ puppetca --list --all mrisher@lab2:~$ sudo puppetca --list --all + lab2.domain.com // this is the server (master) mrisher@lab2:~$ sudo puppetca --list [blank line] mrisher@lab2:~$ Note: This is mostly running the default install from Ubuntu, if that gives any leads. Thanks for any help out there.
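
    A possible way to narrow this down, assuming the default Ubuntu layout and the old puppetd/puppetca tools from that package: clear the agent's half-established SSL state, resubmit the request, then list and sign on the master as root (an unprivileged puppetca --list may quietly read a different, empty ssldir). Hostnames below reuse the placeholders from the post.

        sudo rm -rf /var/lib/puppet/ssl                                  # on the client
        sudo puppetd --test --waitforcert 30 --server lab2.domain.com    # on the client
        sudo puppetca --list                                             # on the master, as root
        sudo puppetca --sign clientname.domain.com                       # on the master, as root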

    Read the article

  • Why is site serving different SSL certs to different browsers?

    - by TRiG
    The SSL certificate on menswearireland.com and on www.menswearireland.com works fine on Safari, Chrome, SeaMonkey, K-Meleon, QtWeb, Firefox, and Opera. However, Internet Explorer claims that there is an error: The security certificate presented by this website was not issued by a trusted certificate authority. The security certificate presented by this website was issued for a different website's address. Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server. Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0) Another site hosted on the same managed server shows no errors: achill-fieldschool.com and www.achill-fieldschool.com work fine on IE, even though as far as I can tell the certificate is set up identically. What am I doing wrong? This is a LAMPP server running Plesk. It looks like the server is showing different certificates to different clients. To some clients it shows a RapidSSL certificate made out to www.menswearireland.com with menswearireland.com as a valid alternative name. To other clients, it shows a Parallels Panel certificate, made out to Parallels Panel. Here are results from a few different online SSL checkers: most say it's fine, while two show errors. Three online checkers say it's valid Comodo SSL Check shows it as valid DigiCert SSL Check shows it as valid SSL Shopper SSL Check shows it as valid Common name: www.menswearireland.com SANs: www.menswearireland.com, menswearireland.com Valid from October 2, 2012 to November 4, 2013 Serial Number: 559425 (0x88941) Signature Algorithm: sha1WithRSAEncryption Issuer: RapidSSL CA Another online checker seems to see a completely different certificate GeoCerts SSL Check shows it as invalid Common name: Parallels Panel Organization: Parallels Valid from August 15, 2012 to August 15, 2013 Issuer: Parallels Panel Another online checker sees more than one certificate Symantic SSL Check shows it as invalid The certificate installation checker connected to the Web server and read its certificates, but could not determine which is the primary certificate for the Web server. Incidentally, on both menswearireland.com and achill-fieldschool.com the homepage will redirect from HTTPS to HTTP. To see SSL details, visit the page /account on both (that page will redirect from HTTP to HTTPS). I’ve found more information in a more detailed online SSL checker. https://www.ssllabs.com/ssltest/analyze.html?d=menswearireland.com This site works only in browsers with SNI support My understanding is that SNI (RFC 6066) is a method for putting many SSL sites on one shared IP address and port. This does not work on Internet Explorer on older versions of Windows (this has to do with the version of Windows, not the version of Internet Explorer). However, all our SSL sites are on a unique IP address, so we shouldn’t need SNI.
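
    To compare the two behaviours from a shell, openssl can request the certificate with and without SNI; -servername is what SNI-capable browsers effectively send, and omitting it mimics IE on older Windows.

        openssl s_client -connect menswearireland.com:443 -servername www.menswearireland.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
        openssl s_client -connect menswearireland.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

    If the second command returns the Parallels Panel certificate, the RapidSSL vhost is only being selected via SNI, and non-SNI clients fall back to the panel's default certificate for that IP, which would explain why only some checkers and Internet Explorer complain.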

    Read the article

  • What characteristic of networking/TCP causes linear relation between TCP activity and latency?

    - by DeLongey
    The core of this problem is that our application uses websockets for real-time interfaces. We are testing our app in a new environment but strangely we're noticing an increasing delay in TCP websocket packets associated with an increase in websocket activity. For example, if one websocket event occurs without any other activity in a 1-minute period, the response from the server is instantaneous. However, if we slowly increase client activity the latency in server response increases with a linear relationship (each packet will take more time to reach the client with more activity). For those wondering this is NOT app-related since our logs show that our server is running and responding to requests in under 100ms as desired. The delay starts once the server processes the request and creates the TCP packet and sends it to the client (and not the other way around). Architecture This new environment runs with a Virtual IP address and uses keepalived on a load balancer to balance the traffic between instances. Two boxes sit behind the balancer and all traffic runs through it. Our host provider manages the balancer and we do not have control over that part of the architecture. Theory Could this somehow be related to something buffering the packets in the new environment? Thanks for your help.
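
    One hedged way to pin down where the delay is added: capture the same flow on an application box and on a test client at the same time, then compare timestamps of matching segments across the load-balancer hop. Interface names and the port below are placeholders.

        sudo tcpdump -i eth0 -s 96 -w app_side.pcap 'tcp port 80'       # on one app box
        sudo tcpdump -i eth0 -s 96 -w client_side.pcap 'tcp port 80'    # on the test client

        # While the latency builds, look for growing send queues (Send-Q)
        # on the app box's sockets, a sign that packets are leaving late
        # or being buffered downstream:
        ss -tni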

    Read the article

  • Are web service handler chains possible under IIS / ASP.NET

    - by Mike
    I'm working with a client who wants me to implement a particular design in an IIS/ASP.NET environment. This design was already implemented in Java, but I am not sure it is possible using Microsoft technologies. In a Tomcat/Java environment one can create so-called handler chains. In essence, a handler runs on the server on which the web service is running and intercepts the SOAP message coming to the web service. The handler can perform a number of tasks before passing control to the web service. Some of these tasks may relate to authentication and authorization. Moreover, one can create handler chains, such that the handlers run in a particular sequence before passing control to the web service. This is a very elegant solution, as certain aspects of authentication and authorization can be performed automatically, without the developer of the client application or of the web service having to invest anything in it. The code for the client application and the web service is not affected. You can find a number of articles on the internet on this subject by searching Google for "web service handler chain". I searched for web service handlers in IIS or ASP.NET. I get some hits, but apparently handlers in IIS have a different meaning from the one described above. My question therefore is: can handler chains (as available in Java and Tomcat) be created in IIS? If so, how (any article, book, forum...)? Either a negative or a positive answer will be greatly appreciated. Mike

    Read the article

  • Mac OS X L2TP VPN won't connect

    - by smokris
    I'm running Mac OS X Server 10.6, providing an L2TP VPN service. The VPN works just fine when connecting from all computers except one --- this one computer stays at the "Connecting..." stage for a while, then says "The L2TP-VPN server did not respond". In the console, I see this: 6/7/10 10:48:07 AM pppd[341] pppd 2.4.2 (Apple version 412.0.10) started by jdoe, uid 503 6/7/10 10:48:07 AM pppd[341] L2TP connecting to server 'foo.bar.baz.edu' (256.256.256.256)... 6/7/10 10:48:07 AM pppd[341] IPSec connection started 6/7/10 10:48:07 AM racoon[342] Connecting. 6/7/10 10:48:07 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 1). 6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 2). 6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 3). 6/7/10 10:48:08 AM racoon[342] IKE Packet: receive success. (Initiator, Main-Mode message 4). 6/7/10 10:48:08 AM racoon[342] IKE Packet: transmit success. (Initiator, Main-Mode message 5). 6/7/10 10:48:11 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit). 6/7/10 10:48:14 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit). 6/7/10 10:48:17 AM racoon[342] IKE Packet: transmit success. (Phase1 Retransmit). ...and the "retransmit" messages continue until the error message pops up. So far I've unsuccessfully tried: rebooting deleting the VPN profile and recreating it verifying the client's internet connection (it is able to reach the VPN server) connecting through several different networks (in case a router was blocking VPN packets) disabling the Mac OS X Firewall on the client making sure that the VPN settings exactly match those of other working computers running software update (the client is on 10.6.3) Any ideas?
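
    A hedged next step: the client log stalls after Main-Mode message 5, so watching IKE traffic on the server while this one Mac connects shows whether message 5 arrives and whether the reply ever leaves; the interface name is a placeholder.

        sudo tcpdump -n -i en0 'udp port 500 or udp port 4500'

    If message 5 arrives but no reply goes out, or large IKE packets never make it across, dropped UDP fragments on the path are a common culprit and would fit only this client/network combination failing.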

    Read the article

  • solved: puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?
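
    One hedged way to separate auth.conf from the nginx/Passenger layer is to replay a rejected request from the agent machine with curl and the agent's own certificate, reusing the paths already quoted above.

        sudo curl -v \
            --cert   /var/lib/puppet/ssl/certs/blramisr195602.XXXXXX.com.pem \
            --key    /var/lib/puppet/ssl/private_keys/blramisr195602.XXXXXX.com.pem \
            --cacert /var/lib/puppet/ssl/certs/ca.pem \
            https://bangvmpllda02.XXXXXX.com:8140/production/certificate_revocation_list/ca

    A 403 here despite a valid client certificate suggests the master never sees an authenticated identity. The nginx block forwards HTTP_X_CLIENT_DN / HTTP_X_CLIENT_VERIFY (which match Puppet's default ssl_client_header / ssl_client_verify_header names), so the things to check are whether ssl_verify_client optional actually verified this agent and what $ssl_client_s_dn expands to; when verification fails or the DN is empty, Puppet treats the request as unauthenticated and matches rules by IP, which is what the "defaulting to no access for 10.209.47.31" line shows.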

    Read the article

  • BES 5.0 SSL Certificate

    - by Superfly
    I have recently installed BES 5.0 on a Hyper-V (I know it's not officially supported) 64-bit Server 2008 box with a remote SQL 2005 database. The install succeeded and I was able to access the BlackBerry Administration Service, but I was getting untrusted certificate errors, so I followed the documentation for importing CA and BAS certificates with the Java keytool. They imported successfully, but now the BAS webpage shows a "page cannot be displayed" error. TSupport is no help at all. Any ideas?
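
    A hedged first check: list the keystore the BAS web service is actually configured to use and confirm it still holds exactly one private key entry alongside the imported trusted certificates. The path and store password below are only placeholders for whatever the BES 5.0 install points at.

        keytool -list -v -keystore "C:\Program Files\Research In Motion\BlackBerry Enterprise Server\BAS\bin\web.keystore" -storepass changeit

    A "page cannot be displayed" right after a keytool import often means the service can no longer open or use its keystore (wrong store password, clobbered PrivateKeyEntry) rather than a trust problem, so the BAS/Tomcat logs from the first failed start are worth reading alongside this.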

    Read the article

  • How to add URL's to wiki (MediaWiki) powered documentation?

    - by Ian Boyd
    We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have is hyperlinks to the various virtual machines. An example of a command, as it needs to run, is: vmrc://solo.avatopia.com:5901/Windows 2000 Server My first thought was to convert the URL into a link: [vmrc://solo.avatopia.com:5901/Windows 2000 Server] But the content renders literally as above: with the square brackets and all. Testing with other URL protocols: [http://solo.avatopia.com] [ftp://solo.avatopia.com] [ldap://solo.avatopia.com] [vmrc://solo.avatopia.com] Only the first two work and are converted to hyperlinks. The other two remain as literal text. How can I add URLs to MediaWiki-powered documentation? Original Question We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have is hyperlinks to the various virtual machines. An example of a command, as it needs to run, is: \\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/Windows 2000 Server If launching from a command prompt, you have to quote the spaces: C:\>"\\solo\VMRC Client\vmrc.exe" solo.avatopia.com:5901/"Windows 2000 Server" My first thought in converting the above for use on our wiki site is to simply HTML-ify it: file://\\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/&quot;Windows 2000 Server&quot; but MediaWiki only converts file://\solo\VMRC to a hyperlink; the remainder is text. I've tried other random things, including enclosing the URL in square brackets. What is the correct answer? I don't want to just happen to stumble on some format that works today and breaks in the future.
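
    A likely fix, assuming a standard MediaWiki install: only schemes listed in $wgUrlProtocols are turned into links, so a custom protocol such as vmrc:// has to be registered in LocalSettings.php (the wiki path below is a placeholder).

        echo '$wgUrlProtocols[] = "vmrc://";' >> /var/www/wiki/LocalSettings.php

    After that, [vmrc://solo.avatopia.com:5901/Windows%202000%20Server Windows 2000 Server] should render as a labelled external link; note that in external-link syntax the URL ends at the first space, so spaces inside the URL itself still need to be percent-encoded.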

    Read the article

  • Certificate Authentication

    - by steve.mccall1
    Hi, I am currently working on deploying a website for staff to use remotely and would like to make sure it is secure. I was thinking: would it be possible to set up some kind of certificate authentication where I would generate a certificate and install it on their laptops so they could access the website? I don't really want them to generate the certificates themselves, though, as that could easily go wrong. How easy / possible is this, and how do I go about doing it? Thanks, Steve
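
    A minimal sketch of one way to do this centrally, so staff never run openssl themselves; names, lifetimes and filenames are placeholders.

        # One-off in-house CA:
        openssl req -new -x509 -days 1825 -nodes -subj "/CN=Staff Access CA" \
            -keyout ca.key -out ca.crt
        # Per-user certificate, issued and bundled by the admin:
        openssl req -new -nodes -subj "/CN=jane.doe" -keyout jane.key -out jane.csr
        openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
            -days 365 -out jane.crt
        openssl pkcs12 -export -in jane.crt -inkey jane.key -certfile ca.crt \
            -out jane.p12

    The staff laptop imports jane.p12, and the web server is told to require certificates signed by ca.crt (Apache: SSLVerifyClient require with SSLCACertificateFile; nginx: ssl_verify_client on with ssl_client_certificate); revoking a laptop is then a matter of maintaining a CRL or reissuing certificates.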

    Read the article

  • mod_jk problem: Tomcat is probably not started or is listening on the wrong port

    - by Konrad
    Hi, I am running an application on Tomcat 6.0.26. There is an Apache server in front of the web server, talking to it over mod_jk. Every few hours, when I try to access the application, the browser simply spins and no content is retrieved. No error is reported in the Tomcat logs, but I found errors like these in the mod_jk log: [Sun Jul 04 21:19:13 2010][error] ajp_service::jk_ajp_common.c (1758): Error connecting to tomcat. Tomcat is probably not started or is listening on the wrong port. worker=***** failed [Sun Jul 04 21:19:13 2010][info] jk_handler::mod_jk.c (1985): Service error=0 for worker==***** [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46 [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46 [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46 [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet) [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet) [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46 [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet) [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 45 [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46 [Sun Jul 04 21:19:13 2010][info] ajp_service::jk_ajp_common.c (1721): Receiving from tomcat failed, recoverable operation attempt=0 My worker is configured in the following way: worker.admanagonode.port=8009 worker.admanagonode.host=*****.com worker.admanagonode.type=ajp13 worker.admanagonode.ping_mode=A worker.admanagonode.socket_timeout=60 worker.admanagonode.prepost_timeout=10000 worker.admanagonode.connect_timeout=10000 worker.admanagonode.connection_pool_size=200 worker.admanagonode.connection_pool_timeout=300 worker.admanagonode.retries=20 worker.admanagonode.socket_keepalive=1 worker.admanagonode.cachesize=10 worker.admanagonode.cache_timeout=600 Tomcat has the same port number in its Connector configuration: <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" address="*********" /> Does anyone have any idea what I am missing? What can cause such problems? Cheers Konrad
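
    A hedged place to look: when mod_jk starts reporting a running Tomcat as down, the AJP connector's thread pool is often simply exhausted by idle pooled connections, which the socket states against port 8009 make visible.

        netstat -an | grep ':8009' | awk '{print $6}' | sort | uniq -c

    A large count of ESTABLISHED or CLOSE_WAIT entries against the connector's default maxThreads (200 unless raised on the <Connector>) suggests the pool fills up over a few hours; setting connectionTimeout on the AJP connector and keeping the worker's connection_pool_timeout in step with it is the usual pairing, and is worth checking against the worker settings quoted above.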

    Read the article

  • VPN on OSX disconnects after precisely 2 minutes and 30 seconds on specific network

    - by Tyilo
    When connecting to my own VPN server on a specific network, called public-network, my Mac disconnects the VPN connection after 2 minutes and 30 seconds. I have performed several tests and this is the result: It works fine until the 2:30 mark It doesn't matter which Mac I use, it still disconnects It doesn't matter which client I use, all of the following does the same: OSX system client, HMA! Pro VPN and Shimo It doesn't matter which protocol I use, at least all of these protocols does the same: PPTP, OpenVPN and L2TP over IPSec The same thing happens using my own VPN server and HMA!'s VPN server. All other clients (Windows/iPhone) can use any of these VPN servers and protocols without problem on public-network On OSX, all the protocols, clients and servers works fine on any other network So it seems that it is the combination of OSX, VPN & public-network that causes this. This is the syslog from my VPN server, when the disconnection happens: Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: EOF or bad error reading ctrl packet length. Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: couldn't read packet header (exit) Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: CTRL read failed Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: Reaping child PPP[31401] Feb 2 12:04:32 raspberrypi pppd[31401]: Hangup (SIGHUP) Feb 2 12:04:32 raspberrypi pppd[31401]: Modem hangup Feb 2 12:04:32 raspberrypi pppd[31401]: Connect time 2.5 minutes. Feb 2 12:04:32 raspberrypi pppd[31401]: Sent 3963649 bytes, received 362775 bytes. Feb 2 12:04:32 raspberrypi pppd[31401]: MPPE disabled Feb 2 12:04:32 raspberrypi pppd[31401]: Connection terminated. Feb 2 12:04:32 raspberrypi pppd[31401]: Exit. Feb 2 12:04:32 raspberrypi pptpd[31400]: CTRL: Client <ip-adress> control connection finished

    Read the article

  • Port translation in router causing some email to fail

    - by user22037
    We are in the process of setting up a spam filter (SAVASM). One change we are making is to push incoming email on port 25 through our spam filter/server but have users actually send their email on a different port. I am attempting to make this happen by using port address translation to send port 25 traffic to the SAVASM server IP. As a step in making this change I setup port translation without actually changing the IP addresses. The NAT rules for the email server went from one Static NAT rule with no port specified, to multiple Static NAT rules each with a port or group matching the Access Rules for that server (smtp, pop3, http, https, and some other custom ports). The problem we are running into is confusing. Some outgoing mail through this server is failing when the router has the multiple NAT rules with port translation settings. Email goes through fine FROM our email to our internal accounts and to Gmail. However email fails when FROM our client's email address TO our client's email or their personal Comcast. The only situation that worked for them was if they changed FROM to Comcast and then messages went through fine to both Comcast and the client's accounts. Switching back to regular Static NAT rule everything then worked for them. Does anyone have a clue as to what might be going on? We are on a Cisco ASA 5500 box.

    Read the article

  • DD-WRT router causing IP address conflicts across network

    - by r.tanner.f
    My DD-WRT router has lost its mind! I just set up two DD-WRT routers, one as a WAP (working fine) and one in Client Bridge (routed) mode (the problem). Not long after setup I started seeing IP address conflicts on other machines. The event log always points the finger at my Client Bridge router's MAC address. Neighbour table overflow The log on my router is flooded with Neighbour table overflow errors. These start a minute or two after boot. The network is rather large, with +200 IP addresses being used in this subnet. The other router shows no such errors. Mass ARP requests from 1.1.1.1 I'm also seeing constant ARP requests (with the problem router's MAC address) from 1.1.1.1. Seems like it's bugging everything on the network for its MAC address and then promptly forgetting it (or never receiving a response). Configuration: Model: Buffalo N600 Firmware: DD-WRT v24SP2-MULTI (03/21/11) Wireless Mode: Client Bridge (routed) I'm not sure what configuration details are relevant and I'd rather not have comments flooded, so just ping me in this chat if you want to know something. Why is my router stealing IP addresses and how can I stop it?

    Read the article

  • Iptables - Redirect outbound traffic on a port to inbound traffic on 127.0.0.1

    - by GoldenNewby
    I will be awarding a +100 bounty to the correct answer once it is available in 48 hours Is there a way to redirect traffic set to go out of the server to another IP, back to the server on localhost (preferably as if it was coming from the original destination)? I'd basically like to be able to set up my own software that listens on say, port 80, and receives traffic that was sent to say, 1.2.3.4. So as an example with some code. Here would be the server: my $server = IO::Socket::INET->new( LocalAddr => '127.0.0.1', LocalPort => '80', Listen => 128, ); And that would receive traffic from the following client: my $client = IO::Socket::INET->new( PeerAddr => 'google.com', PeerPort => '80', ) So rather than having the client be connecting to google.com, it would be connecting to the server I have listening on localhost for that same server. My intention is to use this to catch malware connecting to remote hosts. I don't specifically need the traffic to be redirected to 127.0.0.1, but it needs to be redirected to an IP the same machine can listen to. Edit: I've tried the following, and it doesn't work-- echo 1 > /proc/sys/net/ipv4/ip_forward iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:80 iptables -t nat -A POSTROUTING -j MASQUERADE
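
    One hedged pointer: locally generated packets never traverse PREROUTING, which is why the quoted rule has no effect; the rewrite has to happen in the nat table's OUTPUT chain instead, for example:

        # Redirect the box's own outbound port-80 connections to a local listener:
        iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 80

        # or target the loopback address/port explicitly:
        iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:80

    If the listening process itself also needs to reach real port-80 hosts, add the owner match to the rule (e.g. -m owner ! --uid-owner <listener-user>, a placeholder account) so its own traffic is exempt and does not loop back into itself.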

    Read the article

  • can't Remote desktop to windows XP, blaming the server side

    - by Jin
    After rebooting my work PC (Windows XP SP3) this Wednesday (thanks to Microsoft's Patch Tuesday), I found that I can't use Remote Desktop to reach my work PC from home (with a VPN to the company). I have been using Remote Desktop to connect to work for years, and I am really surprised, since connectivity is not the problem, so I brought up Wireshark to sniff the packets. I can see that after the TCP handshake, the client sent an X.224 Connection Request: 03 00 00 13 0e e0 00 00 00 00 00 01 00 08 00 03 00 00 00 and the server sent an X.224 Connection Confirm: 03 00 00 0b 06 d0 00 00 12 34 00 According to "MS-RDPBCGR", the official spec on RDP, the server should include a Negotiation Response in the "Connection Confirm" message, but it didn't. It's empty. I googled a lot but didn't find any clue on why the server did that. By the way, I used the same Remote Desktop client and can connect to other Windows XP PCs. Here are a couple of pieces of information that may help give a clue: Since the TCP handshake completes (server port being 3389), I believe the svchost service is actually running. Going to Control Panel -> System -> "Remote" tab, Remote Desktop is indeed checked and it states that my username is allowed. According to the packet capture, the client didn't even get a chance to tell the server which user was trying to log on. Yes, the progress bar showed up for a few seconds and then it went back to the "Remote Desktop Connection" window again. I searched "windowsupdate.log" and didn't find any occurrence of the word "remote".

    Read the article

  • Windows RDP cannot connect to x64 server from XP SP3+

    - by Tom
    Hi all, I have a strange problem that I can't seem to find the answer to anywhere online. The issue has to do with using Windows RDP to connect to our servers. Here is what works: -XP/Vista client (any SPs) connecting to a 32-bit Server 2003 machine -XP (SP2 and lower) client connecting to a 64-bit Server 2003 machine Here is what does not work: - XP SP3+/Vista client connecting to a 64-bit Server 2003 machine It appears that the issue is that XP SP3 and Vista clients cannot connect to x64 Server 2003 boxes. After entering the username/password, we get an error message saying the following, and the connection drops: To log on to this remote computer, you must have Terminal Server User Access permissions on this computer. By default, members of the Remote Desktop Users group have these permissions. If you are not a member of the Remote Desktop Users group or another group that has these permissions, or if the Remote Desktop Users group does not have these permissions, you must be granted these permissions manually. The issue is that the user is a member of the Administrators group, which has permission. Also, logging in using the same username, but from an XP SP2 machine, has no problems at all. I hope I explained this well enough, and any help/insight that can be given would be greatly appreciated. Thanks, Tom

    Read the article

  • Failover Issuer CAs without Clustering

    - by James Santiago
    I am attempting to setup a Certificate Authority with some failover capabilities for the issuer CAs. I have an offline root CA and am attempting to setup two subordinate CAs on our domain which will handle issuing certificates. I'm trying to determine the architecture needed for these two CAs to allow one to go down and the other to take over without the use of failover clustering, as the two are in different geographic locales. Are there documents regarding this setup?

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */} error: function (data) { /* handle errors */} }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since the requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, freqently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?
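
    One hedged way to take the browsers out of the picture, with placeholder URLs standing in for the routes above: fire both requests concurrently from a shell and see whether the status call still waits for the long one.

        curl -s -o /dev/null -w 'long task: %{time_total}s\n' 'http://test.dev/start-task' &
        sleep 1
        curl -s -o /dev/null -w 'status:    %{time_total}s\n' 'http://test.dev/process-status?process=foo'
        wait

    If the status request blocks here too, the serialization is server-side rather than in the AJAX flags: with this config every PHP request is proxied to the single FastCGI backend on 127.0.0.1:9100, and PHP's default file-based session locking will also queue requests from the same session until the long-running script calls session_write_close().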

    Read the article

  • Can't upgrade my Ubuntu server, it gets stuck on openjdk-6-jre-headless

    - by Jean-Nicolas Boulay Desjardins
    I am using Ubuntu Server. When I do: apt-get upgrade it gets stuck on: Setting up openjdk-6-jre-headless (6b20-1.9.7-0ubuntu1) ... Why? And what can I do to stop it? I tried removing it with apt-get... I get this error: E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem. So then I tried this: dpkg --purge openjdk-6-jre-headless I got this: dpkg: dependency problems prevent removal of openjdk-6-jre-headless: openjdk-6-jre-lib depends on openjdk-6-jre-headless (>= 6b17). ca-certificates-java depends on openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless; however: Package openjdk-6-jre-headless is to be removed. Package java6-runtime-headless is not installed. Package openjdk-6-jre-headless which provides java6-runtime-headless is to be removed. ca-certificates-java depends on openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless; however: Package openjdk-6-jre-headless is to be removed. Package java6-runtime-headless is not installed. Package openjdk-6-jre-headless which provides java6-runtime-headless is to be removed. dpkg: error processing openjdk-6-jre-headless (--purge): dependency problems - not removing Errors were encountered while processing: openjdk-6-jre-headless The thing is I think my DB is using it... Not sure... I am using Cassandra with Thrift... Yes, it's getting a bit more complex... # dpkg --configure -a I get: dpkg: dependency problems prevent configuration of openjdk-6-jre: openjdk-6-jre depends on openjdk-6-jre-headless (>= 6b20-1.9.7-0ubuntu1); however: Package openjdk-6-jre-headless is not configured yet. dpkg: error processing openjdk-6-jre (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin ... ldconfig deferred processing now taking place dpkg: dependency problems prevent configuration of libaccess-bridge-java: libaccess-bridge-java depends on default-jre | openjdk-6-jre | sun-java6-jre; however: Package default-jre is not installed. Package openjdk-6-jre is not configured yet. Package sun-java6-jre is not installed. dpkg: error processing libaccess-bridge-java (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of icedtea-6-jre-cacao: icedtea-6-jre-cacao depends on openjdk-6-jre-headless (= 6b20-1.9.7-0ubuntu1); however: Package openjdk-6-jre-headless is not configured yet. dpkg: error processing icedtea-6-jre-cacao (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libaccess-bridge-java-jni: libaccess-bridge-java-jni depends on libaccess-bridge-java (>= 1.26.2-5); however: Package libaccess-bridge-java is not configured yet. dpkg: error processing libaccess-bridge-java-jni (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: openjdk-6-jre libaccess-bridge-java icedtea-6-jre-cacao libaccess-bridge-java-jni Thanks again for any help.
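
    A hedged recovery path: finish the interrupted configure step and watch where the openjdk maintainer script actually hangs, rather than purging packages that ca-certificates-java still depends on.

        sudo dpkg --configure -a

        # If it sticks on openjdk-6-jre-headless again, run its postinst by
        # hand with tracing to see the command it is stuck in (often a
        # cacerts or jar-rebuild step):
        sudo sh -x /var/lib/dpkg/info/openjdk-6-jre-headless.postinst configure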

    Read the article

  • Connect over WiFi to SQL Server from another computer

    - by Bronzato
    I tried to connect over WiFi to SQL Server with SQL Server Management Studio from another computer, but it failed. I have a computer with Windows 7 & SQL Server 2008 (lets say the server computer). Next to it I have a freshly installed computer with Windows 7 & SQL Server Management Studio (let's say the client computer). What I did on the server computer: Configure firewall by enabling port 1433 Enabled network protocols (TCP/IP) inside SQL Server Configuration Manager Checked Allow remote connections to this server in server properties in the SQL Server Management application. Started SQL Server Browser Restarted services (SQL Server Browser is stopped at this point, but I don't think it is necessary. Is it?) Next, I successfully tested a ping on the port 1433 from my client computer with a tool named tcping (ex: tcping 192.168.1.4 1433). But I still cannot connect from my client computer to SQL Server on my server computer. Ok, something new with this problem: Until now, I successfully connected to my "server computer" with Management Studio. What I did is type the computer name in the server name field in the connection window of Management Studio. My previous (failed) attempt was to type the computer name followed by the instance of SQL server (ex: COMPUTER_NAME\SQL2008). I don't know why I only have to type the computer name. Now my new challenge is to be successful in connecting my VB6 application to this remote database located on my "server computer". I have a connection string for this but it failed to connect. Here is my connection string: "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP\SQL2008" Any idea what's going wrong?
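
    A hedged way to take VB6 out of the equation, reusing the credentials and names from above: try the same connection with sqlcmd from the client machine, once by instance name and once by IP and port.

        sqlcmd -S "THIERRY-HP\SQL2008" -U sa -P mypassword -d TPB -Q "SELECT @@SERVERNAME"
        sqlcmd -S tcp:192.168.1.4,1433 -U sa -P mypassword -d TPB -Q "SELECT @@SERVERNAME"

    If only the tcp:host,port form works, the SQL Browser service (UDP 1434, which maps the \SQL2008 instance name to its TCP port) is not reachable from the client, and the VB6 string would need "Data Source=192.168.1.4,1433" (or the Browser service opened up) rather than the instance name.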

    Read the article

  • How to enable synergy 24800 (or some other port) through firewalld

    - by ndasusers
    After upgrading to Fedora 18, Synergy, the keyboard-sharing system, was blocked by default. The culprit was firewalld, which happily ignored my previous settings made in the Fedora GUI, backed by iptables. ~]$ ps aux | grep firewall root 3222 0.0 1.2 22364 12336 ? Ss 18:17 0:00 /usr/bin/python /usr/sbin/firewalld --nofork david 3783 0.0 0.0 4788 808 pts/0 S+ 20:08 0:00 grep --color=auto firewall ~]$ Ok, so how do I get around this? I did sudo killall firewalld for several weeks, but that got annoying every time I rebooted. It was time to look for some clues. There were several one-liners, but they did not work for me. They kept spitting out the help text. For example: ~]$ sudo firewall-cmd --zone=internal --add --port=24800/tcp [sudo] password for auser: option --add not a unique prefix Also, posts that claimed this command worked also stated it was temporary, unable to survive a reboot. I ended up adding a file to the config directory to be loaded in on boot. Would anyone be able to have a look at that and see if I missed something? Though Synergy works, when I run the list command, I get no result: ~]$ sudo firewall-cmd --zone=internal --list-services ipp-client mdns dhcpv6-client ssh samba-client ~]$ sudo firewall-cmd --zone=internal --list-ports ~]$
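
    For comparison, the long-form syntax firewall-cmd expects is --add-port=PORT/PROTOCOL, and --permanent is what makes the rule survive reboots without a hand-written config file:

        sudo firewall-cmd --permanent --zone=internal --add-port=24800/tcp
        sudo firewall-cmd --reload
        sudo firewall-cmd --zone=internal --list-ports    # should now list 24800/tcp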

    Read the article

  • Are applications optimized for XenApp?

    - by damitamit
    Our IT dept is about to deploy virtualized apps using Citrix XenApp. One of these apps will be Dynamics AX 4.0 SP2, an ERP client (which I develop on). They have supposedly reached a roadblock because an external 'Dynamics AX Consultant' has told our IT dept that Dynamics AX 4 will not work optimally on Citrix and will run very slowly because it is not optimized for Citrix. We have it running in a test environment now and it seems OK. They have been told that the only 'solution' is to upgrade to Dynamics AX 2009, where supposedly this problem has been fixed (not a small task for my team!). When I heard about this, I was quite surprised. From my brief knowledge of Citrix, I thought it would be application-independent. How does Citrix app virtualization work, such that a particular app would work better than others on Citrix? Would the speed of the virtualized app not just depend on the resources/network connection the Citrix server has? FYI, Dynamics AX is a 3-tier client/server system, so the client will be accessing an AOS application server, which then accesses the database. Please enlighten me :)

    Read the article
