Search Results

Search found 17081 results on 684 pages for 'request tracking'.


  • Squid 2.7.STABLE3-4.1 as a transparent proxy on Ubuntu Server 9.04

    - by LOGIC9
    Can't get this to work at all! I'm trying to get this Linux box to act as a transparent proxy and, with the help of DHCP, force everyone on the network to gate into the proxy. I have two ethernet connections, both to the same switch, and I'm trying to get 192.168.1.234 to become the default gateway. The actual WAN connection is to a gateway 192.168.1.1.

      eth0 is 192.168.1.234
      eth1 is 192.168.1.2

    Effectively I'm trying to make eth0 a LAN-only interface and eth1 a WAN interface. I've only set eth0 to have a gateway address in /etc/network/interfaces; I'm not sure whether I should set the gateway for eth1 to point to 192.168.1.234. My squid.conf file has the following directives added at the bottom:

      http_port 3128 transparent
      acl lan src 192.168.1.0/24
      acl lh src 127.0.0.1/255.255.255.0
      http_access allow lan
      http_access allow lh

    I've added the following routing commands:

      iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.2:3128
      iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

    I set a computer with TCP settings using 192.168.1.234 as the gateway and opened up google.com, but it comes up with a request error. Any ideas why this isn't working? :( Been searching continuously for a solution to no avail.

    EDIT: Managed to get it to route properly to the Squid; here's the error I get in the browser:

      ERROR: The requested URL could not be retrieved

      While trying to process the request:

        GET / HTTP/1.1
        Host: www.google.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cache-Control: max-age=0

      The following error was encountered:

        * Invalid Request

      Some aspect of the HTTP Request is invalid. Possible problems:

        * Missing or unknown request method
        * Missing URL
        * Missing HTTP Identifier (HTTP/1.0)
        * Request is too large
        * Content-Length missing for POST or PUT requests
        * Illegal character in hostname; underscores are not allowed

      Your cache administrator is webmaster.
      Generated Mon, 26 Oct 2009 03:41:15 GMT by mjolnir.lloydharrington.local (squid/2.7.STABLE3)
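    For reference, a hedged sketch of the more commonly posted single-box interception recipe (an illustration only -- it assumes, as in the question, that Squid listens on port 3128 on the same machine the clients use as their gateway):

      # Let the box forward packets for the LAN clients
      echo 1 > /proc/sys/net/ipv4/ip_forward

      # Intercept web traffic arriving from the LAN side and hand it to the
      # local Squid; REDIRECT keeps the traffic on the local box instead of
      # rewriting it toward another address
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128

      # NAT everything else out through the WAN side
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE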

  • Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host

    - by Paul J. Warner
    I am having an issue with a program where after 6 mins +- 5 secs we get the above exception. Some more info about the exception stacktrace is below. This all happens pretty religiously: 6 mins go by and bam, the following 3 exceptions. We have the application installed in 2 other environments and it is working fine there. I am hoping to find some server settings, either IIS 6 or Server 2003 settings, that may be causing this issue to occur. I have reviewed some of the similar questions and don't see very many answers. I am hoping that the information I have provided may help a little bit.

      208741,Exception,,,,2011-06-21 00:30:14.193,SERVERNAME,2624,1,CLIENTNAME,The underlying connection was closed: An unexpected error occurred on a receive. ,
        at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
        at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
        at Microsoft.Web.Services3.WebServicesClientProtocol.GetResponse(WebRequest request, IAsyncResult result)
        at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
        at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

      208742,Exception,,,,2011-06-21 00:30:14.227,SERVERNAME,2624,1,CLIENTNAME,Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ,
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
        at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

      208743,Exception,,,,2011-06-21 00:30:14.287,SERVERNAME,2624,1,CLIENTNAME,An existing connection was forcibly closed by the remote host ,
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size),-691097507,62,1
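    For anyone comparing notes: one mitigation commonly suggested for this symptom (a hedged sketch, not a confirmed fix for this environment) is to stop reusing pooled keep-alive connections, so that an intermediary or server that silently drops idle connections can never hand the client a dead socket. For a generated SoapHttpClientProtocol/WSE3 proxy that looks roughly like:

      // Hypothetical partial class extending the generated web service proxy
      public partial class MyServiceProxy : System.Web.Services.Protocols.SoapHttpClientProtocol
      {
          protected override System.Net.WebRequest GetWebRequest(System.Uri uri)
          {
              var request = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
              // Send Connection: close so each call gets a fresh socket
              request.KeepAlive = false;
              return request;
          }
      }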

  • Force SSL on one page via .htaccess without looping

    - by Will Martin
    Okay, I have this code:

      RewriteCond %{HTTPS} off
      RewriteCond %{REQUEST_URI} ^/borrowing/ill/request\.php$
      RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

    The way I would expect this to work is:

    1. A request for /borrowing/ill/request.php comes in on HTTP.
    2. The rule matches.
    3. The server redirects to HTTPS.
    4. The rule does not match, because HTTPS is now on.

    The way it actually works is:

    1. A request for /borrowing/ill/request.php comes in on HTTP.
    2. The rule matches.
    3. The server redirects to HTTPS.
    4. The rule matches. The server redirects to HTTPS. The rule matches. The server redirects to HTTPS... and so on.

    I know that the second condition (matching the file name) is working, because the redirect loop only hits that specific page. The question is, why isn't the switch to HTTPS causing the first condition to not match?

    EDIT: I put the exact same .htaccess rules into a test area on another server -- same file and path info -- and they worked just fine. There's got to be something wrong with the server configuration elsewhere.
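    One possible explanation, offered as an assumption since the server configuration isn't shown: if SSL is terminated in front of this Apache (a load balancer or reverse proxy), %{HTTPS} never becomes "on" at the backend, so the first condition always passes. A hedged variant that keys off the forwarded-protocol header such a front end typically sets:

      RewriteCond %{HTTP:X-Forwarded-Proto} !https
      RewriteCond %{REQUEST_URI} ^/borrowing/ill/request\.php$
      RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]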

  • Diagnosing xmodmap errors

    - by intuited
    I'm getting this error when trying to use xmodmap to get rid of caps lock:

      $ xmodmap -e 'clear Lock'
      X Error of failed request:  BadValue (integer parameter out of range for operation)
        Major opcode of failed request:  118 (X_SetModifierMapping)
        Value in failed request:  0x17
        Serial number of failed request:  8
        Current serial number in output stream:  8

    I'm running xfce on Maverick "10.10" Meerkat. This problem did not occur before I added the Keyboard Layouts applet to a panel; before doing that, I was able to run my xmodmap script to swap Esc and CapsLock:

      !Remap Caps_Lock as Escape
      remove Lock = Caps_Lock
      keysym Caps_Lock = Escape

    It may be relevant that I chose alt-capslock as the keyboard switch combo in the Keyboard Layouts preferences. I've had a similar problem before, on a different machine, running openbox. On that machine, this problem started when I upgraded to Lucid, and has persisted in Maverick (release 10.10). I reported a bug in xorg. However, it remains unclear whether it's really a problem with xorg, or if I'm just doing something wrong with my configuration. Have other people experienced this problem? Can someone shed some light on what's going on here? It seems there are quite a few layers involved, and I don't understand any of them particularly well, so any information would be helpful.

    Update: I've discovered that the problem is specifically triggered by adding the Canada layout variant "Multilingual" (ca-multix). If I instead add the variant "Multilingual (first part)", the problem does not occur. I think this will probably end up being a usable workaround, but I don't yet know what the difference between these variants is. I've filed a freedesktop issue, and am commenting on a related ubuntu issue.
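    A hedged side note for anyone hitting the same wall: the same swap can be expressed as an XKB option via setxkbmap, which tends to coexist better with layout-switching applets than raw xmodmap does (the option names below are standard XKB options, offered as a sketch rather than a guaranteed fix for this bug):

      # Swap Escape and Caps Lock at the XKB level
      setxkbmap -option caps:swapescape

      # or, to simply disable Caps Lock:
      setxkbmap -option caps:none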

  • pfSense 2.1 OpenVPN can't reach servers on the LAN

    - by Lucas Kauffman
    I have a small network set up like this: I have a pfSense box for connecting my servers to the WAN; they are using NAT from the LAN to the WAN. I have an OpenVPN server using TAP to allow remote workers to be put on the same LAN network as the servers; they connect through the WAN IP to the OVPN interface. The LAN interface also serves as the gateway for the servers to get an internet connection, and has an IP of 10.25.255.254. The OVPN interface and the LAN interface are bridged in BR0. Server A has an IP of 10.25.255.1 and is able to connect to the internet. Client A is connecting through the VPN and is assigned an IP address on its TAP interface of 10.25.24.1 (I reserved a /24 within the 10.25.0.0/16 for VPN clients). The firewall currently allows any-any connections OVPN towards LAN and vice versa.

    Currently when I connect, all routes seem fine on the client side:

      Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
      300.300.300.300  0.0.0.0          255.255.255.0    U      0       0    0    eth0
      10.25.0.0        10.25.255.254    255.255.0.0      UG     0       0    0    tap0
      10.25.0.0        0.0.0.0          255.255.0.0      U      0       0    0    tap0
      0.0.0.0          300.300.300.300  0.0.0.0          UG     0       0    0    eth0

    I can ping the LAN interface:

      root@server:# ping 10.25.255.254
      PING 10.25.255.254 (10.25.255.254) 56(84) bytes of data.
      64 bytes from 10.25.255.254: icmp_req=1 ttl=64 time=7.65 ms
      64 bytes from 10.25.255.254: icmp_req=2 ttl=64 time=7.49 ms
      64 bytes from 10.25.255.254: icmp_req=3 ttl=64 time=7.69 ms
      64 bytes from 10.25.255.254: icmp_req=4 ttl=64 time=7.31 ms
      64 bytes from 10.25.255.254: icmp_req=5 ttl=64 time=7.52 ms
      64 bytes from 10.25.255.254: icmp_req=6 ttl=64 time=7.42 ms

    But I can't ping past the LAN interface:

      root@server:# ping 10.25.255.1
      PING 10.25.255.1 (10.25.255.1) 56(84) bytes of data.
      From 10.25.255.254: icmp_seq=1 Redirect Host(New nexthop: 10.25.255.1)
      From 10.25.255.254: icmp_seq=2 Redirect Host(New nexthop: 10.25.255.1)

    I ran a tcpdump on my em1 interface (the LAN interface, which has the IP 10.25.255.254):

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on em1, link-type EN10MB (Ethernet), capture size 96 bytes
      08:21:13.449222 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 10, length 64
      08:21:13.458211 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
      08:21:14.450541 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 11, length 64
      08:21:14.458431 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
      08:21:15.451794 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 12, length 64
      08:21:15.458530 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
      08:21:16.453203 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 13, length 64

    So traffic is reaching the LAN interface, but it's not getting past it, and there is no answer from the 10.25.255.1 host. I'm not sure what I'm missing.
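    One avenue worth a hedged look, going only by the unanswered ARP requests in the capture: on pfSense, bridged traffic is filtered on the member interfaces by default, so rules and ARP traffic crossing BR0 can be dropped even when the bridge itself allows any-any. The relevant FreeBSD bridge tunables (System > Advanced > System Tunables) are:

      # Filter on the bridge interface itself instead of on each member
      net.link.bridge.pfil_bridge=1
      net.link.bridge.pfil_member=0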

  • Passenger does not start puppet master under nginx

    - by Anadi Misra
    On the server:

      [root@bangvmpllDA02 logs]# ruby -v
      ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]
      [root@bangvmpllDA02 logs]# puppet --version
      3.0.1

    and:

      [root@bangvmpllDA02 logs]# service nginx configtest
      nginx: the configuration file /apps/nginx/nginx.conf syntax is ok
      nginx: configuration file /apps/nginx/nginx.conf test is successful
      [root@bangvmpllDA02 logs]# service nginx status
      nginx (pid 25923 25921 25920 25917 25908) is running...
      [root@bangvmpllDA02 logs]#

    However, none of my agents are able to connect to the master; they all fail with errors like so:

      [amisr1@blramisr195602 ~]$ puppet agent --test --verbose --server bangvmpllda02.XXX.com
      Info: Creating a new SSL certificate request for blramisr195602.XXX.com
      Info: Certificate Request fingerprint (SHA256): 26:EB:08:1F:82:32:E4:03:7A:64:8E:30:A3:99:93:26:E6:66:B9:B0:49:B6:08:F9:67:CA:1B:0C:00:B9:1D:41
      Error: Could not request certificate: Error 405 on SERVER:
      <html>
      <head><title>405 Not Allowed</title></head>
      <body bgcolor="white">
      <center><h1>405 Not Allowed</h1></center>
      <hr><center>nginx</center>
      </body>
      </html>
      Exiting; failed to retrieve certificate and waitforcert is disabled

    When I check the logs on the puppet master:

      [root@bangvmpllDA02 logs]# tail puppet_access.log
      [05/Dec/2012:17:45:18 +0530] "GET /production/certificate/ca? HTTP/1.1" 404 162 "-" "Ruby"
      [05/Dec/2012:18:32:23 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-"
      [05/Dec/2012:18:33:33 +0530] "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-"
      [05/Dec/2012:18:33:33 +0530] "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-"
      [05/Dec/2012:18:33:33 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-"

    and the error logs show that nginx is not really able to process the request well:

      2012/12/05 18:33:33 [error] 25920#0: *23 open() "/etc/puppet/rack/public/production/certificate/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
      2012/12/05 18:33:33 [error] 25920#0: *24 open() "/etc/puppet/rack/public/production/certificate_request/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
      2012/12/05 18:47:56 [error] 25923#0: *27 open() "/etc/puppet/rack/public/production/certificate/ca" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate/ca? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
      2012/12/05 18:47:56 [error] 25923#0: *28 open() "/etc/puppet/rack/public/production/certificate_request/blramisr195602.XXX.com" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate_request/blramisr195602.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"

    Passenger does not show any application groups either:

      [root@bangvmpllDA02 nginx]# passenger-status
      ----------- General information -----------
      max      = 15
      count    = 0
      active   = 0
      inactive = 0
      Waiting on global queue: 0

      ----------- Application groups -----------
      [root@bangvmpllDA02 nginx]#

    Here's my nginx configuration:

      [root@bangvmpllDA02 logs]# cat ../nginx.conf
      user puppet;
      worker_processes 4;
      #error_log logs/error.log;
      #error_log logs/error.log notice;
      error_log logs/error.log info;
      #pid logs/nginx.pid;
      events {
          use epoll;
          worker_connections 1024;
      }
      http {
          include mime.types;
          default_type application/octet-stream;
          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
          access_log logs/access.log main;
          sendfile on;
          #tcp_nopush on;
          server_tokens off;
          #keepalive_timeout 0;
          keepalive_timeout 120;
          gzip on;
          gzip_http_version 1.1;
          gzip_disable "msie6";
          gzip_vary on;
          gzip_min_length 1100;
          gzip_buffers 64 8k;
          gzip_comp_level 3;
          gzip_proxied any;
          gzip_types text/plain text/css application/x-javascript text/xml application/xml;
          server {
              listen 80;
              server_name bangvmpllda02.XXXX.com;
              charset utf-8;
              #access_log logs/http.access.log main;
              location / {
                  root html;
                  index index.html index.htm index.php;
              }
              #error_page 404 /404.html;
              # redirect server error pages to the static page /50x.html
              error_page 500 502 503 504 /50x.html;
              location = /50x.html {
                  root html;
              }
              # proxy the PHP scripts to Apache listening on 127.0.0.1:80
              #location ~ \.php$ {
              #    proxy_pass http://127.0.0.1;
              #}
              # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
              location ~ \.php$ {
                  root html;
                  fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                  fastcgi_index index.php;
                  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                  fastcgi_param SCRIPT_NAME $fastcgi_script_name;
                  include fastcgi_params;
              }
              # deny access to .htaccess files, if Apache's document root
              # concurs with nginx's one
              location ~ /\.ht {
                  access_log off;
                  log_not_found off;
                  deny all;
              }
              location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
                  access_log off;
                  log_not_found off;
                  expires 2d;
              }
          }
          # Passenger needed for puppet
          passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18;
          passenger_ruby /usr/bin/ruby;
          passenger_max_pool_size 15;
          server {
              ssl on;
              listen 8140 default ssl;
              server_name bangvmpllda02.XXXX.com;
              passenger_enabled on;
              passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
              passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
              passenger_min_instances 5;
              access_log logs/puppet_access.log;
              error_log logs/puppet_error.log;
              root /etc/puppet/rack/public;
              ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXX.com.pem;
              ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXX.com.pem;
              ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
              ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
              ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
              ssl_prefer_server_ciphers on;
              ssl_verify_client optional;
              ssl_verify_depth 1;
              ssl_session_cache shared:SSL:128m;
              ssl_session_timeout 5m;
          }
      }

    and the puppet.conf:

      [main]
      # The Puppet log directory.
      # The default value is '$vardir/log'.
      logdir = /var/log/puppet
      # Where Puppet PID files are kept.
      # The default value is '$vardir/run'.
      rundir = /var/run/puppet
      dns_alt_names = devops.XXXX.com,devops
      confdir = /etc/puppet
      vardir = /var/lib/puppet
      storeconfigs = true
      storeconfigs_backend = puppetdb
      thin_storeconfigs = false
      async_storeconfigs = false
      ssl_client_header = SSL_CLIENT_S_D
      ssl_client_verify_header = SSL_CLIENT_VERIFY
      # Where SSL certificates are kept.
      # The default value is '$confdir/ssl'.
      ssldir = $vardir/ssl

    Any ideas where I am going wrong? I checked the directory permissions; /usr/share/puppet, /etc/puppet and /var/lib/puppet (and the files inside them) are owned by the puppet user.
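    Passenger reporting an empty application group list suggests the Rack application never spawned at all; the usual cause (an assumption here, since the rack directory contents aren't shown) is a missing or unreadable /etc/puppet/rack/config.ru. For Puppet 3.x the stock config.ru shipped in ext/rack looks roughly like this:

      # /etc/puppet/rack/config.ru -- approximately the file Puppet 3.x ships
      # in ext/rack; it should be owned by the puppet user, because Passenger
      # runs the application as the owner of config.ru
      $0 = "master"
      ARGV << "--rack"
      ARGV << "--confdir" << "/etc/puppet"
      ARGV << "--vardir"  << "/var/lib/puppet"
      require 'puppet/util/command_line'
      run Puppet::Util::CommandLine.new.execute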

  • Partner Infoline & Service Portal

    - by uwes
    As an EMEA-wide team we're supporting the daily work of our partners. Our team consists of 24 sales consultants; one third are specialized on the Partner Infoline. Partner Infoline's main focus is to deliver, actively and reactively, technical pre-sales knowledge about the Oracle hardware portfolio to our partners. With Infoline we assist our partners in their daily work; furthermore, we help educate our partners to be self-sufficient in all aspects of and questions about hardware configurations and hardware quotes. For our Infoline service we use a ticketing system called Service Portal, which is widely used within Oracle and delivers good, stable functionality and availability. Our Infoline service provides answers to questions concerning technical pre-sales matters related to hardware and the corresponding hardware-related software.* You can address these types of questions by sending them to our mailing list: [email protected] The Service Portal will send you an auto-reply including a unique reference number, which will be the identification for your request until it is closed. Depending on the complexity of the request, it might be necessary to forward it to our specialists (servers, storage, tape, Solaris etc.) located all over Europe. In order to make the whole process smooth, here are some recommendations: write your request in English (this saves translation time when it has to be forwarded to the specialists); state your interest area clearly in the title, for example "memory in M4000 server"; and keep to one request per subject, which makes it easier to maintain and keep the correspondence clear and simple. The rule of the service is to provide an answer quickly, which means the vast majority of requests are answered within a couple of hours. However, please keep in mind that some requests may need extra work, involving the appropriate person within Europe or even in the US; therefore there is no official SLA for this service.

    * This excludes Oracle "classic" products and post-sales support. The latter should still be addressed through MOS (http://support.oracle.com).

  • AdMob banner not getting removed from superview

    - by Anil gupta
    I am developing a 2D game using the cocos2d framework. In this game I am using AdMob for advertising in some classes, not all of them, but the AdMob banner is visible in every class, and after some time the game crashes as well. I don't understand how the AdMob banner ends up in every class; in fact I have not declared it in the RootViewController class. Can anyone suggest how to integrate AdMob in a cocos2d game so the banner appears in particular classes only, not in every class? I am using the latest Google AdMob SDK; my code is below. Thanks in advance.

      -(void)AdMob {
          NSLog(@"ADMOB");
          CGSize winSize = [[CCDirector sharedDirector] winSize];
          // Create a view of the standard size at the bottom of the screen.
          if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
              bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2-364, size.height - GAD_SIZE_728x90.height, GAD_SIZE_728x90.width, GAD_SIZE_728x90.height)];
          } else {
              // It's an iPhone
              bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2-160, size.height - GAD_SIZE_320x50.height, GAD_SIZE_320x50.width, GAD_SIZE_320x50.height)];
          }
          if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
              bannerView_.adUnitID = @"a15062384653c9e";
          } else {
              bannerView_.adUnitID = @"a15062392a0aa0a";
          }
          bannerView_.rootViewController = self;
          [[[CCDirector sharedDirector] openGLView] addSubview:bannerView_];
          [bannerView_ loadRequest:[GADRequest request]];
          GADRequest *request = [[GADRequest alloc] init];
          request.testing = [NSArray arrayWithObjects:GAD_SIMULATOR_ID, nil]; // Simulator
          [bannerView_ loadRequest:request];
      }

      // best practice for removing the bannerView_
      -(void)removeSubviews {
          NSArray *subviews = [[CCDirector sharedDirector] openGLView].subviews;
          for (id SUB in subviews) {
              [(UIView *)SUB removeFromSuperview];
              [SUB release];
          }
          NSLog(@"remove from view");
      }

      // this makes the refreshTimer count
      -(void)targetMethod:(NSTimer *)theTimer {
          // increase the timer and seconds
          elapsedTime++;
          seconds++;
          // increase the minutes every 60 seconds
          if (seconds >= 60) {
              seconds = 0;
              minutes++;
              [self removeSubviews];
              [self AdMob];
          }
          NSLog(@"TIME: %02d:%02d", minutes, seconds);
      }
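    One likely contributor, offered as a reading of the code rather than a confirmed diagnosis: removeSubviews releases every subview of the shared openGLView, including views this class never retained, which over-releases them and can crash; and because the banner is attached to the shared GL view, it naturally survives scene changes. A hedged sketch that touches only the banner itself:

      // Hedged sketch: remove and release only the banner we own,
      // instead of releasing arbitrary subviews of the shared GL view.
      -(void)removeAdBanner {
          [bannerView_ removeFromSuperview];  // detach from the GL view
          [bannerView_ release];              // release only what we alloc'd
          bannerView_ = nil;
      }

    Calling such a method from the exit path of each scene that shows the ad would keep the banner from leaking into scenes that shouldn't display it.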

  • Does placing Google Analytics code in an external file affect statistics?

    - by Jacob Hume
    I'm working with an outside software vendor to add Google Analytics code to their web app so that we can track its usage. Their developer suggested that we place the code in an external ".js" file, which he could include in the layout of his application. The StackOverflow question "Google Analytics: External .js file" covers the technical aspect, so apparently tracking is possible via an external file. However, I'm not quite satisfied that this won't have negative implications. Does including the tracking code as an external file affect the statistics collected by Google?
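    For context, a hedged sketch of what such an external file might contain -- the classic asynchronous ga.js snippet of that era moved verbatim into its own file, with a placeholder account ID:

      // analytics.js -- hypothetical external file included by the vendor's layout
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXX-X']);  // placeholder account ID
      _gaq.push(['_trackPageview']);

      (function() {
        // Load ga.js asynchronously, matching the page's protocol
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();

    Since the snippet is asynchronous either way, moving it into an external file mainly adds one extra script fetch; the tracking calls themselves are unchanged.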

  • Mod Rewrite - directing HTTP/HTTPS traffic to the appropriate virtual hosts

    - by kce
    I have an Apache2 web server (v. 2.2.16) running on Debian, hosting three virtual hosts. The first two hosts are HTTP only (server1 and server2). The last host is HTTPS only (server3). My virtual host configuration files can be found at pastebin. I would like to use mod_rewrite to get the following behavior:

    1. Any request for http://server3 is re-directed to https://server3.
    2. Any request for either https://server1 or https://server2 is re-directed to http://server1 or http://server2, as appropriate.

    Currently, requesting http://server3 gives you a 403 because indexing is disabled for that host, and a request for https://server1 or https://server2 will resolve as https://server3 (as it's the only virtual host running SSL). This behavior is not desirable.

    So far I have added a rewrite rule to the central configuration file (myServerWideConfs.conf), with unfortunately no effect. I was under the impression that this rule (or something similar) should rewrite all https:// requests for server1 and server2 to the proper http:// request:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} !^server3 [NC]
      RewriteRule (.*) http://%{HTTP_HOST}

    My question is two-fold: What mod_rewrite rules should I use to accomplish this, and where should they go? Debian's packaging of Apache has a pretty granular (i.e., fractured) configuration file layout; should my rewrite rules go in /etc/apache2/apache2.conf, /etc/apache2/conf.d/myServerWideConfs.conf, or the individual virtual host files? Is mod_rewrite the right tool to accomplish this, or am I missing something in my greater Apache configuration?
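    A hedged sketch of one common arrangement (vhost names are illustrative, and it assumes the rules may live in the vhost files rather than the server-wide config): the https-to-http rule has to sit in the SSL vhost, because that is the only vhost that ever receives https requests, while the http-to-https rule for server3 sits in a plain port-80 vhost for that name:

      # Port-80 vhost for server3: push everything to HTTPS
      <VirtualHost *:80>
          ServerName server3
          RewriteEngine On
          RewriteRule ^(.*)$ https://server3$1 [R=301,L]
      </VirtualHost>

      # The existing port-443 vhost: bounce foreign hostnames back to HTTP
      <VirtualHost *:443>
          ServerName server3
          # ... SSL directives ...
          RewriteEngine On
          RewriteCond %{HTTP_HOST} !^server3 [NC]
          RewriteRule ^(.*)$ http://%{HTTP_HOST}$1 [R=301,L]
      </VirtualHost>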

  • SOA Suite HealthCare Integration Architecture

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration is an integrated, best-of-breed suite that helps healthcare organizations rapidly design, assemble, deploy and manage highly agile and adaptable business applications. It helps the healthcare industry reduce operating costs and speeds time-to-market by delivering a consistent user interface, management console and monitoring environment, as well as healthcare libraries and templates for healthcare customer projects. Oracle SOA Suite for healthcare integration is fully configurable and extensible, providing a highly flexible platform for collaboration across all healthcare domains.

    Healthcare message standards support:

      Messaging standards - HL7, HIPAA, Custom, X12N
      Exchange standards - MLLP (v1.0, v2.0), TCP/IP, File, FTP, SFTP, JMS

    Simplified dashboards and customized reports give users advanced monitoring capabilities that support end-to-end healthcare message tracking. A toolkit for rapid HIPAA 5010 upgrade and compliance provides pre-defined healthcare integration mappings for HIPAA standards that are fully customizable and extensible. MLLP-HA provides easy failover and disaster recovery, keeping the system running over the long term without issue. Audit keeps track of all system changes, and alerts and notifications (SMS, email, etc.) help users take fast action and provide tracking in real time.

  • ownCloud WebDAV interface seems to be broken

    - by Nobleleader13245
    I've been trying to host ownCloud on my server, but every time I try, it tells me this:

      Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken. Please double check the installation guides.

    This is my setup:

      Windows Server 2012 R2
      IIS 8.5
      PHP 5.5.11
      ownCloud 6.0.3
      MySQL 5.6.17

    I tried googling the error but I can't seem to find anything useful. Some say I should check whether this works: https://cloud.mcsoftworks.net/remote.php/webdav/ -- and yes, I can navigate to this folder and I can open files from there. The calendar works, and I can also just upload files from https://cloud.mcsoftworks.net/; the only thing that doesn't seem to work is the sync client. The sync client doesn't say anything, it just doesn't connect (screenshot: http://prntscr.com/3p2apz).

    This is the error log:

      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:56:00+00:00
      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:47+00:00
      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:34+00:00
      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:34+00:00
      Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:37+00:00
      Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00
      Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00
      Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00
      Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:51:24+00:00

    This is my php.ini: http://pastebin.com/es3MB8Uh

    Does anyone have any idea how I should get this to work? I've been trying for about 14 days now and it starts to annoy me =P
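    The log itself narrows this down: the server cannot resolve its own public hostname ("Could not resolve host: cloud.mcsoftworks.net"), and ownCloud's WebDAV self-check works by curling its own URL from the server side. A hedged sketch of the usual workaround on Windows (assuming the site is served from this same machine; adjust the address if not):

      # C:\Windows\System32\drivers\etc\hosts -- map the public name locally
      127.0.0.1    cloud.mcsoftworks.net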

  • puppet master REST API returns 403 when running under Passenger, works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

      ### Authenticated paths - these apply only when the client
      ### has a valid certificate and is thus authenticated

      # allow nodes to retrieve their own catalog
      path ~ ^/catalog/([^/]+)$
      method find
      allow $1

      # allow nodes to retrieve their own node definition
      path ~ ^/node/([^/]+)$
      method find
      allow $1

      # allow all nodes to access the certificates services
      path ~ ^/certificate_revocation_list/ca
      method find
      allow *

      # allow all nodes to store their reports
      path /report
      method save
      allow *

      # unconditionally allow access to all file services
      # which means in practice that fileserver.conf will
      # still be used
      path /file
      allow *

      ### Unauthenticated ACL, for clients for which the current master doesn't
      ### have a valid certificate; we allow authenticated users, too, because
      ### there isn't a great harm in letting that request through.

      # allow access to the master CA
      path /certificate/ca
      auth any
      method find
      allow *

      path /certificate/
      auth any
      method find
      allow *

      path /certificate_request
      auth any
      method find, save
      allow *

      path /facts
      auth any
      method find, search
      allow *

      # this one is not stricly necessary, but it has the merit
      # of showing the default policy, which is deny everything else
      path /
      auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

      [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXX.com
      [sudo] password for amisr1:
      Starting Puppet client version 3.0.1
      Warning: Unable to fetch my node definition, but the agent run will continue:
      Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
      Info: Retrieving plugin
      Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
      Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
      Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
      Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
      Using cached catalog
      Error: Could not retrieve catalog; skipping run
      Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the puppet site, it is better to regulate access in auth.conf for reaching the file server, and then allow the file server to serve all):

      [files]
      path /apps/puppet/files
      allow *

      [private]
      path /apps/puppet/private/%H
      allow *

      [modules]
      allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

      nginx version: nginx/1.3.9
      built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
      TLS SNI support enabled
      configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and this is the standard nginx puppet master conf:

      server {
          ssl on;
          listen 8140 ssl;
          server_name _;
          passenger_enabled on;
          passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
          passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
          passenger_min_instances 5;
          access_log logs/puppet_access.log;
          error_log logs/puppet_error.log;
          root /apps/nginx/html/rack/public;
          ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
          ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
          ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
          ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
          ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
          ssl_prefer_server_ciphers on;
          ssl_verify_client optional;
          ssl_verify_depth 1;
          ssl_session_cache shared:SSL:128m;
          ssl_session_timeout 5m;
      }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

      [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
      async_storeconfigs = false
      authconfig = /etc/puppet/namespaceauth.conf
      autosign = /etc/puppet/autosign.conf
      catalog_cache_terminus = store_configs
      confdir = /etc/puppet
      config = /etc/puppet/puppet.conf
      config_file_name = puppet.conf
      config_version = ""
      configprint = all
      configtimeout = 120
      dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
      deviceconfig = /etc/puppet/device.conf
      fileserverconfig = /etc/puppet/fileserver.conf
      genconfig = false
      hiera_config = /etc/puppet/hiera.yaml
      localconfig = /var/lib/puppet/state/localconfig
      name = config
      rest_authconfig = /etc/puppet/auth.conf
      storeconfigs = true
      storeconfigs_backend = puppetdb
      tagmap = /etc/puppet/tagmail.conf
      thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

  • Random Slow Response

    - by ARehman
    We have an ASP.NET MVC 1.0 application running on Windows Server 2008 Standard (32-bit), Dual Core Xeon (3.0 GHz), 2 GB RAM. Most of the time the application renders a response in 3-4 seconds, but sometimes users get a very late response, with delays of up to 40 seconds or more than a minute. It happens in the following way: a user browses a page, stays idle for 5, 10 or 15 minutes, then tries to browse the same page or some other page. Now there is a chance that he will see a late response, even though the app pool is still up and running. This can happen with any arbitrary page. We have tried the following (with observations):

    Moved the application to a stand-alone web server.
    The app pool idle shutdown time is 60 minutes, and there are no abrupt shutdowns/restarts.
    CPU and memory don't spike.
    There are no delays in SQL queries.
    Modified the app pool setting to run in classic mode; it didn't help.
    Plugged in a custom module to log all requests which took more than 5 seconds to complete; it didn't pick up any request of interest.
    Enabled 'Failed Request Tracing' to log all requests which take 20 or more seconds to complete; it didn't log anything.
    Event Viewer, the HTTPERR log, W3SVC logs and WAS logs don't indicate anything; HTTPERR only has '_ _ Timer_ConnectionIdle _ _' entries.
    There is not much traffic to the server; this can happen even if only two users are active.

    Next we captured TCP/IP traffic on both the user and server ends with Wireshark, and below, in brief, is what the slowness looks like:

    1. The browser sends a request for ~/User/Home/ (a GET request) by setting up a receiving end point using port 'wlbs (port 2504)'. I'm not sure if it could be a problem in some way that the browser didn't handshake with the server first and assumed that the last connection was still open, whereas I had browsed the same page 4 minutes earlier and hadn't performed any activity with the site after that. If I look at the HTTPERR log, it has a '_ _ Timer_ConnectionIdle _ _' entry for my last activity with the server.
    2. The browser (I was using Chrome) waits for any response from the server, doesn't find any, then starts retransmitting the same request using the same end point after increasing wait intervals, e.g. after 8, 18, 29, 40, 62, and 92 seconds. All these GET requests were received by the server as well, but the server didn't send any packet to the client.
    3. The browser didn't see any response on the end point it set up in point 1, so it opened a new end point 'optiwave-lm (port 2524)', did a handshake with the server, and transmitted the same request again. The server received it, processed it, and returned a successful response.
    4. What happened to the earlier 6-7 requests? Were they passed on to HTTP.SYS or not? Why did Failed Request Tracing not log anything? We haven't found any clue yet. The server had served the same page successfully just 4 minutes earlier.

    Looking forward to more suggestions/solutions. -- Thanks

  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a proxy-based web service using Oracle Service Bus. In a stage in the request pipeline, you are using a Publish action to publish the incoming message to a JMS queue using a business service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler? This situation could occur because of a faulty connection, a suspended queue, or some other reason. Here is the request pipeline in our simple test case, with an error handler added to the message flow containing a simple Log action. By default, the Publish action will invoke the service in a fire-and-forget fashion; therefore any exception that occurs in the outbound transport will go unnoticed, as shown in the following invocation trace. So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties in the outbound request: URI, Quality of Service, Mode, Retry parameters, and Message Priority. Now add the Routing Options action to the Request Action as shown below. Click the Routing Options to display its properties in the Properties View, select the QoS option to set the Quality of Service element, select Exactly Once to override the default setting, and republish the project. The invocation will now block until the message is completely processed. Trying the same test case as earlier generates the following invocation trace, showing that the error handler is now triggered.

  • Log requests in Apache upon arrival

    - by Patrick Cornelissen
    Is it possible to log requests in Apache when they arrive, rather than when they have been processed? We have a pretty big web-based application, and the way it's currently logged is sometimes confusing to read, because each request with its subrequests is logged "upside down": Apache logs a request after it has been processed, so the initial request is the last entry for this "query". We're wondering if there is a way to log on arrival.
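    A hedged option worth evaluating: Apache's mod_log_forensic writes one log line when a request arrives (prefixed with +) and a matching line when it completes (prefixed with -), which yields arrival-ordered logging. A minimal sketch, assuming the module is available in the distribution's Apache build:

      # Load the forensic logging module (the path varies by distribution)
      LoadModule log_forensic_module modules/mod_log_forensic.so

      # Each request is logged on arrival with a leading '+' and a unique ID,
      # then again with a leading '-' when processing finishes
      ForensicLog logs/forensic.log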

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in Session or PageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst, you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever be storing references to components anywhere other than request scope (or maybe backing bean scope). So double-check the scope of those binding attributes that map component references into a managed bean in your applications.

    Why is it Such a Common Mistake?

    At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale comes down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI. Of course that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but which at the same time should not be tightly bound to a specific instance of any single UI component. So here's the problem: I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed); you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That of course is fine in context -- e.g. a handler within the same request scoped bean that's got the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope and then simply re-check it in the same way, leading of course to this mistake.

    Turn it on its Head

    So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component, and have the component reference and manipulate that state as needed rather than owning it. This is what I call the UIManager pattern.

    Defining the Pattern

    The UIManager pattern really is very simple. The premise is that every application should define a session scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI component related state. The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session scoped managed bean (#{uiManager}) in faces-config.xml:

      <managed-bean>
        <managed-bean-name>uiManager</managed-bean-name>
        <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
        <managed-bean-scope>session</managed-bean-scope>
      </managed-bean>

    The pattern is to then inject this instance of the class into any other managed bean (usually request scope) that needs it, using a managed property:

      <managed-bean>
        <managed-bean-name>mainPageBB</managed-bean-name>
        <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
        <managed-bean-scope>request</managed-bean-scope>
        <managed-property>
          <property-name>uiManager</property-name>
          <property-class>oracle.demo.view.state.UIManager</property-class>
          <value>#{uiManager}</value>
        </managed-property>
      </managed-bean>

    In this case the backing bean in question needs a member variable to hold and reference the UIManager:

      private UIManager _uiManager;

    This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager) and getUiManager()). This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties; for example, going back to the tabset example, you might have something like this:

      <af:panelTabbed>
        <af:showDetailItem text="First"
            disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
          ...
        </af:showDetailItem>
        <af:showDetailItem text="Second"
            disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
          ...
        </af:showDetailItem>
        ...
      </af:panelTabbed>

    Here the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (Just an example; you could choose to store just an identifier for the selected tab, or whatever -- how you choose to store the state within the UIManager is up to you.)

    Get into the Habit

    So we can see that the UIManager pattern is no great strain to implement for an application, and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or shotgun-scattered UI flags on the session, which are hard to maintain. If you take the approach of always accessing all UI state via the UIManager, or perhaps a pageScope focused variant of it, you'll find your applications much easier to understand and maintain. Do it today!
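    As a concrete illustration of the "Map in which individual flags can be created" approach, here is a minimal hypothetical sketch of such a class (this code is not from the original article; the Map-of-Maps layout simply mirrors the tabset example above):

      // Hypothetical minimal UIManager, registered in faces-config.xml as a
      // session scoped bean named "uiManager".
      import java.io.Serializable;
      import java.util.HashMap;
      import java.util.Map;

      public class UIManager implements Serializable {

          // Generic bag of UI state, keyed by feature name, e.g.
          // settings.get("MAIN_TABSET_STATE").get("FIRST") -> Boolean
          private final Map<String, Map<String, Object>> settings =
                  new HashMap<String, Map<String, Object>>();

          public Map<String, Map<String, Object>> getSettings() {
              return settings;
          }

          // Convenience accessor that creates a feature's state map on first
          // use, so callers never have to deal with a missing entry
          public Map<String, Object> feature(String key) {
              Map<String, Object> state = settings.get(key);
              if (state == null) {
                  state = new HashMap<String, Object>();
                  settings.put(key, state);
              }
              return state;
          }
      }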


  • New install of Steam not running on new install of Ubuntu 13.10

    - by inferKNOX
    I tried purging Steam, un-installing and reinstalling Steam, deleting /home/.steam/share/steam/appcache/, and deleting everything in /home/.steam/share/steam/, and nothing helped. I installed Ubuntu, then installed Steam into it directly afterward. I installed Steam from the Ubuntu Software Centre, launched it, it updated 206 MB, then closed. When I try to launch it again, it momentarily flashes the checking-for-updates dialogue, then closes, every time. Then (in an unrelated event) Ubuntu said some system updates were necessary, and one of them was the Steam launcher. I did the update and tried to launch Steam; same story. I really need help on this, as I did a complete re-install of Ubuntu and then Steam again, and it did not help at all. Here's the log:

      user@computer:~$ steam
      Running Steam on ubuntu 13.10 64-bit
      STEAM_RUNTIME is enabled automatically
      Installing breakpad exception handler for appid(steam)/version(1381282832_client)
      Installing breakpad exception handler for appid(steam)/version(1381282832_client)
      Installing breakpad exception handler for appid(steam)/version(1381282832_client)
      unlinked 0 orphaned pipes
      removing stale semaphore last operated on by process 2297 with name 0eBlobRegistryMutex_313E4D748EE12691A95DDE8913185F7E
      removing stale semaphore last operated on by process 2297 with name 0eBlobRegistrySignal_313E4D748EE12691A95DDE8913185F7E
      removing stale semaphore last operated on by process 2297 with name 0emSteamEngineInstance
      removing stale semaphore last operated on by process 2297 with name 0eSteamEngineLock
      Gtk-Message: Failed to load module "overlay-scrollbar"
      Gtk-Message: Failed to load module "unity-gtk-module"
      Installing breakpad exception handler for appid(steam)/version(1381282832_client)
      Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
      Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
      Fontconfig warning: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 78: saw unknown, expected number
      [1030/115016:WARNING:proxy_service.cc(958)] PAC support disabled because there is no system implementation
      Installing breakpad exception handler for appid(steam)/version(1381282832_client)
      Installing breakpad exception handler for appid(steam)/version(1381282832_client)
      Installing breakpad exception handler for appid(steam)/version(1381282832_client)
      Installing breakpad exception handler for appid(steam)/version(1381282832_client)
      Steam: An X Error occurred
      X Error of failed request: BadValue (integer parameter out of range for operation)
      Major opcode of failed request: 18 (X_ChangeProperty)
      Value in failed request: 0x0
      Serial number of failed request: 105
      xerror_handler: X failed, continuing
      Uploading dump (out-of-process) [proxy ''] /tmp/dumps/crash_20131030115012_1.dmp
      /home/user/.local/share/Steam/steam.sh: line 717: 2650 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
      Finished uploading minidump (out-of-process): success = yes
      response: CrashID=bp-484ddae7-0b1c-4ae4-be84-42a9c2131030

    Thanks in advance.

    Read the article

  • Can router configuration cause a drop in download rate?

    - by behrooz
    My download speed has gone crazy since I changed the router's IP address, and nothing got fixed by doing a factory reset. The speed was 1024 kb/s (128 kB/s), but it is 200 kb/s (max) right now. I mean, it works fine if a request is small (i.e. a plain HTTP request), but it gets slow if a request has a big response. Help me please. (I have been downloading VS2010 for three days.)
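    One possible cause of "small responses fine, large transfers slow" is an MTU/fragmentation mismatch introduced by a changed router setting; this is only a guess, but it is quick to probe. The sketch below assumes Linux iputils ping (on Windows the rough equivalent is ping -f -l 1472), and the target host is just a placeholder:

        # 1472 bytes of payload + 28 bytes of IP/ICMP headers = 1500, the usual Ethernet MTU.
        # -M do forbids fragmentation; if this fails while smaller sizes succeed,
        # lower the router's MTU (or enable MSS clamping) to match the working size.
        ping -M do -s 1472 example.com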

    Read the article

  • Apache, modifying response codes from 404 to 301

    - by user72539
    Hi, I'm running a Magento installation on an Apache server. There are many pages both indexed in Google and linked to from external sites. I can't use 301 redirects in a .htaccess file, as I can't be sure I would catch all the links. At the moment all requests are rewritten through Magento, and if a request isn't found Magento returns a 404 Not Found. Is there a way of using one of the Apache modules to filter the response* from Magento and, if a 404 Not Found is being sent back, replace the response with a standard 301 redirect to the home page? E.g.:

        Request to Magento -> Apache -> rewrite to Magento's index.php -> page processed.
        Response: if the request exists -> return results (200);
                  if the request doesn't exist -> return 404 -> Apache filter changes response -> return 301 redirect to /

    I appreciate any help. Thanks, Jon

    * As far as I am aware, mod_rewrite is only used to rewrite requests and doesn't allow modification of responses.
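    That reading of mod_rewrite is correct: it acts on requests, not responses. One hedged approach is Apache's ErrorDocument, which hands 404s to a script that emits the 301 — with the caveat that Magento normally renders its own 404 page (status already sent), so this only catches requests Apache itself 404s; Magento's misses would need its "no route" handler changed inside the application instead. File names below are made up for illustration:

        # httpd.conf / .htaccess: delegate Apache-level 404s to a tiny redirect script
        ErrorDocument 404 /redirect-home.php

        <?php
        // redirect-home.php: replace the 404 with a permanent redirect to the home page
        header('Location: /', true, 301);

    Note that pointing ErrorDocument at a full URL (http://...) instead of a local path would make Apache issue a 302, not a 301.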

    Read the article

  • Diagnosing xmodmap errors

    - by intuited
    I'm getting this error when trying to use xmodmap to get rid of Caps Lock:

        $ xmodmap -e 'clear Lock'
        X Error of failed request: BadValue (integer parameter out of range for operation)
          Major opcode of failed request: 118 (X_SetModifierMapping)
          Value in failed request: 0x17
          Serial number of failed request: 8
          Current serial number in output stream: 8

    I'm running Xfce on Maverick Meerkat (10.10). This problem did not occur before I added the Keyboard Layouts applet to a panel; before doing that, I was able to run my xmodmap script to swap Esc and Caps Lock:

        !Remap Caps_Lock as Escape
        remove Lock = Caps_Lock
        keysym Caps_Lock = Escape

    It may be relevant that I chose Alt-CapsLock as the keyboard switch combo in the Keyboard Layouts preferences. I've had a similar problem before, on a different machine, running Openbox. On that machine, the problem started when I upgraded to Lucid, and it has persisted in Maverick (release 10.10). I reported a bug in xorg. However, it remains unclear whether it's really a problem with xorg, or if I'm just doing something wrong in my configuration. Have other people experienced this problem? Can someone shed some light on what's going on here? There are quite a few layers involved, and I don't understand any of them particularly well, so any information would be helpful.

    Update: I've discovered that the problem is specifically triggered by adding the Canada layout variant "Multilingual" (ca-multix). If I instead add the variant "Multilingual (first part)", the problem does not occur. I think this will probably end up being a usable workaround, but I don't yet know what the difference between these variants is. I've filed a freedesktop issue, and am commenting on a related ubuntu issue.
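    Since the conflict appears to be xmodmap's SetModifierMapping fighting the XKB-based layout switcher, a sketch of doing the same remap entirely through XKB may sidestep the error — assuming the caps:escape option exists in the installed xkeyboard-config version:

        # Inspect the current modifier map first (Caps_Lock normally sits on 'lock')
        xmodmap -pm

        # Let XKB turn Caps Lock into Escape instead of using an xmodmap script
        setxkbmap -option caps:escape

    One caveat: switching layouts from the applet can re-apply the layout without the -option settings, so this may need to be re-run (or configured in the desktop's keyboard options) after a layout change.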

    Read the article

  • Adobe Coldfusion Railo OpenBD Apache Tomcat Multiple Sites

    - by chris hough
    Here's what I am trying to do, unless I am crazy. I am trying to use Tomcat with multiple workers; so far I have OpenBD working, but I'm having trouble with Railo, and will tackle Adobe after that:

    - each engine deployed as a WAR, separated by different workers
    - I wanted to keep both the sites and the engines inside my Sites directory
    - I have to remap the WEB-INF symlink when I switch engines (I have not found a way around this)

    My thought is to have everything separated into modules, and I want to be able to execute both CFML and PHP code in a single site. Ideally, it would be amazing if there were a way to not have to remap the symlink as well. Thoughts? Can this be done? I am trying to mimic how this would be set up on a live server, not using Eclipse, for example. Here is what I am working with so far.

    My Apache workers.properties:

        worker.list=openbd, openbdadmin, railo, railoadmin
        worker.openbd.type=ajp13
        worker.openbd.host=local.mydev.openbd
        worker.openbd.port=8009
        worker.openbdadmin.type=ajp13
        worker.openbdadmin.host=local.admin.openbd
        worker.openbdadmin.port=8009
        worker.railo.type=ajp13
        worker.railo.host=local.mydev.railo
        worker.railo.port=8009
        worker.railoadmin.type=ajp13
        worker.railoadmin.host=local.admin.railo
        worker.railoadmin.port=8009

    My Tomcat server.xml hosts:

        <Host name="local.admin.openbd" appBase="/Users/[my username]/Websites/coldfusion.engines"
              unpackWARs="false" autoDeploy="true" xmlValidation="true" xmlNamespaceAware="false">
          <Context path="" docBase="openbd/" reloadable="true" privileged="true"
                   antiResourceLocking="false" antiJARLocking="false" allowLinking="true"/>
        </Host>
        <Host name="local.admin.railo" appBase="/Users/[my username]/Websites/coldfusion.engines"
              unpackWARs="false" autoDeploy="true" xmlValidation="true" xmlNamespaceAware="false">
          <Context path="" docBase="railo/" reloadable="true" privileged="true"
                   antiResourceLocking="false" antiJARLocking="false" allowLinking="true"/>
        </Host>
        <Host name="local.mydev.openbd" appBase="/Users/[my username]/Websites/coldfusion.engines"
              unpackWARs="false" autoDeploy="true" xmlValidation="true" xmlNamespaceAware="false">
          <Context path="" docBase="/Users/[my username]/Websites/example.mydev/wwwroot/" reloadable="true"
                   privileged="true" antiResourceLocking="false" antiJARLocking="false" allowLinking="true"/>
        </Host>
        <Host name="local.mydev.railo" appBase="/Users/[my username]/Websites/coldfusion.engines"
              unpackWARs="false" autoDeploy="true" xmlValidation="true" xmlNamespaceAware="false">
          <Context path="" docBase="/Users/[my username]/Websites/example.mydev/wwwroot/" reloadable="true"
                   privileged="true" antiResourceLocking="false" antiJARLocking="false" allowLinking="true"/>
        </Host>

    My Apache vhosts:

        ServerName local.admin.openbd
        DocumentRoot /Users/[my username]/Websites/coldfusion.engines/openBD/
        # Mount OpenBD and tell it to only serve cfml files
        JkMount /*.cfm openbdadmin
        ErrorLog "/Users/[my username]/Websites/apache.logs/local_openbdadmin_error.log"

        ServerName local.admin.railo
        DocumentRoot /Users/[my username]/Websites/coldfusion.engines/railo/
        # Mount Railo and tell it to only serve cfml files
        JkMount /*.cfm railoadmin
        ErrorLog "/Users/[my username]/Websites/apache.logs/local_railoadmin_error.log"

        ServerName local.mydev
        DocumentRoot /Users/[my username]/Websites/example.mydev/wwwroot
        ErrorLog "/Users/[my username]/Websites/apache.logs/local_example_mydev_error.log"

        ServerName local.mydev.openbd
        DocumentRoot /Users/[my username]/Websites/example.mydev/wwwroot
        # Mount OpenBD and tell it to only serve cfml files
        JkMount /*.cfm openbd
        ErrorLog "/Users/[my username]/Websites/apache.logs/local_example_mydev_openbd_error.log"

        ServerName local.mydev.railo
        DocumentRoot /Users/[my username]/Websites/example.mydev/wwwroot
        JkMount /*.cfm railo
        ErrorLog "/Users/[my username]/Websites/apache.logs/local_example_mydev_railo_error.log"

    The folder structure I am using:

        websites/apache.logs/
        websites/coldfusion.engines/
        websites/coldfusion.engines/cfusion/
        websites/coldfusion.engines/openBD/
        websites/coldfusion.engines/railo/
        websites/example.mydev/
        websites/example.mydev/wwwroot/
        websites/example.mydev/wwwroot/index.cfm
        websites/example.mydev/wwwroot/index.htm
        websites/example.mydev/wwwroot/index.php

    Error log output:

        [Thu Aug 27 00:54:50.443 2009] [11279:2686719776] [info] init_jk::mod_jk.c (3183): mod_jk/1.2.28 initialized
        [Thu Aug 27 00:54:51.346 2009] [11280:2686719776] [info] init_jk::mod_jk.c (3183): mod_jk/1.2.28 initialized
        [Thu Aug 27 00:55:18.963 2009] [11284:2686719776] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8009 failed (errno=61)
        [Thu Aug 27 00:55:18.963 2009] [11284:2686719776] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8009) (errno=61)
        [Thu Aug 27 00:55:18.963 2009] [11284:2686719776] [error] ajp_send_request::jk_ajp_common.c (1507): (openbdadmin) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61)
        [Thu Aug 27 00:55:18.963 2009] [11284:2686719776] [info] ajp_service::jk_ajp_common.c (2447): (openbdadmin) sending request to tomcat failed (recoverable), because of error during request sending (attempt=1)
        [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8009 failed (errno=61)
        [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8009) (errno=61)
        [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [error] ajp_send_request::jk_ajp_common.c (1507): (openbdadmin) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61)
        [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [info] ajp_service::jk_ajp_common.c (2447): (openbdadmin) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
        [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [error] ajp_service::jk_ajp_common.c (2466): (openbdadmin) connecting to tomcat failed.
        [Thu Aug 27 00:55:19.063 2009] [11284:2686719776] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=openbdadmin
        [Thu Aug 27 00:55:20.377 2009] [11283:2686719776] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8009 failed (errno=61)
        [Thu Aug 27 00:55:20.377 2009] [11283:2686719776] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8009) (errno=61)
        [Thu Aug 27 00:55:20.377 2009] [11283:2686719776] [error] ajp_send_request::jk_ajp_common.c (1507): (railoadmin) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61)
        [Thu Aug 27 00:55:20.377 2009] [11283:2686719776] [info] ajp_service::jk_ajp_common.c (2447): (railoadmin) sending request to tomcat failed (recoverable), because of error during request sending (attempt=1)
        [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [info] jk_open_socket::jk_connect.c (594): connect to 127.0.0.1:8009 failed (errno=61)
        [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (127.0.0.1:8009) (errno=61)
        [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [error] ajp_send_request::jk_ajp_common.c (1507): (railoadmin) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61)
        [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [info] ajp_service::jk_ajp_common.c (2447): (railoadmin) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
        [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [error] ajp_service::jk_ajp_common.c (2466): (railoadmin) connecting to tomcat failed.
        [Thu Aug 27 00:55:20.477 2009] [11283:2686719776] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=railoadmin
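    Two things stand out in that log. First, errno=61 is "connection refused" (this looks like a Mac, given the /Users paths), so nothing was listening on 127.0.0.1:8009 at all — the first check is that Tomcat is running and its AJP connector is enabled. Second, all four workers resolve to the same host:port, so mod_jk sees one backend; that works only if a single Tomcat serves all four virtual hosts. If the goal is to isolate the engines instead, one sketch — assuming each engine gets its own Tomcat Service, with the connector ports being arbitrary choices — is:

        <!-- server.xml: one AJP connector per engine, on different ports -->
        <Service name="openbd">
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>
          <Engine name="openbd" defaultHost="local.mydev.openbd">
            <!-- the local.admin.openbd / local.mydev.openbd Hosts go here -->
          </Engine>
        </Service>
        <Service name="railo">
          <Connector port="8010" protocol="AJP/1.3" redirectPort="8443"/>
          <Engine name="railo" defaultHost="local.mydev.railo">
            <!-- the local.admin.railo / local.mydev.railo Hosts go here -->
          </Engine>
        </Service>

        # workers.properties: match the ports (host can simply be localhost)
        worker.openbd.port=8009
        worker.openbdadmin.port=8009
        worker.railo.port=8010
        worker.railoadmin.port=8010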

    Read the article

< Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >