Search Results

Search found 52885 results on 2116 pages for 'http redirect'.


  • Lighttpd + Django on Gentoo takes 10 seconds to respond

    - by plaetzchen
    I want to run a Django site on lighttpd with FastCGI on a Gentoo machine. Every time I try to access the site, I get a response after more or less exactly 10 seconds. I'm using a socket to let lighttpd communicate with my Django site, but a TCP port doesn't help either. Could this be a lighttpd problem? I tried both from a server on the internet and from localhost. This is what lighttpd gives me in the error.log:

        2012-07-10 14:36:36: (response.c.300) -- splitting Request-URI
        2012-07-10 14:36:36: (response.c.301) Request-URI : /
        2012-07-10 14:36:36: (response.c.302) URI-scheme : http
        2012-07-10 14:36:36: (response.c.303) URI-authority: owntube
        2012-07-10 14:36:36: (response.c.304) URI-path : /
        2012-07-10 14:36:36: (response.c.305) URI-query :
        2012-07-10 14:36:36: (response.c.300) -- splitting Request-URI
        2012-07-10 14:36:36: (response.c.301) Request-URI : /owntube.fcgi/
        2012-07-10 14:36:36: (response.c.302) URI-scheme : http
        2012-07-10 14:36:36: (response.c.303) URI-authority: owntube
        2012-07-10 14:36:36: (response.c.304) URI-path : /owntube.fcgi/
        2012-07-10 14:36:36: (response.c.305) URI-query :
        2012-07-10 14:36:36: (response.c.349) -- sanatising URI
        2012-07-10 14:36:36: (response.c.350) URI-path : /owntube.fcgi/
        2012-07-10 14:36:36: (mod_access.c.135) -- mod_access_uri_handler called
        2012-07-10 14:36:36: (mod_fastcgi.c.3632) handling it in mod_fastcgi
        2012-07-10 14:36:36: (response.c.470) -- before doc_root
        2012-07-10 14:36:36: (response.c.471) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.472) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.473) Path :
        2012-07-10 14:36:36: (response.c.521) -- after doc_root
        2012-07-10 14:36:36: (response.c.522) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.523) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.524) Path : /var/www/owntube/owntube.fcgi
        2012-07-10 14:36:36: (response.c.541) -- logical -> physical
        2012-07-10 14:36:36: (response.c.542) Doc-Root : /var/www/owntube
        2012-07-10 14:36:36: (response.c.543) Rel-Path : /owntube.fcgi
        2012-07-10 14:36:36: (response.c.544) Path : /var/www/owntube/owntube.fcgi
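
    For reference, a minimal lighttpd + Django FastCGI setup looks something like the sketch below (the socket path is an assumption, not taken from the question). The log above shows the request being rewritten to /owntube.fcgi/ and mapped to a physical path as expected, so comparing against a known-good baseline can help rule out the rewrite and point the finger at the socket/backend handoff instead:

        # minimal sketch of a lighttpd + Django FastCGI config (paths are hypothetical)
        server.modules += ( "mod_fastcgi", "mod_rewrite" )

        fastcgi.server = ( "/owntube.fcgi" =>
            (( "socket"      => "/tmp/owntube.sock",   # Django started with: runfcgi socket=/tmp/owntube.sock
               "check-local" => "disable",             # don't look for the file on disk
            ))
        )

        url.rewrite-once = ( "^(/.*)$" => "/owntube.fcgi$1" )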


  • Setting Remote Desktop to allow IPv6 connections

    - by Garrett
    Setup: Basically I have 3 machines (2 virtual and 1 physical) that I would like to be able to RDP into from outside my NAT (a router). The VMs are Windows 7 and Windows XP, both fully patched with Teredo installed and working, both running in VirtualBox (their host also has Teredo working, though I'm not sure if that matters). They both have bridged network adapters with promiscuous mode enabled. The physical machine is Windows 7, fully patched, with an HFS server running on it, a dynamic DNS set up for my public IPv4 address, and the port forwarded. It also has Teredo installed and working.

    Symptoms: According to http://test-ipv6.com/ all 3 have public IPv6 addresses, and they can all connect to http://ipv6.google.com/. I can ping the XP VM from the host it's running on, but I cannot ping it from any other machine. Also, I cannot ping either of the other machines from anywhere. I cannot connect to any of them over RDP via IPv6, but I can connect to all of them through IPv4. Any ideas what is going wrong?
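
    One thing worth checking (an assumption on my part, not confirmed in the question): Windows Firewall treats Teredo-tunneled traffic as "edge traversal", and the default Remote Desktop and ICMPv6 rules don't allow it, which would produce exactly this pattern of IPv4 working while IPv6 RDP and pings fail. A sketch of the relevant commands, run from an elevated prompt on each target machine:

        :: allow RDP over Teredo (edge traversal); rule name is a placeholder
        netsh advfirewall firewall add rule name="RDP v6 edge" dir=in action=allow protocol=TCP localport=3389 edge=yes

        :: allow inbound ICMPv6 echo so ping works for testing
        netsh advfirewall firewall add rule name="ICMPv6 echo in" dir=in action=allow protocol=icmpv6:128,any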


  • Tracking a subdomain separately within the main domain account [closed]

    - by Vinay
    I have a website, for example: xyz.com and info.xyz.com. I created a profile for xyz.com and tracking is good. I then added a new profile for info.xyz.com inside the xyz.com account. Analytics tracking for info.xyz.com is showing traffic from both xyz.com and info.xyz.com. How do I change this so that the info.xyz.com profile shows only info.xyz.com traffic? I used the following code.

    Analytics code for the xyz.com domain:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
          _gaq.push(['_trackPageview']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>

    Analytics code for info.xyz.com:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
          _gaq.push(['_setDomainName', 'xyz.com']);
          _gaq.push(['_trackPageview']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
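
    One common approach (a suggestion, not taken from the thread): keep the shared tracker as-is and add an include filter on the info.xyz.com profile, so that profile only counts hits whose hostname matches the subdomain. Roughly:

        Filter Type:    Custom filter > Include
        Filter Field:   Hostname
        Filter Pattern: ^info\.xyz\.com$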


  • Cisco ASA site-to-site VPN not initiating phase 1 (not sending UDP 500 packets)

    - by Sean Steadman
    I am hoping someone here can help me with my problem. I am trying to set up an IPsec site-to-site VPN between two Cisco ASA 5520s in GNS3 (both running 8.4.2). I have been unsuccessful in getting the tunnel up, and it appears neither ASA is sending packets out for phase 1 or phase 2 (tested with Wireshark: NO UDP 500 packets at all). show ipsec sa and the like show nothing:

        CALIFORNIA(config)# show ipsec sa
        There are no ipsec sas
        FLA-ASA# show ipsec sa
        There are no ipsec sas

    I will attach both configurations as two pastebin files to keep this post a bit cleaner. Essentially the California side has 172.20.1.0/24 and the Florida side has 10.10.10.0/24.

    California ASA config: http://pastebin.com/v0pngYzF
    Florida ASA config: http://pastebin.com/E2geybta

    Please let me know if there is any other vital information that could help. I have gotten IPsec tunnels to work using Openswan (Linux) and Cisco routers, but cannot for the life of me get ASA IPsec tunnels to work. The ASDM is out of the question; I only use the CLI. Thanks for any useful help!
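
    As a point of comparison, the pieces that must all be in place on an 8.4 ASA before it sends the first IKEv1 (UDP 500) packet are sketched below; the peer address, names and key are placeholders, not taken from the pastebins. If IKEv1 isn't enabled on the interface, the crypto map isn't applied, or no interesting traffic matches the ACL, phase 1 never starts:

        ! minimal IKEv1 l2l skeleton for ASA 8.4 (names/addresses are hypothetical)
        crypto ikev1 enable outside
        crypto ikev1 policy 10
         authentication pre-share
         encryption aes-256
         hash sha
         group 2
        !
        access-list L2L-ACL extended permit ip 172.20.1.0 255.255.255.0 10.10.10.0 255.255.255.0
        crypto ipsec ikev1 transform-set L2L-TS esp-aes-256 esp-sha-hmac
        crypto map L2L-MAP 10 match address L2L-ACL
        crypto map L2L-MAP 10 set peer 203.0.113.2
        crypto map L2L-MAP 10 set ikev1 transform-set L2L-TS
        crypto map L2L-MAP interface outside
        !
        tunnel-group 203.0.113.2 type ipsec-l2l
        tunnel-group 203.0.113.2 ipsec-attributes
         ikev1 pre-shared-key PLACEHOLDERKEY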


  • Netgear router is not resolving hostnames

    - by Thomas Clayson
    Not sure what the problem is, but I am used to being able to access other clients on my LAN via their hostnames. We got a new router from Plusnet (Netgear WNR1000v3) and now this has stopped working. For instance, if I were to run a web server from one computer with the hostname TomPC, usually I could go into a browser, type http://TomPC and get the web server front page. I can still access it by using the LAN IP of the machine, e.g. http://192.168.1.1 works fine. I thought it would be a simple option in the admin panel of the router - possibly I had to turn on DNS/DHCP or something. But I can't see anything, and my searches on the internet seem to turn up ridiculous solutions involving setting up a dedicated DNS server on my LAN. This is just a small home network - really I just want to be able to access my Raspberry Pi on the network without having to work out the IP address every time. (I know I could just set a static IP address for it, but using its hostname would be much easier.) DHCP is enabled in the LAN settings part, although RIP is disabled (I don't know what the latter is) - I don't know if this has anything to do with it? Many thanks!
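
    If the router simply won't register DHCP hostnames in its DNS, a low-tech workaround is a hosts-file entry on each Windows client that needs to connect (the addresses below are made up; pairing them with DHCP reservations in the router keeps them stable):

        # C:\Windows\System32\drivers\etc\hosts - edit as Administrator
        192.168.1.10    TomPC
        192.168.1.20    raspberrypi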


  • Apache mod_proxy: how to forward requests to a local network IP (server)

    - by Beck
    Can't figure out how to configure mod_proxy for this. I have two domains; one is working fine at the moment. The second is bound to the same IP. I need to forward requests for the second domain to another server on the local network, like this:

        domain1.com => 192.168.1.101
        domain2.com => 192.168.1.102

    What configuration or directives should I use? Thanks ;)

    Update:

        <VirtualHost *:80>
            ServerName www.domain2.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://192.168.1.103:8080/
            ProxyPassReverse / http://192.168.1.103:8080/
        </VirtualHost>

    It just doesn't redirect to the second server, that's it. And when I restart Apache, it says something about an overlap on port 80:

        [warn] _default_ VirtualHost overlap on port 80, the first has precedence
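
    On Apache 2.2 that overlap warning usually means name-based virtual hosting isn't switched on, so only the first vhost on port 80 ever matches. A sketch of the missing piece (domain names reused from the question; note the question's mapping says 192.168.1.102 while its config says .103, so adjust to whichever is real):

        # tell Apache 2.2 to dispatch port-80 requests by Host header
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.domain1.com
            # ... existing local site config ...
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.domain2.com
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.102:8080/
            ProxyPassReverse / http://192.168.1.102:8080/
        </VirtualHost>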


  • Anonymous logon attempts from IPs all over Asia, how do I stop them from being able to do this?

    - by Ryan
    We had a successful hack attempt from Russia and one of our servers was used as a staging ground for further attacks; somehow they managed to get access to a Windows account called 'services'. I took that server offline, as it was our SMTP server and is no longer needed (a 3rd-party system is in place now). Now some of our other servers are getting these ANONYMOUS LOGON attempts in the Event Viewer from IP addresses in China, Romania, Italy (I guess there's some Europe in there too)... I don't know what these people want, but they just keep hitting the server. How can I prevent this? I don't want our servers compromised again; last time our host took our entire hardware node off the network because it was attacking other systems, causing our services to go down, which is really bad. How can I prevent these strange IP addresses from trying to access my servers? They are Windows Server 2003 R2 Enterprise 'containers' (virtual machines) running on a Parallels Virtuozzo HW node, if that makes a difference. I can configure each machine individually as if it were its own server, of course...

    UPDATE: New logon attempts are still happening; these ones trace back to Ukraine... WTF. Here is the event:

        Successful Network Logon:
        User Name:
        Domain:
        Logon ID: (0x0,0xB4FEB30C)
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: REANIMAT-328817
        Logon GUID: -
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: 94.179.189.117
        Source Port: 0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Here is one from France I found too:

        Event Type: Success Audit
        Event Source: Security
        Event Category: Logon/Logoff
        Event ID: 540
        Date: 1/20/2011
        Time: 11:09:50 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: QA
        Description:
        Successful Network Logon:
        User Name:
        Domain:
        Logon ID: (0x0,0xB35D8539)
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: COMPUTER
        Logon GUID: -
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: 82.238.39.154
        Source Port: 0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
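
    A possible starting point (an assumption on my part, not from the thread) on Server 2003 is the classic RestrictAnonymous hardening, which cuts off null-session/anonymous access; test first, since overly strict values can break some services:

        REM Windows Server 2003 null-session hardening (test before rolling out)
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v restrictanonymous /t REG_DWORD /d 1 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v restrictanonymoussam /t REG_DWORD /d 1 /f

    Blocking the offending ranges at the Virtuozzo node's firewall (or with an IPsec block policy per container) would be the complementary half of the fix.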


  • Windows 7 - All Icons Missing, Explorer Progress Bar Never Finishes, Libraries Gone

    - by Alex
    Since yesterday I've had three issues which all arose at the same time. Windows 7 x64, i7 2.8GHz, 12GB DDR3.

    1 - My libraries, favorites and drives are missing... basically the entire sidebar is gone. http://i.imgur.com/m8pRQ.png. Yet when I open a dialog, my libraries and drives are back to normal ONLY for the dialog. I tried Restore Default Libraries. It works one time, but when I open libraries again I go back to the empty mess. Restarting the computer temporarily fixes the problem.

    2 - In the explorer window that's showing libraries, when I navigate to a certain folder I get an unending progress bar (the kind that turns the address bar green). Yesterday when the problem started, I was saving a file to this folder. The program writing the file crashed during the write, and I believe that's what caused the problem. I have SugarSync backing up that folder, and when I restarted the computer SugarSync informed me that its database was corrupted, so I had to uninstall and reinstall the software.

    3 - Icons are missing. Rebuild Icon Cache did not fix this. http://i.imgur.com/r9pgo.png

    Restarting the computer temporarily fixes these problems, but when I open the directory with the initial write problem, everything stops working.

    Edit: I should note that I did a chkdsk /f and it repaired problems. I also ran the utility that verifies and then restores Windows system files (I can't remember the command now - presumably sfc /scannow), which reported that everything was normal.


  • Nginx order of servers

    - by scrat
    I have 3 sites on my server. All run on gunicorn and use unix sockets to communicate with nginx, which routes requests. I have three server blocks in nginx.conf like:

        server {
            listen 80;
            server_name site1.com;
            location / {
                proxy_pass http://unix:/tmp/site1.sock;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    for site1, site2 and site3. If they are ordered so that the config for site1 goes first, followed by the configs for site2 and site3, everything works fine. But when I change the order, for example to site2, site1, site3, then site1 gets routed to site2. What am I doing wrong? The full nginx.conf before the server configs:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
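
    Behaviour like this usually means requests arrive with a Host header that matches none of the server_name values (e.g. www.site1.com vs site1.com), so nginx falls back to the first server block on that port - which is why the order appears to matter. A defensive sketch (the extra www names are assumptions):

        # explicit catch-all so block order no longer matters
        server {
            listen 80 default_server;
            server_name _;
            return 444;   # drop requests whose Host matches no site
        }

        server {
            listen 80;
            server_name site1.com www.site1.com;   # list every hostname actually used
            # location / { ... } as in the question
        }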


  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup, we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference it as a viable solution. When we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie and the subsequent header that gets added by it could be an issue?

        Cache-Control: no-cache="set-cookie"  // added by the load balancer

    Any ideas? FYI - currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close


  • Repeated entries in website log file

    - by Reza
    I am writing an ad hoc log analyser for my website's log file. The following part of the log file shows file1.pdf apparently downloaded twice. Looking carefully, the timestamp and IP address are exactly the same in both entries. How can there be 2 downloads at the same time by the same person? Should I count it as 2 in my programme, or as 1? Any reply is appreciated.

        name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 3706 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"
        name_of_subdomain xxx.xxx.xx.xx - - [02/Apr/2012:09:13:31 +0100] "GET /file1.pdf HTTP/1.1" 206 425462 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; CMDTDF)"
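
    Note that both entries carry status 206 (Partial Content) with different byte counts, which is consistent with one client fetching the file in ranges over parallel connections. If the analyser should treat such pairs as a single download (a design choice, not a verdict on the question), a minimal grouping sketch - field positions assume the log format shown above:

        import re
        from collections import defaultdict

        # matches: host ip - - [timestamp] "request" status bytes ...
        LINE = re.compile(
            r'^(?P<host>\S+) (?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
            r'"(?P<req>[^"]*)" (?P<status>\d{3}) (?P<size>\d+)')

        downloads = defaultdict(int)
        with open("access.log") as f:
            for line in f:
                m = LINE.match(line)
                if not m or m.group("status") not in ("200", "206"):
                    continue
                # collapse range requests: same ip + timestamp + request = one download
                downloads[(m.group("ip"), m.group("ts"), m.group("req"))] += 1

        print(f"{len(downloads)} distinct downloads")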


  • DVI splitter not working as expected/confusion between DVI-D and -I

    - by Freakishly
    Hey guys, thanks for looking. I have an ATI FirePro™ V3700 in my desktop machine, and I have been running a dual-monitor setup quite effortlessly thanks to the two DVI ports on the card. I came upon a third monitor and wanted to extend my desktop to 3 screens, so I purchased a DVI splitter from Amazon. Now I can only duplicate the second monitor onto the third, not extend it. I've tried all possible combinations of input to no avail. Here's the setup:

    - The ATI FirePro™ V3700 has two Dual-Link DVI-I outputs.
    - The splitter splits a single Dual-Link DVI-I port into two Dual-Link DVI-I outputs.
    - Two of the monitors are NEC E222W; the third monitor is a Dell 2001FP. Each monitor has one D-Sub and one Dual-Link DVI-D input.
    - The cables going from the video card to the monitors are two Dual-Link DVI-D to the NECs and one Single-Link DVI-D to the Dell.

    Is the problem likely with the DVI-D/DVI-I mismatch? Or is it the cable on the Dell, which is only Single-Link? The cables are easily replaceable, the monitors not so much. Thanks for your time, I really appreciate it.

    http://www.amd.com/us/products/workstation/graphics/ati-firepro-3d/v3700/Pages/v3700-specs.aspx
    http://www.amazon.com/Cables-Unlimited-DVI-D-Splitter-PCM-2260/product-reviews/B000H09RFM/ref=dp_top_cm_cr_acr_txt?ie=UTF8&showViewpoints=1
    www dot newegg dot com/Product/Product.aspx?Item=N82E16824002495
    accessories dot us dot dell dot com/sna/PopupProductDetail.aspx?cs=19&l=en&c=us&sku=320-1578

    Apologies for the fudged links, I'm new here and they won't let me post more than two :P


  • Users getting 'flooded' with not read notifications (NRNs) for old emails and meeting requests

    - by Exile
    I'm being placed under quite a lot of pressure from senior management over a relatively trivial issue. Basically, the vast majority of users are complaining that they receive not-read notifications (NRNs) for old emails and meeting requests, in large numbers, multiple times a day. I know something strange is happening because some are delivered at silly times in the morning (i.e. 3AM or 4AM). The problem I have is that some of these NRNs are for meeting requests and messages that are 120 days old, and some users have deleted the original message, so I don't actually know whether the NRN is from an email or a meeting request. This is typical of what users receive as an NRN:

        From: Sender
        Sent: 23 March 2012 04:16
        To: Recipient
        Subject: Not read: Accepted: Status update

        Your message
          To: Sender
          Subject: Accepted: Status update
          Sent: Wednesday, November 23, 2011 8:59:00 AM (UTC) Dublin, Edinburgh, Lisbon, London
        was deleted without being read on Friday, March 23, 2012 4:15:32 AM (UTC) Dublin, Edinburgh, Lisbon, London.

        ...

        From: Sender
        Sent: 18 March 2012 01:13
        To: Recipient
        Subject: Not read: Gold delivery - Sourcing module

        Your message
          To: Sender
          Subject: Gold delivery - Sourcing module
          Sent: Friday, November 18, 2011 9:37:58 AM (UTC) Dublin, Edinburgh, Lisbon, London
        was deleted without being read on Sunday, March 18, 2012 1:12:37 AM (UTC) Dublin, Edinburgh, Lisbon, London.

    I have done a search and found the following:

    http://support.microsoft.com/kb/2544246
    http://support.microsoft.com/kb/2471964

    But we already installed Update Rollup 6 for Exchange Server 2010 Service Pack 1 back in December, so I am not sure what we can do to fix this.


  • Nginx, as reverse proxy, could not proxy_pass to a domain pointing to the local JBoss

    - by larryzhao
    My environment is Ubuntu 12.04, nginx 1.2.0, and TorqueBox 2.0.3, which is actually JBoss AS 7. I have two apps deployed on TorqueBox; both listen on 8080 and have different hostnames, app1.mydomain.com and app2.mydomain.com. I added 127.0.0.1 app1.mydomain.com and 127.0.0.1 app2.mydomain.com in /etc/hosts, and then curl app1.mydomain.com:8080 and curl app2.mydomain.com:8080 both return correctly. Then I go to my nginx. I would like nginx to pass visits to www.app1.com through to app1.mydomain.com:8080, so I have the following configuration:

        # primary server - proxypass to torquebox
        server {
            listen 80;
            server_name www.app1.com;
            access_log off;
            error_log off;

            # proxy to Torquebox
            location / {
                proxy_pass http://app1.mydomain:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    But it doesn't work. curl www.app1.com returns nothing, and if I visit www.app1.com in Safari, the HTTP return code is 404. I don't know why; I need help.
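
    Two details in the quoted config are worth double-checking (observations, not a confirmed diagnosis). First, proxy_pass says app1.mydomain while /etc/hosts maps app1.mydomain.com - possibly just a transcription slip. Second, with proxy_set_header Host $host the backend receives Host: www.app1.com; if TorqueBox dispatches by virtual host, an unmatched hostname is exactly the kind of thing that returns 404. A sketch of the adjusted block:

        location / {
            # send the hostname the TorqueBox app is actually bound to
            proxy_set_header Host app1.mydomain.com;
            proxy_pass http://app1.mydomain.com:8080/;
            proxy_redirect off;
            # ... remaining proxy settings as in the question ...
        }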


  • Ubuntu server apt-get says "(-5 - No address associated with hostname)"

    - by Srini
    I have an Ubuntu 12.04 server. Running sudo apt-get update on it produces errors like this:

        W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/precise-backports/main/binary-i386/Packages  Something wicked happened resolving 'au.archive.ubuntu.com:http' (-5 - No address associated with hostname)

    I am able to ping all the other hosts on the network and also Google's DNS 8.8.8.8, but I am unable to ping www.google.com. So I'm guessing something is wrong with my DNS setup, but I'm not sure what. I use a static IP, and my /etc/network/interfaces looks like this:

        auto eth0
        iface eth0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.0.255
        gateway 192.168.1.1
        #dns-nameserver 203.12.160.35 203.12.160.36
        #nameserver 203.12.160.35 203.12.160.36

    My /etc/resolv.conf and /etc/resolvconf/resolv.conf.d/base are both empty, and my /etc/resolvconf/resolv.conf.d/original says:

        nameserver 192.168.1.1

    Any help would be greatly appreciated.

    P.S. I've googled it a bit and the common resolution is to switch to DHCP, which I don't want to do since this is my home server.

    Thanks, Srini
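
    Since resolv.conf ends up empty, the resolver has no nameserver at all, which matches the symptoms (IPs ping, names don't). On 12.04 with resolvconf, the usual fix is a dns-nameservers line inside the interface stanza - note the plural form, unlike the commented-out singular above. A sketch keeping the question's addresses (the broadcast value looks like a typo for 192.168.1.255, worth fixing at the same time):

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            broadcast 192.168.1.255
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1 8.8.8.8

        # then reload the interface:
        #   sudo ifdown eth0 && sudo ifup eth0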


  • How do you set max execution time of PHP's CLI component?

    - by cwd
    How do you set the max execution time of PHP's CLI component? I have a CLI script that has gone into an infinite loop and I'm not sure how to kill it without restarting. I used Quicksilver to launch it, so I can't press Ctrl+C at the command line. I tried running ps -A (show all processes) but php is not showing up in that list, so perhaps it has timed out on its own - but how do you manually set the time limit? I tried to find information about where I should set the max_execution_time setting; I'm used to setting this for the version of PHP that runs with Apache, but I have no idea where to set it for the version of PHP that lives in /usr/bin. I did see the following quote, which does seem to be accurate (see screenshot below), but having an unlimited execution time doesn't seem like a good idea:

        Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it seems to be changed by ini_set or set_time_limit but it isn't, actually. The only references I've found to this strange decision are deep in bugtracker (http://bugs.php.net/37306) and in php.ini (comments for 'max_execution_time' directive).

        (via http://php.net/manual/en/function.set-time-limit.php)

    ini_set('max_execution_time') has no effect. I also tried the same thing and got the same result with set_time_limit(7).
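
    For the immediate problem of killing the runaway script: ps -A shows only the binary name, so a loop launched via Quicksilver can be easy to miss; matching on the full command line is more reliable. A sketch (the PID is a placeholder):

        # find the runaway process by its full command line
        ps aux | grep '[p]hp'      # or, where available: pgrep -fl php

        # terminate it by PID
        kill <PID>                 # escalate to kill -9 <PID> if it ignores SIGTERM

    For future runs, since the CLI SAPI hardcodes max_execution_time to 0, an external watchdog such as GNU coreutils' timeout (e.g. timeout 30 php script.php) imposes a cap without touching PHP's settings, where that utility is installed.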


  • Remote access to phpMyAdmin from a computer on the same LAN

    - by Charles
    OK... I solved it. It was because I had not configured httpd.conf to make CentOS listen on ports 80 and 8080:

        Listen 80
        Listen 8080

    I set up phpMyAdmin on my CentOS 6.4 recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help, please?

    config.inc.php:

        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'http';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = false;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysql';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    phpmyadmin.conf:

        <Directory /var/www/html/phpmyadmin/>
            order allow,deny
            allow from all
        </Directory>

    Furthermore, I can access a web page stored on the CentOS box from my other computer without problems. After using Wireshark and tcpdump, I found that the server keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is the CentOS box):

        23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
        23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0

    I have disabled the iptables service on the CentOS box already.


  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:

    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users, with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

    - Dropbox: has all except 1) access control, which is provided via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user, and 2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except 1) clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except 1) transparent file system access, which is only available for 1 user.
    - Amazon Cloud: doesn't offer 1) transparent file system access.
    - Rackspace Cloud Drive: has all except 1) access control and 2) versioning.

    I'll gladly include any clarifications or additional systems the community provides. Arthur


  • How to make Quicksilver remember a custom trigger

    - by corroded
    I am trying to make a custom trigger for my shell/AppleScript file to run so I can launch my dev environment at the push of a button. Basically:

    1. I have a shell script (with some AppleScript included) in ~ named start_server.sh which does 3 things: start up the Solr server, start up memcached, start up script/server.
    2. I have a saved Quicksilver command (.qs) that opens start_server.sh (so start_server.sh, then the action is "Run in Terminal").
    3. I created a custom trigger that calls this saved QS command.

    I did that, then tested it, and it works. I then tried to double-check it, so I quit Quicksilver, and when I checked the triggers it just said "Open (null)" as the action. I set the trigger again, and when I restarted QS the same thing happened again. I don't know why, but my old custom trigger to open Terminal has worked since forever, so why doesn't this one work? Here's a screenie of the triggers after I restart QS: http://grab.by/4XWW

    If you have any other suggestion on how to make a "push button" start for my server then please do so :) Thanks!

    As an added note, I have already tried the steps on this thread but to no avail: http://groups.google.com/group/blacktree-quicksilver/browse_thread/thread/7b65ecf6625f8989
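
    One way to sidestep the flaky saved-command reference entirely (a workaround sketch, not from the thread) is to wrap the launch in a tiny AppleScript application, which gives the Quicksilver trigger a plain .app bundle as a stable target:

        -- save this from Script Editor as an application, e.g. StartServer.app,
        -- then point the Quicksilver trigger at the app with the Open action
        tell application "Terminal"
            activate
            do script "~/start_server.sh"
        end tell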


  • I need a driver for my Cardbus USB 2.0 card

    - by Carl
    The picture and details of the card are here: http://www.ht-link.com/en/ProductView.asp?ID=106

    The Windows drivers for this don't work right. I tried them on my previous laptop, and then installed the ones off the included CD. (Note that the system requirements include a CD-ROM drive...) I have a new laptop, and lost the CD. The page that lists the drivers is broken - the download links don't work: http://www.ht-link.com/en/DownView.asp?ID=10 I need the very first link - the Win XP drivers for the HT-112NEC. The company does not reply to my e-mails, and I've tried searching Google for other sources for the driver (I didn't bother with the sites that want me to install driver-detection software or create a log-in).

    Here's the problem I get using the drivers Windows XP SP3 installs: when I plug in my USB 2.0 hard-disk adapter, a USB Mass Storage Device entry is added in the Device Manager, but there is no entry under Disk Drives, and a drive letter is not assigned, so I can't access the drive. Like I said, this card didn't work without their special drivers on my other laptop either.


  • Server name resolution issues not consistent

    - by bobthemac
    I am having some weird issues with my web server. It has a public IP address and is set up on an OpenVZ virtual machine. Accessing in to the site works fine every time, but when trying to connect out from the server I can't always connect. Sometimes I can connect out and resolve addresses, sometimes I can't. The issue is visible in ssh when trying to do a wget on Google: sometimes it works and I get the index.html page, and sometimes I get nothing. The issue is more visible in WordPress, where you can't view themes, but after a few presses of the "try again" button you can then view them. I have searched Google and found nothing about this issue. Does anyone here have any ideas what could be causing this strange behaviour? Ports 80 and 2222 are open for web and ssh.

    Failed:

        17:26:33.398412 IP 86.148.184.124.38445 > 176.9.36.252.http: Flags [.], ack 98383, win 632, options [nop,nop,TS val 3070086 ecr 323106946], length 0

    Passed:

        17:30:00.179630 IP 146.90.206.241.50091 > 176.9.36.252.http: Flags [F.], seq 1, ack 1, win 115, options [nop,nop,TS val 13740559 ecr 323308537], length 0

    Thanks in advance


  • Can Varnish cache files without a specific extension or residing in a specific directory

    - by pataroulis
    I have a Varnish installation to cache (MANY) images that my service serves - about 200 images of around 4k per second - and Varnish happily serves them according to the following rule:

        if (req.request == "GET" && req.url ~ "\.(css|gif|jpg|jpeg|bmp|png|ico|img|tga|wmf)$") {
            remove req.http.cookie;
            return(lookup);
        }

    Now, the thing is that I recently added another service on the same server that creates thumbnails to serve, but it does not add a specific extension. The files follow this filename pattern:

        http://www.example.com/thumbnails/date-of-thumbnail/xxxxxxxxx.xx

    where the x's are numbers, so xxxxxxxxx.xx could be 6482364283.73 (two numbers at the end; actually this is a timestamp, so I can keep extra info in the filename). That has the side effect that Varnish does not cache these files, and I see them constantly being served by Apache itself. Even though I can change the format from now on to create thumbs ending in .jpg, is there a way to change the VCL file of my Varnish daemon to cache either everything under a directory (the thumbnails directory) or everything with two numbers as its extension? Let me know if I can provide any additional info! Thanks!
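
    Both variants are straightforward; a sketch extending the rule above, in the same VCL dialect the question uses:

        # cache anything under /thumbnails/, or any URL ending in digits.digits
        if (req.request == "GET" &&
            (req.url ~ "^/thumbnails/" || req.url ~ "/[0-9]+\.[0-9]+$")) {
            remove req.http.cookie;
            return(lookup);
        }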


  • Blocking of certain file downloads

    - by Philip Fourie
    I have a problem where I cannot completely download a certain file from a server. The file is 1.9MB in size, but only 68% is downloaded and then it hangs.

    These cases failed:

    - Downloading the file over HTTP
    - Downloading the file over FTP
    - Moving the file to different FTP and web servers behind the ISA firewall
    - Trying both IIS 6.0 and IIS 7.0
    - Multiple download clients, including Firefox, FileZilla (on Windows) and wget (on Linux)

    This worked:

    - Downloading other files from the same location on the server, both bigger and smaller than the original, over both FTP and HTTP
    - Earlier versions of this file (a .DLL)

    It is as if the content of this file influences whether it gets served.

    Network architecture: Client Machine - Internet (ISP) - ISA Server - IIS 7.0. The only constants are the ISP, the Cisco router and the ISA server. Is it possible that something is rejecting the download because of the contents of the file? I am hoping ISA is the culprit... I am not an ISA expert; is there somewhere I can look to establish whether it is indeed ISA causing this?

    Update: Splitting the file into two parts with a hex editor results in one half of the file being served correctly and the other part not. Zipping the file results in the file being downloaded successfully; however, this is not an option for this particular scenario. Renaming the file and its extension also doesn't work.

    Update 2009/10/22: It does NOT seem to be ISA that is causing this problem. We connected a laptop (running IIS) on an available public IP and still the file downloaded to 68% before it hanged. The two remaining components are the ISP and the Cisco 800 series router. Does anyone know about an issue on the router, perhaps?


  • F5 BIG-IP persistence iRules applied but not affecting selected member

    - by zoli
    I have a virtual server with 2 iRules (see below) assigned to it as resources. From the server log it looks like the rules are running, and they select the correct member from the pool after persisting the session (as far as I can tell from my log messages), but the requests are ultimately directed somewhere else. Here's what the rules look like:

        when HTTP_RESPONSE {
            set sessionId [HTTP::header X-SessionId]
            if {$sessionId ne ""} {
                persist add uie $sessionId 3600
                log local0.debug "Session persisted: <$sessionId> to <[persist lookup uie $sessionId]>"
            }
        }

        when HTTP_REQUEST {
            set sessionId [findstr [HTTP::path] "/session/" 9 /]
            if {$sessionId ne ""} {
                persist uie $sessionId
                set persistValue [persist lookup uie $sessionId]
                log local0.debug "Found persistence key <$sessionId> : <$persistValue>"
            }
        }

    According to the log messages from the rules, the proper load-balancer members are selected. Note: the two rules cannot conflict; they are looking for different things in the path, and those two things never appear in the same path.

    Notes about the server:
    - The default load balancing method is round robin (RR).
    - There is no persistence profile assigned to the virtual server.

    I'm wondering if this should be adequate to enable persistence, or alternatively, do I have to combine the 2 rules and create a persistence profile with them for the virtual server? Or is there something else that I have missed?


  • Desktop applications are unable to launch my browser in Windows 8

    - by Chevex
    I have a fresh copy of Windows 8 Pro installed from MSDN. I have Google Chrome installed (stable channel) and it is set as my default browser. I even went into Control Panel > Default Programs to ensure that Chrome had all its defaults. When other desktop applications try to launch my browser, they always fail. For example, while trying to install the Android SDK for Windows, the installer accurately detected that I did not have the JDK installed. It provides a friendly button to visit java.oracle.com. When pressing this button, nothing happens at all. You can see that here: http://youtu.be/XXL8GhuWWg0

    If it were only that application having issues I wouldn't think anything of it, but I have been encountering similar issues all over the place. Probably the most irritating one is when Visual Studio has updates: clicking the update button does nothing. http://www.youtube.com/watch?v=zwd1mn3TId0 You can see in that screencast that Visual Studio is not able to launch the browser no matter what I click. The update button doesn't do anything, and neither do the two links in the update's description. Any suggestions? I'm assuming it's a Windows issue since it is happening in multiple applications.

    UPDATE: Setting IE as the default browser fixes the issue, so it has something to do with Windows not being able to launch Chrome programmatically. Is it even possible to work around this bug, or do I have to suffer with IE as default for now?
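
    A diagnostic worth running (my suggestion, not from the post): desktop applications resolve the http: handler through the registry, so inspecting these two keys while Chrome is set as default shows whether the association actually points at a valid Chrome path:

        :: the per-user choice Windows consults first
        reg query "HKCU\Software\Microsoft\Windows\Shell\Associations\UrlAssociations\http\UserChoice"

        :: the command the chosen ProgId ultimately runs (ChromeHTML if Chrome owns it)
        reg query "HKCR\ChromeHTML\shell\open\command" /ve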

    Read the article
