Search Results

Search found 5067 results on 203 pages for 'squid deb proxy'.

Page 11 of 203

  • java.lang.reflect.Proxy returning another proxy from invocation results in ClassCastException on assignment

    - by matao
    So I'm playing with GeoTools and I thought I'd proxy one of their data-access classes and trace how it was being used in their code. I coded up a dynamic proxy and wrapped a FeatureSource (interface) in it, and off it went happily. Then I wanted to look at some of the transitive objects returned by the FeatureSource as well, since the main thing a FeatureSource does is return a FeatureCollection (a FeatureSource is analogous to a SQL DataSource and a FeatureCollection to a SQL Statement). In my InvocationHandler I just passed the call through to the underlying object, printing out the target class/method/args and result as I went, but for calls that returned a FeatureCollection (another interface), I wrapped that object in my proxy (the same class but a new instance; shouldn't matter, should it?) and returned it. BAM! ClassCastException:

        java.lang.ClassCastException: $Proxy5 cannot be cast to org.geotools.feature.FeatureCollection
            at $Proxy4.getFeatures(Unknown Source)
            at MyClass.myTestMethod(MyClass.java:295)

    The calling code:

        FeatureSource<SimpleFeatureType, SimpleFeature> featureSource = ... // create the FS
        featureSource = (FeatureSource<SimpleFeatureType, SimpleFeature>) FeatureSourceProxy.newInstance(featureSource, features);
        featureSource.getBounds(); // ok
        featureSource.getSupportedHints(); // ok
        DefaultQuery query1 = new DefaultQuery(DefaultQuery.ALL);
        FeatureCollection<SimpleFeatureType, SimpleFeature> results = featureSource.getFeatures(query1); // <- explosion here

    The proxy:

        public class FeatureSourceProxy implements java.lang.reflect.InvocationHandler {
            private Object target;
            private List<SimpleFeature> features;

            public static Object newInstance(Object obj, List<SimpleFeature> features) {
                return java.lang.reflect.Proxy.newProxyInstance(
                        obj.getClass().getClassLoader(),
                        obj.getClass().getInterfaces(),
                        new FeatureSourceProxy(obj, features));
            }

            private FeatureSourceProxy(Object obj, List<SimpleFeature> features) {
                this.target = obj;
                this.features = features;
            }

            public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                Object result = null;
                try {
                    if ("getFeatures".equals(m.getName())) {
                        result = interceptGetFeatures(m, args);
                    } else {
                        result = m.invoke(target, args);
                    }
                } catch (Exception e) {
                    throw new RuntimeException("unexpected invocation exception: " + e.getMessage(), e);
                }
                return result;
            }

            private Object interceptGetFeatures(Method m, Object[] args) throws Exception {
                return newInstance(m.invoke(target, args), features);
            }
        }

    Is it possible to dynamically return proxies of interfaces from a proxied interface, or am I doing something wrong? Cheers!
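
    Returning a proxy from an invocation handler is legal in itself; the cast blows up when the nested proxy does not implement the interface the caller casts to. obj.getClass().getInterfaces() only lists the interfaces the concrete FeatureCollection class implements directly, and a proxy built with a different class loader will not satisfy the cast either. A sketch of one workaround, proxying the method's declared return type instead (a drop-in variant of interceptGetFeatures, assuming the declared return type is an interface):

        private Object interceptGetFeatures(Method m, Object[] args) throws Exception {
            Object real = m.invoke(target, args);
            // build the nested proxy against the declared return type (FeatureCollection),
            // so the caller's cast to that interface succeeds
            return java.lang.reflect.Proxy.newProxyInstance(
                    m.getReturnType().getClassLoader(),
                    new Class<?>[] { m.getReturnType() },
                    new FeatureSourceProxy(real, features));
        }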

    Read the article

  • Squid, NTLM, Windows 7 and IE8

    - by Harley
    I'm running Squid 2.7-stable4, Samba 3 and the Windows 7 RC with IE8. I have NTLM authentication setup on my squid proxy server and it works fine for every combination of browser and Windows (including IE8 on XP and Firefox on Win7), but it doesn't work (keeps asking for authentication) for IE8 on Windows 7. I can get it to work using the LmCompatibilityLevel registry hack, but I'd really prefer to get it working on the server. Does anyone have any experience with this? Or know where to start looking? The samba logs don't reveal much. EDIT: Here's what the wb-MYDOMAIN log says when I attempt to authenticate: [2009/08/20 15:13:36, 4] nsswitch/winbindd_dual.c:fork_domain_child(1080) child daemon request 13 [2009/08/20 15:13:36, 10] nsswitch/winbindd_dual.c:child_process_request(478) process_request: request fn AUTH_CRAP [2009/08/20 15:13:36, 3] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1755) [ 4127]: pam auth crap domain: MYDOMAIN user: MYUSER [2009/08/20 15:13:36, 0] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1767) winbindd_pam_auth_crap: invalid password length 24/282 [2009/08/20 15:13:36, 2] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1931) NTLM CRAP authentication for user [MYDOMAIN]\[MYUSER] returned NT_STATUS_INVALID_PARAMETER (PAM: 4) [2009/08/20 15:13:36, 10] nsswitch/winbindd_cache.c:cache_store_response(2267) Storing response for pid 4547, len 3240
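
    The winbindd error above ("invalid password length 24/282") suggests the Windows 7 client is sending an NTLMv2 response that this Samba build's NTLM path does not handle; the usual server-side route is a newer Samba release that accepts NTLMv2. For reference, the client-side LmCompatibilityLevel hack mentioned above is roughly the following registry change (standard Lsa key; it forces the client back to LM/NTLM, so treat it as a workaround, not a fix):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
        "LmCompatibilityLevel"=dword:00000001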

    Read the article

  • Squid parent cache for text/html only

    - by Salvador
    How do I configure Squid to send only text/html requests to the parent cache? Right now I am using: cache_peer 127.0.0.1 parent 8080 0 no-query no-digest. On the other hand, I get a lot of direct requests that do not use the parent proxy: some queries go FIRST_UP_PARENT and some DIRECT. How do I tell Squid to always use the parent for text/html? BTW, it is a transparent proxy. I have tried:

        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
        acl elhtml req_mime_type -i ^text/html$
        acl elhtml req_mime_type -i text/html
        cache_peer_access 127.0.0.1 allow elhtml
        cache_peer_access 127.0.0.1 deny all

    and it does not work. Thanks in advance for the help.
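
    One sketch of a different angle (an untested assumption, not a known-good config): the parent is chosen before Squid has seen the response, and req_mime_type only matches the request's own Content-Type header, so matching on the URL and forcing those requests through the parent would look roughly like:

        acl htmlish urlpath_regex -i \.(html?|php|aspx?)$ /$
        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
        cache_peer_access 127.0.0.1 allow htmlish
        cache_peer_access 127.0.0.1 deny all
        never_direct allow htmlish    # force matching requests through a parent instead of going DIRECT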

    Read the article

  • squid running out of sockets

    - by drscroogemcduck
    I have a setup where Squid sits in front of a Java server and acts as a reverse proxy. Recently I've load tested the site, and if I fire 100 threads at it, each making a request using JMeter, I start getting errors in my load test tool like 'no route to host', even though the load test tool and the server are on the same machine. If I run the following command, where port 82 is the port my Squid server is running on: netstat -ann | grep 82 | wc -l, I get 22000 or something, and most of them are in TIME_WAIT. I'm thinking that maybe the huge number of sockets in the TIME_WAIT state is starving the box of resources.
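
    Since client and server share one box, 22000 lingering sockets can exhaust the ephemeral port range. A sketch of the usual Linux-side relief valves (assumptions: Linux, defaults elsewhere; enabling HTTP keep-alive in JMeter reduces the socket churn in the first place):

        # /etc/sysctl.conf on the load-generating box; apply with: sudo sysctl -p
        net.ipv4.ip_local_port_range = 1024 65000   # widen the ephemeral port range
        net.ipv4.tcp_fin_timeout = 30               # shorten how long half-closed sockets linger
        net.ipv4.tcp_tw_reuse = 1                   # let new outbound connections reuse TIME_WAIT sockets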

    Read the article

  • Using an APT proxy for downloads during installation

    - by intuited
    During system installation from a Desktop LiveCD (10.10) I checked the "Download updates during installation" option. Before starting the install I had configured an apt proxy server; this had been used correctly for my various package installs prior to launching the system installation GUI. However, the downloads taking place during the installation are not using the proxy. Is there a way to force usage of an APT proxy during installation?
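
    If the proxy is reachable at install time, one knob worth trying (an assumption on my part that the desktop installer honours it; the alternate/netboot installer certainly does) is handing the proxy to the installer itself as a boot parameter or preseed entry. Placeholder host, apt-cacher-ng's default port:

        # kernel boot parameter form
        mirror/http/proxy=http://10.0.0.5:3142/

        # preseed file form
        d-i mirror/http/proxy string http://10.0.0.5:3142/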

    Read the article

  • Proxy settings not working in Ubuntu 11.10

    - by Prakash
    I always use a proxy to connect to the internet in Windows as well as in previous versions of Ubuntu, but in Ubuntu 11.10 it seems that the proxy settings I set in the GUI are not being applied system-wide. I can access the internet in Firefox with its own proxy settings. I can access the internet in the terminal (i.e. I am able to use apt-get for some small programs). But in the Ubuntu Software Center or Update Manager it is not working.

    Read the article

  • How do I specify a string to display in the Ubuntu software-center when creating a package with dpkg-deb?

    - by TomMKV
    When I open *.deb packages downloaded from the internet in the Ubuntu Software Center, it displays a "nice" name for the package (including upper- and lowercase, spaces, special characters, ...). When I create a *.deb package from binaries only using dpkg-deb -b, the Ubuntu Software Center displays the "technical" package name (the one specified in the Package: field of the control file, limited to lowercase only, no spaces, ...). Is there any way to provide a string different from the "technical" package name (including upper- and lowercase, spaces, special characters, ...) for display in the Ubuntu Software Center? Unfortunately, this cannot be done via the short description (that is displayed below the "technical" name, but does not replace it).

    Read the article

  • How do I set a system wide proxy?

    - by armando Armandiano
    I'm trying to set a system-wide proxy, and I'm specifically having difficulties with apt-get for installing applications on my Ubuntu. I'm at a university using a proxy server with a username/password. I'm aware of setting a proxy with username and password in the following manner: http://username:password@proxyhost:8080/ But it fails, the critical example being apt-get. The username contains a backslash (\), and I'm wondering whether that could be the reason it fails. I'd be grateful for any input on this. P.S. I'm on Ubuntu 11.04
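
    One thing worth trying (a sketch, not verified against your proxy): APT reads its proxy from its own configuration, and a backslash in the username generally has to be percent-encoded (\ is %5C) rather than typed literally. Domain, user, password and host below are placeholders:

        # /etc/apt/apt.conf.d/95proxy  (DOMAIN\user written as DOMAIN%5Cuser)
        Acquire::http::Proxy "http://MYDOMAIN%5Cmyuser:mypassword@proxy.university.example:8080/";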

    Read the article

  • Can using an apt proxy (d-i mirror/http/proxy string http://mymirror) affect the installation of a .deb?

    - by Randolph
    I have been doing Ubuntu deployment using a preseed.cfg. After becoming comfortable with the packages being installed, it was time to reduce download time and internet traffic by creating a mirror. I ended up doing a "partial mirror" using apt-cacher-ng and preseeding it by adding d-i mirror/http/proxy string http://mymirror to the preseed.cfg. This is where things got strange. I have a few .debs that I install as part of preseed/late_command by wgetting them and installing them with dpkg -i. The packages were installing without issue until I added the proxy. With the proxy they fail to install. So does the proxy affect installing .debs during a preseeded installation?
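
    A guess worth testing (not a verified preseed): setting d-i mirror/http/proxy also exports http_proxy into the installer environment, so the late_command wget now goes through apt-cacher-ng, which generally refuses URLs that are not repository files. Bypassing the proxy just for that one fetch would look roughly like this (placeholder package name and server):

        d-i preseed/late_command string http_proxy= wget -O /target/tmp/mypkg.deb http://myserver.example/mypkg.deb && in-target dpkg -i /tmp/mypkg.deb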

    Read the article

  • ISA Proxy server

    - by user59931
    Have a proxy at work that runs Microsoft ISA. I used to be able to connect using 11.10 with Firefox, no problem at all. I could either put the settings in Firefox or in Ubuntu's network proxy settings. This would give me a connection without problems (slow, due to the work network being really lame). Since I upgraded to 12.10, Firefox just crashes if I have any proxy settings (manually added proxy settings). If I connect to a different network without the proxy settings, it works fine and doesn't crash. I tried Chrome to see if that would work: similar problem. Chrome doesn't crash but is so slow it just times out all the time, and a page can take 10 minutes to load. Not really sure where to go with this. I have tried a clean install of 12.04 on 2 different computers, and on both I also tried just upgrading from 11.10. The only answer I can see at the moment is to roll back to 11.10 :( I have tried all sorts, like turning off IPv6, to see if that would make any difference, but no joy; really am lost now. What's weird is the repositories are also really, really slow through 12.04: 50 MB took an hour to download (the ISA server has the Ubuntu repository servers enabled without authentication). Really am lost.

    Read the article

  • Proxy authentication box not showing (sometimes)

    - by zerologiko
    I'm behind a proxy that requires authentication with a username/password. I'm using Ubuntu 11.04; I think the proxy is Squid. Usually everything works fine: the browser shows me the window to enter the proxy username/password and I can browse. The problem: sometimes the browser refuses to show the authentication window, and if I wait long enough it gives me an error like: Error 130 (net::ERR_PROXY_CONNECTION_FAILED). Even if I restart or disconnect, the situation doesn't change. BUT on Windows the network (and the proxy) works. So, bottom line, what I know is: the network is working (though only on Windows); the Ubuntu configuration is OK (because I can browse most days). The problem resolves itself in a few hours, but I don't understand why. Some hints? Thanks in advance! Andrea

    Read the article

  • Proxy Client for Ubuntu

    - by WindowsEscapist
    I want to use a proxy for web browsing similar to Ultrasurf for Windows. I've tried to use TOR, but it isn't working! The problem is whenever I search something along the lines of "ubuntu + linux proxy", sites assume that I want to set up a proxy server rather than use one. I just want something with little to no configuration needed (i.e. I don't have my own proxy server). UltraSurf is a free software which enables users inside countries with heavy Internet censorship to visit any public web sites in the world safely and freely. Users in countries without internet censorship also use it to protect their internet privacy and security.

    Read the article

  • 407 Proxy Authentication Required

    - by user38507
    When I try to install software using the Ubuntu Software Center I get: "Failed to download repository information. Check your Internet connection." When I try to do an apt-get install, I get: 407 Proxy Authentication Required. I use a proxy server that requires a username and a password. I have set my system's proxy manually, by plugging the required numbers into the network proxy settings, and applied it system-wide. I guess the problem now is plugging in my username and password. When I use the Internet via Mozilla, it specifically asks me for my username and password.
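
    A minimal sketch of handing APT the credentials directly (user, password, host and port below are placeholders; the file is world-readable by default, so treat the password accordingly):

        # /etc/apt/apt.conf.d/80proxy
        Acquire::http::Proxy "http://myuser:mypassword@proxy.example:8080/";
        Acquire::https::Proxy "http://myuser:mypassword@proxy.example:8080/";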

    Read the article

  • permission denied: /etc/apt/sources.list

    - by Eli
    I'm trying to install java jre, i usually do it like this sudo echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 5CB26B26 sudo apt-get update sudo apt-get install update-sun-jre exit but when i do sudo echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list i see permission denied: /etc/apt/sources.list When i do ls -l /etc/apt/sources.list i see -rw-r--r-- 1 root root 3360 Aug 26 01:45 /etc/apt/sources.list When i do sudo mv /etc/apt/sources.list /etc/apt/sources.list.old sudo cat /etc/apt/sources.list.old | sudo tee /etc/apt/sources.list i see #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://lb.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://lb.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise universe deb-src http://lb.archive.ubuntu.com/ubuntu/ precise universe deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise multiverse deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. 
    ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb-src http://extras.ubuntu.com/ubuntu precise main

    The issue is not solved; I still see that permission error. I'm on a 64-bit laptop.
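
    The redirection in sudo echo '...' >> /etc/apt/sources.list is performed by the calling (non-root) shell, which is why it is denied. The usual pattern is to let a root process do the writing, for example (same repository line as above; the .list filename is just a suggestion):

        echo 'deb http://www.duinsoft.nl/pkg debs all' | sudo tee -a /etc/apt/sources.list
        # or keep it in its own file instead of editing sources.list
        echo 'deb http://www.duinsoft.nl/pkg debs all' | sudo tee /etc/apt/sources.list.d/duinsoft.list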

    Read the article

  • Best way to cache apt downloads on a LAN?

    - by Ken Simon
    I have multiple Ubuntu machines at home and a pretty slow internet connection, and sometimes multiple machines need to be updated at once (especially during new Ubuntu releases.) Is there a way where only one of my machines needs to download the packages, and the other machines can use the first machine to get the debs? Does it involve setting up my own local mirror? Or a proxy server? Or can it be made simpler?
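
    A caching proxy is usually the simplest of the three options. A sketch with apt-cacher-ng (3142 is its default port; cachebox.local is a placeholder for whatever the first machine is called):

        # on the machine that will do the downloading
        sudo apt-get install apt-cacher-ng

        # on each client, point APT at the cache
        echo 'Acquire::http::Proxy "http://cachebox.local:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy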

    Read the article

  • Enterprise IPv6 Migration - End of proxy.pac? Start of point-to-point? +10K users

    - by Yohann
    Let's start with a diagram: we can see a "typical" IPv4 company network with Internet access through a proxy, "other companies" access through a dedicated proxy, and direct access to local resources. All computers have a proxy.pac file that indicates which proxy to use or whether to connect directly. Computers have access only to a local DNS (no name resolution for google.com, for example). By the way, the company does not respect RFC 1918 internally and uses public addresses! (historical reason). The explicit use of an Internet proxy makes it possible to avoid problems. What if we migrate to IPv6? Step 1: IPv6 Internet access. Internet access over IPv6 is easy: just connect the proxy to the Internet over both IPv4 and IPv6; there is nothing to do in the internal network. Step 2: IPv6 AND IPv4 in the internal network. And why not a full IPv6 network directly? Because there are always old servers that are not IPv6-compatible. Option 1: the same architecture as in IPv4, with a proxy.pac. This is probably the easiest solution, but is it the best? I think the transition to IPv6 is an opportunity not to bother with this proxy.pac! Option 2: a new architecture with transparent proxies, without a proxy.pac, and a recursive DNS. In this new architecture we have: the explicit Internet proxy becomes a transparent Internet proxy; the local DNS becomes a normal recursive DNS, authoritative for local domains; no proxy.pac; the explicit company proxy becomes a transparent company proxy; routing: internal routers redirect the IP of appx.ext.example.com to the company proxy, and the default gateway is the transparent Internet proxy. Questions: What do you think of this IPv6 architecture? It will reveal the IP addresses of our internal network, but they are protected by firewalls; is this a really big problem? Should we keep the explicit use of a proxy? How would you plan this migration scenario? And how do you do it in your company? Thanks! Feel free to edit my post to make it better.

    Read the article

  • Windows 8 Internet Explorer 11 proxy automation script

    - by Stefan Bollmann
    Similar to this post, I'd like to change my proxy settings using a script. However, it fails. When I am behind the proxy, IE does not connect to the internet. Here I try the first solution from craig: function FindProxyForURL(url, host) { if (isInNet(myIpAddress(), "myactualip", "myactualsubnetip")) return "PROXY proxyasshowninpicture:portihavetouseforthisproxy_see_picture"; else return "DIRECT"; } This script is saved as proxy.pac in c:\windows and my configuration is* in LAN settings: No automatically detected settings, yes, use automatic config script: file://c:/windows/proxy.pac No proxy server. So, what am I doing wrong? ---------------- update -------------- However, when I set up a proxy in my LAN configurations: IE -> Internet Options -> Connections -> LAN Settings check: Use a proxy Server for your LAN Address: <a pingable proxy> Port: <portnr> everything is fine for this environment. Now I try a simpler script like function FindProxyForURL(url, host) { return "PROXY <pingable proxy>:<portnr>; DIRECT"; } With a configuration described above** I am not able to get through the proxy.
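
    For comparison, a commonly working shape for that kind of PAC file (network, mask and proxy host are placeholders; isInNet() wants plain dotted-quad strings, and listing DIRECT after the proxy gives a fallback if the proxy is unreachable):

        function FindProxyForURL(url, host) {
            // placeholders: adjust network, mask and proxy to your environment
            if (isInNet(myIpAddress(), "10.1.2.0", "255.255.255.0"))
                return "PROXY proxy.corp.example:8080; DIRECT";
            return "DIRECT";
        }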

    Read the article

  • Squid refresh_pattern won't cache "Expires: ..."

    - by Marcelo Cantos
    Background I frequent the OpenGL ES documentation site at http://www.khronos.org/opengles/sdk/1.1/docs/man/. Even though the content is completely static, it seems to force a reload on every single page I visit, which is very annoying. I have a squid 3.0 proxy set up (apt-get install squid3 on Ubuntu 10.04), and I added a refresh_pattern to force the pages to cache: refresh_pattern ^http://www.khronos.org/opengles/sdk/1\.1/docs/man/ … 1440 20% 10080 … override-expire ignore-reload ignore-no-cache ignore-private ignore-no-store This is all on one line, of course. While this appears to work for the XHTML documents (e.g., glBindTexture), it fails to cache the linked content, such as the DTD, some .ent files (?) and some XSL files. The delay in fetching these extra files delays rendering of the main document, so my principal annoyance isn't fixed. The only difference I can glean with these ancillary files is that they come with an Expires: header set to the current time, whereas the XHTML document has none. But I would have expected the override-expire option to fix this. I have confirmed that documents have the same base URL. I have also truncated the pattern to varying degrees, with no effect. My questions Why does the override-expire option not seem to work? Is there a simple way to tell squid to unconditionally cache a document, no matter what it finds in the response headers? (Hopefully) relevant output cache.log Jan 01 10:33:30 1970/06/25 21:18:27| Processing Configuration File: /etc/squid3/squid.conf (depth 0) Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'override-expire' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-reload' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-cache' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-store' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-private' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| DNS Socket created at 0.0.0.0, port 37082, FD 10 Jan 01 10:33:30 1970/06/25 21:18:27| Adding nameserver 192.168.1.1 from /etc/resolv.conf Jan 01 10:33:30 1970/06/25 21:18:27| Accepting HTTP connections at 0.0.0.0, port 3128, FD 11. Jan 01 10:33:30 1970/06/25 21:18:27| Accepting ICP messages at 0.0.0.0, port 3130, FD 13. Jan 01 10:33:30 1970/06/25 21:18:27| HTCP Disabled. Jan 01 10:33:30 1970/06/25 21:18:27| Loaded Icons. Jan 01 10:33:30 1970/06/25 21:18:27| Ready to serve requests. 
access.log Jun 25 21:19:35 2010.710 0 192.168.1.50 TCP_MEM_HIT/200 2452 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/glBindTexture.xml - NONE/- text/xml Jun 25 21:19:36 2010.263 543 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.276 556 192.168.1.50 TCP_MISS/304 370 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.666 278 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.958 279 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent - DIRECT/74.54.224.215 - Jun 25 21:19:37 2010.251 276 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent - DIRECT/74.54.224.215 - Jun 25 21:19:37 2010.332 0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl - NONE/- text/xml Jun 25 21:19:37 2010.332 0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl - NONE/- text/xml store.log Jun 25 21:19:36 2010.263 RELEASE -1 FFFFFFFF D3056C09B42659631A65A08F97794E45 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd Jun 25 21:19:36 2010.276 RELEASE -1 FFFFFFFF 9BF7F37442FD84DD0AC0479E38329E3C 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl Jun 25 21:19:36 2010.666 RELEASE -1 FFFFFFFF 7BCFCE88EC91578C8E2589CB6310B3A1 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent Jun 25 21:19:36 2010.958 RELEASE -1 FFFFFFFF ECF1B24E437CFAA08A2785AA31A042A0 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent Jun 25 21:19:37 2010.251 RELEASE -1 FFFFFFFF 36FE3D76C80F0106E6E9F3B7DCE924FA 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF A33E5A5CCA2BFA059C0FA25163485192 304 1277462871 1221139523 1277462871 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF E2CF8854443275755915346052ACE14E 304 1277462872 1221139523 1277462872 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl

    Read the article

  • Is there a way using Ruby's net/http to post form data through an HTTP proxy?

    - by Derek P.
    I have a basic Squid server set up and I am trying to use Ruby's Net::HTTP::Proxy class to send a POST of form data to a specified HTTP endpoint. I assumed I could do the following:

        Net::HTTP::Proxy(my_host, my_port).start(url.host) do |h|
          req = Net::HTTP::Post.new(url.path)
          req.form_data = { "xml" => xml }
          h.request(req)
        end

    But, alas, proxy vs. non-proxied Net::HTTP classes don't seem to use the proxy IP address. My remote service responds telling me that it received a request from the wrong IP address, i.e. not the proxy. I am looking for a specific way to write the procedure so that I can successfully send a form post via a proxy. Help? :)
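
    For what it's worth, one shape that does route the request through the proxy (a sketch, not tested against your setup; note that start is given the destination port too, and the endpoint, proxy host/port and xml payload are placeholders):

        require 'net/http'
        require 'uri'

        url   = URI.parse('http://destination.example/endpoint')   # placeholder endpoint
        xml   = '<doc/>'                                            # placeholder payload
        proxy = Net::HTTP::Proxy('my_squid_host', 3128)             # placeholder proxy host/port

        proxy.start(url.host, url.port) do |http|
          req = Net::HTTP::Post.new(url.path)
          req.set_form_data('xml' => xml)
          res = http.request(req)
          puts "#{res.code} via proxy"
        end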

    Read the article

  • Squid allow both authenticated and ip based logins

    - by Pete Sampras
    I have a proxy server which allows users to connect and authenticates them via username/password, but some of them are complaining about this method, since not much of the available proxy client software supports it. So I wish to configure my Squid in such a way that it allows members to connect via username/password or based on their PC's IP, but I'm not sure if that could work. Anyone with an idea?
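
    It can work: Squid evaluates http_access rules in order, so IP-listed clients can be allowed before authentication is ever demanded. A rough sketch (the auth helper path and the address range are placeholders for whatever you already use):

        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
        acl authed proxy_auth REQUIRED
        acl trusted_ips src 192.0.2.0/24        # placeholder: your members' fixed IPs

        http_access allow trusted_ips           # matched clients never see an auth challenge
        http_access allow authed                # everyone else must authenticate
        http_access deny all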

    Read the article

  • Nginx proxy upstream cached?

    - by Julian H. Lam
    Attempting to resolve an issue that's been annoying me for a bit. I've distilled the symptoms into a set of reproducible steps: I have two sites, siteA and siteB. They are both Node.js applications running on different ports (for the sake of example, 4567 and 4568). Both applications have their own file in sites-available (plus a symlink from sites-enabled), which contain the directives proxy_pass http://node_siteA/ and proxy_pass http://node_siteB/ respectively, inside of a location block. They also each have an upstream block (defined globally?):

        upstream node_siteA { server 127.0.0.1:4567; }
        upstream node_siteB { server 127.0.0.1:4568; }

    Site A and Site B have nothing to do with each other. Yes, I am restarting (reloading, actually) nginx every time I make a change. If I take down site B and attempt to access it via the web, I am served site A. Why is this? Thoughts: other times, when I create a new site C, for example, nginx refuses to show me anything except "Welcome to nginx!" for ~5 minutes. This suggests a resolver timeout, perhaps? When I access site B after its config has been deleted and it sends me to site A, this sounds like nginx sending me to servers in a round-robin fashion...
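
    This is consistent with default-server fallback rather than round-robin: when a request's Host header matches no server_name, nginx hands it to the default server for that listen port, which is the first one it loaded, i.e. site A once site B's file is gone. A sketch of making the selection explicit (siteA.example and siteB.example are placeholder names):

        upstream node_siteA { server 127.0.0.1:4567; }
        upstream node_siteB { server 127.0.0.1:4568; }

        server {                       # catch-all: refuse requests that name no known site
            listen 80 default_server;
            server_name _;
            return 444;
        }
        server {
            listen 80;
            server_name siteA.example;
            location / { proxy_pass http://node_siteA; }
        }
        server {
            listen 80;
            server_name siteB.example;
            location / { proxy_pass http://node_siteB; }
        }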

    Read the article

  • Problem with squid log files

    - by Gatura
    I am using SARG to get a report on the Squid log files. I get this result:

        /usr/local/Sarg/bin/sarg -l /usr/local/squid/var/logs/access.log
        SARG: Records in file: 0, reading: 0.00%
        (the line above repeats several dozen times)
        sort: open failed: +6.5nr: No such file or directory
        SARG: (index) Cannot open file: /Applications/Sarg/reports/index.sort
        SARG: Records in file: 0, reading: 0.00%

    What could be the problem?

    Read the article
