Search Results

Search found 50980 results on 2040 pages for 'http compression'.


  • Why do I get java.net.SocketException: Connection reset?

    - by Jammy
    I need to send some requests to the server side and get a response. Sometimes, when I call a specific method that runs the following common code, I get an error on the line addToCookieJar(connection); any idea how this happens?
      URL url = new URL(providerURL);
      HttpURLConnection connection = (HttpURLConnection) url.openConnection();
      connection.setRequestMethod("POST");
      connection.setDoInput(true);
      connection.setDoOutput(true);
      connection.setUseCaches(false);
      connection.setRequestProperty("Content-Type", "application/octet-stream");
      // We understand gzip encoding
      connection.addRequestProperty("Accept-Encoding", "gzip");
      if (cookie != null && cookieHandler != null) {
          connection.setRequestProperty("Cookie", cookie);
      }
      if (cookieHandler == null) {
          addFromCookieJar(connection);
      }
      // Send the request
      ObjectOutputStream oos = new ObjectOutputStream(connection.getOutputStream());
      oos.writeObject(remote.getName());
      oos.writeObject(m.getName());            // method name
      oos.writeObject(m.getParameterTypes());  // formal parameters
      oos.writeObject(args);                   // actual parameters
      oos.flush();
      oos.close();
      if (cookieHandler == null) {
          cookieJar.put(new URI(providerURL), connection.getHeaderFields());
      }
    The exception:
      java.lang.reflect.UndeclaredThrowableException
          at $Proxy0.updateDocument(Unknown Source)
          at com.agst.ui.gantt.GanttPanel.doUpdateDocument(GanttPanel.java:1931)
          at com.agst.ui.gantt.GanttPanel.save(GanttPanel.java:1419)
          at com.agst.ui.gantt.GanttPanel$4.run(GanttPanel.java:1673)
          at java.lang.Thread.run(Unknown Source)
      Caused by: java.net.SocketException: Connection reset
          at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
          at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
          at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
          at java.lang.reflect.Constructor.newInstance(Unknown Source)
          at sun.net.www.protocol.http.HttpURLConnection$6.run(Unknown Source)
          at java.security.AccessController.doPrivileged(Native Method)
          at sun.net.www.protocol.http.HttpURLConnection.getChainedException(Unknown Source)
          at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
          at com.agst.rmi.RemoteCallHandler.call(RemoteCallHandler.java:196)
          at com.agst.rmi.RemoteCallHandler.invoke(RemoteCallHandler.java:142)
          ... 5 more
      Caused by: java.net.SocketException: Connection reset
          at java.net.SocketInputStream.read(Unknown Source)
          at java.io.BufferedInputStream.fill(Unknown Source)
          at java.io.BufferedInputStream.read1(Unknown Source)
          at java.io.BufferedInputStream.read(Unknown Source)
          at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
          at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
          at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
          at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
          at sun.net.www.protocol.http.HttpURLConnection.getHeaderFields(Unknown Source)
          at com.agst.rmi.RemoteCallHandler.addToCookieJar(RemoteCallHandler.java:529)
          at com.agst.rmi.RemoteCallHandler.call(RemoteCallHandler.java:192)
          ... 6 more
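    A "Connection reset" inside parseHTTPHeader usually means the server or an intermediary closed the socket before a response arrived (keep-alive timeout, proxy, or a server-side failure), so the real fix tends to be on the server or in retry logic. One client-side habit that at least helps keep-alive reuse is always draining and closing the response stream. A minimal sketch using only standard JDK classes (nothing specific to the asker's RemoteCallHandler), which also honours the gzip encoding the request advertises:

      import java.io.BufferedInputStream;
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.net.HttpURLConnection;
      import java.util.zip.GZIPInputStream;

      final class ResponseReader {
          // Read the whole response body, unwrapping gzip when the server
          // honours "Accept-Encoding: gzip". Fully draining and closing the
          // stream lets HttpURLConnection reuse the underlying socket.
          static byte[] readResponse(HttpURLConnection connection) throws IOException {
              InputStream in = new BufferedInputStream(connection.getInputStream());
              if ("gzip".equalsIgnoreCase(connection.getContentEncoding())) {
                  in = new GZIPInputStream(in);
              }
              try {
                  ByteArrayOutputStream out = new ByteArrayOutputStream();
                  byte[] buf = new byte[8192];
                  int n;
                  while ((n = in.read(buf)) != -1) {
                      out.write(buf, 0, n);
                  }
                  return out.toByteArray();
              } finally {
                  in.close();
              }
          }
      }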

    Read the article

  • Simple 301 redirect; how do I get it to drop the query string?

    - by Throlkim
    Hopefully there's a simple solution to this. I'm setting up a redirect for old search listings, but I don't want any of the query string to carry over. The current rule is: redirect 301 /blog http://newdomain.com But this would redirect the following: http://www.olddomain.com/blog/2009/11/article-name-etc To this: http://newdomain.com/2009/11/article-name-etc When I just want: http://newdomain.com Can this be done in the simple 301 redirect, or will I have to write a mod_rewrite rule?
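    For reference, the plain Redirect directive always appends the remainder of the request path to the target, so dropping it needs a regex-based rule. A minimal sketch with mod_alias, reusing the /blog prefix and new domain from the question:

      # Send /blog and anything beneath it to the bare new domain,
      # discarding the rest of the path.
      RedirectMatch 301 ^/blog(/.*)?$ http://newdomain.com/

    If an actual ?key=value query string also has to be discarded, the mod_rewrite equivalent with a trailing "?" on the substitution does that: RewriteRule ^/?blog(/.*)?$ http://newdomain.com/? [R=301,L]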

    Read the article

  • Compression for custom mime type works on IIS 7.5, but not IIS 7.0?

    - by Doogal
    I've managed to set up compression for a custom mime type on IIS 7.5 with no problem: add the mime type to IIS, then add it to the httpCompression element in applicationHost.config. But when I do the same thing on IIS 7, that particular mime type is never compressed. This isn't a problem with compression in general, since other mime types are compressed correctly. As far as I can tell, IIS 7 and IIS 7.5 are configured in exactly the same way. Does IIS 7 behave differently, and do I need to do something else to get it working? I've set up failed request tracing and get a NO_MATCHING_CONTENT_TYPE error during compression, but I can't figure out what else I need to do to tell IIS about my mime type.
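    For anyone comparing configs, this is roughly the applicationHost.config shape involved; the mime type below is a placeholder, not the asker's. One thing worth checking (an educated guess, not verified against this setup) is whether the response carries a charset suffix such as "application/x-mytype; charset=utf-8", since the httpCompression entry has to match the response's Content-Type for IIS to pick it up:

      <system.webServer>
        <httpCompression>
          <dynamicTypes>
            <!-- hypothetical custom type; must match the response Content-Type -->
            <add mimeType="application/x-mytype" enabled="true" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="application/x-mytype" enabled="true" />
          </staticTypes>
        </httpCompression>
      </system.webServer>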

    Read the article

  • How can I turn on DynamicCompression feature of IIS programmatically?

    - by LockeVN
    I'm making an installer program for my web application. My web application uses CSS and JS heavily, so I want to enable both static and dynamic HttpCompression for IIS 7/7.5. It needs two steps: I can modify web.config and put in the <httpCompression> tag; that part is OK. DynamicContentCompression must be turned on as a Windows Feature to make httpCompression work. Static HttpCompression is enabled by default in IIS 7 and IIS 7.5, but dynamic HttpCompression is not enabled by default (although it's available). I can do it manually by turning on Start / Control Panel / Programs and Features / Turn Windows Features on or off / IIS / WWW Service / Performance features / Dynamic Content Compression, but how can I programmatically turn on that Windows Feature? I can use PowerShell or C# in my installer. Any idea how I might be able to do this? Thanks.
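    A sketch of the unattended route from C#; the DISM feature name shown is the standard one on Windows 7 / Server 2008 R2, but it is worth confirming with "dism /online /get-features" on a target machine before baking it into an installer:

      using System.Diagnostics;

      static class IisFeatures
      {
          // Hypothetical installer step: shell out to DISM (requires elevation).
          public static void EnableDynamicCompression()
          {
              var psi = new ProcessStartInfo("dism.exe",
                  "/online /enable-feature /featurename:IIS-HttpCompressionDynamic")
              {
                  UseShellExecute = false,
                  CreateNoWindow = true
              };
              using (var process = Process.Start(psi))
              {
                  process.WaitForExit();
              }
          }
      }

    On server SKUs the ServerManager PowerShell module offers the same thing via Add-WindowsFeature Web-Dyn-Compression.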

    Read the article

  • zlib memory usage / performance with 500 KB of data

    - by unixman83
    Is zlib worth it? Are there other, better-suited compressors? I am using an embedded system. Frequently, I have only 3 MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. I am concerned about overhead, however. The average buffer size will be 30 KB; this probably won't get compressed by zlib. Does anyone know of a good compressor for extremely limited memory environments? However, I will see occasional maximum buffer sizes of 700 KB, with 500 KB much more common. Is zlib worth it in this case, or is the overhead too much to justify? My sole considerations are the algorithm's RAM overhead and performance at least as good as zlib's.
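    One detail that may help the decision: zlib's footprint is tunable. Deflate state costs roughly (1 << (windowBits + 2)) + (1 << (memLevel + 9)) bytes, so shrinking windowBits and memLevel trades a little ratio for a much smaller state. A rough sketch against the plain zlib API (error handling trimmed; the parameters are illustrative, not tuned for the asker's data):

      #include <string.h>
      #include <zlib.h>

      /* Compress src into dst with a reduced-memory deflate state.
       * windowBits = 11 (2 KB window) and memLevel = 2 keep the internal
       * state around 10 KB instead of the ~256 KB used by the defaults
       * (15/8). Ratio suffers a little; speed stays reasonable. */
      int compress_small(const unsigned char *src, unsigned src_len,
                         unsigned char *dst, unsigned *dst_len)
      {
          z_stream s;
          memset(&s, 0, sizeof(s));
          if (deflateInit2(&s, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                           11 /* windowBits */, 2 /* memLevel */,
                           Z_DEFAULT_STRATEGY) != Z_OK)
              return -1;

          s.next_in   = (unsigned char *)src;
          s.avail_in  = src_len;
          s.next_out  = dst;
          s.avail_out = *dst_len;

          int rc = deflate(&s, Z_FINISH);   /* single shot: whole buffer at once */
          *dst_len = (unsigned)s.total_out;
          deflateEnd(&s);
          return (rc == Z_STREAM_END) ? 0 : -1;
      }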

    Read the article

  • Compressing three individual jpeg pics containing temporal redundancy?

    - by michael
    I am interfacing an embedded device with a camera module that returns a single JPEG-compressed frame each time I trigger it. I would like to take three successive shots (approx. 1 frame per 1/4 second) and further compress the images into a single file. The assumption here is that there is a lot of temporal redundancy, and therefore lots of room for more compression across the three frames (compared to sending three separate JPEG images). I will be implementing the solution on an embedded device in C, without any libraries and with no OS. The camera will be taking pictures in an area with very little movement (no visitors or screens in the background, maybe a tree with swaying branches), so I think my assumption about redundancy is pretty solid. When the file is finally viewed on a PC/Mac, I don't mind having to write something to extract the three frames (so it can be a nonstandard kludge). So I guess the actual question is: what is the best way to compress these three images together, given that they are already in JPEG format (it is possible to convert back to a raw image, but only if I have to...)?
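    If the frames can be decoded back to raw pixels on the device, the simplest way to use the redundancy is to keep frame 0 as-is and store frames 1 and 2 as differences against it, then squeeze the mostly-zero difference with something as cheap as run-length encoding. A rough sketch of that delta step in plain C (assumes equal-sized 8-bit buffers; it illustrates the idea and is not a JPEG-domain method):

      #include <stddef.h>
      #include <stdint.h>

      /* Encode dst = RLE(cur - ref). Zero runs dominate when frames are
       * similar, so the output is typically much smaller than the input.
       * Format: <count:1><value:1> pairs, count 1..255. Returns bytes
       * written, or 0 if dst_cap is too small. */
      size_t delta_rle(const uint8_t *ref, const uint8_t *cur, size_t n,
                       uint8_t *dst, size_t dst_cap)
      {
          size_t out = 0, i = 0;
          while (i < n) {
              uint8_t d = (uint8_t)(cur[i] - ref[i]);   /* wraps mod 256 */
              size_t run = 1;
              while (i + run < n && run < 255 &&
                     (uint8_t)(cur[i + run] - ref[i + run]) == d)
                  run++;
              if (out + 2 > dst_cap)
                  return 0;
              dst[out++] = (uint8_t)run;
              dst[out++] = d;
              i += run;
          }
          return out;
      }

    The decoder just reverses it (cur[i] = ref[i] + d), and the three payloads can be concatenated behind a small header that records their lengths.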

    Read the article

  • Algorithms to find longest common prefix in a sliding window.

    - by nn
    Hi, I have written a Lempel-Ziv compressor and decompressor. I am seeking to improve the time it takes to search the dictionary for a phrase. I have considered KMP and Boyer-Moore, but I think an algorithm that adapts to changes in the dictionary would be faster. I've been reading that binary search trees (AVL, or with splays) improve compression time considerably. What I fail to understand is how to bootstrap the binary search tree and insert/remove data. I'm not actually quite sure of the significance of each node in the binary search tree. I am searching for phrases, so will each character be considered a node? Also, how and what is inserted/removed from the search tree as new data enters the dictionary and old data is removed? The binary search tree sounds like a good payoff since it can adapt to the dictionary, but I'm just not quite sure how it's used.
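    As a sketch of how those trees are usually wired up (the same idea as the classic LZSS reference code, simplified and untested against the asker's compressor): each node is a position in the already-seen window, ordered by lexicographic comparison of the bytes starting at that position. Every new input position is inserted as it is consumed, the longest common prefix found while walking down is exactly the LZ match, and the positions deleted are the ones that slide out of the window (a real implementation does that with parent links, omitted here):

      #include <stdint.h>

      #define MAX_MATCH 258

      /* One node per window position; children hold other positions, -1 = none. */
      typedef struct { int left, right; } node_t;

      /* Insert position `pos` of `buf` into the tree rooted at *root and, on the
       * way down, remember the longest prefix shared with any existing node.
       * `limit` is the end of valid data. Returns the best match length and
       * stores its window position in *match_pos (-1 if none). */
      static int insert_and_find(const uint8_t *buf, int pos, int limit,
                                 int *root, node_t *tree, int *match_pos)
      {
          int *link = root;
          int best_len = 0;
          *match_pos = -1;

          tree[pos].left = tree[pos].right = -1;

          while (*link != -1) {
              int cur = *link;
              int max = (limit - pos < MAX_MATCH) ? limit - pos : MAX_MATCH;
              int len = 0;
              while (len < max && buf[cur + len] == buf[pos + len])
                  len++;
              if (len > best_len) {
                  best_len = len;
                  *match_pos = cur;
              }
              if (len == max)                      /* equal within range: pick a side */
                  link = &tree[cur].left;
              else if (buf[pos + len] < buf[cur + len])
                  link = &tree[cur].left;
              else
                  link = &tree[cur].right;
          }
          *link = pos;                             /* hang the new position here */
          return best_len;
      }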

    Read the article

  • Best method to compress a JSON string in terms of performance and compression ratio

    - by Eric Yin
    I have a JSON string that contains all kinds of settings, numbers, strings, etc. The total JSON string fairly consistently falls into the 10 KB to 50 KB range. I want to compress it before saving it to the database, so I wonder which compression method I should choose. I am using C# 4. I know I can choose gzip and deflate, but the compression ratio is not good (although the speed is good). More specifically, compression can be a little slow (since it happens only once) but the result should be small. Decompression should be lightning fast, since it happens a lot. Please give some advice.
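    For the built-in route, a minimal sketch of round-tripping the string through GZipStream (UTF-8 in, byte[] out for the database). If the ratio still disappoints, the usual step up is an LZMA/7-Zip-based library, which compresses text noticeably better at the cost of slower compression while decompression stays fast:

      using System.IO;
      using System.IO.Compression;
      using System.Text;

      static class JsonCompressor
      {
          // Compress a JSON string to bytes for storage in the database.
          public static byte[] Compress(string json)
          {
              byte[] raw = Encoding.UTF8.GetBytes(json);
              using (var ms = new MemoryStream())
              {
                  using (var gz = new GZipStream(ms, CompressionMode.Compress))
                      gz.Write(raw, 0, raw.Length);
                  return ms.ToArray();
              }
          }

          // Decompression is the hot path; gzip inflate is very fast.
          public static string Decompress(byte[] blob)
          {
              using (var gz = new GZipStream(new MemoryStream(blob), CompressionMode.Decompress))
              using (var outMs = new MemoryStream())
              {
                  gz.CopyTo(outMs);
                  return Encoding.UTF8.GetString(outMs.ToArray());
              }
          }
      }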

    Read the article

  • Compressing WCF requests as GZip (SOAP and REST)

    - by Joannes Vermorel
    I have a .NET 3.5 web app hosted on Windows Azure that exposes several WCF endpoints (both SOAP and REST). The endpoints typically receive 100x more data than they serve (a lot of data is uploaded, much less is downloaded). Hence, I am willing to take advantage of HTTP GZip compression, not from the server viewpoint but rather from the client viewpoint, by sending compressed requests (returning compressed responses would be fine, but won't bring much gain anyway). Here is the little C# snippet used on the client side to activate WCF:
      var binding = new BasicHttpBinding();
      var address = new EndpointAddress(endPoint);
      _factory = new ChannelFactory<IMyApi>(binding, address);
      _channel = _factory.CreateChannel();
    Any idea how to adjust the behavior so that compressed HTTP requests can be made?

    Read the article

  • How does git save space and stay fast at the same time?

    - by eSKay
    I just saw the first git tutorial at http://blip.tv/play/Aeu2CAI. How does git store all the versions of all the files and still be more economical in space than Subversion, which saves only the latest version of the code? I know this can be done using compression, but that would come at the cost of speed; yet the tutorial also says that git is much faster (though where it gains the most is the fact that most of its operations are offline). So, my guess is that git compresses data extensively and is still faster because decompression + work is still faster than network_fetch + work. Am I correct? Even close?

    Read the article

  • Storage for large gridded datasets

    - by nullglob
    I am looking for a good storage format for large, gridded datasets. The application is meteorology, and we would prefer a format that is common within this field (to help exchange data with others). I don't need to deal with special data structures, and there should be a Fortran API. I am currently considering HDF5, GRIB2 and NetCDF4. How do these formats compare in terms of data compression? What are their main limitations? How steep is the learning curve? Are there any other storage formats worth investigating? I have not found a great deal of material outlining the differences and pros/cons of these formats (there is one relevant SO thread, and a presentation comparing GRIB and NetCDF).

    Read the article

  • searching within a compressed sorted fixed width file

    - by user275455
    Assume I have a regular compressed fixed-width file that is sorted on one of the fields. Given that I know the length of the records, I can use lseek to implement a binary search for records with fields that match a given value, without having to read the entire file. Now the difficulty is that the file is gzipped. Is it possible to do this without completely inflating the file? If not with gzip, is there any compression format that supports this kind of behavior?

    Read the article

  • How to efficiently deal with a large amount of HTML5 canvas pixel data over websockets

    - by user730569
    Using imageData = context.getImageData(0, 0, width, height); JSON.stringify(imageData.data); I grab the pixel data, convert it to a string, and then send it over the wire via websockets. However, this string can be pretty large, depending on the size of the canvas object. I tried using the compression technique found here: JavaScript implementation of Gzip but socket.io throws the error Websocket message contains invalid character(s). Is there an effective way to compress this data so that it can be sent over websockets?
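    One approach that sidesteps both the size and the invalid-character error is to skip JSON entirely and send the pixel bytes as a binary frame; JSON.stringify of a pixel array roughly triples the payload before any compression even starts. A hedged sketch against the plain browser WebSocket API (the URL is a placeholder, and whether the asker's socket.io version can relay binary frames would need checking separately):

      // Sender: ship the raw RGBA bytes rather than a JSON string.
      var imageData = context.getImageData(0, 0, width, height);
      var socket = new WebSocket("ws://example.invalid/pixels");   // placeholder URL
      socket.binaryType = "arraybuffer";
      socket.onopen = function () {
          // imageData.data is a typed array backed by an ArrayBuffer.
          socket.send(imageData.data.buffer);
      };

      // Receiver: rebuild the pixels with the known width/height.
      socket.onmessage = function (event) {
          var bytes = new Uint8ClampedArray(event.data);
          var restored = context.createImageData(width, height);
          restored.data.set(bytes);
          context.putImageData(restored, 0, 0);
      };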

    Read the article

  • How can I compress jpeg images in Java without losing any metadata in that image?

    - by guitarpoet
    I want to compress JPEG files using Java. I do it like this: read the image as a BufferedImage, then write the image to another file with a compression rate. OK, that seems easy, but I find the ICC color profile and the EXIF information are gone in the new file, and the DPI of the image drops from 240 to 72. It looks different from the original image. A tool like Preview in OS X can perfectly change the quality of the image without affecting the other information. Can I do this in Java? At least keep the ICC color profile and make the image colors look the same as the original photo?
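    A sketch of one way this is commonly done with the stock ImageIO JPEG plugin: read the image together with its metadata via ImageReader.readAll, then hand the same IIOImage to an ImageWriter with an explicit quality. This keeps whatever APP segments (EXIF, embedded ICC profile, density) the reader exposes; exotic profiles can still trip the default reader, so treat it as a starting point rather than a guarantee:

      import java.io.File;
      import java.util.Iterator;
      import javax.imageio.IIOImage;
      import javax.imageio.ImageIO;
      import javax.imageio.ImageReader;
      import javax.imageio.ImageWriteParam;
      import javax.imageio.ImageWriter;
      import javax.imageio.stream.ImageInputStream;
      import javax.imageio.stream.ImageOutputStream;

      public final class JpegRecompress {

          /** Re-encode a JPEG at the given quality (0.0..1.0) while carrying
           *  the reader's per-image metadata across to the writer. */
          public static void recompress(File src, File dst, float quality) throws Exception {
              ImageInputStream in = ImageIO.createImageInputStream(src);
              Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
              ImageReader reader = readers.next();
              reader.setInput(in);

              // readAll keeps metadata and thumbnails together with the pixels,
              // unlike ImageIO.read(), which throws them away.
              IIOImage image = reader.readAll(0, null);

              ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
              ImageWriteParam param = writer.getDefaultWriteParam();
              param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
              param.setCompressionQuality(quality);

              ImageOutputStream out = ImageIO.createImageOutputStream(dst);
              writer.setOutput(out);
              writer.write(null, image, param);

              writer.dispose();
              reader.dispose();
              out.close();
              in.close();
          }
      }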

    Read the article

  • Compressing digitized document images

    - by Adabada
    Hello, we are now required by law to digitize all the financial documents in our company and submit them to evaluations every 3 months. Since this is sensitive data, we decided to take matters into our own hands and build some sort of digital data archiver. The tool works perfectly, but after 7 months of usage we are beginning to worry about the disk space used by these images. Here is some info on the amount of documents digitized:
      15K documents scanned and archived per day, with a final PNG size of roughly 860 KB: 15,000 * 860 kilobits = 1.53779984 gigabytes
      30 days of work per month: 1.53779984 gigabytes * 30 = 46.1339952 gigabytes
      Expected disk space usage after 1 year: 46.1339952 gigabytes * 12 = 553.607942 gigabytes
    So far we're at 424 gigabytes of disk space used, not counting backups. We're using PNG as the image format, but I would like to know if anyone has any advice on a better compression algorithm for images, alternative strategies for compressing the PNGs even more, or better ways to archive images so as to save disk space. Any help would be appreciated, thanks.

    Read the article

  • how to compress a PNG image using Java

    - by 116213060698242344024
    Hi, I would like to know if there is any way in Java to reduce the size of an image (using any kind of compression) that was loaded as a BufferedImage and is going to be saved as a PNG. Maybe some sort of PNG ImageWriteParam? I didn't find anything helpful, so I'm stuck. Here's a sample of how the image is loaded and saved:
      public static BufferedImage load(String imageUrl) {
          Image image = new ImageIcon(imageUrl).getImage();
          BufferedImage bufferedImage = new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_INT_ARGB);
          Graphics2D g2D = bufferedImage.createGraphics();
          g2D.drawImage(image, 0, 0, null);
          return bufferedImage;
      }

      public static void storeImageAsPng(BufferedImage image, String imageUrl) throws IOException {
          ImageIO.write(image, "png", new File(imageUrl));
      }

    Read the article

  • httpd.conf setup to simplify using 'localhost:81'

    - by Will
    I'm installing portable wampserver within my dropbox folder so I can access anywhere. I have this achieved and accessible using http://locahost:81 I want to access it by using a different address (dropping the :81 port number) such as http://myothersite. I'm fairly certain I need to add a virtualhosts directove somewhere within this, but I am not Apache experienced! This is the current Apache httpd.conf file: ServerRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/bin/apache/apache2.2.17" Listen 81 ServerAdmin admin@localhost ServerName localhost:81 DocumentRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/" <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> <Directory "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/"> Options Indexes FollowSymLinks AllowOverride all # onlineoffline tag - don't remove Order Deny,Allow Deny from all Allow from 127.0.0.1 </Directory> <IfModule dir_module> DirectoryIndex index.php index.php3 index.html index.htm </IfModule> <FilesMatch "^\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch> ErrorLog "C:/Users/will/Dropbox/Wampee-2.1-beta-2/logs/apache_error.log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "C:/Users/will/Dropbox/Wampee-2.1-beta-2/logs/access.log" common #CustomLog "logs/access.log" combined </IfModule> <IfModule alias_module> ScriptAlias /cgi-bin/ "cgi-bin/" </IfModule> <IfModule cgid_module> #Scriptsock logs/cgisock </IfModule> <Directory "cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-httpd-php .php AddType application/x-httpd-php .php3 </IfModule> # Server-pool management (MPM specific) #Include conf/extra/httpd-mpm.conf # Multi-language error messages #Include conf/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include conf/extra/httpd-autoindex.conf # Language settings #Include conf/extra/httpd-languages.conf # User home directories #Include conf/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include conf/extra/httpd-info.conf # Virtual hosts #Include conf/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include conf/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include conf/extra/httpd-dav.conf # Various default settings #Include conf/extra/httpd-default.conf # Secure (SSL/TLS) connections #Include conf/extra/httpd-ssl.conf # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include "C:/Users/will/Dropbox/Wampee-2.1-beta-2/alias/*" Include "C:/Users/will/Dropbox/Wampee-2.1-beta-2/MyWebAp ps/etc/alias/*"
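    A sketch of the usual shape of the change, keeping the asker's paths; "myothersite" is just the example name from the question and, since it is not a real DNS name, it also needs a line in C:\Windows\System32\drivers\etc\hosts (127.0.0.1  myothersite). Browsing without a port means Apache must also listen on 80:

      Listen 80

      <VirtualHost *:80>
          ServerName myothersite
          DocumentRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/"
      </VirtualHost>

    The existing Listen 81 / ServerName localhost:81 lines can stay, so http://localhost:81 keeps working alongside the new name.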

    Read the article

  • IIS6: all websites display another site when using https

    - by Lisa
    I have the following websites set up in IIS 6: site1.com, site2.com, site3.com. Accessing site1 is via the address https://site1.com. Accessing site2 and site3 should be through http. When I try to access https://site2.com it displays the website of https://site1.com. How can I stop this? I either want an error or a redirect to the http site. Any help would be great.

    Read the article

  • ErrorDocument not working when accessing .htaccess

    - by oxguy3
    I've been setting up ErrorDocuments for a website I'm working on, and generally they've been working. However, after I set the 403 ErrorDocument, I noticed that it didn't work when I tried to access the .htaccess file itself. When I access a different forbidden file, the ErrorDocument appears just fine. How can I make the ErrorDocument work on the .htaccess file? If you didn't follow my explanation, here are links to show you what I mean: ErrorDocument works fine: http://keycraft.haydencity.net/.ftpquota ErrorDocument doesn't work: http://keycraft.haydencity.net/.htaccess

    Read the article

  • OWA, Outlook Anywhere, RPCPing Inconsistencies

    - by pk.
    I'm troubleshooting an Outlook Anywhere issue with a new Exchange 2010 server. The server in question, MS2010, is behind a SonicWALL NSA 2400 device and works wonderfully except for Outlook Anywhere. Outlook Anywhere works internally and I've verified (through Ctrl-Right Click --> Connection Status) that I'm able to connect to MS2010 over HTTPS. When trying to connect to the server using HTTPS from outside the firewall, I'm unable to do so. A Wireshark trace shows 30 or so successful HTTPS packet transmissions, and then it fails with 3 straight transmissions to a destination port of 135. I have no idea why my computer is attempting to access anything on port 135 since I've setup my profile to use HTTPS on both slow and fast connections. I'm 99% certain that the firewall is configured correctly. I run Outlook Web Access (also HTTPS) on the same server and there are no issues with access. EDIT: My Autodiscover settings are correct (as far as I can tell). My server passes the Outlook Anywhere and Autodiscover tests at https://www.testexchangeconnectivity.com/. I've been using the RPCPing utility to troubleshoot and have come across the following results: Internally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v2.12. Copyright (C) Microsoft Corporation, 2002 OS Version is: 6.1, Service Pack 1 RPCPinging proxy server mail.mydomain.com with Echo Request Packet Sending ping to server Response from server received: 200 Pinging successfully completed in 93 ms Externally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v6.0. Copyright (C) Microsoft Corporation, 2002-2006 Enter password for RPC/HTTP proxy: RPCPing set Activity ID: {fc8411ba-2987-4175-b37b-801dc69d5ff9} RPCPinging proxy server mail.mydomain.com with Echo Request Packet Setting autologon policy to high WinHttpSetCredentials for target server called Error 87 : The parameter is incorrect. returned in WinHttpSetCredentials Ping failed What should I be checking in order to troubleshoot my Outlook Anywhere issues? I'm using Windows 7 SP1 for internal and external access. 
EDIT: Autodiscover.xml content <?xml version="1.0"?> <Autodiscover xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006"> <Response xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a"> <User> <DisplayName>John Doe</DisplayName> <LegacyDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=pk</LegacyDN> <DeploymentId>d35170cc-f3a7-42c5-9427-1f554a469126</DeploymentId> </User> <Account> <AccountType>email</AccountType> <Action>settings</Action> <Protocol> <Type>EXCH</Type> <Server>MS2010.MYDOMAIN.local</Server> <ServerDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010</ServerDN> <ServerVersion>738180DA</ServerVersion> <MdbDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010/cn=Microsoft Private MDB</MdbDN> <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl> <OOFUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</OOFUrl> <OABUrl>http://MS2010.MYDOMAIN.local/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl> <UMUrl>https://MS2010.MYDOMAIN.local/EWS/UM2007Legacy.asmx</UMUrl> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <PublicFolderServer>MS2007.MYDOMAIN.local</PublicFolderServer> <AD>dc1.MYDOMAIN.local</AD> <EwsUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</EwsUrl> <EcpUrl>https://MS2010.MYDOMAIN.local/ecp/</EcpUrl> <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um> <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr> <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt> <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret> <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms> </Protocol> <Protocol> <Type>EXPR</Type> <Server>mail.mycompany.com</Server> <ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl> <OOFUrl>https://mail.mycompany.com/ews/exchange.asmx</OOFUrl> <OABUrl>https://mail.mycompany.com/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl> <UMUrl>https://mail.mycompany.com/ews/UM2007Legacy.asmx</UMUrl> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <SSL>On</SSL> <AuthPackage>Basic</AuthPackage> <CertPrincipalName>msstd:mail.mycompany.com</CertPrincipalName> <EwsUrl>https://mail.mycompany.com/ews/exchange.asmx</EwsUrl> <EcpUrl>https://mail.mycompany.com/owa/</EcpUrl> <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um> <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr> <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt> <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret> <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms> </Protocol> <Protocol> <Type>WEB</Type> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <Internal> <OWAUrl AuthenticationMethod="Basic, Fba">https://MS2010.MYDOMAIN.local/owa/</OWAUrl> <Protocol> <Type>EXCH</Type> <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl> </Protocol> </Internal> <External> <OWAUrl AuthenticationMethod="Fba">https://mail.mycompany.com/owa/</OWAUrl> <Protocol> <Type>EXPR</Type> 
<ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl> </Protocol> </External> </Protocol> </Account> </Response> </Autodiscover>

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and Error pages are being cached. Need to force a refresh of the content. Long version: I've setup a very minimal squid instance running on a gateway which shouldn't not cache ANYTHING but needs to be solely used as a domain based web filter. I'm using another application which redirects un-authenticated users to the proxy which then uses the deny_info option redirects any non-whitelisted request to the login page. After the user has authenticated the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated so they get redirected via the firewall: iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135 to the proxy at this point squid redirects the user to the login page using a 302 (i've also tried 307, and i've also make sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem w/ the browsers and not squid, but not sure how to get around it. Full squid config below. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl localnet src 192.168.182.0/23 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl https port 443 acl http port 80 acl CONNECT method CONNECT # # Disable Cache # cache deny all via off negative_ttl 0 seconds refresh_all_ims on #error_default_language en # Allow manager access only from localhost http_access allow manager localhost http_access deny manager # Deny access to anything other then http http_access deny !http # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !https visible_hostname gate.ovatn.net # Disable memory pooling memory_pools off # Never use neigh cache objects for cgi-bin scripts hierarchy_stoplist cgi-bin ? # # URL rewrite Test Settings # #acl whitelist dstdomain "/etc/squid/domains-pre.lst" #url_rewrite_program /usr/lib/squid/redirector #url_rewrite_access allow !whitelist #url_rewrite_children 5 startup=0 idle=1 concurrency=0 #http_access allow all # # Deny Info Error Test # acl whitelist dstdomain "/etc/squid/domains-pre.lst" deny_info http://login.domain.com/ whitelist #deny_info ERR_ACCESS_DENIED whitelist http_access deny !whitelist http_access allow whitelist http_port 39135 transparent ## Debug Values access_log /var/log/squid/access-pre.log cache_log /var/log/squid/cache-pre.log # Production Values #access_log /dev/null #cache_log /dev/null # Set PID file pid_filename /var/run/gatekeeper-pre.pid SOLUTION: I believe I might have found a solution to this. After days and days trying to figure it out, only through a random stumble I found client_persistent_connections off server_persistent_connections off This did the trick. So it wasn't so much cache as it was a single persistent connection messing things up. W000T!

    Read the article

  • Internet Working, Browsing Not.

    - by jeffreypriebe
    I have a very odd problem that I can't resolve. I am connected to the internet, but my browsing doesn't work. I don't mean a web browser - I mean browsing. Firefox, Chrome, Curl all fail to successfully connect to an HTTP address. However existing connections, e.g. to mail in Outlook (Exchange Server and also IMAP server) continue to work. Also, the internet is on, I can confirm both from my machine (other ports / connections) as well as from any other computer connected to the same network. Additionally, it appears to be HTTP, not simple a port issue as HTTP over port 8443 (Tortoise SVN if you must know - running over HTTP not over SVN) also fails. I am using Windows Vista SP2 (build 6002). It seems to "creep up" in that after running the computer for a few hours it will fail. (No found way to systematically reproduce the problem.) Additionally, it seems to be more prone on days where the internet connection is flaky already (not sure why the internet is flaky, just is, lot's of failed browsing requests and have to retry/reload often). What I have tried (when the problem arises) - none have yielded any resolution: Resetting the network connection (dis-connect, re-connect) Disable/re-enable the network adapter Double-checked the ip settings Double-checked the HOSTS file. Note: DNS continues to work (both new and cached responses to DNS queries). (Thanks for the suggestion Daniel and antenore.) Checked the routing tables (ip4 only as ipv6 is beyond my understanding) resetting all involved hardware (routers and modems) Close and reopen browsers Looked for malware interference: Run HijackThis Looked for suspicious processes using SysInternals procexp. Looked for explorer hijacks, lsa provider interference, winsock provider interference using SysInternals Autoruns. Run a complete anti-virus scan. Reviewed the output of a netstat -onab to see if there were stuck ports open or unusual processes running somewhere The only thing that works is to do a full reboot. That works 100% of the time to restore browsing. What else can I try to nail down the problem?

    Read the article

  • Nginx 500 Internal Server error on subdirectory

    - by juyoung518
    I'm getting a 500 Internal Server error only on sub directories. For example, If my website is example.com, example.com/index.php works. But example.com/phpbb/index.php doesn't work. It just turns up a blank php page. The HTTP header shows HTTP error 500 Internal Server error. If I enter example.com/phpbb/index.php/somedirectory, the index.php of my root directory shows up. This is all very strange. I have tried searching etc but nothing worked. tried re-installing nginx but not fixed. I'm sure I got the DNS configured right. My Nginx Config /sites-available/example.com server { server_name www.example.com; return 301 https://example.com$request_uri; } server { listen 443; listen 80; #listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 root /var/www/example.com/public_html; index index.html index.php index.htm; ssl on; ssl_certificate /etc/nginx/ssl/cert.pem; ssl_certificate_key /etc/nginx/ssl/ssl.key; ssl_session_timeout 5m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS; ssl_prefer_server_ciphers on; ssl_stapling on; resolver 8.8.8.8; add_header Strict-Transport-Security max-age=63072000; # Make site accessible from http://localhost/ server_name example.com; location ~* \.(jpg|jpeg|png|gif|ico|css|js|bmp)$ { expires 365d; add_header Cache-Control public; } if ($scheme = http) { return 301 https://example.com$request_uri; } location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.php; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } if ($http_user_agent ~ (musobot|screenshot|AhrefsBot|picsearch|Gender|HostTracker|Java/1.7.0_51|Java) ) { return 403; } location /phpmyadmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpmyadmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } location /phpMyAdmin { rewrite ^/* /phpmyadmin last; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-cgi alone: fastcgi_pass 127.0.0.1:9000; # With php5-fpm: #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_read_timeout 240; # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } } } nginx.conf user www-data; worker_processes 1; pid 
/var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## Block spammers and other unwanted visitors ## include /etc/nginx/blockips.conf; fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m; ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 100; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log off; error_log /var/log/nginx/error.log; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS; ssl_prefer_server_ciphers on; ## # File Cache Settings ## open_file_cache max=5000 inactive=5m; open_file_cache_valid 2m; open_file_cache_min_uses 1; open_file_cache_errors on; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/x-js text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*;
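    Hard to be certain from the config alone, but one common cause of blank 500s under a subdirectory, and of /phpbb/index.php/somedirectory falling through to the root index.php, is that PATH_INFO never reaches PHP-FPM: the php location splits the path but never passes it, and try_files $uri =404 rejects URIs that carry trailing path info. A sketch of the usual php block to compare against (standard nginx/fastcgi directives; adjust the pass address and params to the existing setup), alongside checking the nginx error_log and the PHP-FPM log for the actual 500 message:

      location ~ [^/]\.php(/|$) {
          fastcgi_split_path_info ^(.+?\.php)(/.*)$;
          # Serve only scripts that really exist; hand the rest back as 404.
          try_files $fastcgi_script_name =404;

          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          include /etc/nginx/fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO       $fastcgi_path_info;
      }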

    Read the article

  • serving mp3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving mp3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting to download the same partial content for a very long time, sometimes more than an hour. For example:
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      [...]
    I also get a lot, but not as many, of these:
      "-" 400 0 "-"
      "-" 400 0 "-"
    The IP addresses always belong to clients that start downloading shortly after that request; usually they have roughly the same UserAgent as in the first example. I have enabled server throttling and connection limits in nginx to limit the huge number of log entries from equivalent IPs at least somewhat. There was a performance issue when I saw the same behaviour on the previous server, which used Apache. I installed nginx on a better server and then moved the site. When Apache could no longer effectively handle the connections from the increasing number of clients, that server was essentially DDoSed. There was no bandwidth issue with already-connected clients, and I don't know if the already-connected clients were using more than one connection at a time. Please tell me: Are clients that appear to get stuck on a download a Bad Thing™? I have heard people say their mobile bandwidth use was much higher than they could account for; I'm thinking this type of client behaviour could explain that. It costs us more bandwidth too. Which up-to-date alternatives exist out there that can handle serving this type of data better than plain HTTP? Useful general insights for someone who just came into this field straight out of the late 90s are also welcome. :-)

    Read the article

  • Untraceable malicious browser calls

    - by MaximusOMaximus
    I installed Fiddler 4 Beta to do some HTTP tracing. I found a lot of calls being made to sites like Facebook, CollegeHumor, and a bunch of other sites I've never visited. I could not trace what or who is initiating these calls, as I do not see any corresponding Windows processes. No one else is connected to my network. I use both Google Chrome and IE10 on a Windows 7 box. Please help me trace and remove these malicious HTTP calls.

    Read the article
