Search Results

Search found 50980 results on 2040 pages for 'http compression'.


  • Why is my NSURLConnection so slow?

    - by Bama91
    I have set up an NSURLConnection using the guidelines in "Using NSURLConnection" from the Mac OS X Reference Library, creating an NSMutableURLRequest as a POST to a PHP script with a custom body to upload 20 MB of data (see code below). Note that the post body is what it is (line breaks and all) to exactly match an existing desktop implementation. When I run this in the iPhone simulator, the post is successful, but takes an order of magnitude longer than if I run the equivalent code locally on my Mac in C++ (20 minutes vs. 20 seconds). Any ideas why the difference is so dramatic? I understand that the simulator will give different results than the actual device, but I would expect at least similar results. Thanks.

        const NSUInteger kDataSizePOST = 20971520;
        const NSString* kServerBDC = @"WWW.SOMEURL.COM";
        const NSString* kUploadURL = @"http://WWW.SOMEURL.COM/php/test/upload.php";
        const NSString* kUploadFilename = @"test.data";
        const NSString* kUsername = @"testuser";
        const NSString* kHostname = @"testhost";

        srandom(time(NULL));
        NSMutableData *uniqueData = [[NSMutableData alloc] initWithCapacity:kDataSizePOST];
        for (unsigned int i = 0 ; i < kDataSizePOST ; ++i) {
            u_int32_t randomByte = ((random() % 95) + 32);
            [uniqueData appendBytes:(void*)&randomByte length:1];
        }

        NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
        [request setURL:[NSURL URLWithString:kUploadURL]];
        [request setHTTPMethod:@"POST"];
        NSString *boundary = @"aBcDd";
        NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@",boundary];
        [request addValue:contentType forHTTPHeaderField: @"Content-Type"];

        NSMutableData *postbody = [NSMutableData data];
        [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d",boundary,[uniqueData length]] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: form-data; name=test; filename=%@", kUploadFilename] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithString:@";\nContent-Type: multipart/mixed;\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[NSData dataWithData:uniqueData]];
        [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d",boundary,[kUsername length]] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: inline; name=Username;\n\r\n%@",kUsername] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d",boundary,[kHostname length]] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: inline; name=Hostname;\n\r\n%@",kHostname] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"\n--%@--",boundary] dataUsingEncoding:NSUTF8StringEncoding]];
        [request setHTTPBody:postbody];

        NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
        if (theConnection) {
            _receivedData = [[NSMutableData data] retain];
        } else {
            // Inform the user that the connection failed.
        }
        [request release];
        [uniqueData release];
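    Not an answer to the timing question, but worth noting while the body format is in view: RFC 2046 multipart bodies delimit parts and headers with CRLF ("\r\n"), while the code above intentionally uses bare "\n" to match the desktop implementation. A minimal sketch of the compliant layout, in Python for brevity (the field and file names here are hypothetical, not the poster's API):

        BOUNDARY = "aBcDd"  # same illustrative boundary value the question uses

        def multipart_body(field_name, filename, payload):
            # part header and trailing boundary per RFC 2046: CRLF line endings throughout
            head = ("--%s\r\n"
                    'Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
                    "Content-Type: application/octet-stream\r\n"
                    "\r\n" % (BOUNDARY, field_name, filename)).encode("utf-8")
            tail = ("\r\n--%s--\r\n" % BOUNDARY).encode("utf-8")
            return head + payload + tail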

    Read the article

  • Deploying a PHP Library project with Maven

    - by Marco
    Hi, I've created a PHP Library project using Maven, and I'm now ready for its deployment. Following the instructions at http://www.php-maven.org/deploy.html, something went wrong. The configuration is set to: <descriptorRef>php-lib</descriptorRef> During the execution of mvn deploy I get a list of errors for unfound dependencies in the repository: [INFO] [jar:jar {execution: default-jar}] [INFO] Building jar: /home/marco/projects/php/my-app/target/my-app-1.0-SNAPSHOT.jar [INFO] [plugin:addPluginArtifactMetadata {execution: default-addPluginArtifactMetadata}] Downloading: http://repo1.php-maven.org/release/org/phpmaven/maven-php-plugin/2.2-beta-2/maven-php-plugin-2.2-beta-2.pom [INFO] Unable to find resource 'org.phpmaven:maven-php-plugin:pom:2.2-beta-2' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.maven.org/maven2/org/phpmaven/maven-php-plugin/2.2-beta-2/maven-php-plugin-2.2-beta-2.pom [INFO] Unable to find resource 'org.phpmaven:maven-php-plugin:pom:2.2-beta-2' in repository central (http://repo1.maven.org/maven2) Downloading: http://repo1.php-maven.org/release/org/phpmaven/maven-php-plugin/2.2-beta-2/maven-php-plugin-2.2-beta-2.pom [INFO] Unable to find resource 'org.phpmaven:maven-php-plugin:pom:2.2-beta-2' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.maven.org/maven2/org/phpmaven/maven-php-plugin/2.2-beta-2/maven-php-plugin-2.2-beta-2.pom [INFO] Unable to find resource 'org.phpmaven:maven-php-plugin:pom:2.2-beta-2' in repository central (http://repo1.maven.org/maven2) Downloading: http://repo1.php-maven.org/release/org/apache/maven/wagon/wagon-http-shared/1.0-beta-6/wagon-http-shared-1.0-beta-6.pom [INFO] Unable to find resource 'org.apache.maven.wagon:wagon-http-shared:pom:1.0-beta-6' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.php-maven.org/release/org/apache/maven/wagon/wagon-http-shared/1.0-beta-6/wagon-http-shared-1.0-beta-6.pom [INFO] Unable to find resource 'org.apache.maven.wagon:wagon-http-shared:pom:1.0-beta-6' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.maven.org/maven2/org/apache/maven/wagon/wagon-http-shared/1.0-beta-6/wagon-http-shared-1.0-beta-6.pom Downloading: http://repo1.php-maven.org/release/nekohtml/xercesMinimal/1.9.6.2/xercesMinimal-1.9.6.2.pom [INFO] Unable to find resource 'nekohtml:xercesMinimal:pom:1.9.6.2' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.php-maven.org/release/nekohtml/xercesMinimal/1.9.6.2/xercesMinimal-1.9.6.2.pom [INFO] Unable to find resource 'nekohtml:xercesMinimal:pom:1.9.6.2' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.maven.org/maven2/nekohtml/xercesMinimal/1.9.6.2/xercesMinimal-1.9.6.2.pom And this is my settings.xml file: <settings> <profiles> <profile> <id>profile-php-maven</id> <pluginRepositories> <pluginRepository> <id>release-repo1.php-maven.org</id> <name>PHP-Maven 2 Release Repository</name> <url>http://repo1.php-maven.org/release</url> <releases> <enabled>true</enabled> </releases> </pluginRepository> <pluginRepository> <id>snapshot-repo1.php-maven.org</id> <name>PHP-Maven 2 Snapshot Repository</name> <url>http://repo1.php-maven.org/snapshot</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> 
</pluginRepositories> <repositories> <repository> <id>release-repo1.php-maven.org</id> <name>PHP-Maven 2 Release Repository</name> <url>http://repo1.php-maven.org/release</url> <releases> <enabled>true</enabled> </releases> </repository> <repository> <id>snapshot-repo1.php-maven.org</id> <name>PHP-Maven 2 Snapshot Repository</name> <url>http://repo1.php-maven.org/snapshot</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> </profile> </profiles> <activeProfiles> <activeProfile>profile-php-maven</activeProfile> </activeProfiles> </settings> For every step I've followed the documentation (which is poor, though). Any tips? Thanks
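    One way to narrow this down, as a hedged sketch (Python, with the coordinates taken straight from the error log): probe whether the plugin's POM actually exists in the configured repository. If the HEAD request fails, the version string or repository URL is the problem rather than the settings.xml profile.

        import urllib.request
        import urllib.error

        def pom_exists(repo, group, artifact, version):
            # Maven repository layout: group dots become path segments
            path = "%s/%s/%s/%s-%s.pom" % (group.replace(".", "/"), artifact, version, artifact, version)
            req = urllib.request.Request(repo + "/" + path, method="HEAD")
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    return resp.status == 200
            except urllib.error.URLError:
                return False

        print(pom_exists("http://repo1.php-maven.org/release",
                         "org.phpmaven", "maven-php-plugin", "2.2-beta-2"))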

    Read the article

  • CakePHP sending multiple lines of same cookie information in response header. why??

    - by Vicer
    Hi all, I am having this problem with one of my cakePHP applications. My application displays a big html table on the page. I noticed that when the table goes beyond a certain size limit, IE cannot display that page. While trying to figure out why this happens, I noticed that my html response header contains a HUGE number of lines repeating the same thing. Response Headers ------------------------------ Date Thu, 20 May 2010 04:18:10 GMT Server Apache/2.0.63 (Win32) mod_ssl/2.0.63 OpenSSL/0.9.7m PHP/5.2.10 X-Powered-By PHP/5.2.10 P3P CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM" Set-Cookie CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp Keep-Alive timeout=15, max=100 Connection Keep-Alive Transfer-Encoding chunked Content-Type text/html ========================================================================== Request Headers -------------------------- Host localhost:8080 User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 
Firefox/3.6.3 ( .NET CLR 3.5.30729) Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://localhost:8080/mywebapp/section2/bigtable/1 Cookie CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1 Authorization Basic cGt1bWFyYTpwa3VtYXJh
    Notice the huge set of repeated 'Set-Cookie' lines in the response header. This only happens when I try to display large tables. Does anyone have any clue as to what might be causing this? Any help finding the issue is appreciated. I am using CakePHP 1.2.5. As far as I know, I am not messing with any set-cookie functions, but I am using session variables.
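    As a quick client-side check, a Python sketch that counts the duplicates (the localhost URL is the one from the question): if the count grows with table size, something is re-emitting the session cookie repeatedly while the page renders, and a header block that balloons this way is a plausible reason IE gives up on large pages while small ones work.

        import http.client

        conn = http.client.HTTPConnection("localhost", 8080)
        conn.request("GET", "/mywebapp/section2/bigtable/1")
        resp = conn.getresponse()
        # getheaders() preserves repeated headers as separate entries
        set_cookies = [v for k, v in resp.getheaders() if k.lower() == "set-cookie"]
        print(len(set_cookies), "Set-Cookie header(s) in the response")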

    Read the article

  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
    We want to make all cookies set by our webapp http-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.05 and WebLogic 10.3.0. After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support http-only cookies is 10.3.1. By "support," I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it'd be nice to have these questions answered:
    1a) Is it accurate that WL 10.3.1 is the first version to support http-only cookies and that we're out of luck with 10.3.0?
    1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version?
    2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application have an http-only=true attribute? Will it do more or less? Is there anything I'll have to do programmatically?
    3) Is there anything in general I should know about http-only cookies, getting my web app to take advantage of them, or other security concerns?
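    On question 2, independent of WebLogic: the on-the-wire effect of http-only is just an extra HttpOnly attribute on the Set-Cookie header, which tells browsers to hide the cookie from document.cookie (mitigating cookie theft via XSS). A minimal illustration using Python's standard library, only to show what the descriptor element is expected to produce; nothing here is WebLogic-specific:

        from http.cookies import SimpleCookie

        cookie = SimpleCookie()
        cookie["JSESSIONID"] = "abc123"
        cookie["JSESSIONID"]["httponly"] = True
        cookie["JSESSIONID"]["path"] = "/"
        print(cookie.output())
        # Set-Cookie: JSESSIONID=abc123; HttpOnly; Path=/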

    Read the article

  • Reverse engineering windows mobile live search CellID location awareness protocol (yikes)...

    - by Jean-Charles
    I wasn't sure of how to form the question so I apologize if the title is misleading. Additionally, you may want to get some coffee and take a seat for this one ... It's long. Basically, I'm trying to reverse engineer the protocol used by the Windows Mobile Live Search application to get location based on cellID. Before I go on, I am aware of other open source services (such as OpenCellID) but this is more for the sake of education and a bit for redundancy. According to the packets I captured, a POST request is made to ... mobile.search.live.com/positionlookupservice_1/service.aspx ... with a few specific headers (agent, content-length, etc) and no body. Once this goes through, the server sends back a 100-Continue response. At this point, the application submits this data (I chopped off the packet header): 00 00 00 01 00 00 00 05 55 54 ........UT 46 2d 38 05 65 6e 2d 55 53 05 65 6e 2d 55 53 01 F-8.en-US.en-US. 06 44 65 76 69 63 65 05 64 75 6d 6d 79 01 06 02 .Device.dummy... 50 4c 08 0e 52 65 76 65 72 73 65 47 65 6f 63 6f PL..ReverseGeoco 64 65 01 07 0b 47 50 53 43 68 69 70 49 6e 66 6f de...GPSChipInfo 01 20 06 09 43 65 6c 6c 54 6f 77 65 72 06 03 43 . ..CellTower..C 47 49 08 03 4d 43 43 b6 02 07 03 4d 4e 43 03 34 GI..MCC....MNC.4 31 30 08 03 4c 41 43 cf 36 08 02 43 49 fd 01 00 10..LAC.6..CI... 00 00 00 ... And receives this in response (packet and HTTP response headers chopped): 00 00 00 01 00 00 00 00 01 06 02 50 4c ...........PL 06 08 4c 6f 63 61 6c 69 74 79 06 08 4c 6f 63 61 ..Locality..Loca 74 69 6f 6e 07 03 4c 61 74 09 34 32 2e 33 37 35 tion..Lat.42.375 36 32 31 07 04 4c 6f 6e 67 0a 2d 37 31 2e 31 35 621..Long.-71.15 38 39 33 38 00 07 06 52 61 64 69 75 73 09 32 30 8938...Radius.20 30 30 2e 30 30 30 30 00 42 07 0c 4c 6f 63 61 6c 00.0000.B..Local 69 74 79 4e 61 6d 65 09 57 61 74 65 72 74 6f 77 ityName.Watertow 6e 07 16 41 64 6d 69 6e 69 73 74 72 61 74 69 76 n..Administrativ 65 41 72 65 61 4e 61 6d 65 0d 4d 61 73 73 61 63 eAreaName.Massac 68 75 73 65 74 74 73 07 10 50 6f 73 74 61 6c 43 husetts..PostalC 6f 64 65 4e 75 6d 62 65 72 05 30 32 34 37 32 07 odeNumber.02472. 0b 43 6f 75 6e 74 72 79 4e 61 6d 65 0d 55 6e 69 .CountryName.Uni 74 65 64 20 53 74 61 74 65 73 00 00 00 ted States... Now, here is what I've determined so far: All strings are prepended with one byte that is the decimal equivalent of their length. There seem to be three different casts that are used throughout the request and response. They show up as one byte before the length byte. I've concluded that the three types map out as follows: 0x06 - parent element (subsequent values are children, closed with 0x00) 0x07 - string 0x08 - int? Based on these determinations, here is what the request and response look like in a more readable manner (values surrounded by brackets denote length and values surrounded by parenthesis denote a cast): \0x00\0x00\0x00\0x01\0x00\0x00\0x00 [5]UTF-8 [5]en-US [5]en-US \0x01 [6]Device [5]dummy \0x01 (6)[2]PL (8)[14]ReverseGeocode\0x01 (7)[11]GPSChipInfo[1]\0x20 (6)[9]CellTower (6)[3]CGI (8)[3]MCC\0xB6\0x02 //310 (7)[3]MNC[3]410 //410 (8)[3]LAC\0xCF\0x36 //6991 (8)[2]CI\0xFD\0x01 //259 \0x00 \0x00 \0x00 \0x00 and.. \0x00\0x00\0x00\0x01\0x00\0x00\0x00 \0x00\0x01 (6)[2]PL (6)[8]Locality (6)[8]Location (7)[3]Lat[9]42.375621 (7)[4]Long[10]-71.158938 \0x00 (7)[6]Radius[9]2000.0000 \0x00 \0x42 //"B" ... 
Has to do with GSM (7)[12]LocalityName[9]Watertown (7)[22]AdministrativeAreaName[13]Massachusetts (7)[16]PostalCodeNumber[5]02472 (7)[11]CountryName[13]United States \0x00 \0x00\0x00 My analysis seems to work out pretty well except for a few things: The 0x01s throughout confuse me ... At first I thought they were some sort of base level element terminators but I'm not certain. I'm not sure the 7-byte header is, in fact, a seven byte header. I wonder if it's maybe 4 bytes and that the three remaining 0x00s are of some other significance. The trailing 0x00s. Why is it that there is only one on the request but two on the response? The type 8 cast mentioned above ... I can't seem to figure out how those values are being encoded. I added comments to those lines with what the values should correspond to. Any advice on these four points will be greatly appreciated. And yes, these packets were captured in Watertown, MA. :)
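    Since the TLV layout is already mostly worked out above, here is a hedged parser sketch (Python) for the tagged portion after the preamble. One extra observation on point 4: if the type-8 values are little-endian base-128 varints (protobuf style), then \xb6\x02 decodes to 310 and \xcf\x36 to 6991, matching the MCC and LAC annotations; \xfd\x01 gives 253 rather than the 259 noted for CI, though, so treat the varint reading as a guess.

        def read_str(buf, pos):
            # strings are one length byte followed by that many bytes
            n = buf[pos]
            return buf[pos + 1 : pos + 1 + n].decode("utf-8"), pos + 1 + n

        def read_varint(buf, pos):
            # little-endian base-128 hypothesis for the type-8 integers
            value, shift = 0, 0
            while True:
                b = buf[pos]
                pos += 1
                value |= (b & 0x7F) << shift
                if not b & 0x80:
                    return value, pos
                shift += 7

        def parse(buf, pos=0):
            elements = []
            while pos < len(buf) and buf[pos] != 0x00:
                tag = buf[pos]
                name, pos = read_str(buf, pos + 1)
                if tag == 0x06:       # parent element: children until a 0x00
                    children, pos = parse(buf, pos)
                    elements.append((name, children))
                elif tag == 0x07:     # length-prefixed string value
                    value, pos = read_str(buf, pos)
                    elements.append((name, value))
                elif tag == 0x08:     # integer value (varint hypothesis)
                    value, pos = read_varint(buf, pos)
                    elements.append((name, value))
                else:
                    raise ValueError("unknown tag 0x%02x" % tag)
            return elements, pos + 1  # consume the closing 0x00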

    Read the article

  • Problem when trying to update "Duplicate sources.list"

    - by Coca Akat
    I got this problem when trying to update using sudo apt-get update:

        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ saucy-backports/multiverse amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy-backports_multiverse_binary-amd64_Packages)
        W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ saucy-backports/multiverse i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy-backports_multiverse_binary-i386_Packages)
        W: You may want to run apt-get update to correct these problems

    This is my sources.list:

        # deb cdrom:[Ubuntu 13.10 _Saucy Salamander_ - Release amd64 (20131016.1)]/ saucy main restricted

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://archive.ubuntu.com/ubuntu saucy main restricted

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://archive.ubuntu.com/ubuntu saucy-updates main restricted

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy universe
        deb http://archive.ubuntu.com/ubuntu saucy-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://archive.ubuntu.com/ubuntu saucy multiverse
        deb http://archive.ubuntu.com/ubuntu saucy-updates multiverse

        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy-backports main restricted universe multiverse

        deb http://archive.ubuntu.com/ubuntu saucy-security main restricted
        deb http://archive.ubuntu.com/ubuntu saucy-security universe
        deb http://archive.ubuntu.com/ubuntu saucy-security multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://extras.ubuntu.com/ubuntu saucy main
        # deb-src http://extras.ubuntu.com/ubuntu saucy main
        # deb http://archive.canonical.com/ saucy partner
        # deb-src http://archive.canonical.com/ saucy partner

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu saucy-backports multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
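    The warnings point at saucy-backports/multiverse being listed twice: once via the saucy-backports line that enables "main restricted universe multiverse", and again via the lone "deb ... saucy-backports multiverse" line near the bottom; deleting the latter should clear them. A small Python sketch that flags any such doubled (uri, suite, component) entries:

        seen = {}
        with open("/etc/apt/sources.list") as f:
            for lineno, line in enumerate(f, 1):
                line = line.strip()
                if not line.startswith("deb "):
                    continue  # skip comments, blanks, and deb-src lines
                _, uri, suite, *components = line.split()
                for comp in components:
                    key = (uri, suite, comp)
                    if key in seen:
                        print(f"line {lineno}: {key} already enabled on line {seen[key]}")
                    else:
                        seen[key] = lineno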

    Read the article

  • WIF-less claim extraction from ACS: JWT

    - by Elton Stoneman
    ACS support for JWT still shows as "beta", but it meets the spec and it works nicely, so it's becoming the preferred option as SWT is losing favour. (Note that currently ACS doesn’t support JWT encryption, if you want encrypted tokens you need to go SAML). In my last post I covered pulling claims from an ACS token without WIF, using the SWT format. The JWT format is a little more complex, but you can still inspect claims just with string manipulation. The incoming token from ACS is still presented in the BinarySecurityToken element of the XML payload, with a TokenType of urn:ietf:params:oauth:token-type:jwt: <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:39:55.337Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:19:55.337Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="_1eeb5cf4-b40b-40f2-89e0-a3343f6bd985-6A15D1EED0CDB0D8FA48C7D566232154" ValueType="urn:ietf:params:oauth:token-type:jwt" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">[ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>urn:ietf:params:oauth:token-type:jwt</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> The token as a whole needs to be base-64 decoded. The decoded value contains a header, payload and signature, dot-separated; the parts are also base-64, but they need to be decoded using a no-padding algorithm (implementation and more details in this MSDN article on validating an Exchange 2013 identity token). The values are then in JSON; the header contains the token type and the hashing algorithm: "{"typ":"JWT","alg":"HS256"}" The payload contains the same data as in the SWT, but JSON rather than querystring format: {"aud":"http://localhost/x.y.z" "iss":"https://adfstest-bhw.accesscontrol.windows.net/" "nbf":1346398795 "exp":1346404795 "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant":"2012-08-31T07:39:53.652Z" "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows" "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname":"xyz" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":"[email protected]" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"[email protected]" "identityprovider":"http://fs.svc.x.y.z.com/adfs/services/trust"} The signature is in the third part of the token. Unlike SWT which is fixed to HMAC-SHA-256, JWT can support other protocols (the one in use is specified as the "alg" value in the header). 
How to: Validate an Exchange 2013 identity token contains an implementation of a JWT parser and validator; apart from the custom base-64 decoding part, it’s very similar to SWT extraction. I've wrapped the basic SWT and JWT in a ClaimInspector.aspx page on gitHub here: SWT and JWT claim inspector. You can drop it into any ASP.Net site and set the URL to be your redirect page in ACS. Swap ACS to issue SWT or JWT, and using the same page you can inspect the claims that come out.
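    A sketch of the "no padding" decode step the post refers to, reduced to the essentials in Python (signature verification deliberately omitted): JWT segments are base64url with the trailing "=" padding stripped, so pad back to a multiple of four before decoding.

        import base64
        import json

        def jwt_parts(token):
            """Split a JWT and decode its header and claims, without verifying the signature."""
            header_b64, payload_b64, _signature = token.split(".")

            def decode(seg):
                seg += "=" * (-len(seg) % 4)  # restore the stripped base64url padding
                return json.loads(base64.urlsafe_b64decode(seg))

            return decode(header_b64), decode(payload_b64)

        # header, claims = jwt_parts(jwt_string)
        # where jwt_string is the base64-decoded content of the BinarySecurityToken element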

    Read the article

  • Linux, some packets are not being NAT

    - by user70932
    Hi, I'm trying to NAT HTTP traffic. I'm new to this and facing some issues. What I'm trying to do is NAT client HTTP requests to a webserver: CLIENT - NAT BOX - WEBSERVER. When the client opens the IP of the NAT BOX, the request should be passed to the web server. But I'm getting "HTTP request sent, awaiting response..." and it then waits several minutes before the request completes. Looking at the tcpdump output, it looks like the first SYN packet at (10:48:54) is NATed but not the second, third, fourth... ACK or PSH packets, and only at (10:52:04) does it start NATing again on the ACK packet. The iptables command I'm using is:

        iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 \
            -j DNAT --to-destination WEBSERVER

    I'm wondering what could cause this behavior. Thanks a lot.

        10:48:54.907861 IP (tos 0x0, ttl 49, id 16395, offset 0, flags [DF], proto: TCP (6), length: 48) CLIENT.61736 > NATBOX.http: S, cksum 0x6019 (correct), 1589600740:1589600740(0) win 5840 <mss 1460,nop,wscale 8>
        10:48:54.907874 IP (tos 0x0, ttl 48, id 16395, offset 0, flags [DF], proto: TCP (6), length: 48) CLIENT.61736 > WEBSERVER.http: S, cksum 0xb5d7 (correct), 1589600740:1589600740(0) win 5840 <mss 1460,nop,wscale 8>
        10:48:55.102696 IP (tos 0x0, ttl 49, id 16397, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x2727 (correct), ack 2950613896 win 23
        10:48:55.102963 IP (tos 0x0, ttl 49, id 16399, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23
        10:48:58.103078 IP (tos 0x0, ttl 49, id 16401, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23
        10:48:58.366344 IP (tos 0x0, ttl 49, id 16403, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23
        10:49:04.103204 IP (tos 0x0, ttl 49, id 16405, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23
        10:49:04.363943 IP (tos 0x0, ttl 49, id 16407, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23
        10:49:16.101583 IP (tos 0x0, ttl 49, id 16409, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23
        10:49:16.363475 IP (tos 0x0, ttl 49, id 16411, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23
        10:49:40.100796 IP (tos 0x0, ttl 49, id 16413, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23
        10:49:40.563898 IP (tos 0x0, ttl 49, id 16415, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23
        10:50:28.099396 IP (tos 0x0, ttl 49, id 16417, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23
        10:50:28.761678 IP (tos 0x0, ttl 49, id 16419, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23
        10:52:04.093668 IP (tos 0x0, ttl 49, id 16421, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23
        10:52:04.093678 IP (tos 0x0, ttl 48, id 16421, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > WEBSERVER.http: P 1589600741:1589600861(120) ack 2950613896 win 23
        10:52:04.291021 IP (tos 0x0, ttl 49, id 16423, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x25d3 (correct), ack 217 win 27
        10:52:04.291028 IP (tos 0x0, ttl 48, id 16423, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > WEBSERVER.http: ., cksum 0x7b91 (correct), ack 217 win 27
        10:52:04.300708 IP (tos 0x0, ttl 49, id 16425, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x253c (correct), ack 368 win 27
        10:52:04.300714 IP (tos 0x0, ttl 48, id 16425, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > WEBSERVER.http: ., cksum 0x7afa (correct), ack 368 win 27
        10:52:04.301417 IP (tos 0x0, ttl 49, id 16427, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: F, cksum 0x253b (correct), 120:120(0) ack 368 win 27
        10:52:04.301438 IP (tos 0x0, ttl 48, id 16427, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > WEBSERVER.http: F, cksum 0x7af9 (correct), 120:120(0) ack 368 win 27
        10:52:04.498875 IP (tos 0x0, ttl 49, id 16429, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x253a (correct), ack 369 win 27
        10:52:04.498881 IP (tos 0x0, ttl 48, id 16429, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > WEBSERVER.http: ., cksum 0x7af8 (correct), ack 369 win 27

    Read the article

  • Error data: line 2 column 1 when using pycurl with gzip stream

    - by Sagar Hatekar
    Thanks for reading. Background: I am trying to read a streaming API feed that returns data in JSON format, and then storing this data to a pymongo collection. The streaming API requires a "Accept-Encoding" : "Gzip" header. What's happening: Code fails on json.loads and outputs - Extra data: line 2 column 1 - line 4 column 1 (char 1891 - 5597) (Refer Error Log below) This does NOT happen while parsing every JSON object - it happens at random. My guess is I am encountering some weird JSON object after every "x" proper JSON objects. I did reference how to use pycurl if requested data is sometimes gzipped, sometimes not? and Encoding error while deserializing a json object from Google but so far have been unsuccessful at resolving this error. Could someone please help me out here? Error Log: Note: The raw dump of the JSON object below is basically using the repr() method. '{"id":"tag:search.twitter.com,2005:207958320747782146","objectType":"activity","actor":{"objectType":"person","id":"id:twitter.com:493653150","link":"http://www.twitter.com/Deathnews_7_24","displayName":"Death News 7/24","postedTime":"2012-02-16T01:30:12.000Z","image":"http://a0.twimg.com/profile_images/1834408513/deathnewstwittersquare_normal.jpg","summary":"Crashes, Murders, Suicides, Accidents, Crime and Naturals Death News From All Around World","links":[{"href":"http://www.facebook.com/DeathNews724","rel":"me"}],"friendsCount":56,"followersCount":14,"listedCount":1,"statusesCount":1029,"twitterTimeZone":null,"utcOffset":null,"preferredUsername":"Deathnews_7_24","languages":["tr"]},"verb":"post","postedTime":"2012-05-30T22:15:02.000Z","generator":{"displayName":"web","link":"http://twitter.com"},"provider":{"objectType":"service","displayName":"Twitter","link":"http://www.twitter.com"},"link":"http://twitter.com/Deathnews_7_24/statuses/207958320747782146","body":"Kathi Kamen Goldmark, Writers\xe2\x80\x99 Catalyst, Dies at 63 http://t.co/WBsNlNtA","object":{"objectType":"note","id":"object:search.twitter.com,2005:207958320747782146","summary":"Kathi Kamen Goldmark, Writers\xe2\x80\x99 Catalyst, Dies at 63 http://t.co/WBsNlNtA","link":"http://twitter.com/Deathnews_7_24/statuses/207958320747782146","postedTime":"2012-05-30T22:15:02.000Z"},"twitter_entities":{"urls":[{"display_url":"nytimes.com/2012/05/30/boo\xe2\x80\xa6","indices":[52,72],"expanded_url":"http://www.nytimes.com/2012/05/30/books/kathi-kamen-goldmark-writers-catalyst-dies-at-63.html","url":"http://t.co/WBsNlNtA"}],"hashtags":[],"user_mentions":[]},"gnip":{"language":{"value":"en"},"matching_rules":[{"value":"url_contains: nytimes.com","tag":null}],"klout_score":11,"urls":[{"url":"http://t.co/WBsNlNtA","expanded_url":"http://www.nytimes.com/2012/05/30/books/kathi-kamen-goldmark-writers-catalyst-dies-at-63.html?_r=1"}]}}\r\n{"id":"tag:search.twitter.com,2005:207958321003638785","objectType":"activity","actor":{"objectType":"person","id":"id:twitter.com:178760897","link":"http://www.twitter.com/Mobanu","displayName":"Donald Ochs","postedTime":"2010-08-15T16:33:56.000Z","image":"http://a0.twimg.com/profile_images/1493224811/small_mobany_Logo_normal.jpg","summary":"","links":[{"href":"http://www.mobanuweightloss.com","rel":"me"}],"friendsCount":10272,"followersCount":9698,"listedCount":30,"statusesCount":725,"twitterTimeZone":"Mountain Time (US & Canada)","utcOffset":"-25200","preferredUsername":"Mobanu","languages":["en"],"location":{"objectType":"place","displayName":"Crested Butte, 
Colorado"}},"verb":"post","postedTime":"2012-05-30T22:15:02.000Z","generator":{"displayName":"twitterfeed","link":"http://twitterfeed.com"},"provider":{"objectType":"service","displayName":"Twitter","link":"http://www.twitter.com"},"link":"http://twitter.com/Mobanu/statuses/207958321003638785","body":"Mobanu: Can Exercise Be Bad for You?: Researchers have found evidence that some people who exercise do worse on ... http://t.co/mTsQlNQO","object":{"objectType":"note","id":"object:search.twitter.com,2005:207958321003638785","summary":"Mobanu: Can Exercise Be Bad for You?: Researchers have found evidence that some people who exercise do worse on ... http://t.co/mTsQlNQO","link":"http://twitter.com/Mobanu/statuses/207958321003638785","postedTime":"2012-05-30T22:15:02.000Z"},"twitter_entities":{"urls":[{"display_url":"nyti.ms/KUmmMa","indices":[116,136],"expanded_url":"http://nyti.ms/KUmmMa","url":"http://t.co/mTsQlNQO"}],"hashtags":[],"user_mentions":[]},"gnip":{"language":{"value":"en"},"matching_rules":[{"value":"url_contains: nytimes.com","tag":null}],"klout_score":12,"urls":[{"url":"http://t.co/mTsQlNQO","expanded_url":"http://well.blogs.nytimes.com/2012/05/30/can-exercise-be-bad-for-you/?utm_medium=twitter&utm_source=twitterfeed"}]}}\r\n' json exception: Extra data: line 2 column 1 - line 4 column 1 (char 1891 - 5597) Header Output: HTTP/1.1 200 OK Content-Type: application/json; charset=UTF-8 Vary: Accept-Encoding Date: Wed, 30 May 2012 22:14:48 UTC Connection: close Transfer-Encoding: chunked Content-Encoding: gzip get_stream.py: #!/usr/bin/env python import sys import pycurl import json import pymongo STREAM_URL = "https://stream.test.com:443/accounts/publishers/twitter/streams/track/Dev.json" AUTH = "userid:passwd" DB_HOST = "127.0.0.1" DB_NAME = "stream_test" class StreamReader: def __init__(self): try: self.count = 0 self.buff = "" self.mongo = pymongo.Connection(DB_HOST) self.db = self.mongo[DB_NAME] self.raw_tweets = self.db["raw_tweets_gnip"] self.conn = pycurl.Curl() self.conn.setopt(pycurl.ENCODING, 'gzip') self.conn.setopt(pycurl.URL, STREAM_URL) self.conn.setopt(pycurl.USERPWD, AUTH) self.conn.setopt(pycurl.WRITEFUNCTION, self.on_receive) self.conn.setopt(pycurl.HEADERFUNCTION, self.header_rcvd) while True: self.conn.perform() except Exception as ex: print "error ocurred : %s" % str(ex) def header_rcvd(self, header_data): print header_data def on_receive(self, data): temp_data = data self.buff += data if data.endswith("\r\n") and self.buff.strip(): try: tweet = json.loads(self.buff, encoding = 'UTF-8') self.buff = "" if tweet: try: self.raw_tweets.insert(tweet) except Exception as insert_ex: print "Error inserting tweet: %s" % str(insert_ex) self.count += 1 if self.count % 10 == 0: print "inserted "+str(self.count)+" tweets" except Exception as json_ex: print "json exception: %s" % str(json_ex) print repr(temp_data) stream = StreamReader()

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats, let's review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream, and if a filter is present the data is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.OutputStream. With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name this code will work with any ASP.NET output).

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT:
        /// You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
            }
        }

    As you can see, the actual assignment of the Filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).

    To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();

            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncode():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy.

    Hold on there Hoss – Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don't support GZip out of the box – it has to be explicitly implemented. Other low level HTTP clients on other platforms too don't support GZip out of the box. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any Proxy servers also check for the Content-Encoding HTTP Header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET's cache or in some cases by the IIS Kernel Cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode and she'll get a page full of garbage. Wholly undesirable.

    To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override Caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];
                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";
                return "";
            }
            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses output caching, you then specify:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    to use that custom rule.

    It's all Fun and Games until ASP.NET throws an Error

    Ok, so you're up and running with GZip, you have your caching squared away and your pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that looks like this: Lovely isn't it? What's happened here is that I have WebUtils.GZipEncode() applied to my page, but there's an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I've set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here's what Fiddler says about the raw HTTP header output when an error occurs when GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close

        ?`I?%&/m?{J?J??t??` … binary output stripped here

    Notice: no Content-Encoding header, and that's why we're seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I've been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering especially GZip filtering
            Response.Filter = null;
            …
        }

    And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.

    Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // ensure that if GZip/Deflate Encoding is applied that headers are set
            // also works when error occurs if filters are still active
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared, and this code actually handles that. Sweet, thanks Adam. It's unfortunate that ASP.NET doesn't natively clear out Response.Filters when an error occurs, just as it clears the Response and Headers. I can't see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it's easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by.

    Related Content
    Built-in GZip/Deflate Compression in IIS 7.x
    HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7
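    The same negotiation pattern, restated as a hedged Python sketch (not ASP.NET, just to isolate the three moving parts the article keeps returning to: check the client's Accept-Encoding, mark the response with Content-Encoding, and set Vary so caches keep encoded and unencoded copies apart):

        import gzip

        def negotiate_gzip(environ, body, headers):
            # only compress when the request advertises gzip support
            if "gzip" in environ.get("HTTP_ACCEPT_ENCODING", ""):
                body = gzip.compress(body)
                headers.append(("Content-Encoding", "gzip"))
            # tell caches the representation depends on the request's Accept-Encoding
            headers.append(("Vary", "Accept-Encoding"))
            return body, headers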

    Read the article

  • Sharepoint list fields stored in content database?

    - by nav
    Hi, I am trying to find out where data for SharePoint list fields is stored in the content database. For example, from the AllLists table, filtering on the list ID I am interested in, I can derive the following from the tp_ContentTypes column for the field I'm interested in:

        <Field Type="CascadingDropDownListFieldWithFilter" DisplayName="Secondary Subject" Required="FALSE" ID="{b4143ff9-d5a4-468f-8793-7bb2f06b02a0}" SourceID="{6c1e9bbf-4f02-49fd-8e6c-87dd9f26158a}" StaticName="Secondary_x0020_Subject" Name="Secondary_x0020_Subject" ColName="nvarchar13" RowOrdinal="0" Version="1">
          <Customization><ArrayOfProperty>
            <Property><Name>SiteUrl</Name><Value xmlns:q1="http://www.w3.org/2001/XMLSchema" p4:type="q1:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">http://lvis.amberlnk.net/knowhow</Value></Property>
            <Property><Name>CddlName</Name><Value xmlns:q2="http://www.w3.org/2001/XMLSchema" p4:type="q2:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">SS</Value></Property>
            <Property><Name>CddlParentName</Name><Value xmlns:q3="http://www.w3.org/2001/XMLSchema" p4:type="q3:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">PS</Value></Property>
            <Property><Name>CddlChildName</Name><Value xmlns:q4="http://www.w3.org/2001/XMLSchema" p4:type="q4:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance"/></Property>
            <Property><Name>ListName</Name><Value xmlns:q5="http://www.w3.org/2001/XMLSchema" p4:type="q5:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Secondary</Value></Property>
            <Property><Name>ListTextField</Name><Value xmlns:q6="http://www.w3.org/2001/XMLSchema" p4:type="q6:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Title</Value></Property>
            <Property><Name>ListValueField</Name><Value xmlns:q7="http://www.w3.org/2001/XMLSchema" p4:type="q7:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Title</Value></Property>
            <Property><Name>JoinField</Name><Value xmlns:q8="http://www.w3.org/2001/XMLSchema" p4:type="q8:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Primary</Value></Property>
            <Property><Name>FilterField</Name><Value xmlns:q9="http://www.w3.org/2001/XMLSchema" p4:type="q9:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Content Type ID</Value></Property>
            <Property><Name>FilterOperator</Name><Value xmlns:q10="http://www.w3.org/2001/XMLSchema" p4:type="q10:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Show All</Value></Property>
            <Property><Name>FilterValue</Name><Value xmlns:q11="http://www.w3.org/2001/XMLSchema" p4:type="q11:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance"/></Property>
          </ArrayOfProperty></Customization>
        </Field>

    Which table do I need to query to find the data held in this field? Many thanks, Nav

    Read the article

  • Usefulness of packets in wireshark? SSDP protocol, rather than HTTP?

    - by Chris
    I used to be able to filter my Wireshark packets to get useful information from them. However, with my current configuration on OS X, all of the HTTP traffic is coming through as the SSDP protocol and is generally unhelpful. Why is this? Actually, it seems that packets on my own system that should be HTTP are coming through as HTTP, but packets from other machines that should be HTTP are coming through as this protocol.

    Read the article

  • Configure VirtualHost to Rewrite HTTP://subdomain... to HTTPS://internaldirectory

    - by David Kaczynski
    How do I configure Apache to rewrite an http request for a subdomain to an https request for the correct directory? For example, I have the following VirtualHost configuration: However, this turns http://redmine...us into https://redmine...us/redmine. Also, changing RewriteRule ^(.*)$ https://%{HTTP_HOST}/redmine [R] to RewriteRule ^(.*)$ https://%{HTTP_HOST} [R] seems to simply redirect the HTTP request to HTTP://...us, which is currently the default /var/www/index.html page. Any suggestions?

    Read the article

  • How to redirect Cisco IOS's show output to HTTP URL?

    - by yegle
    I found there's a redirect output modifier in Cisco IOS (version 12.2(53)SE1), and it has http: URI support:

        #sh version | redirect ?
          flash:  Uniform Resource Locator
          ftp:    Uniform Resource Locator
          http:   Uniform Resource Locator
          https:  Uniform Resource Locator
          nvram:  Uniform Resource Locator
          rcp:    Uniform Resource Locator
          scp:    Uniform Resource Locator
          tftp:   Uniform Resource Locator

    However, I cannot find any document on cisco.com about the http support. I tried sh version | redirect http://my_server/ and cannot find any information in my_server's access log. Can anyone give me a hint?
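    One way to see whether the device sends anything at all: run a bare-bones listener on my_server and point the redirect at it. A sketch, assuming Python is available on my_server and port 80 is free; since the HTTP method IOS uses here isn't documented, this logs any POST or PUT that arrives:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class Dump(BaseHTTPRequestHandler):
            def _dump(self):
                # print the method, path, and whatever body the device uploads
                length = int(self.headers.get("Content-Length", 0))
                print(self.command, self.path)
                print(self.rfile.read(length).decode(errors="replace"))
                self.send_response(200)
                self.end_headers()
            do_POST = do_PUT = _dump

        HTTPServer(("", 80), Dump).serve_forever()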

    Read the article

  • Apache2 graceful restart stops proxying requests to passenger

    - by Rob
    I have an issue with Apache mod_proxy: it stops proxying requests after a graceful restart, though not every time. It seems to happen only on Sundays, when a graceful restart is triggered by logrotate.

    [Sun Sep 9 05:25:06 2012] [notice] SIGUSR1 received. Doing graceful restart
    [Sun Sep 9 05:25:06 2012] [notice] Apache/2.2.22 (Ubuntu) Phusion_Passenger/3.0.11 configured -- resuming normal operations
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(492) failed in child 26153 for worker proxy:reverse
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(486) failed in child 26153 for worker http://api.myservice.org/api
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(487) failed in child 26153 for worker http://api.myservice.org/editor/$1
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(489) failed in child 26153 for worker http://api.myservice.org/build
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(490) failed in child 26153 for worker http://api.myservice.org/help
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(491) failed in child 26153 for worker http://api.myservice.org/motd.html
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(480) failed in child 26153 for worker http://api.myservice.org/api
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(481) failed in child 26153 for worker http://api.myservice.org/editor/$1
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(483) failed in child 26153 for worker http://api.myservice.org/build
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(484) failed in child 26153 for worker http://api.myservice.org/help
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(485) failed in child 26153 for worker http://api.myservice.org/motd.html
    [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(479) failed in child 26153 for worker http://api.myservice.org/motd.html

    After these lines, the logs are flooded with 404s because the requests are no longer being proxied. It's worth noting that the destination is just another vhost on the same Apache instance, but that vhost (http://api.myservice.org) is served by Passenger (mod_rails). Could there be a startup issue, with the Passenger workers not yet ready during a graceful restart? A full restart resolves it and everything returns to normal.

    Edit: here's the vhost config, thanks :)

    <VirtualHost *:80>
        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon

        <Directory /var/www/vhosts>
            RewriteEngine on
            AllowOverride All
        </Directory>

        RewriteEngine on
        RewriteCond /var/www/vhosts/%{SERVER_NAME} !-d
        RewriteCond /var/www/vhosts/%{SERVER_NAME} !-l
        RewriteRule ^ http://sitenotfound.myservice.org/ [R=302,L]

        VirtualDocumentRoot /var/www/vhosts/%0/current

        # Rewrite requests to /assets to map to /var/file-store/<SERVER_NAME>/
        RewriteMap lowercase int:tolower
        RewriteCond %{REQUEST_URI} ^/assets/
        RewriteRule ^/assets/(.*)$ /var/file-store/${lowercase:%{SERVER_NAME}}/$1

        # Map /login to /editor.html as it's far friendlier.
        RewriteCond %{REQUEST_URI} ^/login
        RewriteRule .* /editor.html [PT]

        # Forward some requests to the API
        ProxyPass /api http://api.myservice.org/api
        ProxyPass /site.json http://api.myservice.org/api/editor/site
        ProxyPassMatch ^/editor/(.*)$ http://api.myservice.org/editor/$1
        ProxyPassMatch ^/api/(.*) http://api.myservice.org/api/$1
        ProxyPass /build http://api.myservice.org/build
        ProxyPass /help http://api.myservice.org/help
        ProxyPass /motd.html http://api.myservice.org/motd.html

        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>

        # TODO generate slightly more specific Error Documents for 401/403/500's,
        # but for now the 404 page is good enough
        ErrorDocument 401 /404.html
        ErrorDocument 403 /404.html
        ErrorDocument 404 /404.html
        ErrorDocument 500 /404.html
    </VirtualHost>
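    Since the errors only follow the Sunday logrotate run, it may help to look at the job that triggers them. The stock Ubuntu apache2 logrotate file looks roughly like the sketch below (paths and the exact reload command are assumptions, not verbatim — check /etc/logrotate.d/apache2 on the affected box). Swapping the graceful reload in postrotate for a full restart is one way to test the worker-startup theory:

    # /etc/logrotate.d/apache2 (sketch of the typical stock file, not verbatim)
    /var/log/apache2/*.log {
            weekly
            rotate 52
            compress
            delaycompress
            notifempty
            sharedscripts
            postrotate
                    # the stock file issues a graceful restart here; replacing it
                    # with "service apache2 restart" (a full restart) would show
                    # whether graceful restarts are what break mod_proxy
                    /etc/init.d/apache2 reload > /dev/null
            endscript
    }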

    Read the article

  • High Profile ASP.NET websites

    - by nandos
    About twice a month I get asked to justify "Why are we using ASP.NET and not PHP, or Java, or buzz-word-of-the-month?" The question always comes from people who understand nothing about technology — people who would not know the difference between FTP and HTTP. The best approach I have found so far, without getting into technical details, is to say "website XXX uses it", to which I get back "Oh... I did not know that, so ASP.NET must be good." I know, I know, it hurts. But it works. So, without getting into the merits of ASP.NET itself (which could trigger an endless argument for other platforms), I'm trying to compile a list of high-profile websites implemented in ASP.NET. (No, they would have no idea what Stack Overflow is.) Can you name a high-profile website implemented in ASP.NET?

    EDIT: Current list (thanks for all the responses; trying to avoid tech sites and prioritizing retail sites):
    Costco - http://www.costco.com/
    Crate & Barrel - http://www.crateandbarrel.com/
    Home Shopping Network - http://www.hsn.com/
    Buy.com - http://www.buy.com/
    Dell - http://www.dell.com
    Nasdaq - http://www.nasdaq.com/
    Virgin - http://www.virgin.com/
    7-Eleven - http://www.7-eleven.com/
    Carnival Cruise Lines - http://www.carnival.com/
    L'Oreal - http://www.loreal.com/
    The White House - http://www.whitehouse.gov/
    Remax - http://www.remax.com/
    Monster Jobs - http://www.monster.com/
    USA Today - http://www.usatoday.com/
    ComputerJobs.com - http://computerjobs.com/
    Match.com - http://www.match.com
    National Health Service (UK) - http://www.nhs.uk/
    CareerBuilder.com - http://www.careerbuilder.com/

    Read the article

  • getting base url of web site's root (absolute/relative url)

    - by uzay95
    I want to fully understand how to use relative and absolute URL addresses in static and dynamic files.

    ~  : (in ASP.NET) refers to the application root
    .. : in a relative URL, indicates the parent directory
    .  : refers to the current directory
    /  : always replaces the entire pathname of the base URL
    // : always replaces everything from the hostname onwards

    These rules are easy when you are not working under a virtual directory, but I am.

    Relative URI            Absolute URI
    about.html              http://WebReference.com/html/about.html
    tutorial1/              http://WebReference.com/html/tutorial1/
    tutorial1/2.html        http://WebReference.com/html/tutorial1/2.html
    /                       http://WebReference.com/
    //www.internet.com/     http://www.internet.com/
    /experts/               http://WebReference.com/experts/
    ../                     http://WebReference.com/
    ../experts/             http://WebReference.com/experts/
    ../../../               http://WebReference.com/
    ./                      http://WebReference.com/html/
    ./about.html            http://WebReference.com/html/about.html

    I want to simulate a site like my project, which runs under a virtual directory. These are my aspx and ascx folders:
    http://hostAddress:port/virtualDirectory/MainSite/ASPX/default.aspx
    http://hostAddress:port/virtualDirectory/MainSite/ASCX/UserCtrl/login.ascx
    http://hostAddress:port/virtualDirectory/AdminSite/ASPX/ASCX/default.aspx

    These are my JS files (used by both the aspx and ascx files):
    http://hostAddress:port/virtualDirectory/MainSite/JavascriptFolder/jsFile.js
    http://hostAddress:port/virtualDirectory/AdminSite/JavascriptFolder/jsFile.js

    This is my static web page address (I want to show some pictures and run some JS functions in it):
    http://hostAddress:port/virtualDirectory/HTMLFiles/page.html

    This is my images folder:
    http://hostAddress:port/virtualDirectory/Images/PNG/arrow.png
    http://hostAddress:port/virtualDirectory/Images/GIF/arrow.png

    If I want to write an image link in my ASPX file, I should write: aspxImgCtrl.ImageUrl = Server.MapPath("~")+"/Images/GIF/arrow.png"; But if I want to write the path hard-coded, or from a JavaScript file, what kind of URL address should it be?
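    For what it's worth, the resolution rules in the table above can be checked mechanically. Here is a small sketch using Python's urllib.parse.urljoin, which implements the standard RFC 3986 resolution algorithm; the base URL matches the one the table assumes:

    # sketch: verify the resolution table with urllib.parse.urljoin (Python 3)
    from urllib.parse import urljoin

    base = "http://WebReference.com/html/"   # the base URL used by the table

    for rel in ["about.html", "tutorial1/2.html", "/", "//www.internet.com/",
                "/experts/", "../", "../experts/", "./", "./about.html"]:
        print("%-22s -> %s" % (rel, urljoin(base, rel)))

    # about.html             -> http://WebReference.com/html/about.html
    # /                      -> http://WebReference.com/
    # //www.internet.com/    -> http://www.internet.com/
    # ../experts/            -> http://WebReference.com/experts/

    On the ASP.NET side, note that Server.MapPath returns a filesystem path, not a URL; the ~ prefix in a URL property is normally expanded with ResolveUrl or VirtualPathUtility.ToAbsolute instead, which may be relevant to the snippet above.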

    Read the article

  • can't load data (Google App Engine) from http://localhost:8100/remote_api

    - by zjm1126
    I can download data from GAE (http://zjm1126.appspot.com/remote_api). This is the command:

    appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv

    and it succeeds:

    D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv
    Downloading data records.
    [INFO    ] Logging to bulkloader-log-20100618.162421
    [INFO    ] Throttling transfers:
    [INFO    ] Bandwidth: 250000 bytes/second
    [INFO    ] HTTP connections: 8/second
    [INFO    ] Entities inserted/fetched/modified: 20/second
    [INFO    ] Batch Size: 10
    [INFO    ] Opening database: bulkloader-progress-20100618.162421.sql3
    [INFO    ] Opening database: bulkloader-results-20100618.162421.sql3
    [INFO    ] Connecting to zjm1126.appspot.com/remote_api
    Please enter login credentials for zjm1126.appspot.com
    Email: [email protected]
    Password for [email protected]:
    [INFO    ] Downloading kinds: [u'LogText', u'Greeting', u'Forum', u'Thread']
    ....
    [INFO    ] Have 0 entities, 0 previously transferred
    [INFO    ] 0 entities (8804 bytes) transferred in 11.3 seconds

    Now I want to know whether I can download data from 127.0.0.1. This is my command:

    appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv

    and the error is:

    D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv
    Downloading data records.
    [INFO    ] Logging to bulkloader-log-20100618.162325
    [INFO    ] Throttling transfers:
    [INFO    ] Bandwidth: 250000 bytes/second
    [INFO    ] HTTP connections: 8/second
    [INFO    ] Entities inserted/fetched/modified: 20/second
    [INFO    ] Batch Size: 10
    [INFO    ] Opening database: bulkloader-progress-20100618.162325.sql3
    [INFO    ] Opening database: bulkloader-results-20100618.162325.sql3
    Please enter login credentials for localhost
    Email: [email protected]
    Password for [email protected]:
    [INFO    ] Connecting to localhost:8100/remote_api
    [ERROR   ] Exception during authentication
    Traceback (most recent call last):
      File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 3169, in Run
        self.request_manager.Authenticate()
      File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1178, in Authenticate
        remote_api_stub.MaybeInvokeAuthentication()
      File "d:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 542, in MaybeInvokeAuthentication
        datastore_stub._server.Send(datastore_stub._path, payload=None)
      File "d:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 346, in Send
        f = self.opener.open(req)
      File "D:\Python25\lib\urllib2.py", line 387, in open
        response = meth(req, response)
      File "D:\Python25\lib\urllib2.py", line 498, in http_response
        'http', request, response, code, msg, hdrs)
      File "D:\Python25\lib\urllib2.py", line 425, in error
        return self._call_chain(*args)
      File "D:\Python25\lib\urllib2.py", line 360, in _call_chain
        result = func(*args)
      File "D:\Python25\lib\urllib2.py", line 506, in http_error_default
        raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
    HTTPError: HTTP Error 404: Not Found
    [INFO    ] Authentication Failed

    So what should I do? Thanks.
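    An HTTP 404 on /remote_api from the dev server usually means the handler simply isn't mapped in the app.yaml the local server is running (appspot.com can enable it server-side, but the dev server serves only what app.yaml declares). A minimal sketch of the classic mapping for that SDK generation — the script path is the stock handler shipped with the Python SDK, but verify it against your SDK version:

    # in app.yaml, under handlers: (sketch; place before any catch-all route)
    - url: /remote_api
      script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
      login: admin

    After restarting the dev server, the login prompt for localhost typically accepts any admin email with an empty password, though treat that as an assumption to verify for your SDK release.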

    Read the article

  • OpenCV install problems on Studio 12.04 - broken dependencies

    - by Will
    I'm trying to follow the Ubuntu OpenCV documentation at OpenCV. The provided script has a line which executed for some time, removing more packages than I expected (such as ubuntu-studio video):

    sudo apt-get -qq remove ffmpeg x264 libx264-dev

    When the script gets to the line below, it bombs:

    sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils ffmpeg

    The error message is:

    E: Unable to correct problems, you have held broken packages.

    I've since run Update-Manager, run sudo apt-get update, rebooted, and tried the above script line manually — still no change. I then ran sudo apt-get install -f and nothing seemed to change. It did mention that some packages were no longer needed and could be removed by apt-get autoremove, so I ran that. It removed a number of packages, so I reran the install command above. Same "held broken packages" problem.

    I then ran sudo apt-get -u dist-upgrade. Part of the response was:

    The following packages have been kept back:
      gstreamer0.10-ffmpeg

    I'm not sure what that means. I do know that the package shows up in my Update-Manager and cannot be checked. I then ran sudo dpkg --configure -a, then reran sudo apt-get -f install; the package was still not upgraded, though there was this very interesting comment:

    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
    The following information may help to resolve the situation:
    The following packages have unmet dependencies:
      gstreamer0.10-ffmpeg : Depends: libavcodec53 (< 5:0) but it is not going to be installed or
                             libavcodec-extra-53 (< 5:0) but 5:0.7.2-1ubuntu1+codecs1~oneiric2 is to be installed
    E: Unable to correct problems, you have held broken packages.

    Then sudo apt-get -u dist-upgrade showed I had one held package, so I ran:

    sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade

    It also exited without upgrading the package, so I ran:

    sudo apt-get remove --dry-run gstreamer0.10-ffmpeg:i386

    which gave me:

    The following packages will be REMOVED:
      arista gstreamer0.10-ffmpeg
    0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
    Remv arista [0.9.7-3ubuntu1]
    Remv gstreamer0.10-ffmpeg [0.10.12-1ubuntu1]

    But when I reran sudo apt-get -u dist-upgrade, the package was still there:

    The following packages have been kept back:
      gstreamer0.10-ffmpeg
    0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

    Update: I went into Synaptic Package Manager and completely removed gstreamer0.10-ffmpeg, then reran sudo apt-get -u dist-upgrade and was told:

    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    However, when I ran the original apt-get line to install OpenCV (the first command at the top of this question), it still gave me the same broken-package errors.
    So I tried:

    $ cat /etc/apt/sources.list
    #
    # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe
    # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe
    # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
    # newer versions of the distribution.
    deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted
    deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted
    ## Major bug fix updates produced after the final release of the
    ## distribution.
    deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted
    deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team. Also, please note that software in universe WILL NOT receive any
    ## review or updates from the Ubuntu security team.
    deb http://us.archive.ubuntu.com/ubuntu/ precise universe
    deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
    deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
    deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team, and may not be under a free licence. Please satisfy yourself as to
    ## your rights to use the software. Also, please note that software in
    ## multiverse WILL NOT receive any review or updates from the Ubuntu
    ## security team.
    deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse
    deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse
    deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse
    deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse
    ## N.B. software from this repository may not have been tested as
    ## extensively as that contained in the main release, although it includes
    ## newer versions of some applications which may provide useful features.
    ## Also, please note that software in backports WILL NOT receive any review
    ## or updates from the Ubuntu security team.
    deb http://security.ubuntu.com/ubuntu precise-security main restricted
    deb-src http://security.ubuntu.com/ubuntu precise-security main restricted
    deb http://security.ubuntu.com/ubuntu precise-security universe
    deb-src http://security.ubuntu.com/ubuntu precise-security universe
    deb http://security.ubuntu.com/ubuntu precise-security multiverse
    deb-src http://security.ubuntu.com/ubuntu precise-security multiverse
    ## Uncomment the following two lines to add software from Canonical's
    ## 'partner' repository.
    ## This software is not part of Ubuntu, but is offered by Canonical and the
    ## respective vendors as a service to Ubuntu users.
    deb http://archive.canonical.com/ubuntu precise partner
    # deb-src http://archive.canonical.com/ubuntu oneiric partner
    ## Uncomment the following two lines to add software from Ubuntu's
    ## 'extras' repository.
    ## This software is not part of Ubuntu, but is offered by third-party
    ## developers who want to ship their latest software.
    # deb http://extras.ubuntu.com/ubuntu oneiric main
    # deb-src http://extras.ubuntu.com/ubuntu oneiric main
    # deb http://download.opensuse.org/repositories/home:/popinet/xUbuntu_11.04 ./ # disabled on upgrade to precise

    and then:

    $ cat /etc/apt/sources.list.d/*

    But I don't have enough reputation to post those results here (it says I need at least 10 reputation points to post more than 2 links), so I don't know how to provide the requested feedback.
    Then I tried:

    $ sudo apt-get check
    [sudo] password for <abcd>:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done

    However, there's still no resolution of the problem. What else do I need to do? Would an upgrade to Ubuntu Studio 13.xx solve this problem (or compound it)?
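    Since the unmet-dependency message names libavcodec53 versus libavcodec-extra-53 explicitly, a few diagnostic commands may help surface which candidate versions are in conflict. This is a sketch, not a guaranteed fix; aptitude is an assumption and may need installing first:

    # show which versions of the two conflicting packages apt can see
    apt-cache policy libavcodec53 libavcodec-extra-53

    # ask aptitude to explain the dependency chain keeping the package back
    sudo apt-get install aptitude
    aptitude why gstreamer0.10-ffmpeg
    aptitude why-not libavcodec53

    # installing the "extra" variant explicitly sometimes breaks the tie
    sudo apt-get install libavcodec-extra-53

    If the policy output shows a mix of oneiric and precise candidate versions, that would point to leftover oneiric entries in the sources rather than a problem with the OpenCV script itself.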

    Read the article

  • Useful links for InfoPath form development

    - by ybbest
    PowerShell
    http://msdn.microsoft.com/en-us/library/ms442691.aspx
    http://technet.microsoft.com/en-us/library/ee806878.aspx
    http://sharepoint.microsoft.com/blogs/zach/Lists/Posts/Post.aspx?ID=7

    InfoPath
    http://msdn.microsoft.com/en-us/library/aa943232.aspx
    http://blah.winsmarts.com/2008-8-Deploying_InfoPath_2007_Forms_to_Forms_Server_-and-ndash_Properly.aspx
    http://msdn.microsoft.com/en-us/library/microsoft.office.infopath.server.administration.xsnfeaturereceiver.aspx
    http://www.articlesbase.com/web-hosting-articles/sharepoint-hosting-3-ways-to-deploy-infopath-form-templates-to-sharepoint-2605874.html
    http://jopx.blogspot.com/2006/08/infopath-2007-solving-xsn-can-not-be.html

    Read the article

  • my site works as http://www.mysite.com but not as http://mysite.com (without www)

    - by toocool
    This is the site that I am developing: http://www.juve-news.com/ It works with the www prefix, but when I try to open it without www it gives me a 400 Bad Request. I have used other web hosts before; now I am trying a new one, and I have to add some DNS entries as shown here: http://cloudcontrol.com/developers/documentation/add-ons/aliases/ I don't know what I have done wrong. If anyone knows what the problem could be, please give me a tip.
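    A common cause is that only the www hostname is mapped, so requests for the bare (apex) domain never reach a matching vhost or alias. A sketch of what the DNS zone typically needs — the CNAME target and IP below are placeholders, not cloudControl's actual values, so take them from the provider's aliases page:

    ; hypothetical zone entries - substitute your provider's real targets
    www.juve-news.com.   IN  CNAME  someapp.cloudcontrolled.com.
    juve-news.com.       IN  A      203.0.113.10   ; the apex cannot be a CNAME

    With hosts like this, the bare domain usually also has to be registered as an alias on the hosting side, not just in DNS; otherwise the server sees an unknown Host header and can answer with an error like the 400 described.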

    Read the article

  • C# HttpListener Prefix issue with anything other than localhost

    - by jchristner
    Hello, I'm trying to use C# and HttpListener with a prefix of anything other than localhost, and it fails. For example, http://localhost:1234 works, but http://server1:1234 fails. The code is:

    HttpListener listener = new HttpListener();
    string prefix = "http://server1:1234/";
    listener.Prefixes.Add(prefix);
    listener.Start();

    The failure occurs on listener.Start() with an "Access is denied" exception. Any ideas? Thanks!
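    For reference, on Windows Vista/7 and later, http.sys requires a URL ACL reservation before a non-elevated process can listen on a non-localhost prefix (on XP/Server 2003 the older httpcfg tool plays this role). A sketch of the usual command, run from an elevated prompt — the domain and user name below are placeholders:

    :: reserve the prefix for a specific account
    netsh http add urlacl url=http://server1:1234/ user=MYDOMAIN\myuser

    :: or reserve the port for any hostname and any account (broad, but fine for testing)
    netsh http add urlacl url=http://+:1234/ user=Everyone

    Alternatively, running the process as administrator sidesteps the reservation entirely, which is often the quickest way to confirm that the "Access is denied" error is an ACL issue rather than a code issue.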

    Read the article

< Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >