Search Results

Search found 50287 results on 2012 pages for 'http digest'.


  • How to configure the roles in my tomcat application to work with JNDI(WIN AUTH)

    - by Itay Levin
    Hi, I'm trying to change the authentication mode of my application from JDBC-REALM to JNDI-REALM. I configured the following section inside server.xml:
        <Realm className="org.apache.catalina.realm.JNDIRealm" debug="99"
               connectionURL="ldap://****:389/DC=onsetinc,DC=com??sAMccountName?sub?(objectClass=*)"
               connectionName="[email protected]" connectionPassword="password"
               userBase="CN=Users" referrals="follow" userSearch="(sAMAccountName={0})" userSubtree="true"
               roleBase="CN=Users" roleName="name" roleSubtree="true" roleSearch="(member={1})"/>
    I have also configured the web.xml under my app folder to contain the following:
        <security-role> <role-name>Admin</role-name> </security-role>
        <security-role> <role-name>WaterlooUsers</role-name> </security-role>
        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Tube</web-resource-name>
            <url-pattern>/ComposeMessage.jsp</url-pattern>
            <url-pattern>/PageStatus.jsp</url-pattern>
            <url-pattern>/UserStatus.jsp</url-pattern>
            <url-pattern>/SearchEC.jsp</url-pattern>
            <url-pattern>/SearchEC2.jsp</url-pattern>
            <url-pattern>/SearchMessageStatisticsEC.jsp</url-pattern>
            <url-pattern>/SearchMessageStatus.jsp</url-pattern>
            <url-pattern>/SearchMessageStatisticsPager.jsp</url-pattern>
            <url-pattern>/SearchPageStatus.jsp</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>WaterlooUsers</role-name>
          </auth-constraint>
        </security-constraint>
    In my Active Directory I have created a new group called WaterlooUsers. Its distinguished name is:
        distinguishedName: CN=WaterlooUsers,CN=Users,DC=onsetinc,DC=com
    It has a member property which contains the following user (which is my user):
        member: CN=Itay Levin,CN=Users,DC=onsetinc,DC=com
    My record in Active Directory looks like this:
        sAMAccountName: itayL
        distinguishedName: CN=Itay Levin,CN=Users,DC=onsetinc,DC=com
        memberOf: CN=WaterlooUsers,CN=Users,DC=onsetinc,DC=com
    When I get the popup for user/password I enter the username "ItayL" (and my password) in the authentication box. I have 2 questions: How do I correctly configure the role parameters in the Realm section of server.xml so that the WaterlooUsers group is both authenticated and authorized, i.e. assigned to the appropriate role so its members can see all the relevant pages in my website? Currently it seems that all the users in my domain are authenticated to the site, but they get an HTTP 403 error and can't access any of the pages. I also want to be able to create 2 different sets of roles in my site - which can both have access to the same pages - but will see different things on the page (for instance, adding some administrative abilities for the admin). Hope it was clear enough and not too long. Thanks in advance, Itay
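    As a sanity check, the two LDAP searches the JNDIRealm performs can be reproduced by hand before touching server.xml again. This is a hypothetical Python sketch using the ldap3 library; the domain controller address, bind credentials and account name are placeholders taken from the config above, not a confirmed fix.
        # Reproduce the JNDIRealm's userSearch and roleSearch against AD (ldap3 assumed).
        from ldap3 import Server, Connection, SUBTREE

        BASE_DN = "DC=onsetinc,DC=com"
        conn = Connection(Server("ldap://your-dc:389"),            # placeholder host
                          user="[email protected]", password="password",
                          auto_bind=True)

        # userSearch="(sAMAccountName={0})" under userBase="CN=Users"
        conn.search("CN=Users," + BASE_DN, "(sAMAccountName=itayL)",
                    search_scope=SUBTREE, attributes=["distinguishedName"])
        user_dn = conn.entries[0].distinguishedName.value

        # roleSearch="(member={1})" under roleBase="CN=Users"; role name comes from roleName="name"
        conn.search("CN=Users," + BASE_DN, "(member=" + user_dn + ")",
                    search_scope=SUBTREE, attributes=["name"])
        print([entry.name.value for entry in conn.entries])        # should include WaterlooUsers
    If the second search also comes back empty here, the 403 is coming from the role lookup (for example the roleName/roleSearch pair) rather than from the web.xml constraints.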

    Read the article

  • Monorail - Form submission using GET instead of POST

    - by Septih
    Hello, I'm writing some additions to a Castle MonoRail based site involving an Add and an Edit form. The add form works fine and uses POST but the edit form uses GET. The only major difference I can see is that the edit method is called with the Id of the object being edited in the query string. When the submit button is pressed on the edit form, the only argument passed is this object Id again. Here is the code for the edit form: <form action="edit.ashx" method="post"> <h3>Coupon Description</h3> <textarea name="comments" width="200">$comments</textarea> <br/><br/> <h3>Coupon Actions</h3> <br/> <div>Give Stories</div> <ul class="checklist" style="overflow:auto;height:144px;width:100%"> #foreach ($story in $stories.Values) <li> <label> #set ($associated = "") #foreach($storyId in $storyIds) #if($story.Id == $storyId) #set($associated = " checked='true'") #end #end <input type="checkbox" name="chk_${story.Id}" id="chk_${story.Id}" value="true" class="checkbox" $associated/> $story.Name </label> </li> #end </ul> <br/><br/> <div>Give Credit Amount</div> <input type="text" name="credit" value="$credit" /> <br/><br/> <h3>Coupon Uses</h3> <input type="checkbox" name="multi" #if($multi) checked="true" #end /> Multi-Use Coupon?<br/><br/> <b>OR</b> <br/> <br/> Number of Uses per Coupon: <input type="text" name="uses" value="$uses" /> <br/> <input type="submit" name="Save" /> </form> The differences between this and the add form is the velocity stuff to do with association and the values of the inputs being from the PropertyBag. The Method dealing with this on the controller starts like this: public void Edit(int id) { Coupon coupon = Coupon.GetRepository(User.Site.Id).FindById(id).Value; if(coupon == null) { RedirectToReferrer(); return; } IFutureQueryOfList<Story> stories = Story.GetRepository(User.Site.Id).OnlyReturnEnabled().FindAll("Name", true); if (Params["Save"] == null) { ... } } It reliably gets called but a breakpoint on the Params["Save"] lets me see that the HttpMethod is "GET" and the only arguments passed (In the Form and the Request) are the object Id and additional HTTP headers. At the end of the day I'm not that familiar with MonoRail and this may be a stupid mistake on my behalf, but I would very much appreciate being made a fool out of if it fixes the problem! :) Thanks

    Read the article

  • Sometimes big delay when using PHP mail()

    - by Robbert Dam
    I have a website which processes orders from a Windows application. This works as follows: User clicks "Order now" in the windows app App uploads a file with POST to a PHP script The script immediately calls the PHP mail() function (order is not stored in a db) This works fine most of the time. However, sometimes a big delay occurs (several days). Customers calls why the product has not yet been delivered. E-mail headers of delayed mail follows: Microsoft Mail Internet Headers Version 2.0 Received: from barracuda.nkl.nl ([10.0.0.1]) by smtp.nkl.nl with Microsoft SMTPSVC(6.0.3790.3959); Wed, 26 May 2010 16:26:51 +0200 X-ASG-Debug-ID: 1274883818-2f8800000000-X58hIK X-Barracuda-URL: http://10.0.0.1:8000/cgi-bin/mark.cgi Received: from server45.firstfind.nl (localhost [127.0.0.1]) by barracuda.nkl.nl (Spam & Virus Firewall) with ESMTP id ECAFD15776A for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST) Received: from server45.firstfind.nl (server45.firstfind.nl [93.94.226.76]) by barracuda.nkl.nl with ESMTP id 85bAT2AU58kkxjPb for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST) X-Barracuda-Envelope-From: [email protected] Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200 Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200 Date: Wed, 19 May 2010 11:47:10 +0200 Message-Id: <[email protected]> To: [email protected] X-ASG-Orig-Subj: easyfit - ref: Hoen3443 Subject: easyfit - ref: Hoen3443 X-PHP-Script: www.nklseminar.nl/emailer/upload.php for 77.61.220.217 From: [email protected] Content-type: text/html X-Virus-Scanned: by amavisd-new X-Barracuda-Connect: server45.firstfind.nl[93.94.226.76] X-Barracuda-Start-Time: 1274883820 X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210 X-Barracuda-Virus-Scanned: by Barracuda Spam & Virus Firewall at nkl.nl X-Barracuda-Spam-Score: 0.92 X-Barracuda-Spam-Status: No, SCORE=0.92 using global scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=DATE_IN_PAST_96_XX, DATE_IN_PAST_96_XX_2, HTML_MESSAGE, MIME_HEADER_CTYPE_ONLY, MIME_HTML_ONLY, NO_REAL_NAME X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.2.30817 Rule breakdown below pts rule name description ---- ---------------------- -------------------------------------------------- 0.00 NO_REAL_NAME From: does not include a real name 0.01 DATE_IN_PAST_96_XX Date: is 96 hours or more before Received: date 0.00 MIME_HTML_ONLY BODY: Message only has text/html MIME parts 0.00 HTML_MESSAGE BODY: HTML included in message 0.86 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME headers 2.07 DATE_IN_PAST_96_XX_2 DATE_IN_PAST_96_XX_2 Return-Path: [email protected] X-OriginalArrivalTime: 26 May 2010 14:26:51.0343 (UTC) FILETIME=[7B80DDF0:01CAFCDF] The delay seems to occur here: Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200 Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200 I've reported this issue various times to the web hosting service that hosts my website. They say the delay does not occur in their network (impossible). 
But they do confirm that the e-mail is first seen on their mail server on May 26, which is 7 days after the mail was composed. The order is marked with the timestamp of the user's local PC, which also matches May 19 (so it's not a PC clock problem). It's also interesting that all delayed mails (from orders placed on different days) come in at once, so I suddenly receive 14 e-mails in my mailbox from various days. Any idea where this delay may be introduced? Could there be a bug in my PHP code that causes this? (I cannot believe I could introduce a delay of 7 days in my PHP code.)
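    One way to narrow down where the time goes is to compute the gap between consecutive Received headers, since each hop stamps its own date. This is a minimal sketch using only the Python standard library (not part of the PHP application); it assumes a delayed message has been saved, headers included, to a file such as delayed.eml.
        # Print the delay each hop added, based on the dates in the Received headers.
        from email import message_from_binary_file
        from email.utils import parsedate_to_datetime

        with open("delayed.eml", "rb") as f:              # hypothetical saved message
            msg = message_from_binary_file(f)

        # Each Received header ends with "; <date>"; the newest header comes first.
        dates = [parsedate_to_datetime(h.rsplit(";", 1)[1].strip())
                 for h in msg.get_all("Received", [])]

        ordered = list(reversed(dates))                   # oldest hop first
        for older, newer in zip(ordered, ordered[1:]):
            print(newer - older)
    Run against the headers above, the seven-day jump sits between the .../Submit hop and the .../Debian-3+etch1 hop on server45, which suggests the message sat in the hoster's local mail queue after PHP handed it off; mail() itself only submits the message to the local MTA.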

    Read the article

  • Why is my NSURLConnection so slow?

    - by Bama91
    I have setup an NSURLConnection using the guidelines in Using NSURLConnection from the Mac OS X Reference Library, creating an NSMutableURLRequest as POST to a PHP script with a custom body to upload 20 MB of data (see code below). Note that the post body is what it is (line breaks and all) to exactly match an existing desktop implementation. When I run this in the iPhone simulator, the post is successful, but takes an order of magnitude longer than if I run the equivalent code locally on my Mac in C++ (20 minutes vs. 20 seconds, respectively). Any ideas why the difference is so dramatic? I understand that the simulator will give different results than the actual device, but I would expect at least similar results. Thanks const NSUInteger kDataSizePOST = 20971520; const NSString* kServerBDC = @"WWW.SOMEURL.COM"; const NSString* kUploadURL = @"http://WWW.SOMEURL.COM/php/test/upload.php"; const NSString* kUploadFilename = @"test.data"; const NSString* kUsername = @"testuser"; const NSString* kHostname = @"testhost"; srandom(time(NULL)); NSMutableData *uniqueData = [[NSMutableData alloc] initWithCapacity:kDataSizePOST]; for (unsigned int i = 0 ; i < kDataSizePOST ; ++i) { u_int32_t randomByte = ((random() % 95) + 32); [uniqueData appendBytes:(void*)&randomByte length:1]; } NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init]; [request setURL:[NSURL URLWithString:kUploadURL]]; [request setHTTPMethod:@"POST"]; NSString *boundary = @"aBcDd"; NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@",boundary]; [request addValue:contentType forHTTPHeaderField: @"Content-Type"]; NSMutableData *postbody = [NSMutableData data]; [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d",boundary,[uniqueData length]] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: form-data; name=test; filename=%@", kUploadFilename] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[[NSString stringWithString:@";\nContent-Type: multipart/mixed;\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[NSData dataWithData:uniqueData]]; [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d",boundary,[kUsername length]] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: inline; name=Username;\n\r\n%@",kUsername] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d",boundary,[kHostname length]] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: inline; name=Hostname;\n\r\n%@",kHostname] dataUsingEncoding:NSUTF8StringEncoding]]; [postbody appendData:[[NSString stringWithFormat:@"\n--%@--",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; [request setHTTPBody:postbody]; NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:request delegate:self]; if (theConnection) { _receivedData = [[NSMutableData data] retain]; } else { // Inform the user that the connection failed. } [request release]; [uniqueData release];
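    To separate network/server time from client-side overhead, it can help to time the same 20 MB upload from a small script. This is a rough Python sketch (not the app's code); it builds a comparable multipart/form-data body in one go and posts it with the standard library, reusing the placeholder URL, boundary and filename from the constants above.
        # Time a ~20 MB multipart/form-data POST to the same upload script.
        import random, time, urllib.request

        UPLOAD_URL = "http://WWW.SOMEURL.COM/php/test/upload.php"   # placeholder
        BOUNDARY = "aBcDd"
        payload = bytes(random.randrange(32, 127) for _ in range(20 * 1024 * 1024))

        body = (("--%s\r\n" % BOUNDARY).encode()
                + b'Content-Disposition: form-data; name="test"; filename="test.data"\r\n'
                + b"Content-Type: application/octet-stream\r\n\r\n"
                + payload + b"\r\n"
                + ("--%s--\r\n" % BOUNDARY).encode())

        req = urllib.request.Request(UPLOAD_URL, data=body, method="POST")
        req.add_header("Content-Type", "multipart/form-data; boundary=%s" % BOUNDARY)

        start = time.time()
        with urllib.request.urlopen(req) as resp:
            resp.read()
        print("upload took %.1f seconds" % (time.time() - start))
    If this finishes in seconds while the simulator takes minutes, the time is being spent on the client side (for instance, the loop that calls appendBytes: once per byte for roughly 20 million bytes is a plausible place to look) rather than in the upload itself.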

    Read the article

  • Deploying a PHP Library project with Maven

    - by Marco
    Hi, I've created a PHP Library project using Maven, and I'm now ready for its deployment. Following the instructions at http://www.php-maven.org/deploy.html, something went wrong. The configuration is set to: <descriptorRef>php-lib</descriptorRef> During the execution of mvn deploy I get a list of errors for unfound dependencies in the repository: [INFO] [jar:jar {execution: default-jar}] [INFO] Building jar: /home/marco/projects/php/my-app/target/my-app-1.0-SNAPSHOT.jar [INFO] [plugin:addPluginArtifactMetadata {execution: default-addPluginArtifactMetadata}] Downloading: http://repo1.php-maven.org/release/org/phpmaven/maven-php-plugin/2.2-beta-2/maven-php-plugin-2.2-beta-2.pom [INFO] Unable to find resource 'org.phpmaven:maven-php-plugin:pom:2.2-beta-2' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.maven.org/maven2/org/phpmaven/maven-php-plugin/2.2-beta-2/maven-php-plugin-2.2-beta-2.pom [INFO] Unable to find resource 'org.phpmaven:maven-php-plugin:pom:2.2-beta-2' in repository central (http://repo1.maven.org/maven2) Downloading: http://repo1.php-maven.org/release/org/phpmaven/maven-php-plugin/2.2-beta-2/maven-php-plugin-2.2-beta-2.pom [INFO] Unable to find resource 'org.phpmaven:maven-php-plugin:pom:2.2-beta-2' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.maven.org/maven2/org/phpmaven/maven-php-plugin/2.2-beta-2/maven-php-plugin-2.2-beta-2.pom [INFO] Unable to find resource 'org.phpmaven:maven-php-plugin:pom:2.2-beta-2' in repository central (http://repo1.maven.org/maven2) Downloading: http://repo1.php-maven.org/release/org/apache/maven/wagon/wagon-http-shared/1.0-beta-6/wagon-http-shared-1.0-beta-6.pom [INFO] Unable to find resource 'org.apache.maven.wagon:wagon-http-shared:pom:1.0-beta-6' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.php-maven.org/release/org/apache/maven/wagon/wagon-http-shared/1.0-beta-6/wagon-http-shared-1.0-beta-6.pom [INFO] Unable to find resource 'org.apache.maven.wagon:wagon-http-shared:pom:1.0-beta-6' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.maven.org/maven2/org/apache/maven/wagon/wagon-http-shared/1.0-beta-6/wagon-http-shared-1.0-beta-6.pom Downloading: http://repo1.php-maven.org/release/nekohtml/xercesMinimal/1.9.6.2/xercesMinimal-1.9.6.2.pom [INFO] Unable to find resource 'nekohtml:xercesMinimal:pom:1.9.6.2' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.php-maven.org/release/nekohtml/xercesMinimal/1.9.6.2/xercesMinimal-1.9.6.2.pom [INFO] Unable to find resource 'nekohtml:xercesMinimal:pom:1.9.6.2' in repository release-repo1.php-maven.org (http://repo1.php-maven.org/release) Downloading: http://repo1.maven.org/maven2/nekohtml/xercesMinimal/1.9.6.2/xercesMinimal-1.9.6.2.pom And this is my settings.xml file: <settings> <profiles> <profile> <id>profile-php-maven</id> <pluginRepositories> <pluginRepository> <id>release-repo1.php-maven.org</id> <name>PHP-Maven 2 Release Repository</name> <url>http://repo1.php-maven.org/release</url> <releases> <enabled>true</enabled> </releases> </pluginRepository> <pluginRepository> <id>snapshot-repo1.php-maven.org</id> <name>PHP-Maven 2 Snapshot Repository</name> <url>http://repo1.php-maven.org/snapshot</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> 
</pluginRepositories> <repositories> <repository> <id>release-repo1.php-maven.org</id> <name>PHP-Maven 2 Release Repository</name> <url>http://repo1.php-maven.org/release</url> <releases> <enabled>true</enabled> </releases> </repository> <repository> <id>snapshot-repo1.php-maven.org</id> <name>PHP-Maven 2 Snapshot Repository</name> <url>http://repo1.php-maven.org/snapshot</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> </profile> </profiles> <activeProfiles> <activeProfile>profile-php-maven</activeProfile> </activeProfiles> </settings> For every step I've followed the documentation (which is poor, though). Any tips? Thanks

    Read the article

  • CakePHP sending multiple lines of same cookie information in response header. why??

    - by Vicer
    Hi all, I am having this problem with one of my cakePHP applications. My application displays a big html table on the page. I noticed that when the table goes beyond a certain size limit, IE cannot display that page. While trying to figure out why this happens, I noticed that my html response header contains a HUGE number of lines repeating the same thing. Response Headers ------------------------------ Date Thu, 20 May 2010 04:18:10 GMT Server Apache/2.0.63 (Win32) mod_ssl/2.0.63 OpenSSL/0.9.7m PHP/5.2.10 X-Powered-By PHP/5.2.10 P3P CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM" Set-Cookie CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:12 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1; expires=Sun, 20-May-2035 10:18:13 GMT; path=/mywebapp Keep-Alive timeout=15, max=100 Connection Keep-Alive Transfer-Encoding chunked Content-Type text/html ========================================================================== Request Headers -------------------------- Host localhost:8080 User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 
Firefox/3.6.3 ( .NET CLR 3.5.30729) Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://localhost:8080/mywebapp/section2/bigtable/1 Cookie CAKEPHP=q4tp37tn9gkhcpgmf1ftr1i6c1 Authorization Basic cGt1bWFyYTpwa3VtYXJh Notice the huge set of repeated lines at 'Set-Cookie' in the Response Header. This only happens when I try to display large tables. Does anyone have any clue to what might be causing this? Any help to find the issue is appreciated. I am using CakePHP 1.2.5. As far as I know, I am not messing with any set cookie functions. But yes I am using session variables.
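    To confirm the duplicates are really emitted by the application (and not added by something in between), the Set-Cookie headers can be counted straight off the wire. A small stand-alone Python sketch, with the host, port and path taken from the request above as placeholders:
        # Count how many Set-Cookie headers one response to the big-table page carries.
        from http.client import HTTPConnection

        conn = HTTPConnection("localhost", 8080)
        # Add an Authorization header here if the app requires basic auth.
        conn.request("GET", "/mywebapp/section2/bigtable/1")
        resp = conn.getresponse()

        set_cookies = resp.msg.get_all("Set-Cookie") or []
        print("status:", resp.status)
        print("Set-Cookie headers:", len(set_cookies))
        for value in set_cookies:
            print(" ", value)
    If every header is identical, a common cause is the session cookie being re-sent once per session write during the request, so checking how often session data is touched while the big table renders is a reasonable next step.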

    Read the article

  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
We want to make all cookies set by our webapp http-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.05 and WebLogic 10.3.0. After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support http-only cookies is 10.3.1. By "support," I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it'd be nice to have these questions answered: 1a) Is it accurate that WL 10.3.1 is the first version to support http-only cookies and that we're out of luck with 10.3.0? 1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version? 2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application have an http-only=true attribute? Will it do more or less? Is there anything I'll have to do programmatically? 3) Is there anything in general I should know about http-only cookies, getting my web app to take advantage of them, or other security concerns?
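    For context, HttpOnly is just an extra attribute on the Set-Cookie header that tells the browser to keep the cookie out of reach of JavaScript (document.cookie), which is why it helps against cookie theft via XSS. A minimal, container-independent illustration using Python's standard library (nothing WebLogic-specific is shown here):
        # What an HttpOnly (and Secure) cookie header looks like on the wire.
        from http.cookies import SimpleCookie

        cookie = SimpleCookie()
        cookie["JSESSIONID"] = "opaque-session-id"        # example value only
        cookie["JSESSIONID"]["path"] = "/myapp"
        cookie["JSESSIONID"]["httponly"] = True           # hidden from document.cookie
        cookie["JSESSIONID"]["secure"] = True             # only sent over HTTPS

        # e.g. Set-Cookie: JSESSIONID=opaque-session-id; HttpOnly; Path=/myapp; Secure
        print(cookie.output())
    Whether cookie-http-only applies only to the container's session cookie or to every cookie the application sets is exactly question 2 above, so that part is best confirmed against the WebLogic 10.3.1 documentation.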

    Read the article

  • Reverse engineering windows mobile live search CellID location awareness protocol (yikes)...

    - by Jean-Charles
    I wasn't sure of how to form the question so I apologize if the title is misleading. Additionally, you may want to get some coffee and take a seat for this one ... It's long. Basically, I'm trying to reverse engineer the protocol used by the Windows Mobile Live Search application to get location based on cellID. Before I go on, I am aware of other open source services (such as OpenCellID) but this is more for the sake of education and a bit for redundancy. According to the packets I captured, a POST request is made to ... mobile.search.live.com/positionlookupservice_1/service.aspx ... with a few specific headers (agent, content-length, etc) and no body. Once this goes through, the server sends back a 100-Continue response. At this point, the application submits this data (I chopped off the packet header): 00 00 00 01 00 00 00 05 55 54 ........UT 46 2d 38 05 65 6e 2d 55 53 05 65 6e 2d 55 53 01 F-8.en-US.en-US. 06 44 65 76 69 63 65 05 64 75 6d 6d 79 01 06 02 .Device.dummy... 50 4c 08 0e 52 65 76 65 72 73 65 47 65 6f 63 6f PL..ReverseGeoco 64 65 01 07 0b 47 50 53 43 68 69 70 49 6e 66 6f de...GPSChipInfo 01 20 06 09 43 65 6c 6c 54 6f 77 65 72 06 03 43 . ..CellTower..C 47 49 08 03 4d 43 43 b6 02 07 03 4d 4e 43 03 34 GI..MCC....MNC.4 31 30 08 03 4c 41 43 cf 36 08 02 43 49 fd 01 00 10..LAC.6..CI... 00 00 00 ... And receives this in response (packet and HTTP response headers chopped): 00 00 00 01 00 00 00 00 01 06 02 50 4c ...........PL 06 08 4c 6f 63 61 6c 69 74 79 06 08 4c 6f 63 61 ..Locality..Loca 74 69 6f 6e 07 03 4c 61 74 09 34 32 2e 33 37 35 tion..Lat.42.375 36 32 31 07 04 4c 6f 6e 67 0a 2d 37 31 2e 31 35 621..Long.-71.15 38 39 33 38 00 07 06 52 61 64 69 75 73 09 32 30 8938...Radius.20 30 30 2e 30 30 30 30 00 42 07 0c 4c 6f 63 61 6c 00.0000.B..Local 69 74 79 4e 61 6d 65 09 57 61 74 65 72 74 6f 77 ityName.Watertow 6e 07 16 41 64 6d 69 6e 69 73 74 72 61 74 69 76 n..Administrativ 65 41 72 65 61 4e 61 6d 65 0d 4d 61 73 73 61 63 eAreaName.Massac 68 75 73 65 74 74 73 07 10 50 6f 73 74 61 6c 43 husetts..PostalC 6f 64 65 4e 75 6d 62 65 72 05 30 32 34 37 32 07 odeNumber.02472. 0b 43 6f 75 6e 74 72 79 4e 61 6d 65 0d 55 6e 69 .CountryName.Uni 74 65 64 20 53 74 61 74 65 73 00 00 00 ted States... Now, here is what I've determined so far: All strings are prepended with one byte that is the decimal equivalent of their length. There seem to be three different casts that are used throughout the request and response. They show up as one byte before the length byte. I've concluded that the three types map out as follows: 0x06 - parent element (subsequent values are children, closed with 0x00) 0x07 - string 0x08 - int? Based on these determinations, here is what the request and response look like in a more readable manner (values surrounded by brackets denote length and values surrounded by parenthesis denote a cast): \0x00\0x00\0x00\0x01\0x00\0x00\0x00 [5]UTF-8 [5]en-US [5]en-US \0x01 [6]Device [5]dummy \0x01 (6)[2]PL (8)[14]ReverseGeocode\0x01 (7)[11]GPSChipInfo[1]\0x20 (6)[9]CellTower (6)[3]CGI (8)[3]MCC\0xB6\0x02 //310 (7)[3]MNC[3]410 //410 (8)[3]LAC\0xCF\0x36 //6991 (8)[2]CI\0xFD\0x01 //259 \0x00 \0x00 \0x00 \0x00 and.. \0x00\0x00\0x00\0x01\0x00\0x00\0x00 \0x00\0x01 (6)[2]PL (6)[8]Locality (6)[8]Location (7)[3]Lat[9]42.375621 (7)[4]Long[10]-71.158938 \0x00 (7)[6]Radius[9]2000.0000 \0x00 \0x42 //"B" ... 
Has to do with GSM (7)[12]LocalityName[9]Watertown (7)[22]AdministrativeAreaName[13]Massachusetts (7)[16]PostalCodeNumber[5]02472 (7)[11]CountryName[13]United States \0x00 \0x00\0x00 My analysis seems to work out pretty well except for a few things: The 0x01s throughout confuse me ... At first I thought they were some sort of base level element terminators but I'm not certain. I'm not sure the 7-byte header is, in fact, a seven byte header. I wonder if it's maybe 4 bytes and that the three remaining 0x00s are of some other significance. The trailing 0x00s. Why is it that there is only one on the request but two on the response? The type 8 cast mentioned above ... I can't seem to figure out how those values are being encoded. I added comments to those lines with what the values should correspond to. Any advice on these four points will be greatly appreciated. And yes, these packets were captured in Watertown, MA. :)
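    Based purely on the layout described above (a type byte, a length byte, then the value, with 0x06 parents closed by 0x00), a small pattern-finding helper makes it easier to test those guesses against captured payloads. This is a speculative Python sketch, not a known-correct decoder; it only reports spans that happen to look like tagged, length-prefixed printable strings.
        # Speculative helper: wherever a 0x06/0x07/0x08 tag byte is followed by a length
        # byte and that many printable bytes, print the offset, tag and decoded text.
        def find_strings(buf):
            for pos in range(len(buf) - 2):
                tag, length = buf[pos], buf[pos + 1]
                chunk = buf[pos + 2:pos + 2 + length]
                if (tag in (0x06, 0x07, 0x08) and length and len(chunk) == length
                        and all(32 <= b < 127 for b in chunk)):
                    print("offset %4d  tag 0x%02x  len %3d  %r"
                          % (pos, tag, length, chunk.decode("ascii")))

        # Fragment of the request above, transcribed by hand (treat the exact bytes as
        # illustrative only): 06 02 "PL", 08 0e "ReverseGeocode", then a stray 0x01.
        fragment = bytes.fromhex("0602504c080e5265766572736547656f636f646501")
        find_strings(fragment)
    Extending this into a real recursive parser (recurse on 0x06 until a 0x00, read one length-prefixed value for 0x07) would also make the stray 0x01 bytes easier to pin down, since any parse that fails right before one of them is a strong hint about what they terminate.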

    Read the article

  • Problem when trying to update "Duplicate sources.list"

    - by Coca Akat
    I got this problem when trying to update using sudo apt-get update W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ saucy-backports/multiverse amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy-backports_multiverse_binary-amd64_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ saucy-backports/multiverse i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy-backports_multiverse_binary-i386_Packages) W: You may want to run apt-get update to correct these problems This is my souces.list : # deb cdrom:[Ubuntu 13.10 _Saucy Salamander_ - Release amd64 (20131016.1)]/ saucy main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://archive.ubuntu.com/ubuntu saucy main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://archive.ubuntu.com/ubuntu saucy-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu saucy universe deb http://archive.ubuntu.com/ubuntu saucy-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://archive.ubuntu.com/ubuntu saucy multiverse deb http://archive.ubuntu.com/ubuntu saucy-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu saucy-backports main restricted universe multiverse deb http://archive.ubuntu.com/ubuntu saucy-security main restricted deb http://archive.ubuntu.com/ubuntu saucy-security universe deb http://archive.ubuntu.com/ubuntu saucy-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # deb http://extras.ubuntu.com/ubuntu saucy main # deb-src http://extras.ubuntu.com/ubuntu saucy main # deb http://archive.canonical.com/ saucy partner # deb-src http://archive.canonical.com/ saucy partner # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. ## Major bug fix updates produced after the final release of the ## distribution. ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. 
Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu saucy-backports multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software.
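    For what it's worth, the warning just means the same suite/component pair is listed twice: the earlier line "deb http://archive.ubuntu.com/ubuntu saucy-backports main restricted universe multiverse" already enables multiverse for saucy-backports, and the later "deb http://archive.ubuntu.com/ubuntu saucy-backports multiverse" repeats it, which is exactly the combination the warning names. Commenting out one of the two (or dropping multiverse from one of them) should silence it. As an illustration only, a short Python sketch that reports repeated (type, uri, suite, component) combinations in a sources file; the path is the usual default and may need adjusting if sources.list.d fragments are involved:
        # Report duplicate (deb/deb-src, uri, suite, component) entries in a sources file.
        from collections import Counter

        counts = Counter()
        with open("/etc/apt/sources.list") as f:
            for line in f:
                parts = line.split("#", 1)[0].split()        # strip comments
                if len(parts) >= 4 and parts[0] in ("deb", "deb-src"):
                    # apt treats URIs with and without a trailing slash as the same source
                    kind, uri, suite = parts[0], parts[1].rstrip("/"), parts[2]
                    for component in parts[3:]:
                        counts[(kind, uri, suite, component)] += 1

        for entry, n in sorted(counts.items()):
            if n > 1:
                print("listed %d times: %s" % (n, " ".join(entry)))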

    Read the article

  • WIF-less claim extraction from ACS: JWT

    - by Elton Stoneman
    ACS support for JWT still shows as "beta", but it meets the spec and it works nicely, so it's becoming the preferred option as SWT is losing favour. (Note that currently ACS doesn’t support JWT encryption, if you want encrypted tokens you need to go SAML). In my last post I covered pulling claims from an ACS token without WIF, using the SWT format. The JWT format is a little more complex, but you can still inspect claims just with string manipulation. The incoming token from ACS is still presented in the BinarySecurityToken element of the XML payload, with a TokenType of urn:ietf:params:oauth:token-type:jwt: <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:39:55.337Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:19:55.337Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="_1eeb5cf4-b40b-40f2-89e0-a3343f6bd985-6A15D1EED0CDB0D8FA48C7D566232154" ValueType="urn:ietf:params:oauth:token-type:jwt" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">[ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>urn:ietf:params:oauth:token-type:jwt</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> The token as a whole needs to be base-64 decoded. The decoded value contains a header, payload and signature, dot-separated; the parts are also base-64, but they need to be decoded using a no-padding algorithm (implementation and more details in this MSDN article on validating an Exchange 2013 identity token). The values are then in JSON; the header contains the token type and the hashing algorithm: "{"typ":"JWT","alg":"HS256"}" The payload contains the same data as in the SWT, but JSON rather than querystring format: {"aud":"http://localhost/x.y.z" "iss":"https://adfstest-bhw.accesscontrol.windows.net/" "nbf":1346398795 "exp":1346404795 "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant":"2012-08-31T07:39:53.652Z" "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows" "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname":"xyz" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":"[email protected]" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"[email protected]" "identityprovider":"http://fs.svc.x.y.z.com/adfs/services/trust"} The signature is in the third part of the token. Unlike SWT which is fixed to HMAC-SHA-256, JWT can support other protocols (the one in use is specified as the "alg" value in the header). 
How to: Validate an Exchange 2013 identity token contains an implementation of a JWT parser and validator; apart from the custom base-64 decoding part, it’s very similar to SWT extraction. I've wrapped the basic SWT and JWT in a ClaimInspector.aspx page on gitHub here: SWT and JWT claim inspector. You can drop it into any ASP.Net site and set the URL to be your redirect page in ACS. Swap ACS to issue SWT or JWT, and using the same page you can inspect the claims that come out.
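    If all you need is a quick look at the claims outside .NET, the same no-padding base-64 handling is easy to reproduce. A short illustrative sketch in Python (not the C# implementation referenced above) that splits a JWT into its three parts and decodes the header and payload without verifying the signature:
        # Peek at a JWT's header and payload; no signature validation is performed here.
        import base64, json

        def b64url_decode(part):
            # JWT parts are base64url-encoded with the trailing padding stripped.
            return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

        def peek_jwt(token):
            header_b64, payload_b64, _signature_b64 = token.split(".")
            header = json.loads(b64url_decode(header_b64))    # e.g. {"typ": "JWT", "alg": "HS256"}
            payload = json.loads(b64url_decode(payload_b64))  # aud, iss, nbf, exp and the claim URIs
            return header, payload
    Before trusting anything in the payload you would still check exp/nbf and verify the signature over "header.payload" with the ACS signing key (HMAC-SHA-256 for the HS256 tokens shown here).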

    Read the article

  • Linux, some packets are not being NAT

    - by user70932
    Hi, I'm trying to NAT HTTP traffic, I'm new to this and facing some issues. What i'm trying to do is NAT client HTTP requests to a webserver. CLIENT - NAT BOX - WEBSERVER When the client open the IP of the NAT BOX, the request should be pass to the web server. But I'm getting "HTTP request sent, awaiting response..." and then wait serveral minutes before the request is done. Looking at the tcpdump output, it looks like the first Syn packet on (10:48:54) is being NAT but not the second, third, fourth... ACK or PSH packets, and wait until (10:52:04) it starts NAT again on the ACK packet. The iptables command I'm using is: iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 \ -j DNAT --to-destination WEBSERVER I'm wondering what could have caused this behavior? Thanks alot. 10:48:54.907861 IP (tos 0x0, ttl 49, id 16395, offset 0, flags [DF], proto: TCP (6), length: 48) CLIENT.61736 > NATBOX.http: S, cksum 0x6019 (correct), 1589600740:1589600740(0) win 5840 <mss 1460,nop,wscale 8> 10:48:54.907874 IP (tos 0x0, ttl 48, id 16395, offset 0, flags [DF], proto: TCP (6), length: 48) CLIENT.61736 > WEBSERVER.http: S, cksum 0xb5d7 (correct), 1589600740:1589600740(0) win 5840 <mss 1460,nop,wscale 8> 10:48:55.102696 IP (tos 0x0, ttl 49, id 16397, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x2727 (correct), ack 2950613896 win 23 10:48:55.102963 IP (tos 0x0, ttl 49, id 16399, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23 10:48:58.103078 IP (tos 0x0, ttl 49, id 16401, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23 10:48:58.366344 IP (tos 0x0, ttl 49, id 16403, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23 10:49:04.103204 IP (tos 0x0, ttl 49, id 16405, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23 10:49:04.363943 IP (tos 0x0, ttl 49, id 16407, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23 10:49:16.101583 IP (tos 0x0, ttl 49, id 16409, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23 10:49:16.363475 IP (tos 0x0, ttl 49, id 16411, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23 10:49:40.100796 IP (tos 0x0, ttl 49, id 16413, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23 10:49:40.563898 IP (tos 0x0, ttl 49, id 16415, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23 10:50:28.099396 IP (tos 0x0, ttl 49, id 16417, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23 10:50:28.761678 IP (tos 0x0, ttl 49, id 16419, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x26af (correct), ack 1 win 23 10:52:04.093668 IP (tos 0x0, ttl 49, id 16421, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > NATBOX.http: P 0:120(120) ack 1 win 23 10:52:04.093678 IP (tos 0x0, ttl 48, id 16421, offset 0, flags [DF], proto: TCP (6), length: 160) CLIENT.61736 > WEBSERVER.http: P 1589600741:1589600861(120) ack 2950613896 win 23 10:52:04.291021 IP (tos 0x0, ttl 49, id 16423, offset 0, flags [DF], proto: TCP (6), length: 40) 
CLIENT.61736 > NATBOX.http: ., cksum 0x25d3 (correct), ack 217 win 27 10:52:04.291028 IP (tos 0x0, ttl 48, id 16423, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > WEBSERVER.http: ., cksum 0x7b91 (correct), ack 217 win 27 10:52:04.300708 IP (tos 0x0, ttl 49, id 16425, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x253c (correct), ack 368 win 27 10:52:04.300714 IP (tos 0x0, ttl 48, id 16425, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > WEBSERVER.http: ., cksum 0x7afa (correct), ack 368 win 27 10:52:04.301417 IP (tos 0x0, ttl 49, id 16427, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: F, cksum 0x253b (correct), 120:120(0) ack 368 win 27 10:52:04.301438 IP (tos 0x0, ttl 48, id 16427, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > WEBSERVER.http: F, cksum 0x7af9 (correct), 120:120(0) ack 368 win 27 10:52:04.498875 IP (tos 0x0, ttl 49, id 16429, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > NATBOX.http: ., cksum 0x253a (correct), ack 369 win 27 10:52:04.498881 IP (tos 0x0, ttl 48, id 16429, offset 0, flags [DF], proto: TCP (6), length: 40) CLIENT.61736 > WEBSERVER.http: ., cksum 0x7af8 (correct), ack 369 win 27

    Read the article

  • Error data: line 2 column 1 when using pycurl with gzip stream

    - by Sagar Hatekar
    Thanks for reading. Background: I am trying to read a streaming API feed that returns data in JSON format, and then storing this data to a pymongo collection. The streaming API requires a "Accept-Encoding" : "Gzip" header. What's happening: Code fails on json.loads and outputs - Extra data: line 2 column 1 - line 4 column 1 (char 1891 - 5597) (Refer Error Log below) This does NOT happen while parsing every JSON object - it happens at random. My guess is I am encountering some weird JSON object after every "x" proper JSON objects. I did reference how to use pycurl if requested data is sometimes gzipped, sometimes not? and Encoding error while deserializing a json object from Google but so far have been unsuccessful at resolving this error. Could someone please help me out here? Error Log: Note: The raw dump of the JSON object below is basically using the repr() method. '{"id":"tag:search.twitter.com,2005:207958320747782146","objectType":"activity","actor":{"objectType":"person","id":"id:twitter.com:493653150","link":"http://www.twitter.com/Deathnews_7_24","displayName":"Death News 7/24","postedTime":"2012-02-16T01:30:12.000Z","image":"http://a0.twimg.com/profile_images/1834408513/deathnewstwittersquare_normal.jpg","summary":"Crashes, Murders, Suicides, Accidents, Crime and Naturals Death News From All Around World","links":[{"href":"http://www.facebook.com/DeathNews724","rel":"me"}],"friendsCount":56,"followersCount":14,"listedCount":1,"statusesCount":1029,"twitterTimeZone":null,"utcOffset":null,"preferredUsername":"Deathnews_7_24","languages":["tr"]},"verb":"post","postedTime":"2012-05-30T22:15:02.000Z","generator":{"displayName":"web","link":"http://twitter.com"},"provider":{"objectType":"service","displayName":"Twitter","link":"http://www.twitter.com"},"link":"http://twitter.com/Deathnews_7_24/statuses/207958320747782146","body":"Kathi Kamen Goldmark, Writers\xe2\x80\x99 Catalyst, Dies at 63 http://t.co/WBsNlNtA","object":{"objectType":"note","id":"object:search.twitter.com,2005:207958320747782146","summary":"Kathi Kamen Goldmark, Writers\xe2\x80\x99 Catalyst, Dies at 63 http://t.co/WBsNlNtA","link":"http://twitter.com/Deathnews_7_24/statuses/207958320747782146","postedTime":"2012-05-30T22:15:02.000Z"},"twitter_entities":{"urls":[{"display_url":"nytimes.com/2012/05/30/boo\xe2\x80\xa6","indices":[52,72],"expanded_url":"http://www.nytimes.com/2012/05/30/books/kathi-kamen-goldmark-writers-catalyst-dies-at-63.html","url":"http://t.co/WBsNlNtA"}],"hashtags":[],"user_mentions":[]},"gnip":{"language":{"value":"en"},"matching_rules":[{"value":"url_contains: nytimes.com","tag":null}],"klout_score":11,"urls":[{"url":"http://t.co/WBsNlNtA","expanded_url":"http://www.nytimes.com/2012/05/30/books/kathi-kamen-goldmark-writers-catalyst-dies-at-63.html?_r=1"}]}}\r\n{"id":"tag:search.twitter.com,2005:207958321003638785","objectType":"activity","actor":{"objectType":"person","id":"id:twitter.com:178760897","link":"http://www.twitter.com/Mobanu","displayName":"Donald Ochs","postedTime":"2010-08-15T16:33:56.000Z","image":"http://a0.twimg.com/profile_images/1493224811/small_mobany_Logo_normal.jpg","summary":"","links":[{"href":"http://www.mobanuweightloss.com","rel":"me"}],"friendsCount":10272,"followersCount":9698,"listedCount":30,"statusesCount":725,"twitterTimeZone":"Mountain Time (US & Canada)","utcOffset":"-25200","preferredUsername":"Mobanu","languages":["en"],"location":{"objectType":"place","displayName":"Crested Butte, 
Colorado"}},"verb":"post","postedTime":"2012-05-30T22:15:02.000Z","generator":{"displayName":"twitterfeed","link":"http://twitterfeed.com"},"provider":{"objectType":"service","displayName":"Twitter","link":"http://www.twitter.com"},"link":"http://twitter.com/Mobanu/statuses/207958321003638785","body":"Mobanu: Can Exercise Be Bad for You?: Researchers have found evidence that some people who exercise do worse on ... http://t.co/mTsQlNQO","object":{"objectType":"note","id":"object:search.twitter.com,2005:207958321003638785","summary":"Mobanu: Can Exercise Be Bad for You?: Researchers have found evidence that some people who exercise do worse on ... http://t.co/mTsQlNQO","link":"http://twitter.com/Mobanu/statuses/207958321003638785","postedTime":"2012-05-30T22:15:02.000Z"},"twitter_entities":{"urls":[{"display_url":"nyti.ms/KUmmMa","indices":[116,136],"expanded_url":"http://nyti.ms/KUmmMa","url":"http://t.co/mTsQlNQO"}],"hashtags":[],"user_mentions":[]},"gnip":{"language":{"value":"en"},"matching_rules":[{"value":"url_contains: nytimes.com","tag":null}],"klout_score":12,"urls":[{"url":"http://t.co/mTsQlNQO","expanded_url":"http://well.blogs.nytimes.com/2012/05/30/can-exercise-be-bad-for-you/?utm_medium=twitter&utm_source=twitterfeed"}]}}\r\n' json exception: Extra data: line 2 column 1 - line 4 column 1 (char 1891 - 5597) Header Output: HTTP/1.1 200 OK Content-Type: application/json; charset=UTF-8 Vary: Accept-Encoding Date: Wed, 30 May 2012 22:14:48 UTC Connection: close Transfer-Encoding: chunked Content-Encoding: gzip get_stream.py: #!/usr/bin/env python import sys import pycurl import json import pymongo STREAM_URL = "https://stream.test.com:443/accounts/publishers/twitter/streams/track/Dev.json" AUTH = "userid:passwd" DB_HOST = "127.0.0.1" DB_NAME = "stream_test" class StreamReader: def __init__(self): try: self.count = 0 self.buff = "" self.mongo = pymongo.Connection(DB_HOST) self.db = self.mongo[DB_NAME] self.raw_tweets = self.db["raw_tweets_gnip"] self.conn = pycurl.Curl() self.conn.setopt(pycurl.ENCODING, 'gzip') self.conn.setopt(pycurl.URL, STREAM_URL) self.conn.setopt(pycurl.USERPWD, AUTH) self.conn.setopt(pycurl.WRITEFUNCTION, self.on_receive) self.conn.setopt(pycurl.HEADERFUNCTION, self.header_rcvd) while True: self.conn.perform() except Exception as ex: print "error ocurred : %s" % str(ex) def header_rcvd(self, header_data): print header_data def on_receive(self, data): temp_data = data self.buff += data if data.endswith("\r\n") and self.buff.strip(): try: tweet = json.loads(self.buff, encoding = 'UTF-8') self.buff = "" if tweet: try: self.raw_tweets.insert(tweet) except Exception as insert_ex: print "Error inserting tweet: %s" % str(insert_ex) self.count += 1 if self.count % 10 == 0: print "inserted "+str(self.count)+" tweets" except Exception as json_ex: print "json exception: %s" % str(json_ex) print repr(temp_data) stream = StreamReader()

    Read the article

  • Sharepoint list fields stored in content database?

    - by nav
    Hi, I am trying to find out where data for Sharepoint list fields are stored in the content database. For example from the AllLists table filtering on the listid i am intrested in I can derive the following from the tp_ContentTypes column on the field I'm interested in: <Field Type="CascadingDropDownListFieldWithFilter" DisplayName="Secondary Subject" Required="FALSE" ID="{b4143ff9-d5a4-468f-8793-7bb2f06b02a0}" SourceID="{6c1e9bbf-4f02-49fd-8e6c-87dd9f26158a}" StaticName="Secondary_x0020_Subject" Name="Secondary_x0020_Subject" ColName="nvarchar13" RowOrdinal="0" Version="1"><Customization><ArrayOfProperty><Property><Name>SiteUrl</Name><Value xmlns:q1="http://www.w3.org/2001/XMLSchema" p4:type="q1:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">http://lvis.amberlnk.net/knowhow</Value></Property><Property><Name>CddlName</Name><Value xmlns:q2="http://www.w3.org/2001/XMLSchema" p4:type="q2:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">SS</Value></Property><Property><Name>CddlParentName</Name><Value xmlns:q3="http://www.w3.org/2001/XMLSchema" p4:type="q3:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">PS</Value></Property><Property><Name>CddlChildName</Name><Value xmlns:q4="http://www.w3.org/2001/XMLSchema" p4:type="q4:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance"/></Property><Property><Name>ListName</Name><Value xmlns:q5="http://www.w3.org/2001/XMLSchema" p4:type="q5:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Secondary</Value></Property><Property><Name>ListTextField</Name><Value xmlns:q6="http://www.w3.org/2001/XMLSchema" p4:type="q6:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Title</Value></Property><Property><Name>ListValueField</Name><Value xmlns:q7="http://www.w3.org/2001/XMLSchema" p4:type="q7:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Title</Value></Property><Property><Name>JoinField</Name><Value xmlns:q8="http://www.w3.org/2001/XMLSchema" p4:type="q8:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Primary</Value></Property><Property><Name>FilterField</Name><Value xmlns:q9="http://www.w3.org/2001/XMLSchema" p4:type="q9:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Content Type ID</Value></Property><Property><Name>FilterOperator</Name><Value xmlns:q10="http://www.w3.org/2001/XMLSchema" p4:type="q10:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Show All</Value></Property><Property><Name>FilterValue</Name><Value xmlns:q11="http://www.w3.org/2001/XMLSchema" p4:type="q11:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance"/></Property></ArrayOfProperty></Customization></Field> Which table do I need to query to find the data held on this field? Many Thanks Nav
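    For what it's worth, in a WSS 3.0/MOSS 2007 content database the per-item field values generally live in the AllUserData table, and the ColName attribute in the field definition above (nvarchar13 here) names the physical column that holds this field's value for each item row. Querying the content database directly is unsupported by Microsoft, so treat the following as a read-only peek only; it is a hypothetical pyodbc sketch with the server, database and list GUID as placeholders.
        # Hypothetical read-only peek: the field above declares ColName="nvarchar13".
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=sql-host;"
                              "DATABASE=WSS_Content;Trusted_Connection=yes")
        cursor = conn.cursor()
        cursor.execute(
            "SELECT tp_ID, nvarchar13 FROM dbo.AllUserData WHERE tp_ListId = ?",
            "00000000-0000-0000-0000-000000000000")   # placeholder list GUID
        for row in cursor.fetchall():
            print(row.tp_ID, row.nvarchar13)
    Note that AllUserData can hold several rows per item (one per version), so in practice the supported route (the SharePoint object model or lists.asmx) is the safer way to read these values.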

    Read the article

  • Usefulness of packets in wireshark? SSDP protocol, rather than HTTP?

    - by Chris
I used to be able to filter my Wireshark packets to get useful information from them. However, with my current configuration on OS X, all of the HTTP traffic is coming through as the SSDP protocol and is generally unhelpful. Why is this? Actually, it seems that packets from my own system that should be HTTP are coming through as HTTP, but packets from other machines that should be HTTP are showing up as SSDP.

    Read the article

  • Configure VirtualHost to Rewrite HTTP://subdomain... to HTTPS://internaldirectory

    - by David Kaczynski
    How do I configure Apache to rewrite an http request for a subdomain to an https request for the correct directory? For example, I have the following VirtualHost configuration: However, this turns http://redmine...us into https://redmine...us/redmine. Also, changing RewriteRule ^(.*)$ https://%{HTTP_HOST}/redmine [R] to RewriteRule ^(.*)$ https://%{HTTP_HOST} [R] seems to simply redirect the HTTP request to HTTP://...us, which is currently the default /var/www/index.html page. Any suggestions?

    Read the article

  • How to redirect Cisco IOS's show output to HTTP URL?

    - by yegle
    I found there's a redirect output modifier of Cisco IOS ( version 12.2(53)SE1 ), and there's http: URI support: #sh version | redirect ? flash: Uniform Resource Locator ftp: Uniform Resource Locator http: Uniform Resource Locator https: Uniform Resource Locator nvram: Uniform Resource Locator rcp: Uniform Resource Locator scp: Uniform Resource Locator tftp: Uniform Resource Locator However, I cannot find any document on cisco.com about the http support. I tried sh version | redirect http://my_server/ and cannot find any information on my_server's access log. Can anyone give me a hint?
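    I can't find documentation of what the device actually sends to the http: target either, so one way to find out is to point the redirect at a tiny catch-all listener and watch what arrives (if nothing shows up at all, the request never left the switch, which would point at IOS HTTP client configuration or reachability rather than at the URL syntax). A minimal Python sketch of such a listener, with the port as a placeholder:
        # Catch-all listener: logs method, path, headers and body of any request, e.g.
        # after running: sh version | redirect http://<this-host>:8000/showver
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class CatchAll(BaseHTTPRequestHandler):
            def _log(self):
                length = int(self.headers.get("Content-Length", 0))
                body = self.rfile.read(length) if length else b""
                print(self.command, self.path)
                print(self.headers)
                print(body.decode(errors="replace"))
                self.send_response(200)
                self.end_headers()

            do_GET = do_POST = do_PUT = _log      # accept whatever method IOS uses

        HTTPServer(("0.0.0.0", 8000), CatchAll).serve_forever()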

    Read the article

  • Apache2 graceful restart stops proxying requests to passenger

    - by Rob
    Issue with Apache mod_proxy: it stops proxying requests after a graceful restart, but not every time. It seems to happen only on a Sunday, when a graceful restart is triggered by logrotate.

        [Sun Sep 9 05:25:06 2012] [notice] SIGUSR1 received. Doing graceful restart
        [Sun Sep 9 05:25:06 2012] [notice] Apache/2.2.22 (Ubuntu) Phusion_Passenger/3.0.11 configured -- resuming normal operations
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(492) failed in child 26153 for worker proxy:reverse
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(486) failed in child 26153 for worker http://api.myservice.org/api
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(487) failed in child 26153 for worker http://api.myservice.org/editor/$1
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(489) failed in child 26153 for worker http://api.myservice.org/build
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(490) failed in child 26153 for worker http://api.myservice.org/help
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(491) failed in child 26153 for worker http://api.myservice.org/motd.html
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(480) failed in child 26153 for worker http://api.myservice.org/api
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(481) failed in child 26153 for worker http://api.myservice.org/editor/$1
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(483) failed in child 26153 for worker http://api.myservice.org/build
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(484) failed in child 26153 for worker http://api.myservice.org/help
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(485) failed in child 26153 for worker http://api.myservice.org/motd.html
        [Sun Sep 9 05:25:06 2012] [error] proxy: ap_get_scoreboard_lb(479) failed in child 26153 for worker http://api.myservice.org/motd.html

    After these lines, the logs are flooded with 404s because the requests are no longer being proxied. It's worth noting that the destination is just another vhost on the same Apache instance, but that vhost (http://api.myservice.org) is served by Passenger (mod_rails). I was thinking that maybe there are startup issues with the Passenger workers not being ready during a graceful restart? A full restart resolves it and everything returns to normal.

    //Edit Here's the vhost config, thanks :)

        <VirtualHost *:80>
          UseCanonicalName Off
          LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon

          <Directory /var/www/vhosts>
            RewriteEngine on
            AllowOverride All
          </Directory>

          RewriteEngine on
          RewriteCond /var/www/vhosts/%{SERVER_NAME} !-d
          RewriteCond /var/www/vhosts/%{SERVER_NAME} !-l
          RewriteRule ^ http://sitenotfound.myservice.org/ [R=302,L]

          VirtualDocumentRoot /var/www/vhosts/%0/current

          # Rewrite requests to /assets to map to the /var/file-store/<SERVER_NAME>/
          RewriteMap lowercase int:tolower
          RewriteCond %{REQUEST_URI} ^/assets/
          RewriteRule ^/assets/(.*)$ /var/file-store/${lowercase:%{SERVER_NAME}}/$1

          # Map /login to /editor.html as it's far friendlier.
          RewriteCond %{REQUEST_URI} ^/login
          RewriteRule .* /editor.html [PT]

          # Forward some requests to the API
          ProxyPass /api http://api.myservice.org/api
          ProxyPass /site.json http://api.myservice.org/api/editor/site
          ProxyPassMatch ^/editor/(.*)$ http://api.myservice.org/editor/$1
          ProxyPassMatch ^/api/(.*) http://api.myservice.org/api/$1
          ProxyPass /build http://api.myservice.org/build
          ProxyPass /help http://api.myservice.org/help
          ProxyPass /motd.html http://api.myservice.org/motd.html

          <Proxy *>
            Order allow,deny
            Allow from all
          </Proxy>

          # TODO generate slightly more specific Error Documents for 401/403/500's,
          # but for now the 404 page is good enough
          ErrorDocument 401 /404.html
          ErrorDocument 403 /404.html
          ErrorDocument 404 /404.html
          ErrorDocument 500 /404.html
        </VirtualHost>
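    This does not explain the scoreboard errors, but one defensive option is to probe a proxied path right after the Sunday graceful restart and fall back to a full restart if proxying is broken. The sketch below is only an illustration under assumptions: the health-check URL and the restart command are placeholders for whatever fits this setup, and it would be run from logrotate's postrotate step or a cron job a minute later.

        # Hypothetical post-logrotate check: if a proxied path starts
        # returning 404 after the graceful restart, do a full restart.
        import subprocess
        import urllib.request
        import urllib.error

        CHECK_URL = "http://localhost/api/health"  # placeholder: any path behind ProxyPass

        def proxy_ok():
            try:
                urllib.request.urlopen(CHECK_URL, timeout=5)
                return True
            except urllib.error.HTTPError as err:
                return err.code != 404   # 404 is the failure signature seen in the logs
            except Exception:
                return False

        if not proxy_ok():
            subprocess.call(["service", "apache2", "restart"])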

    Read the article

  • High Profile ASP.NET websites

    - by nandos
    About twice a month I get asked to justify the reason "Why are we using ASP.NET and not PHP, or Java, or buzz-word-of-the-month, etc.?" 100% of the time the questions come from people who do not understand anything about technology, people who would not know the difference between FTP and HTTP. The best approach I have found (so far) to justify it without getting into technical details is to just say "XXX website uses it", to which I get back "Oh... I did not know that, so ASP.NET must be good". I know, I know, it hurts. But it works. So, without getting into the merits of why I'm using ASP.NET (which could trigger an endless argument for other platforms), I'm trying to compile a list of high-profile websites that are implemented in ASP.NET. (No, they would have no idea what StackOverflow is.) Can you name a high-profile website implemented in ASP.NET?

    EDIT: Current list (thanks for all the responses), trying to avoid tech sites and prioritizing retail sites:

        Costco - http://www.costco.com/
        Crate & Barrel - http://www.crateandbarrel.com/
        Home Shopping Network - http://www.hsn.com/
        Buy.com - http://www.buy.com/
        Dell - http://www.dell.com
        Nasdaq - http://www.nasdaq.com/
        Virgin - http://www.virgin.com/
        7-Eleven - http://www.7-eleven.com/
        Carnival Cruise Lines - http://www.carnival.com/
        L'Oreal - http://www.loreal.com/
        The White House - http://www.whitehouse.gov/
        Remax - http://www.remax.com/
        Monster Jobs - http://www.monster.com/
        USA Today - http://www.usatoday.com/
        ComputerJobs.com - http://computerjobs.com/
        Match.com - http://www.match.com
        National Health Service (UK) - http://www.nhs.uk/
        CareerBuilder.com - http://www.careerbuilder.com/

    Read the article

  • getting base url of web site's root (absolute/relative url)

    - by uzay95
    I want to completely understand how to use relative and absolute URL addresses in static and dynamic files.

        ~  : in ASP.NET, refers to the application root
        .. : in a relative URL, indicates the parent directory
        .  : refers to the current directory
        /  : always replaces the entire pathname of the base URL
        // : always replaces everything from the hostname onwards

    These rules are easy when you are working without a virtual directory, but I am working under a virtual directory. The table below assumes the base URL is http://WebReference.com/html/.

        Relative URI            Absolute URI
        about.html              http://WebReference.com/html/about.html
        tutorial1/              http://WebReference.com/html/tutorial1/
        tutorial1/2.html        http://WebReference.com/html/tutorial1/2.html
        /                       http://WebReference.com/
        //www.internet.com/     http://www.internet.com/
        /experts/               http://WebReference.com/experts/
        ../                     http://WebReference.com/
        ../experts/             http://WebReference.com/experts/
        ../../../               http://WebReference.com/
        ./                      http://WebReference.com/html/
        ./about.html            http://WebReference.com/html/about.html

    I want to simulate a site like the one below, like my project, which runs under a virtual directory. These are my aspx and ascx folders:

        http://hostAddress:port/virtualDirectory/MainSite/ASPX/default.aspx
        http://hostAddress:port/virtualDirectory/MainSite/ASCX/UserCtrl/login.ascx
        http://hostAddress:port/virtualDirectory/AdminSite/ASPX/ASCX/default.aspx

    These are my JS files (which will be used with both the aspx and ascx files):

        http://hostAddress:port/virtualDirectory/MainSite/JavascriptFolder/jsFile.js
        http://hostAddress:port/virtualDirectory/AdminSite/JavascriptFolder/jsFile.js

    This is my static web page address (I want to show some pictures and run some js functions inside it):

        http://hostAddress:port/virtualDirectory/HTMLFiles/page.html

    This is my image folder:

        http://hostAddress:port/virtualDirectory/Images/PNG/arrow.png
        http://hostAddress:port/virtualDirectory/Images/GIF/arrow.png

    If I want to set an image control's link in my ASPX file, I should write:

        aspxImgCtrl.ImageUrl = Server.MapPath("~") + "/Images/GIF/arrow.png";

    But if I want to write the path hard-coded, or build it from a JavaScript file, what kind of URL address should it be?
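    The resolution rules in the table are the standard URL-resolution rules, so any combination can be sanity-checked quickly before it gets hard-coded into markup or a JS file. A small Python sketch, using the base URL from the table above (not the virtual-directory URLs):

        # Shows how relative references resolve against a base URL,
        # matching the table above.
        from urllib.parse import urljoin

        base = "http://WebReference.com/html/"
        for ref in ["about.html", "tutorial1/2.html", "/", "//www.internet.com/",
                    "/experts/", "../", "./about.html"]:
            print(f"{ref:20} -> {urljoin(base, ref)}")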

    Read the article

  • can I load data (google app engine) from http://localhost:8100/remote_api?

    - by zjm1126
    I can download data from GAE (http://zjm1126.appspot.com/remote_api). This is the command:

        appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv

    and it succeeds:

        D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv
        Downloading data records.
        [INFO ] Logging to bulkloader-log-20100618.162421
        [INFO ] Throttling transfers:
        [INFO ] Bandwidth: 250000 bytes/second
        [INFO ] HTTP connections: 8/second
        [INFO ] Entities inserted/fetched/modified: 20/second
        [INFO ] Batch Size: 10
        [INFO ] Opening database: bulkloader-progress-20100618.162421.sql3
        [INFO ] Opening database: bulkloader-results-20100618.162421.sql3
        [INFO ] Connecting to zjm1126.appspot.com/remote_api
        Please enter login credentials for zjm1126.appspot.com
        Email: [email protected]
        Password for [email protected]:
        [INFO ] Downloading kinds: [u'LogText', u'Greeting', u'Forum', u'Thread']
        ....
        [INFO ] Have 0 entities, 0 previously transferred
        [INFO ] 0 entities (8804 bytes) transferred in 11.3 seconds

    So I want to know whether I can load data from 127.0.0.1 (the local dev server). This is my command:

        appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv

    and the error is:

        D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv
        Downloading data records.
        [INFO ] Logging to bulkloader-log-20100618.162325
        [INFO ] Throttling transfers:
        [INFO ] Bandwidth: 250000 bytes/second
        [INFO ] HTTP connections: 8/second
        [INFO ] Entities inserted/fetched/modified: 20/second
        [INFO ] Batch Size: 10
        [INFO ] Opening database: bulkloader-progress-20100618.162325.sql3
        [INFO ] Opening database: bulkloader-results-20100618.162325.sql3
        Please enter login credentials for localhost
        Email: [email protected]
        Password for [email protected]:
        [INFO ] Connecting to localhost:8100/remote_api
        [ERROR ] Exception during authentication
        Traceback (most recent call last):
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 3169, in Run
            self.request_manager.Authenticate()
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1178, in Authenticate
            remote_api_stub.MaybeInvokeAuthentication()
          File "d:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 542, in MaybeInvokeAuthentication
            datastore_stub._server.Send(datastore_stub._path, payload=None)
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 346, in Send
            f = self.opener.open(req)
          File "D:\Python25\lib\urllib2.py", line 387, in open
            response = meth(req, response)
          File "D:\Python25\lib\urllib2.py", line 498, in http_response
            'http', request, response, code, msg, hdrs)
          File "D:\Python25\lib\urllib2.py", line 425, in error
            return self._call_chain(*args)
          File "D:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "D:\Python25\lib\urllib2.py", line 506, in http_error_default
            raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        HTTPError: HTTP Error 404: Not Found
        [INFO ] Authentication Failed

    So what should I do? Thanks.
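    The 404 suggests that nothing is mapped at /remote_api on the dev server, so authentication never gets a chance to run; with the Python SDK of that era the remote_api endpoint had to be enabled in app.yaml (the remote_api handler or builtin) for the local server just as for production. A quick way to check from Python before rerunning the bulkloader, with the port 8100 taken from the question:

        # Does the local dev server expose /remote_api at all?
        # A 404 means the handler is not mapped in app.yaml; any other
        # response (a 403 or a login page, say) means the endpoint exists.
        import urllib.request
        import urllib.error

        url = "http://localhost:8100/remote_api"  # port from the question
        try:
            resp = urllib.request.urlopen(url, timeout=5)
            print(resp.status, resp.reason)
        except urllib.error.HTTPError as err:
            print(err.code, err.reason)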

    Read the article

  • OpenCV install problems on Studio 12.04 - broken dependencies

    - by Will
    I'm trying to follow the Ubuntu OpenCV documentation at OpenCV. The provided script has a line that executed for some time, removing more packages than I expected (such as ubuntu-studio video):

        sudo apt-get -qq remove ffmpeg x264 libx264-dev

    When the script gets to the line below, it bombs:

        sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils ffmpeg

    The error message is:

        E: Unable to correct problems, you have held broken packages.

    I've since run Update-Manager, run sudo apt-get update, rebooted, tried the above script line manually, and still no change. I've just run sudo apt-get install -f and nothing seemed to change. It did mention that some packages were no longer needed and could be removed by apt-get autoremove, so I ran that. It removed a number of packages, so I reran the install command above. Still the same problem of held broken packages.

    I then ran sudo apt-get -u dist-upgrade. Part of the response was:

        The following packages have been kept back:
          gstreamer0.10-ffmpeg

    I'm not sure what that means. I do know that it shows up in my Update-Manager and cannot be checked. I then ran sudo dpkg --configure -a and then reran sudo apt-get -f install, and the package was still not upgraded, though there was this very interesting comment:

        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         gstreamer0.10-ffmpeg : Depends: libavcodec53 (< 5:0) but it is not going to be installed or
                                libavcodec-extra-53 (< 5:0) but 5:0.7.2-1ubuntu1+codecs1~oneiric2 is to be installed
        E: Unable to correct problems, you have held broken packages.

    Then I ran sudo apt-get -u dist-upgrade again. It showed I had one held package, so I ran:

        sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade

    It also exited without upgrading the package, so I ran:

        sudo apt-get remove --dry-run gstreamer0.10-ffmpeg:i386

    And it gave me:

        The following packages will be REMOVED:
          arista gstreamer0.10-ffmpeg
        0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
        Remv arista [0.9.7-3ubuntu1]
        Remv gstreamer0.10-ffmpeg [0.10.12-1ubuntu1]

    But when I reran sudo apt-get -u dist-upgrade, it showed the package was still there:

        The following packages have been kept back:
          gstreamer0.10-ffmpeg
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

    Update: I just went into Synaptic Package Manager and completely removed gstreamer0.10-ffmpeg, then reran sudo apt-get -u dist-upgrade and was told:

        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    However, when I ran the original apt-get line to install OpenCV (the first command at the top of this question), it still gave me the same broken-package errors.
    So I tried:

        $ cat /etc/apt/sources.list
        #
        # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe
        # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://us.archive.ubuntu.com/ubuntu/ precise universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse

        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://security.ubuntu.com/ubuntu precise-security main restricted
        deb-src http://security.ubuntu.com/ubuntu precise-security main restricted
        deb http://security.ubuntu.com/ubuntu precise-security universe
        deb-src http://security.ubuntu.com/ubuntu precise-security universe
        deb http://security.ubuntu.com/ubuntu precise-security multiverse
        deb-src http://security.ubuntu.com/ubuntu precise-security multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu oneiric partner

        ## Uncomment the following two lines to add software from Ubuntu's
        ## 'extras' repository.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://extras.ubuntu.com/ubuntu oneiric main
        # deb-src http://extras.ubuntu.com/ubuntu oneiric main

        # deb http://download.opensuse.org/repositories/home:/popinet/xUbuntu_11.04 ./ # disabled on upgrade to precise

    and then:

        $ cat /etc/apt/sources.list.d/*

    But I don't have enough reputation to post the results here (it says I need at least 10 reputation points to post more than 2 links), so I don't know how to provide the requested feedback.
    Then I tried:

        $ sudo apt-get check
        [sudo] password for <abcd>:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done

    However, no resolution of the problem yet. What else do I need to do? Will an upgrade to Ubuntu Studio 13.xx solve this problem (or compound it)?

    Read the article

  • Useful links for InfoPath form development

    - by ybbest
    Powershell
    http://msdn.microsoft.com/en-us/library/ms442691.aspx
    http://technet.microsoft.com/en-us/library/ee806878.aspx
    http://sharepoint.microsoft.com/blogs/zach/Lists/Posts/Post.aspx?ID=7

    InfoPath
    http://msdn.microsoft.com/en-us/library/aa943232.aspx
    http://blah.winsmarts.com/2008-8-Deploying_InfoPath_2007_Forms_to_Forms_Server_-and-ndash_Properly.aspx
    http://msdn.microsoft.com/en-us/library/microsoft.office.infopath.server.administration.xsnfeaturereceiver.aspx
    http://www.articlesbase.com/web-hosting-articles/sharepoint-hosting-3-ways-to-deploy-infopath-form-templates-to-sharepoint-2605874.html
    http://jopx.blogspot.com/2006/08/infopath-2007-solving-xsn-can-not-be.html

    Read the article

  • my site works with www.mysite.com but not like this: http://mysite.com

    - by toocool
    This is the site I am developing: http://www.juve-news.com/. It works with the www prefix, but when I try to open it without the www prefix it gives me a 400 Bad Request. I have used another web host before; now I am trying a new one, and I have to add some DNS entries like in the picture here: http://cloudcontrol.com/developers/documentation/add-ons/aliases/. I don't know what I have done wrong. If anyone knows what the problem could be, please give me a tip.
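    A quick way to tell whether this is a DNS problem or a host configuration problem is to compare how the two names resolve and what the server answers for each. A small Python sketch using the domain from the question; if the bare name does not resolve, the fix is a DNS record for the apex, and if it resolves but returns 400, the host is most likely not configured with an alias for the bare domain:

        # Compare DNS resolution and HTTP status for the www and bare names.
        import socket
        import urllib.request
        import urllib.error

        for host in ("www.juve-news.com", "juve-news.com"):
            try:
                addr = socket.gethostbyname(host)
            except socket.gaierror as err:
                print(f"{host}: DNS lookup failed ({err})")
                continue
            try:
                status = urllib.request.urlopen(f"http://{host}/", timeout=10).status
            except urllib.error.HTTPError as err:
                status = err.code
            except Exception as err:
                status = f"request failed ({err})"
            print(f"{host} -> {addr}, HTTP {status}")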

    Read the article

  • Corrupt install of Ruby?

    - by wilhelmtell
    I'm having issues requiring 'digest/sha1'.

        ~$ ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared
        ~$ make
        ~$ make install
        ~$ irb19
        irb(main):001:0> require 'digest/sha1'
        LoadError: dlopen(/Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle, 9): Symbol not found: _rb_Digest_SHA1_Finish
          Referenced from: /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
          Expected in: flat namespace
         - /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
                from (irb):1:in `require'
                from (irb):1
                from /Users/matan/usr/bin/irb19:12:in `<main>'
        irb(main):002:0>

    I know some standard modules require fine, while others don't. If I say require 'digest', that works fine. Just a couple of days ago I uninstalled Ruby and re-installed it. When I first installed Ruby, I installed it in a similar manner, from source, prefixed at my local $HOME/usr directory. I tried removing each and every file that make install installs, then re-installing, but that didn't help. Do you have an idea what the issue is and how to resolve it?

    Read the article
