Search Results

Search found 11367 results on 455 pages for 'pages 09'.


  • Problem finding the difference in days between two dates

    - by James
    I have been using a tidy little routine that I found here to calculate the difference in days between two dates in AS3. I am getting some strange results and I am wondering if any of you inter-codal-mega-lords can shed some light? Why is Q1 of 2010 coming up one day short, when in all other cases the routine is performing fine? Many thanks in advance to anyone who can help!

        function countDays( startDate:Date, endDate:Date ):int
        {
            var oneDay:int = 24*60*60*1000; // hours*minutes*seconds*milliseconds
            var diffDays:int = Math.abs((startDate.getTime() - endDate.getTime())/(oneDay));
            return diffDays;
        }

        countDays( new Date( 2010, 00, 01 ), new Date( 2011, 00, 01 ) ); // returns 365, which is correct
        countDays( new Date( 2010, 00, 01 ), new Date( 2010, 03, 01 ) ); // returns 89, which is 1 day short
        countDays( new Date( 2010, 03, 01 ), new Date( 2010, 06, 01 ) ); // returns 91, which is correct
        countDays( new Date( 2010, 06, 01 ), new Date( 2010, 09, 01 ) ); // returns 92, which is correct
        countDays( new Date( 2010, 09, 01 ), new Date( 2011, 00, 01 ) ); // returns 92, which is correct
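
    The likely culprit is daylight saving time: in a DST-observing timezone, Q1 of 2010 spans the spring-forward date, so the interval is one hour short of 90 whole days and the int cast truncates 89.96 down to 89. Rounding instead of truncating absorbs the shift. A minimal sketch of the same arithmetic in PHP (PHP rather than AS3, with America/New_York chosen purely for illustration):

        <?php
        // Jan 1 -> Apr 1 2010 crosses the US spring-forward date (Mar 14),
        // so the interval is 90 days minus one hour.
        date_default_timezone_set('America/New_York');

        $start = strtotime('2010-01-01');
        $end   = strtotime('2010-04-01');
        $days  = ($end - $start) / 86400;

        echo (int) $days, "\n";   // 89 -- truncation drops the partial day
        echo round($days), "\n";  // 90 -- rounding absorbs the DST hour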

    Read the article

  • PHP While loop separating unique categories from multiple 'Joined' tables

    - by Hob
    I'm pretty new to joins so I hope this all makes sense. I'm joining 4 tables and want to create a while loop that spits out results nested under different categories.

    My tables:

        categories:  id | category_name
        pages:       id | page_name | category
        image_pages: id | page_id | image_id
        images:      id | thumb_path

    My current SQL join:

        <?php
        $all_photos = mysql_query("
            SELECT * FROM categories
            JOIN pages ON pages.category = categories.id
            JOIN image_pages ON image_pages.page_id = pages.id
            JOIN images ON images.id = image_pages.image_id
        ");
        ?>

    The result I want from a while loop is something like this:

        Category 1
            page 1
                Image 1, image 2, image 3
            page 2
                Image 2, image 4
        Category 2
            page 3
                image 1
            page 4
                image 1, image 2, image 3

    I hope that makes sense. Each image can fall under multiple pages and each page can fall under multiple categories. At the moment I have 2 solutions. One lists each category several times, according to the number of pages inside it, e.g. category 1, page 1, image 1 - category 1, page 1, image 2, etc. The other uses a while loop inside another while loop inside another while loop, resulting in 3 SQL queries:

        <?php while($all_page = mysql_fetch_array($all_pages)) { ?>
            <p><?=$all_page['page_name']?></p>
            <?php
            $all_images = mysql_query("SELECT * FROM images
                JOIN image_pages ON image_pages.page_id = " . $all_page['id'] . "
                AND image_pages.image_id = images.id");
            ?>
            <div class="admin-images-block clearfix">
                <?php while($all_image = mysql_fetch_array($all_images)) { ?>
                    <img src="<?=$all_image['thumb_path']?>" alt="<?=$all_image['title']?>"/>
                <?php } ?>
            </div>
        <?php } } ?>
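
    A minimal sketch of grouping the joined result set in a single pass, assuming the column names from the schema above (category_name, page_name, thumb_path) and the same mysql_* API as the question:

        <?php
        // One query, one loop: bucket every row by category, then by page.
        $grouped = array();
        while ($row = mysql_fetch_assoc($all_photos)) {
            $grouped[$row['category_name']][$row['page_name']][] = $row['thumb_path'];
        }

        // Nested foreach loops then print each category and page heading once.
        foreach ($grouped as $category => $pages) {
            echo "<h2>" . htmlspecialchars($category) . "</h2>";
            foreach ($pages as $page => $thumbs) {
                echo "<h3>" . htmlspecialchars($page) . "</h3>";
                foreach ($thumbs as $thumb) {
                    echo '<img src="' . htmlspecialchars($thumb) . '" alt=""/>';
                }
            }
        }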

    Read the article

  • SEO Problem for new dictionary site, Google hasn't indexed content

    - by John
    I loaded about 15,000 pages, letters A & B of a dictionary, and submitted a text sitemap to Google. I'm using Google's search with advertisement as the planned mechanism for searching my site. Google's Webmaster Tools accepted the sitemap as good, but then did not index the pages. My index page has been indexed by Google, and at this point it does not link to any of the other pages. So to get Google's search to work I need to get all my content indexed.

    It appears Google will not index from the sitemap alone, so I was thinking of adding pages that spiders can reach via links from the main index page. But I don't want to create a bunch of pages that programmatically link all of the pages without knowing whether this has a chance to work. Eventually I plan on having about 150,000 pages, each page being a word or phrase being defined. I wrote a program that is pulling this from a dictionary database. I would like to show the content I have to anyone interested, to demonstrate the value of the dictionary in relation to the dictionary software that I'm completing. Any suggestions for getting the entire site indexed by Google so I can appear in the search results? Thanks

    Read the article

  • PHP & MySQL Pagination Update Help

    - by TaG
    It's been a while since I updated the pagination on my web page, and I'm trying to add First and Last links to it, as well as the "..." for when the list of pages gets too long. For example, I'm trying to achieve the following. Can someone help me fix my code so I can update my site? Thanks.

    What I want:

        Previous First 1 2 3 4 5 6 7 ... 199 200 Last Next

    What my code currently displays:

        Previous 1 2 3 4 5 6 7 Next

    Here is the part of my pagination code that displays the links:

        if ($pages > 1) {
            echo '<br /><p>';
            $current_page = ($start/$display) + 1;
            if ($current_page != 1) {
                echo '<a href="index.php?s=' . ($start - $display) . '&p=' . $pages . '">Previous</a> ';
            }
            for ($i = 1; $i <= $pages; $i++) {
                if ($i != $current_page) {
                    echo '<a href="index.php?s=' . ($display * ($i - 1)) . '&p=' . $pages . '">' . $i . '</a> ';
                } else {
                    echo '<span>' . $i . '</span> ';
                }
            }
            if ($current_page != $pages) {
                echo '<a href="index.php?s=' . ($start + $display) . '&p=' . $pages . '">Next</a>';
            }
            echo '</p>';
        }
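
    One way to get the truncated "1 2 3 ... 199 200" window is to replace just the for loop above - a sketch that reuses $start, $display, $pages and $current_page exactly as computed there. Only pages near the current one, plus the first and last two, are printed, and each gap becomes a single "...":

        $window = 3;        // how many pages to show either side of the current one
        $last_printed = 0;  // tracks gaps so "..." is emitted once per gap
        for ($i = 1; $i <= $pages; $i++) {
            $show = ($i <= 2) || ($i > $pages - 2) || (abs($i - $current_page) <= $window);
            if (!$show) {
                continue;
            }
            if ($i > $last_printed + 1) {
                echo ' ... ';
            }
            if ($i != $current_page) {
                echo '<a href="index.php?s=' . ($display * ($i - 1)) . '&p=' . $pages . '">' . $i . '</a> ';
            } else {
                echo '<span>' . $i . '</span> ';
            }
            $last_printed = $i;
        }

    The First and Last links themselves are then ordinary anchors using the same URL scheme, with s=0 and s=($display * ($pages - 1)) respectively.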

    Read the article

  • Printing an image embedded into an ASP.NET page (C#)

    - by user1683846
    I am looking to print using C#, Microsoft Visual Studio, and ASP.NET.

    The goal: I am trying to print all pages in a fax document.

    Format: The fax document is received as a single .tiff file with multiple pages.

    What needs to be done: I need to iterate over each page in the .tiff image and print them all.

        // This function will iterate over all pages, one at a time.
        protected void PrintAll_Click(object sender, EventArgs e)
        {
            // count the number of pages in the fax document
            int number = _FaxPages.Count;

            // loop through each page
            for (int i = 0; i < number; i++)
            {
                string _FaxId = Page.Request["FaxId"];
                _PageIndex = i;
                imgFax.ImageUrl = "ShowFax.ashx?n=" + _FaxId + "&f=" + _PageIndex + "&mw=750";
                PrintAll.Attributes.Add("onclick", "return printing()");
            }
        }

    and in my JavaScript I have:

        function printing() {
            window.print();
        }

    Now the issues are:

    1. The function only prints one page (the very last page of the tiff file), and it prints the entire window, not just the embedded .tiff image. Any ideas how to go about printing just the embedded .tiff image instead of the entire web browser window?

    2. Although my for loop iterates through the pages, the print function is only called after the loop has iterated through all the pages and stopped on the final page (say page 6/6). And then it only prints that last page (along with the rest of the contents of the browser window). This is not what I want to happen. I don't want to print all the excess material on the browser window, such as buttons, links, etc. I just want the embedded .tiff image from the browser window.

    Read the article

  • Single-entry implementation gone wrong

    - by user745434
    I'm doing my first single-entry site and, based on the result, I can't see the benefit. I've implemented the following:

    - .htaccess redirects all requests to index.php at the root
    - The URL is parsed and each /segment/ is stored as an element in an array
    - The first segment indicates which folder to include (e.g. "users" » "/pages/users/index.php")
    - The index.php file of each folder parses the remaining elements in the segments array until the array is empty
    - The content.php file of each folder is included if there are no more elements in the segments array, indicating that the destination file has been reached

    Sample file structure (folders in []):

        [root]
            index.php
            [pages]
                [users]
                    index.php
                    content.php
                    [profile]
                        index.php
                        content.php
                        [edit]
                            index.php
                            content.php
                [other-page]
                    index.php
                    content.php

    Request: http://mysite.com/users/profile/

    1. .htaccess redirects the request to http://mysite.com/index.php
    2. The URL is parsed and the segments array contains: [1] users, [2] profile
    3. index.php maps [1] to "pages/users/index.php", so it includes that file
    4. pages/users/index.php maps [2] to pages/users/profile/index.php, so it includes that file
    5. Since there are no other elements in the segments array, the content.php file in the current folder (pages/users/profile) is included

    I'm not really seeing the benefit of doing this over having functions that include components of the site (e.g. include_header(), include_footer(), etc.), so I conclude that I'm doing something terribly wrong. I'm just not sure what it is.
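
    For what it's worth, the dispatch described above can be collapsed into a few lines; a condensed sketch, assuming the /pages/<segment>/.../index.php layout from the sample structure (the basename() call is an addition - it guards each segment against ../ traversal):

        <?php
        // Split the request path into segments: /users/profile/ -> [users, profile]
        $path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
        $segments = ($path === '') ? array() : explode('/', $path);

        // Walk down one folder per segment, refusing anything outside pages/
        $dir = __DIR__ . '/pages';
        foreach ($segments as $segment) {
            $dir .= '/' . basename($segment);
            if (!is_dir($dir)) {
                header('HTTP/1.1 404 Not Found');
                exit('Not found');
            }
        }

        // No segments left: the destination folder's content.php is the page body
        include $dir . '/content.php';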

    Read the article

  • If either one of both equals

    - by user1620028
    I have start and end dates which are stored in a database in this format:

        start date = 20121004 // 4th October 2012
        end date   = 20121116 // 16th November 2012

    so I can use the date format:

        $date = date("Ymd"); // returns: 20121004

    to determine when to display and not display. To repopulate my update input boxes I use:

        $start = str_split($stdate, 4);  // START DATE: splits stored date into 2x4, i.e. 20121209 = 2012 1209
        $syr = $start[0];                // first half, i.e. 2012, which is the year
        $start2 = $start[1];             // second half, i.e. 1209
        $start3 = str_split($start2, 2); // splits second half into 2x2, i.e. 1209 = 12 09
        $smth = $start3[0];              // first half = month, i.e. 12
        $sday = $start3[1];              // second half = day, i.e. 09

        $expiry = str_split($exdate, 4); // SAME AGAIN FOR EXPIRY DATE ...
        $xyr = $expiry[0];
        $expiry2 = $expiry[1];
        $expiry3 = str_split($expiry2, 2);
        $xmth = $expiry3[0];
        $xday = $expiry3[1];

    which works fine, but I need to repopulate the input boxes for the month, showing the date in the database like this:

        <option value="01">January</option>

    using:

        if ($smth==01): $month='January'; endif;
        if ($xmth==01): $month='January'; endif; // if the start and/or expiry month number = 01, display $month as January
        if ($smth==02): $smonth='February'; endif;
        if ($xmth==02): $smonth='February'; endif;
        if ($smth==03): $month='March'; endif;

        <select name="stmonth" class="input">
            <option value="<?=$smth?>"><?=$month?></option>
            ...
        </select>

    Is there an easier way to display IF EITHER ONE EQUALS, rather than having to write the same line twice, once each for $smth AND $xmth? I.e.:

        if ($smth **and or** $xmth == 01): $month='January'; endif;
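
    The literal "either one equals" test is just an || between the two comparisons, but a lookup table removes the repeated ifs entirely; a sketch using the two-digit month strings produced by str_split above:

        <?php
        // The direct form of "if either one equals":
        if ($smth == '01' || $xmth == '01') { $month = 'January'; }

        // Or drop the ifs altogether with one table serving both variables:
        $months = array(
            '01' => 'January', '02' => 'February', '03' => 'March',
            '04' => 'April',   '05' => 'May',      '06' => 'June',
            '07' => 'July',    '08' => 'August',   '09' => 'September',
            '10' => 'October', '11' => 'November', '12' => 'December',
        );
        $smonth = $months[$smth]; // start month name, e.g. '12' -> December
        $xmonth = $months[$xmth]; // expiry month name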

    Read the article

  • C++ problem, maybe with types...

    - by Infinity
    Hi guys! I have a little problem in my code. The variables don't want to change their values. Can you say why? Here is my code:

        vector<coordinate> rocks(N);
        double angle;
        double x, y;

        // other code

        while (x > 1.0 || x < -1.0 || y > 1.0 || y < -1.0) {
            angle = rand() * 2.0 * M_PI;
            cout << angle << endl;
            cout << rocks[i - 1].x << endl;
            cout << rocks[i - 1].y << endl;
            x = rocks[i-1].x + r0 * cos(angle);
            y = rocks[i-1].y + r0 * sin(angle);
            cout << x << endl;
            cout << y << endl << endl;
        }

        // other code

    And the result on the console is:

        6.65627e+09
        0.99347
        0.984713
        1.09347
        0.984713

        1.16964e+09
        0.99347
        0.984713
        1.09347
        0.984713

    As you can see, the values of the x and y variables don't change, and this will be an infinite loop. What's the problem? What do you think?

    Read the article

  • The Sitemap Paradox

    - by Jeff Atwood
    We use a sitemap on Stack Overflow, but I have mixed feelings about it.

        Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site.

    Based on our two years' experience with sitemaps, there's something fundamentally paradoxical about the sitemap:

    Sitemaps are intended for sites that are hard to crawl properly. If Google can't successfully crawl your site to find a link, but is able to find it in the sitemap, it gives the sitemap link no weight and will not index it!

    That's the sitemap paradox -- if your site isn't being properly crawled (for whatever reason), using a sitemap will not help you!

    Google goes out of their way to make no sitemap guarantees:

        "We cannot make any predictions or guarantees about when or if your URLs will be crawled or added to our index" (citation)

        "We don't guarantee that we'll crawl or index all of your URLs. For example, we won't crawl or index image URLs contained in your Sitemap." (citation)

        "submitting a Sitemap doesn't guarantee that all pages of your site will be crawled or included in our search results" (citation)

    Given that links found in sitemaps are merely recommendations, whereas links found on your own website proper are considered canonical... it seems the only logical thing to do is avoid having a sitemap and make damn sure that Google and any other search engine can properly spider your site using the plain old standard web pages everyone else sees.

    By the time you have done that, and are getting spidered nice and thoroughly so Google can see that your own site links to these pages, and would be willing to crawl the links -- uh, why do we need a sitemap, again? The sitemap can be actively harmful, because it distracts you from ensuring that search engine spiders are able to successfully crawl your whole site. "Oh, it doesn't matter if the crawler can see it, we'll just slap those links in the sitemap!" Reality is quite the opposite in our experience.

    That seems more than a little ironic considering sitemaps were intended for sites that have a very deep collection of links or complex UI that may be hard to spider. In our experience, the sitemap does not help, because if Google can't find the link on your site proper, it won't index it from the sitemap anyway. We've seen this proven time and time again with Stack Overflow questions.

    Am I wrong? Do sitemaps make sense, and we're somehow just using them incorrectly?

    Read the article

  • Getting Internet Explorer to Open Different Sets of Tabs Based on the Day of the Week

    - by Akemi Iwaya
    If you have to use Internet Explorer for work and need to open a different set of work-specific tabs every day, is there a quick and easy way to do it instead of opening each one individually?

    Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader bobSmith1432 is looking for a quick and easy way to open different daily sets of tabs in Internet Explorer for his work:

        When I open Internet Explorer on different days of the week, I want different tabs to be opened automatically. I have to run different reports for work each day of the week and it takes a lot of time to open the 5-10 tabs I use to run the reports. It would be a lot faster if, when I open Internet Explorer, the tabs I needed would automatically load and be ready. Is there a way to open 5-10 different tabs in Internet Explorer depending on the day of the week?

        Example:
        Monday – 6 Accounting Pages
        Tuesday – 7 Billing Pages
        Wednesday – 5 HR Pages
        Thursday – 10 Schedule Pages
        Friday – 8 Work Summary/Order Pages

    Is there an easier way for Bob to get all those tabs to load and be ready to go each day instead of opening them individually every time?

    The Answer

    SuperUser contributor Julian Knight has a simple, non-script solution for us:

        Rather than trying the brute force method, how about a work around? Open up each set of tabs either in different windows, or one set at a time, and save all tabs to bookmark folders. Put the folders on the bookmark toolbar for ease of access. Each day, right-click on the appropriate folder and click on 'Open in tab group' to open all the tabs. You could put all the day folders into a top-level folder to save space if you want, but at the expense of an extra click to get to them. If you really must go further, you need to write a program or script to drive Internet Explorer. The easiest way is probably writing a PowerShell script.

    Special Note: There are various scripts shared on the discussion page as well, so the solution shown above is just one possibility out of many. If you love the idea of using scripts for a function like this, then make sure to browse on over to the discussion page to see the various ones SuperUser members have shared!

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Breadcrumbs using schema.org rich snippets

    - by Adam Jenkin
    I am having problems implementing the breadcrumb rich snippets from schema.org. When I construct my breadcrumb using the documentation and run it through the Google Rich Snippet testing tool, the breadcrumb is identified but not shown in the preview.

        <!DOCTYPE html>
        <html>
        <head>
            <title>My Test Page</title>
        </head>
        <body itemscope itemtype="http://schema.org/WebPage">
            <strong>You are here: </strong>
            <div itemprop="breadcrumb">
                <a title="Home" href="/">Home</a> >
                <a title="Test Pages" href="/Test-Pages/">Test Pages</a> >
            </div>
        </body>
        </html>

    If I change it to use the snippets from data-vocabulary.org, the rich snippets show correctly in the preview.

        <!DOCTYPE html>
        <html>
        <head>
            <title>My Test Page</title>
        </head>
        <body>
            <strong>You are here: </strong>
            <ol itemprop="breadcrumb">
                <li itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
                    <a href="/" itemprop="url">
                        <span itemprop="title">Home</span>
                    </a>
                </li>
                <li itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
                    <a href="/Test-Pages/" itemprop="url">
                        <span itemprop="title">Test Pages</span>
                    </a>
                </li>
            </ol>
        </body>
        </html>

    I want the breadcrumb to be shown in the search result rather than the URL of the page. Given that schema.org is the recommended way to be using rich snippets, I would rather use it; however, as the breadcrumb is not showing in the preview of the search result using this method, I'm not convinced it is working correctly. Am I doing something wrong in the markup for the schema.org example?
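
    For reference, schema.org later gained a dedicated BreadcrumbList type with per-crumb positions, which is what the testing tools came to expect; a sketch of that markup for the same two crumbs (whether it renders in the preview depends on the tool's support at the time):

        <ol itemscope itemtype="http://schema.org/BreadcrumbList">
            <li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">
                <a itemprop="item" href="/"><span itemprop="name">Home</span></a>
                <meta itemprop="position" content="1" />
            </li>
            <li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">
                <a itemprop="item" href="/Test-Pages/"><span itemprop="name">Test Pages</span></a>
                <meta itemprop="position" content="2" />
            </li>
        </ol>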

    Read the article

  • ecommerce item deleted by user, 301 redirect to HOME PAGE or 404 not found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using 404, but after reading this other one, where they suggest using 301 when changing site URLs (in that specific case it was due to a redesign/refactoring), I got a bit confused, and I hope someone could clarify this for the following specific example:

    1. Let's say I have an ecommerce site.
    2. Let's also say the final user inserted some interesting items into the site, and the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc.
    3. Now let's say some of these interesting items got many external links toward them from many other sites, because some people found those items very interesting and linked to them.
    4. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. no longer exist, but many pages on the web still link to them.
    5. What should the ecommerce site do now, just show a 404 page for those items? But, I'm confused: wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE?

    Thanks.

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. In order to make this question more complete, I would like to talk about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted at point 4). Maybe he changes their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items were the old items, so it obviously creates them as new items with new URLs http://...?id=100, http://...?id=101. Does it make sense at this point to 301 redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT TO UNDERSTAND): Well, according to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use 301, since it's nothing deceptive: the new page is basically a replacement for the old page in terms of content. This is done to keep the PR passed from external links and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links why not just always use 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google?! Google itself suggests 301-redirecting index.html to the document root, so if they considered 301 to be duplicated content, wouldn't that be duplicated content too?! Why do they suggest it? Let me provoke you: "why not just add a 301 to the HOME PAGE for every not-found page?"

    (*1) As a user, when I follow a broken URL from some external link to some website's page, I would stick around more on that website if I got redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website doesn't even exist anymore and maybe wouldn't even try to go to the HOME PAGE of the website.
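
    A sketch of the outcome the answers point at - 410 for items that are gone for good, 301 only when a genuine replacement URL is known. The $replacements map and the item_exists() helper are hypothetical placeholders for whatever the shop's database actually holds:

        <?php
        $id = (int) $_GET['id'];

        // Hypothetical: old item id => id of the re-created replacement item
        $replacements = array(20 => 100, 30 => 101);

        if (isset($replacements[$id])) {
            // A real successor exists: pass visitors (and link equity) to it
            header('Location: /item.php?id=' . $replacements[$id], true, 301);
            exit;
        }

        if (!item_exists($id)) { // hypothetical lookup against the items table
            // Gone for good: say so explicitly instead of bouncing to the home page
            header('HTTP/1.1 410 Gone');
            include 'gone.php';  // a friendly "this item was removed" page
            exit;
        }

        // ...otherwise render the item page as usual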

    Read the article

  • Website File and Folder Structure

    - by Drummss
    I am having a problem learning how proper website structure should be - and by that I mean how to code the pages and how the folder structure should be. Currently I navigate around my website using GET variables in PHP when you want to go to another page, so I am always loading the index.php file. I load the page I want like so:

        $page = "error";
        if (isset($_GET["page"]) AND file_exists("pages/".$_GET["page"].".php")) {
            $page = $_GET["page"];
        } elseif (!isset($_GET["page"])) {
            $page = "home";
        }

    and this:

        <div id="page">
            <?php include("pages/".$page.".php"); ?>
        </div>

    The reason I am doing this is that it makes creating new pages a lot easier, as I don't have to link style sheets and JavaScript in every file like so:

        <head>
            <title> Website Name </title>
            <link href='http://fonts.googleapis.com/css?family=Lato:300,400' rel='stylesheet' type='text/css'>
            <link rel="stylesheet" href="css/style.css" type="text/css">
            <link rel="shortcut icon" href="favicon.png"/>
            <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js" type="text/javascript"></script>
            <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.10.4/jquery-ui.min.js" type="text/javascript"></script>
        </head>

    There are a lot of problems doing it this way, such as URLs that don't look normal ("?page=apanel/app") when I access a page from inside a folder inside the pages folder to prevent clutter. Obviously this is not the only way to go about it, and I can't find the proper way to do it, because I'm sure websites don't link style sheets in every file - that would be very bad: if you changed a file name or needed to add another file, you would have to change it in every file. If anyone could tell me how it is actually done, or point me towards a tutorial, that would be great. Thanks for your time :)
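
    As an aside, include("pages/".$_GET["page"].".php") will happily follow "../" sequences, so it is worth locking the include target down; a sketch that keeps the subfolder URLs (?page=apanel/app) working while refusing anything that resolves outside pages/:

        <?php
        $page = isset($_GET['page']) ? $_GET['page'] : 'home';

        $base = realpath(__DIR__ . '/pages');
        $file = realpath(__DIR__ . '/pages/' . $page . '.php');

        // realpath() collapses any ../ tricks; the prefix check then ensures
        // the resolved file really lives under pages/
        if ($file === false || strpos($file, $base . DIRECTORY_SEPARATOR) !== 0) {
            $file = $base . '/error.php'; // assumes an error page exists there
        }
        include $file;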

    Read the article

  • Help organising controllers logically

    - by kenny99
    Hi guys, I'm working on a site which I'm developing using an MVC structure. My models will represent all of the data in the site, but I'm struggling a bit to decide on a good controller structure. The site will allow users to login/register and see personal data on a number of pages, but also still have access to public pages, e.g. FAQs, the contact page, etc. This is what I have at the moment...

    A Template Controller which handles the main template display. The basic template for the site will remain the same whether or not you are logged in.

    A main Website Controller which extends the Template Controller and handles basic authentication. If the user is logged in, a User::control_panel() method is called from the constructor and this builds the control panel which will be present throughout the authenticated session. If the user is not logged in, then a different view is loaded instead of the control panel, e.g. with a login form. All protected/public page-related controllers will extend the Website Controller.

    The user homepage has a number of widgets I want to display, which I'm doing via a Home Controller which extends the Website Controller. This controller generates these widgets via the following static calls:

        $this->template->content->featured_pet = Pet::featured();
        $this->template->content->popular_names = Pet::most_popular();
        $this->template->content->owner_map = User::generate_map();
        $this->template->content->news = News::snippet();

    I suppose the first thing I'm unsure about is whether the above static calls to controllers (e.g. Pet and User) are OK to remain static - these static methods will return views which are loaded into the main template. This is the way I've done things in the past, but I'm curious to find out if this is a sensible approach.

    Other protected pages for signed-in users will be similar to the Home Controller. Static pages will be handled by a Page Controller which will also extend the Website Controller, so that it will know whether the user control panel or the login form should be shown on the left-hand side of the template. The protected member-only pages will not be routed to the Page Controller; this controller will only handle publicly available pages.

    One problem I have at the moment is that if both public and protected pages extend the Website Controller, how do I avoid an infinite loop? For example, the idea is that the Website Controller should handle authentication and then redirect to the requested controller (URL), but this will cause an infinite redirect loop, so I need to come up with a better way of dealing with this.

    All in all, does this setup make any sense?! Grateful for any feedback.
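
    On the redirect loop: the usual way out is for the shared constructor to check a small whitelist of public routes before forcing the login redirect, so the login page itself can never re-trigger it. A framework-agnostic sketch, where the route names, current_route() and logged_in() are placeholders for whatever the framework provides:

        <?php
        // Inside the Website Controller constructor
        $public_routes = array('login', 'register', 'page'); // placeholder names
        $route = current_route();                            // hypothetical helper

        if (!logged_in() && !in_array($route, $public_routes)) {
            header('Location: /login');
            exit; // the login route is whitelisted, so no loop
        }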

    Read the article

  • Multiple views and source list in a Core Data app

    - by Ellie P.
    I'm working on my first major Cocoa app for an undergraduate research project. The application is document-based and uses Core Data.

    One of the entities is an abstract entity, Page. Page is the parent of several types of pages: i.e. PageWithHeaderAndFooter, PageWithTwoColumns, BasicPage, etc. Page has attributes, such as title and author, that all pages have in common. Each specific type of page has a certain number of layout blocks (PageWithHeaderAndFooter has three: header, footer, body. BasicPage has one: body. Etc.) Additionally, all Page subclasses define layout-specific implementations of certain methods.

    The other relevant entity is Style, which defines the visual look of a Page. (Think of Pages as HTML and Style as CSS.)

    I would like my app to have an iTunes/Mail-like source list with sections. (One section would be Pages, the other would be Styles.) I have a pretty good idea how to do the sectioned source list (this was a great help). However, after hours of headbanging and fruitless googling, here's what I can't figure out:

    Pages and Styles are listed in the source list, and when you select one of them, all of the relevant fields for that object appear at the right (mostly NSTextViews, pop up menus, etc.). I laid that out and did all of the bindings in Interface Builder. The problem is, if my source list contains different types of pages, how do I get a different view to display at the right depending on the type of page selected?

    For example, if a BasicPage is selected, I want just what you see above: the general page stuff and one NSTextView that corresponds to the one field body of BasicPage. But if a PageWithHeaderAndFooter is selected, I want to display the general page stuff plus three NSTextViews (one each for header, body, and footer). If I have a Style selected, I want to display various pop up menus, color wells, etc.

    For the pages at least, we're only talking about one or more NSTextViews, each of which corresponds to a String attribute of the respective entity. How would you do this? Thank you for your help!

    Read the article

  • How can this PHP/FQL code be modified to increase the performance and usability?

    - by Kaoukkos
    I am trying to get some insights from the pages I administer on Facebook. My code first gets the IDs of the pages I want to work with from MySQL (that part is not included below). After this, I get the page_id, name and fan_count of each of those Facebook pages and save them in $fancounts. Then, using the IDs in $pages, I get at most two messages from each page - there may be none, one, or two for now; possibly I will increase the limit later. $messages holds the messages of each page.

    I have two problems with it:

    1. It performs very slowly.
    2. I can't find a way to echo the data like this:

        ID - Name of the page - Fan Count
        Here goes the first message
        Here goes the second one
        // here is a break
        ID - Name of the page 2 - Fan Count
        Here goes the first message of page 2
        Here goes the second one of page 2

    My questions are: how can the code be modified to increase performance and show the data as above? I read about fql.multiquery - can it be used here? Please provide me with code examples. Thank you.

        $pages = array(); // I get the IDs I want to work with

        $pagesIds = implode(',', $pages);

        // $fancounts holds the page_id, name and fan_count of the IDs I work with
        $fancounts = array();
        $pagesFanCounts = $facebook->api("/fql", array(
            "q" => "SELECT page_id, name, fan_count FROM page WHERE page_id IN ({$pagesIds})"
        ));
        foreach ($pagesFanCounts['data'] as $page) {
            $fancounts[] = $page['page_id']."-".$page['name']."-".$page['fan_count'];
        }

        // $messages holds from 0 to 2 messages from each of the above pages
        $messages = array();
        foreach ($pages as $id) {
            $getMessages = $facebook->api("/fql", array(
                "q" => "SELECT message FROM stream WHERE source_id = '$id' LIMIT 2"
            ));
            $messages[] = $getMessages['data'];
        }

        // this is how I print them now, but it does not give me the best output
        // (thanks go to Mark for providing me this code)
        $count = min(count($fancounts), count($messages));
        for ($i = 0; $i < $count; ++$i) {
            echo $fancounts[$i], '<br>';
            foreach ($messages[$i] as $msg) {
                echo $msg['message'], '<br>';
            }
        }
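
    Most of the slowness comes from issuing one stream query per page. A hedged sketch of an alternative: fetch all pages' posts in a single FQL call with source_id IN (...), then keep the first two per page while printing. Note that LIMIT now applies to the whole result set, so it is raised and trimmed client-side; the number actually needed depends on how chatty the pages are. The sketch reuses $pagesIds and $pagesFanCounts from the code above:

        <?php
        // Key $fancounts by page_id so the two result sets can be matched
        // reliably instead of by position.
        $fancounts = array();
        foreach ($pagesFanCounts['data'] as $page) {
            $fancounts[$page['page_id']] = $page['page_id']."-".$page['name']."-".$page['fan_count'];
        }

        // One stream call instead of N.
        $stream = $facebook->api("/fql", array(
            "q" => "SELECT source_id, message FROM stream
                    WHERE source_id IN ({$pagesIds}) LIMIT 100"
        ));

        // Keep at most two messages per page.
        $messages = array();
        foreach ($stream['data'] as $row) {
            $id = $row['source_id'];
            if (!isset($messages[$id])) { $messages[$id] = array(); }
            if (count($messages[$id]) < 2) { $messages[$id][] = $row['message']; }
        }

        foreach ($fancounts as $id => $line) {
            echo $line, '<br>';
            if (isset($messages[$id])) {
                foreach ($messages[$id] as $message) {
                    echo $message, '<br>';
                }
            }
            echo '<br>'; // the break between pages
        }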

    Read the article

  • Upload Certificate and Key to RUEI in order to decrypt SSL traffic

    - by stefan.thieme(at)oracle.com
    So you want to monitor encrypted traffic with your RUEI collector? Actually this is an easy thing if you follow the lines below...

    I will start out with creating a pair of snakeoil (so-called self-signed) certificate and key with the make-ssl-cert tool, which comes pre-packaged with apache, only for the purpose of this example.

        $ sudo make-ssl-cert generate-default-snakeoil
        $ sudo ls -l /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key
        -rw-r--r-- 1 root root     615 2010-06-07 10:03 /etc/ssl/certs/ssl-cert-snakeoil.pem
        -rw-r----- 1 root ssl-cert 891 2010-06-07 10:03 /etc/ssl/private/ssl-cert-snakeoil.key

    RUEI Configuration of Security SSL Keys

    You will most likely get these two files from your Certificate Authority (CA), and/or your system administrators should be able to extract them from your WebServer or LoadBalancer handling SSL encryption for your infrastructure.

    Now let's look at the content of these two files. The certificate (apache assumes this is in PEM format) is called a public key, and the private key is used by the apache server to encrypt traffic for a client using the certificate to initiate the SSL connection with the server.

    In case you already know that these two match, you simply have to paste them into one text file and upload this text file to your RUEI instance.

        $ sudo cat /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key > /tmp/ruei.cert_and_key
        $ sudo cat /tmp/ruei.cert_and_key
        -----BEGIN CERTIFICATE-----
        MIIBmTCCAQICCQD7O3XXwVilWzANBgkqhkiG9w0BAQUFADARMQ8wDQYDVQQDEwZ1
        YnVudHUwHhcNMTAwNjA3MDgwMzUzWhcNMjAwNjA0MDgwMzUzWjARMQ8wDQYDVQQD
        EwZ1YnVudHUwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBALbs+JnI+p+K7Iqa
        SQZdnYBxOpdRH0/9jt1QKvmH68v81h9+f1Z2rVR7Zrd/l+ruE3H9VvuzxMlKuMH7
        qBX/gmjDZTlj9WJM+zc0tSk+e2udy9he20lGzTxv0vaykJkuKcvSWNk4WE9NuAdg
        IHZvjKgoTSVmvM1ApMCg69nyOy97AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAk2rv
        VEkxR1qPSpJiudDuGUHtWKBKWiWbmSwI3REZT+0vG+YDG5a55NdxgRk3zhQntqF7
        gNYjKxblBByBpY7W0ci00kf7kFgvXWMeU96NSQJdnid/YxzQYn0dGL2rSh1dwdPN
        NPQlNSfnEQ1yxFevR7aRdCqTbTXU3mxi8YaSscE=
        -----END CERTIFICATE-----
        -----BEGIN RSA PRIVATE KEY-----
        MIICXgIBAAKBgQC27PiZyPqfiuyKmkkGXZ2AcTqXUR9P/Y7dUCr5h+vL/NYffn9W
        dq1Ue2a3f5fq7hNx/Vb7s8TJSrjB+6gV/4Jow2U5Y/ViTPs3NLUpPntrncvYXttJ
        Rs08b9L2spCZLinL0ljZOFhPTbgHYCB2b4yoKE0lZrzNQKTAoOvZ8jsvewIDAQAB
        AoGBAJ7LCWeeUwnKNFqBYmD3RTFpmX4furnal3lBDX0945BZtJr0WZ/6N679zIYA
        aiVTdGfgjvDC9lHy3n3uctRd0Jqdh2QoSSxNBhq5elIApNIIYzu7w/XI/VhGcDlA
        b6uadURQEC2q+M8YYjw3mwR2omhCWlHIViOHe/9T8jfP/8pxAkEA7k39WRcQildH
        DFKcj7gurqlkElHysacMTFWf0ZDTEUS6bdkmNXwK6mH63BlmGLrYAP5AMgKgeDf8
        D+WRfv8YKQJBAMSCQ7UGDN3ysyfIIrdc1RBEAk4BOrKHKtD5Ux0z5lcQkaCYrK8J
        DuSldreN2yOhS99/S4CRWmGkTj04wRSnjwMCQQCaR5mW3QzTU4/m1XEQxsBKSdZE
        2hMSmsCmhuSyK13Kl0FPLr/C7qyuc4KSjksABa8kbXaoKfUz/6LLs+ePXZ2JAkAv
        +mIPk5+WnQgS4XFgdYDrzL8HTpOHPSs+BHG/goltnnT/0ebvgXWqa5+1pyPm6h29
        PrYveM2pY1Va6z1xDowDAkEAttfzAwAHz+FUhWQCmOBpvBuW/KhYWKZTMpvxFMSY
        YD5PH6NNyLfBx0J4nGPN5n/f6il0s9pzt3ko++/eUtWSnQ==
        -----END RSA PRIVATE KEY-----

    Simply click on "add new key" and browse for the cert_and_key file on your desktop, which you concatenated earlier using any text editor. You may need to add a passphrase in order to decrypt the RSA key in some cases (it should tell you BEGIN ENCRYPTED PRIVATE KEY in the header line). I will show you the success screen after uploading the certificate to RUEI.
    You may want to restart your collector once you have uploaded all the certificate/key pairs you want to use, in order to make sure they get picked up asap.

    You should be able to see the number of SSL Connections rising in the Collector statistics screen below. The figures for decrypt errors should slowly go down, and the usage figures for your encryption algorithm on the subsequent SSL Encryption screen should go up. You should be 100% sure everything works fine by now; otherwise see below to distinguish the remaining 1% from your 99% certainty.

    Verify Certificate and Key are matching

    You can compare the modulus of the private key and the public certificate; they should match in order for the key to fit the lock. You only want to make sure they both fit each other.

    We are actually interested only in the following details of the two files, which can be determined by using the -subject, -dates and -modulus command line switches instead of the complete -text output of the x509 certificate/rsa key contents.

        $ sudo openssl x509 -noout -subject -in /etc/ssl/certs/ssl-cert-snakeoil.pem
        subject= /CN=ubuntu
        $ sudo openssl x509 -noout -dates -in /etc/ssl/certs/ssl-cert-snakeoil.pem
        notBefore=Jun  7 08:03:53 2010 GMT
        notAfter=Jun  4 08:03:53 2020 GMT
        $ sudo openssl x509 -noout -modulus -in /etc/ssl/certs/ssl-cert-snakeoil.pem
        Modulus=B6ECF899C8FA9F8AEC8A9A49065D9D80713A97511F4FFD8EDD502AF987EBCBFCD61F7E7F5676AD547B66B77F97EAEE1371FD56FBB3C4C94AB8C1FBA815FF8268C3653963F5624CFB3734B5293E7B6B9DCBD85EDB4946CD3C6FD2F6B290992E29CBD258D938584F4DB8076020766F8CA8284D2566BCCD40A4C0A0EBD9F23B2F7B
        $ sudo openssl rsa -noout -modulus -in /etc/ssl/private/ssl-cert-snakeoil.key
        Modulus=B6ECF899C8FA9F8AEC8A9A49065D9D80713A97511F4FFD8EDD502AF987EBCBFCD61F7E7F5676AD547B66B77F97EAEE1371FD56FBB3C4C94AB8C1FBA815FF8268C3653963F5624CFB3734B5293E7B6B9DCBD85EDB4946CD3C6FD2F6B290992E29CBD258D938584F4DB8076020766F8CA8284D2566BCCD40A4C0A0EBD9F23B2F7B

    As you can see, the modulus matches exactly, and we have proof that the certificate has been created using the private key.
    OpenSSL Certificate and Key Details

    As I already told you, you do not need all the greedy details, but in case you want to know in depth, what is actually in those hex blocks can be made visible with the following commands, which show you the actual content in a human-readable format.

    Note: You may not want to post all the details of your private key =^) I told you I have been using a self-signed certificate only for showing you these details.

        $ sudo openssl rsa -noout -text -in /etc/ssl/private/ssl-cert-snakeoil.key
        Private-Key: (1024 bit)
        modulus:
            00:b6:ec:f8:99:c8:fa:9f:8a:ec:8a:9a:49:06:5d:
            9d:80:71:3a:97:51:1f:4f:fd:8e:dd:50:2a:f9:87:
            eb:cb:fc:d6:1f:7e:7f:56:76:ad:54:7b:66:b7:7f:
            97:ea:ee:13:71:fd:56:fb:b3:c4:c9:4a:b8:c1:fb:
            a8:15:ff:82:68:c3:65:39:63:f5:62:4c:fb:37:34:
            b5:29:3e:7b:6b:9d:cb:d8:5e:db:49:46:cd:3c:6f:
            d2:f6:b2:90:99:2e:29:cb:d2:58:d9:38:58:4f:4d:
            b8:07:60:20:76:6f:8c:a8:28:4d:25:66:bc:cd:40:
            a4:c0:a0:eb:d9:f2:3b:2f:7b
        publicExponent: 65537 (0x10001)
        privateExponent:
            00:9e:cb:09:67:9e:53:09:ca:34:5a:81:62:60:f7:
            45:31:69:99:7e:1f:ba:b9:da:97:79:41:0d:7d:3d:
            e3:90:59:b4:9a:f4:59:9f:fa:37:ae:fd:cc:86:00:
            6a:25:53:74:67:e0:8e:f0:c2:f6:51:f2:de:7d:ee:
            72:d4:5d:d0:9a:9d:87:64:28:49:2c:4d:06:1a:b9:
            7a:52:00:a4:d2:08:63:3b:bb:c3:f5:c8:fd:58:46:
            70:39:40:6f:ab:9a:75:44:50:10:2d:aa:f8:cf:18:
            62:3c:37:9b:04:76:a2:68:42:5a:51:c8:56:23:87:
            7b:ff:53:f2:37:cf:ff:ca:71
        prime1:
            00:ee:4d:fd:59:17:10:8a:57:47:0c:52:9c:8f:b8:
            2e:ae:a9:64:12:51:f2:b1:a7:0c:4c:55:9f:d1:90:
            d3:11:44:ba:6d:d9:26:35:7c:0a:ea:61:fa:dc:19:
            66:18:ba:d8:00:fe:40:32:02:a0:78:37:fc:0f:e5:
            91:7e:ff:18:29
        prime2:
            00:c4:82:43:b5:06:0c:dd:f2:b3:27:c8:22:b7:5c:
            d5:10:44:02:4e:01:3a:b2:87:2a:d0:f9:53:1d:33:
            e6:57:10:91:a0:98:ac:af:09:0e:e4:a5:76:b7:8d:
            db:23:a1:4b:df:7f:4b:80:91:5a:61:a4:4e:3d:38:
            c1:14:a7:8f:03
        exponent1:
            00:9a:47:99:96:dd:0c:d3:53:8f:e6:d5:71:10:c6:
            c0:4a:49:d6:44:da:13:12:9a:c0:a6:86:e4:b2:2b:
            5d:ca:97:41:4f:2e:bf:c2:ee:ac:ae:73:82:92:8e:
            4b:00:05:af:24:6d:76:a8:29:f5:33:ff:a2:cb:b3:
            e7:8f:5d:9d:89
        exponent2:
            2f:fa:62:0f:93:9f:96:9d:08:12:e1:71:60:75:80:
            eb:cc:bf:07:4e:93:87:3d:2b:3e:04:71:bf:82:89:
            6d:9e:74:ff:d1:e6:ef:81:75:aa:6b:9f:b5:a7:23:
            e6:ea:1d:bd:3e:b6:2f:78:cd:a9:63:55:5a:eb:3d:
            71:0e:8c:03
        coefficient:
            00:b6:d7:f3:03:00:07:cf:e1:54:85:64:02:98:e0:
            69:bc:1b:96:fc:a8:58:58:a6:53:32:9b:f1:14:c4:
            98:60:3e:4f:1f:a3:4d:c8:b7:c1:c7:42:78:9c:63:
            cd:e6:7f:df:ea:29:74:b3:da:73:b7:79:28:fb:ef:
            de:52:d5:92:9d

        $ sudo openssl x509 -noout -text -in /etc/ssl/certs/ssl-cert-snakeoil.pem
        Certificate:
            Data:
                Version: 1 (0x0)
                Serial Number:
                    fb:3b:75:d7:c1:58:a5:5b
                Signature Algorithm: sha1WithRSAEncryption
                Issuer: CN=ubuntu
                Validity
                    Not Before: Jun  7 08:03:53 2010 GMT
                    Not After : Jun  4 08:03:53 2020 GMT
                Subject: CN=ubuntu
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (1024 bit)
                        Modulus (1024 bit):
                            00:b6:ec:f8:99:c8:fa:9f:8a:ec:8a:9a:49:06:5d:
                            9d:80:71:3a:97:51:1f:4f:fd:8e:dd:50:2a:f9:87:
                            eb:cb:fc:d6:1f:7e:7f:56:76:ad:54:7b:66:b7:7f:
                            97:ea:ee:13:71:fd:56:fb:b3:c4:c9:4a:b8:c1:fb:
                            a8:15:ff:82:68:c3:65:39:63:f5:62:4c:fb:37:34:
                            b5:29:3e:7b:6b:9d:cb:d8:5e:db:49:46:cd:3c:6f:
                            d2:f6:b2:90:99:2e:29:cb:d2:58:d9:38:58:4f:4d:
                            b8:07:60:20:76:6f:8c:a8:28:4d:25:66:bc:cd:40:
                            a4:c0:a0:eb:d9:f2:3b:2f:7b
                        Exponent: 65537 (0x10001)
            Signature Algorithm: sha1WithRSAEncryption
                93:6a:ef:54:49:31:47:5a:8f:4a:92:62:b9:d0:ee:19:41:ed:
                58:a0:4a:5a:25:9b:99:2c:08:dd:11:19:4f:ed:2f:1b:e6:03:
                1b:96:b9:e4:d7:71:81:19:37:ce:14:27:b6:a1:7b:80:d6:23:
                2b:16:e5:04:1c:81:a5:8e:d6:d1:c8:b4:d2:47:fb:90:58:2f:
                5d:63:1e:53:de:8d:49:02:5d:9e:27:7f:63:1c:d0:62:7d:1d:
                18:bd:ab:4a:1d:5d:c1:d3:cd:34:f4:25:35:27:e7:11:0d:72:
                c4:57:af:47:b6:91:74:2a:93:6d:35:d4:de:6c:62:f1:86:92:
                b1:c1

    The above output can also be seen if you direct your browser client to your website and check the certificate sent by the server to your browser. You will be able to look up all the details, including the validity dates, subject common name and the public key modulus.

    Capture an SSL connection using Wireshark

    And as you would have expected, looking at the low-level tcp data that has been exchanged between the client and server with a tcp diagnostics tool (i.e. wireshark/tcpdump), you can also see the modulus in there.

    These were the settings I used to capture all traffic on the local loopback interface, matching the filter expression: tcp and ip and host 127.0.0.1 and port 443. This tells Wireshark to leave out any other information I may not have been interested in showing you.

    Read the article

  • WatchGuard 'Internal Policy' intermittently blocking outbound web traffic

    - by vfilby
    I have a lot of legitimate outbound traffic intermittently being denied by WatchGuard's "Internal Policy." Today I tried to go to Splunk's homepage and my traffic was denied by my WatchGuard XTM 22 with Pro upgrade. What is the "Internal Policy" and what can I do to control it?

    Example of traffic being blocked:

        Type     Date                 Action  Source IP  Port   Interface      Destination IP  Port  Policy
        Traffic  2011-09-21T18:24:43  Deny    10.0.0.90  49627  3-Primary LAN  64.127.105.40   80    Firebox Internal Policy  http/tcp

    Top three firewall policies:

    Read the article

  • SSL connection errors from Apache

    - by Yang
    I'm running a (self-signed) SSL cert site on Apache/2.2.14 on Ubuntu 10.04, but various browsers are giving errors on half the connection attempts. Just now I saw this transient error from Chrome: "Error 126 (net::ERR_SSL_BAD_RECORD_MAC_ALERT): Unknown error." Hit refresh and the problem goes away for a while. wget too:

        $ wget --no-check-certificate https://dev.foo.com/deps/
        --2010-09-08 19:30:26--  https://dev.foo.com/deps/
        Resolving dev.foo.com... 184.72.53.220
        Connecting to dev.foo.com|184.72.53.220|:443... connected.
        OpenSSL: error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01
        OpenSSL: error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed
        OpenSSL: error:1408D07B:SSL routines:SSL3_GET_KEY_EXCHANGE:bad signature
        Unable to establish SSL connection.

    Run it right away again and it works:

        $ wget --no-check-certificate https://dev.foo.com/deps/
        --2010-09-08 19:30:29--  https://dev.foo.com/deps/
        Resolving dev.foo.com... 184.72.53.220
        Connecting to dev.foo.com|184.72.53.220|:443... connected.
        WARNING: cannot verify dev.foo.com's certificate, issued by `/CN=dev.foo.com':
          Self-signed certificate encountered.
        HTTP request sent, awaiting response... 200 OK
        Length: 3157 (3.1K) [text/html]
        Saving to: `index.html'

        100%[======================================>] 3,157       --.-K/s   in 0s

        2010-09-08 19:30:29 (48.6 MB/s) - `index.html' saved [3157/3157]

    In my sites-enabled/default-ssl:

        SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

    The cert:

        -----BEGIN CERTIFICATE-----
        MIIBszCCARwCCQCa0TzNwqLgsTANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNk
        ZXYucGFydHlvbmRhdGEuY29tMB4XDTEwMDgyNzA2MzA1N1oXDTIwMDgyNDA2MzA1
        N1owHjEcMBoGA1UEAxMTZGV2LnBhcnR5b25kYXRhLmNvbTCBnzANBgkqhkiG9w0B
        AQEFAAOBjQAwgYkCgYEAzXDEULpCUqIc9hV/ESFapkckR2uoYINA81DvG2aQZ9Ot
        Q30OwX2ae2CC4bSzJEIVlahU8vjVrWpmpa28NEhQbqh4ywwbl1XDrEVYI6Gkfimf
        snJhOKyaVrEhlwutYtBjmsz3ZIqwymMPm/6smVcSS5dJIynlSmtltxX6ivPcO8UC
        AwEAATANBgkqhkiG9w0BAQUFAAOBgQBGxHVkpSSOnZjzuySRepjhAlV/yhe9Fx23
        fh12WrjQMEi98B7JEuNSLXDWckUN7O6XRc3RzKmazcGHJqzhn0Ov6gAmAE2XjZ/x
        VW21xmaLwk+KgYKFJbJJaP3jMSpU7I3aa11wqAkR2Zd4Nkm9N0YXYIzcBdfztTVI
        Et8mEHBFdg==
        -----END CERTIFICATE-----

    The cert is in turn generated via:

        $ make-ssl-cert generate-default-snakeoil --force-overwrite

    Apache version:

        $ apache2 -V
        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Apr 13 2010 20:22:19
        Server's Module Magic Number: 20051115:23
        Server loaded:  APR 1.3.8, APR-Util 1.3.9
        Compiled using: APR 1.3.8, APR-Util 1.3.9
        Architecture:   64-bit
        Server MPM:     Worker
          threaded:     yes (fixed thread count)
          forked:       yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/worker"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT=""
         -D SUEXEC_BIN="/usr/lib/apache2/suexec"
         -D DEFAULT_PIDLOG="/var/run/apache2.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
         -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"

    I don't administer the network, hardware, etc. - this is all running on Amazon EC2. I'm not running a load-balancer or anything else in front of the server. I'm making direct TCP connections to that host (AFAIK). Any ideas? Thanks in advance for any help.

    Read the article

  • Is there any fundamental difference between piping in mac and linux?

    - by Mohammad Moghimi
        ps -e | grep bash

    Sample output from a Linux machine:

         1128 pts/14   00:00:00 bash
         7491 pts/7    00:00:00 bash
        12651 pts/14   00:00:00 bash
        16145 pts/2    00:00:00 bash

    Sample output from a Mac machine:

        58352 ttys000    0:00.09 login -pfl username /bin/bash -c exec -la bash /bin/bash
        58353 ttys000    0:00.02 -bash
        58390 ttys000    0:00.00 grep bash
        20372 ttys005    0:00.06 login -pfl username /bin/bash -c exec -la bash /bin/bash
        20373 ttys005    0:00.18 -bash

    My question is: why do we see "grep bash" in the second case but not in the first?

    Read the article

  • How to install ceph on EC2 Amazon Linux AMI

    - by takaomag
    I want to test Ceph (a distributed network storage and file system) on some EC2 hosts which are derived from the Amazon Linux AMI (amzn-ami-2011.09.2.x86_64-ebs). The kernel version is 3.2 and btrfs is enabled, but the kernel config options related to Ceph (CONFIG_CEPH_FS and CONFIG_BLK_DEV_RBD) seem to be disabled. Do I have to build a new kernel and register it with Amazon? Or does someone know an easier way?

    Read the article

  • SSL connection errors from Apache

    - by Yang
    I'm running a (self-signed) SSL cert site on Apache/2.2.14 on Ubuntu 10.04, but various browsers are giving errors on half the connection attempts. Just now I saw this transient error from Chrome: "Error 126 (net::ERR_SSL_BAD_RECORD_MAC_ALERT): Unknown error." Hit refresh and the problem goes away for a while. wget too:

        $ wget --no-check-certificate https://dev.partyondata.com/deps/
        --2010-09-08 19:30:26--  https://dev.partyondata.com/deps/
        Resolving dev.partyondata.com... 184.72.53.220
        Connecting to dev.partyondata.com|184.72.53.220|:443... connected.
        OpenSSL: error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01
        OpenSSL: error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed
        OpenSSL: error:1408D07B:SSL routines:SSL3_GET_KEY_EXCHANGE:bad signature
        Unable to establish SSL connection.

    Run it right away again and it works:

        $ wget --no-check-certificate https://dev.partyondata.com/deps/
        --2010-09-08 19:30:29--  https://dev.partyondata.com/deps/
        Resolving dev.partyondata.com... 184.72.53.220
        Connecting to dev.partyondata.com|184.72.53.220|:443... connected.
        WARNING: cannot verify dev.partyondata.com's certificate, issued by `/CN=dev.partyondata.com':
          Self-signed certificate encountered.
        HTTP request sent, awaiting response... 200 OK
        Length: 3157 (3.1K) [text/html]
        Saving to: `index.html'

        100%[======================================>] 3,157       --.-K/s   in 0s

        2010-09-08 19:30:29 (48.6 MB/s) - `index.html' saved [3157/3157]

    In my sites-enabled/default-ssl:

        SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

    The cert:

        -----BEGIN CERTIFICATE-----
        MIIBszCCARwCCQCa0TzNwqLgsTANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNk
        ZXYucGFydHlvbmRhdGEuY29tMB4XDTEwMDgyNzA2MzA1N1oXDTIwMDgyNDA2MzA1
        N1owHjEcMBoGA1UEAxMTZGV2LnBhcnR5b25kYXRhLmNvbTCBnzANBgkqhkiG9w0B
        AQEFAAOBjQAwgYkCgYEAzXDEULpCUqIc9hV/ESFapkckR2uoYINA81DvG2aQZ9Ot
        Q30OwX2ae2CC4bSzJEIVlahU8vjVrWpmpa28NEhQbqh4ywwbl1XDrEVYI6Gkfimf
        snJhOKyaVrEhlwutYtBjmsz3ZIqwymMPm/6smVcSS5dJIynlSmtltxX6ivPcO8UC
        AwEAATANBgkqhkiG9w0BAQUFAAOBgQBGxHVkpSSOnZjzuySRepjhAlV/yhe9Fx23
        fh12WrjQMEi98B7JEuNSLXDWckUN7O6XRc3RzKmazcGHJqzhn0Ov6gAmAE2XjZ/x
        VW21xmaLwk+KgYKFJbJJaP3jMSpU7I3aa11wqAkR2Zd4Nkm9N0YXYIzcBdfztTVI
        Et8mEHBFdg==
        -----END CERTIFICATE-----

    The cert is in turn generated via:

        $ make-ssl-cert generate-default-snakeoil --force-overwrite

    Apache version:

        $ apache2 -V
        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Apr 13 2010 20:22:19
        Server's Module Magic Number: 20051115:23
        Server loaded:  APR 1.3.8, APR-Util 1.3.9
        Compiled using: APR 1.3.8, APR-Util 1.3.9
        Architecture:   64-bit
        Server MPM:     Worker
          threaded:     yes (fixed thread count)
          forked:       yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/worker"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT=""
         -D SUEXEC_BIN="/usr/lib/apache2/suexec"
         -D DEFAULT_PIDLOG="/var/run/apache2.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
         -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"

    Any ideas? Thanks in advance for any help.

    Read the article

  • Error connecting to Sonicwall L2TP VPN from iPad/iPhone

    - by db2
    A client has a Sonicwall Pro 2040 running SonicOS 3.0, and they'd like to be able to use the L2TP VPN client from their iPads to connect to internal services (Citrix, etc). I've enabled the L2TP VPN server on the Sonicwall, made sure to set AES-128 for phase 2, and set up the configuration on a test iPad with the appropriate username, password, and pre-shared key. When I attempt to connect, I get some rather cryptic error messages in the log on the Sonicwall:

        2  03/29/2011 12:25:09.096  IKE Responder: IPSec proposal does not match (Phase 2)  [My outbound IP address redacted] (admin)  [WAN IP address redacted]  10.10.130.7/32 -> [WAN IP address redacted]/32
        3  03/29/2011 12:25:09.096  IKE Responder: Received Quick Mode Request (Phase 2)    [My outbound IP address redacted], 61364 (admin)  [WAN IP address redacted], 500
        4  03/29/2011 12:25:07.048  IKE Responder: IPSec proposal does not match (Phase 2)  [My outbound IP address redacted] (admin)  [WAN IP address redacted]  10.10.130.7/32 -> [WAN IP address redacted]/32
        5  03/29/2011 12:25:07.048  IKE Responder: Received Quick Mode Request (Phase 2)    [My outbound IP address redacted], 61364 (admin)  [WAN IP address redacted], 500

    The console log on the iPad looks like this:

        Mar 29 13:31:24 Daves-iPad racoon[519] <Info>: [519] INFO: ISAKMP-SA established 10.10.130.7[500]-[WAN IP address redacted][500] spi:5d705eb6c760d709:458fcdf80ee8acde
        Mar 29 13:31:24 Daves-iPad racoon[519] <Notice>: IPSec Phase1 established (Initiated by me).
        Mar 29 13:31:24 Daves-iPad kernel[0] <Debug>: launchd[519] Builtin profile: racoon (sandbox)
        Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] INFO: initiate new phase 2 negotiation: 10.10.130.7[500]<=>[WAN IP address redacted][500]
        Mar 29 13:31:25 Daves-iPad racoon[519] <Notice>: IPSec Phase2 started (Initiated by me).
        Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] ERROR: fatal NO-PROPOSAL-CHOSEN notify messsage, phase1 should be deleted.
        Mar 29 13:31:25 Daves-iPad racoon[519] <Info>: [519] ERROR: Message: '@ No proposal is chosen'.
        Mar 29 13:31:46 Daves-iPad racoon[519] <Info>: [519] ERROR: fatal NO-PROPOSAL-CHOSEN notify messsage, phase1 should be deleted.
        Mar 29 13:31:46 Daves-iPad racoon[519] <Info>: [519] ERROR: Message: '@ No proposal is chosen'.
        Mar 29 13:31:55 Daves-iPad pppd[518] <Notice>: IPSec connection failed

    Does this offer any clues as to what's going wrong?

    Read the article

  • Mystery error running .NET 3.5 program on Windows Server 2008 R2

    - by MAW74656
    Why can't I run this program? I get an error like:

        Description:
          Stopped working

        Problem signature:
          Problem Event Name:    CLR20r3
          Problem Signature 01:  generatecodes.exe
          Problem Signature 02:  1.0.0.0
          Problem Signature 03:  4f0b0ab4
          Problem Signature 04:  GenerateCodes
          Problem Signature 05:  1.0.0.0
          Problem Signature 06:  4f0b0ab4
          Problem Signature 07:  4
          Problem Signature 08:  10
          Problem Signature 09:  System.IO.FileNotFoundException
          OS Version:            6.1.7600.2.0.0.274.10
          Locale ID:             1033

    Read the article

  • What is the Apple Mikey HID Driver for?

    - by Ivan Vucica
    Cheers! Does anyone know what component in a MacBook identifies itself as the "Apple Mikey HID Driver"? Joystick and Gamepad Tester detected my gamepad, the keyboard (with each key as a separate axis/button/whatever) and this mysterious device (with a single axis/button identified as 'Page: 0x6, Usage: 0x22' which doesn't update). This is on a white unibody MacBook ('09). Remark: While googling for the component, I stumbled upon this mailing list post mentioning Apple IR?

    Read the article
