Search Results

Search found 21908 results on 877 pages for 'content catalog'.


  • HTTP crawler in Erlang

    - by ctp
    I'm working on a simple HTTP crawler, but I have an issue running the code below. I'm requesting 50 URLs and only get the content of 20+ of them back. I've generated a few files of 150 kB each to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file has been fetched? The test data server is online, so please try the code out; any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
          ibrowse:start(),
          proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
          ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
          lists:seq(1, 50).

        send_reqs() ->
          spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
          lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
          proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
          io:format("Requesting ID ~p ... ~n", [Id]),
          Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
          case Result of
            {ok, Status, _H, B} ->
              io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
            Err ->
              io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
          end.

    That's the output: Requesting ID 1 ... Requesting ID 2 ... Requesting ID 3 ... Requesting ID 4 ... Requesting ID 5 ... Requesting ID 6 ... Requesting ID 7 ... Requesting ID 8 ... Requesting ID 9 ... Requesting ID 10 ... Requesting ID 11 ... Requesting ID 12 ... Requesting ID 13 ... Requesting ID 14 ... Requesting ID 15 ... Requesting ID 16 ... Requesting ID 17 ... Requesting ID 18 ... Requesting ID 19 ... Requesting ID 20 ... Requesting ID 21 ... Requesting ID 22 ... Requesting ID 23 ... Requesting ID 24 ... Requesting ID 25 ... Requesting ID 26 ... Requesting ID 27 ... Requesting ID 28 ... Requesting ID 29 ... Requesting ID 30 ... Requesting ID 31 ... Requesting ID 32 ... Requesting ID 33 ... Requesting ID 34 ... Requesting ID 35 ... Requesting ID 36 ... Requesting ID 37 ... Requesting ID 38 ... Requesting ID 39 ... Requesting ID 40 ... Requesting ID 41 ... Requesting ID 42 ... Requesting ID 43 ... Requesting ID 44 ... Requesting ID 45 ... Requesting ID 46 ... Requesting ID 47 ... Requesting ID 48 ... Requesting ID 49 ... Requesting ID 50 ...
    OK -- ID: 49 -- Status: "200" -- Content length: 150000 OK -- ID: 47 -- Status: "200" -- Content length: 150000 OK -- ID: 50 -- Status: "200" -- Content length: 150000 OK -- ID: 17 -- Status: "200" -- Content length: 150000 OK -- ID: 48 -- Status: "200" -- Content length: 150000 OK -- ID: 45 -- Status: "200" -- Content length: 150000 OK -- ID: 46 -- Status: "200" -- Content length: 150000 OK -- ID: 10 -- Status: "200" -- Content length: 150000 OK -- ID: 09 -- Status: "200" -- Content length: 150000 OK -- ID: 19 -- Status: "200" -- Content length: 150000 OK -- ID: 13 -- Status: "200" -- Content length: 150000 OK -- ID: 21 -- Status: "200" -- Content length: 150000 OK -- ID: 16 -- Status: "200" -- Content length: 150000 OK -- ID: 27 -- Status: "200" -- Content length: 150000 OK -- ID: 03 -- Status: "200" -- Content length: 150000 OK -- ID: 23 -- Status: "200" -- Content length: 150000 OK -- ID: 29 -- Status: "200" -- Content length: 150000 OK -- ID: 14 -- Status: "200" -- Content length: 150000 OK -- ID: 18 -- Status: "200" -- Content length: 150000 OK -- ID: 01 -- Status: "200" -- Content length: 150000 OK -- ID: 30 -- Status: "200" -- Content length: 150000 OK -- ID: 40 -- Status: "200" -- Content length: 150000 OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with the wait_workers. I've combined your code and mine, but I get the same behaviour :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
          ibrowse:start(),
          proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
          ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
          lists:seq(1, 50).

        send_reqs() ->
          spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
          %% collect reference to each worker
          Refs = [ do_spawn(Id) || Id <- Ids ],
          %% wait for response from each worker
          wait_workers(Refs).

        wait_workers(Refs) ->
          lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
          %% receive message only from worker with specific reference
          receive
            {Ref, done} ->
              done
          end.

        do_spawn(Id) ->
          Ref = make_ref(),
          proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
          Ref.

        do_send_req(Id, {Pid, Ref}) ->
          io:format("Requesting ID ~p ... ~n", [Id]),
          Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
          case Result of
            {ok, Status, _H, B} ->
              io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
              %% send message that work is done
              Pid ! {Ref, done};
            Err ->
              io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
              %% repeat request if there was an error while fetching a page,
              do_send_req(Id, {Pid, Ref})
              %% or - if you don't want to repeat the request, put there:
              %% Pid ! {Ref, done}
          end.
    Running the crawler works fine for a handful of files, but then the code doesn't even fetch the files completely (each file is 150000 bytes) - the crawler fetches some files only partially, see the following web server log :( 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-" 82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-" Any hints are welcome. I have no clue what's going wrong there :(

    Read the article

  • How can I avoid floating the same content twice?

    - by Randall Bohn
    My Kynetx app uses float_html() to put up a box full of content.

        rule float_box {
          select when pageview ".*"
          pre {
            content = <<
              <div id='messagebox'>
                <h3>Floating Message Box</h3>
                <ul id='my_list'></ul>
              </div>
            >>;
          }
          float_html("absolute","top:25px","right:20px",content);
        }

        rule fill_box {
          select when pageview ".*"
          foreach ["alpha","bravo","charlie"] setting (list_item)
            append("#my_list", "<li>#{list_item}</li>");
        }

    The app (a421x27) is used from a bookmarklet. If you click the bookmarklet twice on the same page, you get double content. Is there any way to detect that the box is already on the screen and reuse it?
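    One general approach is to check whether the box already exists before injecting it again. The sketch below is plain browser-side JavaScript/jQuery rather than KRL, so treat it as an illustration of the guard pattern, not the Kynetx-native answer; it only reuses the selector names from the rules above.

        // Hedged sketch: guard against injecting the floating box twice.
        // Assumes jQuery is available on the host page; the selectors match the rules above.
        function ensureMessageBox() {
          var $box = $("#messagebox");
          if ($box.length === 0) {
            // Box is not on the page yet -- create it once.
            $("body").append(
              "<div id='messagebox'><h3>Floating Message Box</h3><ul id='my_list'></ul></div>"
            );
          } else {
            // Box already exists -- reuse it, e.g. clear the list before refilling it.
            $box.find("#my_list").empty();
          }
        }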

    Read the article

  • Good object/DB set-up for CMS-esque app for managing content and user permissions?

    - by sah302
    Hi all, I am writing a big CMS-esque app to allow users to manage web content through web applications. I've got a pretty good DB-driven user permission system going, but am having trouble coming up with a good way to handle content groups and pages; I've got a couple of options and am not sure which one to take. Furthermore, I am not sure how to handle static page updates that have no 'widgets' in them.

    My current set-up for permissions is this: the objects are User, UserGroup, UserUserGroup, and UserGroupType, with a standard many-to-many relationship User -> UserUserGroup <- UserGroup. Each UserGroup has a UserGroupType, which could be anything from Title and Department to PermissionGroup. PermissionGroup manages the permissions. Right now, on a per-page basis, I check permissions based on their PermissionGroups. So for a page which has CMS features for a news widget, I check for permission groups of "Site Admin" and "News Admin".

    Now the issue I am coming to is that the site has many different departments involved. No problem, I think: I can just have an EntityContentGroup so any widget app can be used for any department. So for my HR department, each of their news items would be in the EntityContentGroup with the news item ID and a content group of "HR" or "HR News". But maybe this isn't the most efficient way to go about it? I don't want to make the content group simply a NewsItemType, because some news items could apply to multiple areas, so I want to be able to assign them to as many areas as I want. Likewise, all of my widget apps have this, which is why I chose EntityContentGroup and not just NewsItemContentGroup.

    I was also thinking that instead of a ContentGroup I could use a Page object that says which page some entity should be on. It seems almost like the same thing, but would I want to use Page for something else? I was thinking Page would be used for static pages with no widgets: a simple rich text editor can edit the content of that page and I save that item to a page? And then, instead of doing a page-level check for UserGroup permissions, would it be better to associate a UserGroup with a ContentGroup, and then determine the permissions through that relationship, depending on which ContentGroup's content is displayed on the page? Is that better? I am not sure at this point. I guess I am just getting a tad overwhelmed, as this is the largest app in scope and size that I have ever written. What is the best approach for this based on my current user permission set-up?
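    A minimal sketch of that last idea (permissions resolved through a ContentGroup-to-UserGroup association), written in plain JavaScript purely to illustrate the lookup; all names and structures here are hypothetical and the real implementation would live in the app's data layer.

        // Hedged sketch: permission check driven by ContentGroup <-> UserGroup links.
        var contentGroupPermissions = {
          // content group -> user groups allowed to manage content in it
          "HR News": ["Site Admin", "News Admin", "HR Admin"],
          "IT News": ["Site Admin", "News Admin", "IT Admin"]
        };

        function canManage(userGroups, contentGroup) {
          var allowed = contentGroupPermissions[contentGroup] || [];
          return userGroups.some(function (g) {
            return allowed.indexOf(g) !== -1;
          });
        }

        // A page then checks each widget's content group instead of hard-coding group names:
        console.log(canManage(["HR Admin"], "HR News")); // true
        console.log(canManage(["IT Admin"], "HR News")); // false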

    Read the article

  • Display image background fully for last repeat in a div

    - by Stiggler
    I have a 700x300 background repeating seamlessly under the main content div. Now I'd like to attach a div at the bottom of the content div, containing a continuation-to-end of the background image, connecting seamlessly with the background above it. Due to the nature of the pattern, unless the full 300px height of the background image is visible in the last repeat of the content div's background, the background in the div below won't connect seamlessly. Basically, I need the content div's height to be a multiple of 300px under all circumstances. What's a good approach to this sort of problem? I've tried resizing the content div on loading the page, but this only works as long as the content div doesn't contain any resizing, dynamic content, which is not my case:

        function adjustContentHeight() {
          // Setting content div's height to nearest upper multiple of column background's height,
          // forcing it not to be cut off when repeated.
          var contentBgHeight = 300;
          var contentHeight = $("#content").height();
          var adjustedHeight = Math.ceil(contentHeight / contentBgHeight);
          $("#content").height(adjustedHeight * contentBgHeight);
        }
        $(document).ready(adjustContentHeight);

    What I'm looking for here is a way to respond to a div resize event, but there doesn't seem to be such a thing. Also, please assume I have no access to the JS controlling the resizing of content in the content div, though this is potentially a way of solving the problem. Another potential solution I was thinking of was to offset the background image in the bottom div by a certain amount depending on the height of the content div. Again, the missing piece seems to be the ability to respond to a resize event.
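    Browsers have since gained a native way to watch an element for size changes. A minimal sketch, assuming a reasonably modern browser with ResizeObserver support; it watches an inner wrapper that holds the dynamic content (the wrapper id is my own invention) so the observer doesn't react to its own adjustment, and re-applies the same 300px rounding.

        // Hedged sketch: re-apply the 300px rounding whenever the content changes size.
        // Requires a browser that implements ResizeObserver.
        // "#content-inner" is a hypothetical wrapper around the dynamic content inside #content.
        var contentBgHeight = 300;
        var outer = document.getElementById("content");
        var inner = document.getElementById("content-inner");

        var observer = new ResizeObserver(function (entries) {
          var naturalHeight = entries[0].contentRect.height;
          var rounded = Math.ceil(naturalHeight / contentBgHeight) * contentBgHeight;
          outer.style.minHeight = rounded + "px"; // keep the repeat count a whole number
        });

        observer.observe(inner);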

    Read the article

  • Run a script inside a content page. ASP.NET

    - by Roger Filipe
    Hello, I have a master page and a content page, and I'm trying to run a script that needs to be executed when the page loads. As I am using a master page, I do not have access to the <head> element. My question is: how do I run the script within the content page? And where does the script have to be - in the head of the master page, or inside the content page?
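    One common pattern, sketched below, is to put the script block in the content page itself and defer the work until the document is ready, so it doesn't matter that the <head> lives in the master page. This assumes jQuery is already referenced by the master page, which the original post doesn't state.

        // Hedged sketch: a <script> block placed at the end of the content page's
        // markup (inside its asp:Content region). jQuery's ready handler waits
        // until the whole page, master page markup included, has loaded.
        $(document).ready(function () {
            initMyContentPage();   // hypothetical function, shown only as a placeholder
        });

        function initMyContentPage() {
            // page-load logic for this content page goes here
            console.log("content page loaded");
        }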

    Read the article

  • How to force a DIV block to extend to the bottom of a page, even if it has no content?

    - by Sir Psycho
    Hi, I'm trying to get the content div to stretch all the way to the bottom of the page, but so far it's only stretching if there's actual content to display. The reason I want to do this is so that if there isn't much content to display, the vertical border still goes all the way down. Here is my code

        <body>
          <form id="form1">
            <div id="header">
              <a title="Home" href="index.html" />
            </div>
            <div id="menuwrapper">
              <div id="menu">
              </div>
            </div>
            <div id="content">
            </div>

    and my CSS

        body {
          font-family: Trebuchet MS, Verdana, MS Sans Serif;
          font-size: 0.9em;
          margin: 0;
          padding: 0;
        }
        div#header {
          width: 100%;
          height: 100px;
        }
        #header a {
          background-position: 100px 30px;
          background: transparent url(site-style-images/sitelogo.jpg) no-repeat fixed 100px 30px;
          height: 80px;
          display: block;
        }
        #header, #menuwrapper {
          background-repeat: repeat;
          background-image: url(site-style-images/darkblue_background_color.jpg);
        }
        #menu #menuwrapper {
          height: 25px;
        }
        div#menuwrapper {
          width: 100%
        }
        #menu, #content {
          width: 1024px;
          margin: 0 auto;
        }
        div#menu {
          height: 25px;
          background-color: #50657a;
        }

    Thanks for taking a look.
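    The usual fix is pure CSS (give html and body a height of 100% and put a min-height on the container), but as a quick illustration here is a small script-based sketch that pins #content's minimum height to the space left under the header and menu. The 125px offset is simply the 100px header plus the 25px menu from the CSS above, and is an assumption about the final layout.

        // Hedged sketch: keep #content at least as tall as the viewport area below
        // the header and menu, so its border reaches the bottom even with little content.
        function stretchContent() {
          var headerAndMenu = 125; // 100px header + 25px menu (from the CSS above)
          var minHeight = window.innerHeight - headerAndMenu;
          if (minHeight > 0) {
            document.getElementById("content").style.minHeight = minHeight + "px";
          }
        }
        window.addEventListener("load", stretchContent);
        window.addEventListener("resize", stretchContent);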

    Read the article

  • Can you use data binding with the Content property of a WPF Frame?

    - by dthrasher
    I can use data binding to set the initial Content of a WPF Frame, but subsequent changes to the bound property (implemented using INotifyPropertyChanged) do not seem to change the content. Also, does anyone know whether binding directly to the Content property in this way will cause the bound item to appear in the Frame's or NavigationWindow's journal? Some context: I realize that I should probably be using the NavigationService to interact with the Frame, but I'm attempting to follow the MVVM pattern. It seems like it would be much simpler to data-bind to the Content property...

    Read the article

  • How do you post content to a specific template position?

    - by ?????
    I can't figure this out. I purchased a template / theme from RocketTheme, but I can't figure out how to add content at a specific position. The templates have "module positions" that collapse. I'd like to add some content at one of the module positions. If I add articles, they seem to go into "mainbody", but I'd like to have content in other areas of the template. How do I take some text, images, or other content and get them to display in these other positions (e.g., TOP-A or FEATURE-A)?

    Read the article

  • Is GAE Really GZipping My Content? Slow Response Times with GAE as CDN

    - by viatropos
    I am testing out Google App Engine as a free content delivery network, and it feels like it's taking a long time to serve up my content. Why does this GAE page take, say, half a second to download, while your typical Stack Overflow page downloads much faster even with a ton more content? What am I missing here? All I have done is create an app and upload an image according to that tutorial, but content is being served very slowly, it seems. Any suggestions? (Not considering Amazon or other CDNs right now, just looking for help with GAE.) Note: I am using Safari when I visit those links; maybe Safari is causing problems?

    Read the article

  • Does Google punish content duplication across multiple country domains?

    - by Logan Koester
    I like the way Google handles internationalization, with domains such as google.co.uk, google.nl, google.de etc. I'd like to do this for my own site, but I'm concerned that Google will interpret this as content duplication, particularly across countries that speak the same human language, as there won't be any translation to hint that the content is different. My site is a web application, not a content farm, so is this a legitimate concern? Would I be better off with subdomains of my .com? Directories?

    Read the article

  • ASP.Net: User control with content area, it's clearly possible but I need some details.

    - by bert
    I have seen two suggestions for my original question about whether it is possible to define a content area inside a user control and there are some helpful suggestions i.e. http://stackoverflow.com/questions/1971498/passing-in-content-to-asp-net-user-control and http://stackoverflow.com/questions/1912283/asp-net-user-control-inner-content Now, I like the theory of the latter better than the former just for aesthetic reasons. It seems to make more sense to me but the example given uses two variables content and templateContent that the answerer has not defined in their example code. Without these details I have found that the example does not work. I guess they are properties of the control? Or some such? The former example seems workable but I'd prefer to go with the latter if someone could fill in the blanks for me. Thanks.

    Read the article

  • Given this demo, how do I make the HTML content area fit the viewport height?

    - by viatropos
    I just made this demo extracting out what I'm trying to accomplish: Autosize Main Content Area. I want the pink/yellow area to act according to these rules: the minimum height is the size of its content (which is variable) if the content size is smaller than the viewport size; otherwise the minimum height is such that it adjusts to fill the window. Checking out the source of that demo, what am I missing? I feel like this is a pretty easy case that shouldn't require JavaScript. Any ideas?

    Read the article

  • ICommand - CanExecute cannot disable Button with Image content.

    - by Anish
    Hi, I have a button control in my WPF MVVM application. I use an ICommand property (defined in the viewmodel) to bind the button click event to the viewmodel. I have Execute and CanExecute parameters for my ICommand implementation (RelayCommand). Even if CanExecute is false, the button is not disabled WHEN the button CONTENT is an IMAGE. But when the button content is text, enable/disable works fine.

        <Button DockPanel.Dock="Top" Command="{Binding Path=MoveUpCommand}">
            <Button.Content>
                <Image Source="/Resources/MoveUpArrow.png"></Image>
            </Button.Content>
            <Style>
                <Style.Triggers>
                    <Trigger Property="IsEnabled" Value="False">
                        <Setter Property="Opacity" Value=".5" />
                    </Trigger>
                </Style.Triggers>
            </Style>
        </Button>

    Read the article

  • How do copyright permission systems for content hosting sites work?

    - by zebraman
    I am wondering about subscription sites that host content, like recorded performances from concerts. I'm sure there is a tangle of copyright permissions that must be granted for these video/audio files to be hosted. For example, if a band plays a cover of another band's song, permission must be obtained not only from the band that performed, but also from the band that owns the song - perhaps even from the venue that hosted the performance, to record the video and post the content. I am curious how websites that host content like this work. How might an automated copyright system work to keep track of who has ownership of certain performances and obtain permission from said owners to record and post their content?

    Read the article

  • How to display binary content of an image/PDF in JavaScript?

    - by Ka-rocks
    I have the binary content of an image/PDF in a JavaScript variable, downloaded from the server. There will be an indication from the server about the type of the file. I have to display the content in the respective file format: if it is an image, I have to display the image; if it is a PDF, I have to open the content in PDF format; and so on. How do I parse the binary content and display it? I have searched for this but couldn't find an exact solution. I'm using the jQuery Mobile framework. Please help.
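    A minimal sketch of one way to do this in a reasonably modern browser, assuming the server hands back the bytes as a base64 string together with a MIME type (both assumptions on my part): turn the bytes into a Blob, create an object URL, and hand it to an <img> for images or an <iframe> for PDFs. The "#viewer" container is a hypothetical placeholder element.

        // Hedged sketch: render base64-encoded bytes as an image or a PDF.
        // base64Data and mimeType are assumed to come from the server response.
        function showBinaryContent(base64Data, mimeType) {
          // Decode base64 into a byte array.
          var binary = atob(base64Data);
          var bytes = new Uint8Array(binary.length);
          for (var i = 0; i < binary.length; i++) {
            bytes[i] = binary.charCodeAt(i);
          }

          var blob = new Blob([bytes], { type: mimeType });
          var url = URL.createObjectURL(blob);

          if (mimeType.indexOf("image/") === 0) {
            // Show the image directly.
            $("#viewer").html($("<img>").attr("src", url));
          } else if (mimeType === "application/pdf") {
            // Let the browser's built-in PDF viewer handle it.
            $("#viewer").html(
              $("<iframe>").attr("src", url).css({ width: "100%", height: "600px" })
            );
          }
        }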

    Read the article

  • How to set an onClick attribute that would load dynamic content?

    - by konzepz
    This code is supposed to add an onClick event to each of the a elements, so that clicking on the element would load the content of the page dynamically into a DIV. Now, I got this far - it will add the onClick event, but how can I load the dynamic content?

        $(document.body).ready(function () {
          $("li.cat-item a").each(function (i) {
            this.setAttribute('onclick', 'alert("[load content dynamically into #preview]")');
          });
        });

    Thank you.
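    A minimal sketch of the usual jQuery way, assuming #preview is the target DIV as in the alert placeholder above: bind a click handler instead of writing an onclick attribute, and let .load() fetch the linked page into the DIV. If only part of the fetched page is wanted, jQuery also accepts a fragment selector, e.g. .load(this.href + " #main").

        // Hedged sketch: load each link's target into #preview on click.
        $(document).ready(function () {
          $("li.cat-item a").click(function (e) {
            e.preventDefault();                  // stop the normal navigation
            $("#preview").load(this.href);       // fetch the page and inject it into the DIV
          });
        });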

    Read the article

  • How to get sites identical in content but different in language and TLD indexed by major search engines?

    - by mojo77
    Hi! Is it possible to get two "editions" of a website both indexed by the major search engines (Google/Yahoo/Bing/Teoma) which differ in content language only and are hosted under different TLDs? Say English content is available at "http://domain.com/", German content at "http://domain.de/". Now, if e.g. Google.com is used I want it to list the "domain.com" entry and vice versa. Is "Duplicate Content" an issue here? Thanks so much for advising me!

    Read the article

  • How do I force a DIV block to extend to the bottom of a page even if it has no content?

    - by Vince Panuccio
    In the markup shown below, I'm trying to get the content div to stretch all the way to the bottom of the page but it's only stretching if there's content to display. The reason I want to do this is so the vertical border still appears down the page even if there isn't any content to display. Here is my code

        <body>
          <form id="form1">
            <div id="header">
              <a title="Home" href="index.html" />
            </div>
            <div id="menuwrapper">
              <div id="menu">
              </div>
            </div>
            <div id="content">
            </div>

    and my CSS

        body {
          font-family: Trebuchet MS, Verdana, MS Sans Serif;
          font-size: 0.9em;
          margin: 0;
          padding: 0;
        }
        div#header {
          width: 100%;
          height: 100px;
        }
        #header a {
          background-position: 100px 30px;
          background: transparent url(site-style-images/sitelogo.jpg) no-repeat fixed 100px 30px;
          height: 80px;
          display: block;
        }
        #header, #menuwrapper {
          background-repeat: repeat;
          background-image: url(site-style-images/darkblue_background_color.jpg);
        }
        #menu #menuwrapper {
          height: 25px;
        }
        div#menuwrapper {
          width: 100%
        }
        #menu, #content {
          width: 1024px;
          margin: 0 auto;
        }
        div#menu {
          height: 25px;
          background-color: #50657a;
        }

    Thanks for taking a look.

    Read the article

  • How to encode content to send it via jQuery to a PHP file?

    - by phpheini
    I am trying to send a form to a PHP file via jQuery. The problem is that the content which has to be sent to the PHP file contains slashes (/), since there is BB code inside. So I tried the following:

        $.ajax({
          type: "POST",
          url: "create.php",
          data: "content=" + encodeURIComponent(content),
          cache: false,
          success: function (message) {
            $("#somediv").html(message);
          }
        });

    In the PHP file I use rawurldecode to decode the content and get my BB codes back, which I can then transform into HTML. The problem is that as soon as I put in the encodeURIComponent() it will output: [object HTMLTextAreaElement] What does that mean, and where is my mistake? Thanks for your help! phpheini
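    That output is a strong hint that content holds the <textarea> DOM element itself rather than its text, so encodeURIComponent() stringifies the element. A minimal sketch of the usual fix, with a hypothetical textarea id: read the value with .val() and pass data as an object so jQuery does the URL encoding for you. On the PHP side, $_POST['content'] arrives already decoded, so no rawurldecode() call should be needed.

        // Hedged sketch: send the textarea's text, not the element, and let
        // jQuery handle the encoding. "#bbcode-input" is a placeholder id.
        var content = $("#bbcode-input").val();   // the text, slashes and BB code included

        $.ajax({
          type: "POST",
          url: "create.php",
          data: { content: content },             // jQuery URL-encodes this automatically
          cache: false,
          success: function (message) {
            $("#somediv").html(message);
          }
        });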

    Read the article

  • Netflix, jQuery, JSONP, and OData

    - by Stephen Walther
    At the last MIX conference, Netflix announced that they are exposing their catalog of movie information using the OData protocol. This is great news! This means that you can take advantage of all of the advanced OData querying features against a live database of Netflix movies. In this blog entry, I’ll demonstrate how you can use Netflix, jQuery, JSONP, and OData to create a simple movie lookup form. The form enables you to enter a movie title, or part of a movie title, and display a list of matching movies. For example, Figure 1 illustrates the movies displayed when you enter the value robot into the lookup form.   Using the Netflix OData Catalog API You can learn about the Netflix OData Catalog API at the following website: http://developer.netflix.com/docs/oData_Catalog The nice thing about this website is that it provides plenty of samples. It also has a good general reference for OData. For example, the website includes a list of OData filter operators and functions. The Netflix Catalog API exposes 4 top-level resources: Titles – A database of Movie information including interesting movie properties such as synopsis, BoxArt, and Cast. People – A database of people information including interesting information such as Awards, TitlesDirected, and TitlesActedIn. Languages – Enables you to get title information in different languages. Genres – Enables you to get title information for specific movie genres. OData is REST based. This means that you can perform queries by putting together the right URL. For example, if you want to get a list of the movies that were released after 2010 and that had an average rating greater than 4 then you can enter the following URL in the address bar of your browser: http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear gt 2010&AverageRating gt 4 Entering this URL returns the movies in Figure 2. Creating the Movie Lookup Form The complete code for the Movie Lookup form is contained in Listing 1. 
    Listing 1 – MovieLookup.htm

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Netflix with jQuery</title>
            <style type="text/css">
                #movieTemplateContainer div
                {
                    width:400px;
                    padding: 10px;
                    margin: 10px;
                    border: black solid 1px;
                }
            </style>
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>
        </head>
        <body>

        <label>Search Movies:</label>
        <input id="movieName" size="50" />
        <button id="btnLookup">Lookup</button>

        <div id="movieTemplateContainer"></div>

        <script id="movieTemplate" type="text/html">
            <div>
                <img src="<%=BoxArtSmallUrl %>" />
                <strong><%=Name%></strong>
                <p>
                    <%=Synopsis %>
                </p>
            </div>
        </script>

        <script type="text/javascript">

            $("#btnLookup").click(function () {
                // Build OData query
                var movieName = $("#movieName").val();
                var query = "http://odata.netflix.com/Catalog"  // netflix base url
                    + "/Titles"  // top-level resource
                    + "?$filter=substringof('" + escape(movieName) + "',Name)"  // filter by movie name
                    + "&$callback=callback"  // jsonp request
                    + "&$format=json";  // json request

                // Make JSONP call to Netflix
                $.ajax({
                    dataType: "jsonp",
                    url: query,
                    jsonpCallback: "callback",
                    success: callback
                });
            });

            function callback(result) {
                // unwrap result
                var movies = result["d"]["results"];

                // show movies in template
                var showMovie = tmpl("movieTemplate");
                var html = "";
                for (var i = 0; i < movies.length; i++) {
                    // flatten movie
                    movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl;
                    // render with template
                    html += showMovie(movies[i]);
                }
                $("#movieTemplateContainer").html(html);
            }

        </script>

        </body>
        </html>

    The HTML page in Listing 1 includes two JavaScript libraries:

        <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>

    The first script tag retrieves jQuery from the Microsoft Ajax CDN. You can learn more about the Microsoft Ajax CDN by visiting the following website: http://www.asp.net/ajaxLibrary/cdn.ashx The second script tag is used to reference Resig's micro-templating library. Because I want to use a template to display each movie, I need this library: http://ejohn.org/blog/javascript-micro-templating/

    When you enter a value into the Search Movies input field and click the button, the following JavaScript code is executed:

        // Build OData query
        var movieName = $("#movieName").val();
        var query = "http://odata.netflix.com/Catalog"  // netflix base url
            + "/Titles"  // top-level resource
            + "?$filter=substringof('" + escape(movieName) + "',Name)"  // filter by movie name
            + "&$callback=callback"  // jsonp request
            + "&$format=json";  // json request

        // Make JSONP call to Netflix
        $.ajax({
            dataType: "jsonp",
            url: query,
            jsonpCallback: "callback",
            success: callback
        });

    This code is used to build a query that will be executed against the Netflix Catalog API. For example, if you enter the search phrase King Kong then the following URL is created: http://odata.netflix.com/Catalog/Titles?$filter=substringof(‘King%20Kong’,Name)&$callback=callback&$format=json This query includes the following parameters: $filter – You assign a filter expression to this parameter to filter the movie results. $callback – You assign the name of a JavaScript callback method to this parameter. OData calls this method to return the movie results.
    $format – You assign either the value json or xml to this parameter to specify the format of the movie results. Notice that all of the OData parameters -- $filter, $callback, $format -- start with a dollar sign $. The Movie Lookup form uses JSONP to retrieve data across the Internet. Because WCF Data Services supports JSONP, and Netflix uses WCF Data Services to expose movies using the OData protocol, you can use JSONP when interacting with the Netflix Catalog API. To learn more about using JSONP with OData, see Pablo Castro's blog: http://blogs.msdn.com/pablo/archive/2009/02/25/adding-support-for-jsonp-and-url-controlled-format-to-ado-net-data-services.aspx The actual JSONP call is performed by calling the $.ajax() method. When this call successfully completes, the JavaScript callback() method is called. The callback() method looks like this:

        function callback(result) {
            // unwrap result
            var movies = result["d"]["results"];

            // show movies in template
            var showMovie = tmpl("movieTemplate");
            var html = "";
            for (var i = 0; i < movies.length; i++) {
                // flatten movie
                movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl;
                // render with template
                html += showMovie(movies[i]);
            }
            $("#movieTemplateContainer").html(html);
        }

    The movie results from Netflix are passed to the callback method. The callback method takes advantage of Resig's micro-templating library to display each of the movie results. A template used to display each movie is passed to the tmpl() method. The movie template looks like this:

        <script id="movieTemplate" type="text/html">
            <div>
                <img src="<%=BoxArtSmallUrl %>" />
                <strong><%=Name%></strong>
                <p>
                    <%=Synopsis %>
                </p>
            </div>
        </script>

    This template looks like a server-side ASP.NET template. However, the template is rendered in the client (browser) instead of the server.

    Summary The goal of this blog entry was to demonstrate how well jQuery works with OData. We managed to use a number of interesting open-source libraries and open protocols while building the Movie Lookup form, including jQuery, JSONP, JSON, and OData.

    Read the article

  • Killer content for my Kindle - The Economist with no need for an iPad - yipeee!

    - by Liam Westley
    I admit it, I was jealous of someone's iPad. They were reading The Economist, for free, as they were a print subscriber. I'm a print subscriber too. However, I don't have an iPad or an iPhone, just an Android phone and a Kindle. As soon as I got the Kindle, I looked up how to get The Economist on it. £9.99 per month. Hmmm, twice as much again as my print subscription, and I wanted to maintain the print subscription. No way, Amazon. Fortunately some nice person wrote similar comments on The Economist subscription for Kindle, but added a very important additional nugget of information: there is no need, as a print subscriber you can just use the free Calibre e-book creation tool anyway. So I downloaded it, searched for The Economist online 'recipe', entered my login name and password (part of my print subscription) and off went Calibre to screen-scrape every single article from the Christmas 2010 issue into a .mobi file, complete with front cover image and full indexing. It's wonderful. Truly wonderful. Every section individually indexed, with each article separated and all inline images preserved. It even feels wonderfully retro, back to the days when The Economist only used black and white images. So many thanks to the guys behind Calibre and The Economist recipe creators. Finally, I have my essential Kindle content that I've been waiting for.

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    - Present a collection of standardized database service definitions,
    - Define standardized pools of hardware and software for provisioning,
    - Role-based access to cater to different classes of users,
    - Automated procedures to provision the predefined database definitions,
    - Setup chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below (all supported in EM 12cR4):

        Primary | Standby [1 or more]
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage-agnostic, self service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued the dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions, thus delivering the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit is in reality a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both on Oracle's engineered systems like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, Oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like no. of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:

    - Custom naming of DB Services: Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
    - 'Create like' of Service Templates: Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: View the details of a profile, like datafiles, control files, snapshot ids, export/import files, etc., prior to its selection in the service template.
    - Cleanup automation - for failed and successful requests: A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per-request basis or for the entire pool. As an extension, you can also delete successful requests.
    - Improved delete user workflow: Allows administrators to reassign cloud resources to another user or delete all of them.
    - Support for multiple tablespaces for schema as a service: In addition to multiple schemas, users can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References: Cloud Management Page on OTN Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)

    Read the article
