Search Results

Search found 5530 results on 222 pages for 'nested urls'.


  • Obfuscating ids in Rails app

    - by fphilipe
    I'm trying to obfuscate all the ids that leave the server, i.e., ids appearing in URLs and in the HTML output. I've written a simple Base62 lib that has the methods encode and decode. Defining (or rather overwriting) the id method of an ActiveRecord model to return the encoded version of the id, and adjusting the controller to load the resource with the decoded params[:id], gives me the desired result. The ids are now Base62-encoded in the URLs and the response displays the correct resource. However, I started to notice that subresources defined through has_many relationships aren't loading. For example, I have a record called User that has_many Posts. Now User.find(1).posts is empty although there are posts with user_id = 1. My explanation is that ActiveRecord must be comparing the user_id of Post with the method id of User (which I've overwritten) instead of comparing with self[:id]. So basically this renders my approach useless. What I would like is something like defining obfuscates_id in the model and having the rest taken care of, i.e., doing all the encoding/decoding at the appropriate locations and preventing raw ids from being returned by the server. Is there any gem available, or does somebody have a hint on how to accomplish this? I bet I'm not the first to try this.
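
    One common workaround, shown here only as a sketch (the Base62 module with encode/decode is assumed to exist as described above), is to leave id alone and override to_param instead, so ActiveRecord's association queries keep using the raw integer while URLs get the encoded value:

        # Sketch only: assumes a Base62 module with encode/decode as described above.
        # Overriding #to_param (not #id) keeps has_many queries working, because
        # ActiveRecord still compares raw integer ids internally.
        class User < ActiveRecord::Base
          has_many :posts

          def to_param
            Base62.encode(id)
          end
        end

        class UsersController < ApplicationController
          def show
            # Decode what came in from the URL before the lookup.
            @user = User.find(Base62.decode(params[:id]))
          end
        end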

    Read the article

  • Download Remote File

    - by Abs
    Hello all, I have a function that will be passed a link. The link is to a remote image. I thought I could just use the extension of the file in the URL to determine the type of image, but some URLs won't have extensions in the URL. They probably just push headers to the browser, and therefore I do not have an extension to parse from the URL. How can I test if the URL has an extension and, if not, read the headers to determine the file type? Am I overcomplicating things here? Is there an easier way to do this? I am making use of CodeIgniter; maybe there is something already built in to do this? All I really want to do is download an image from a URL with the correct extension. This is what I have so far. function get_image($image_link){ $remoteFile = $image_link; $ext = ''; //some URLs might not have an extension $file = fopen($remoteFile, "r"); if (!$file) { return false; }else{ $line = ''; while (!feof ($file)) { $line .= fgets ($file, 4096); } $file_name = time().$ext; file_put_contents($file_name, $line); } fclose($file); } Thanks all for any help
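
    A minimal sketch of one way to do this, assuming the goal is simply "extension from the URL if present, otherwise from the Content-Type header". The helper name and the type-to-extension map are illustrative, not part of CodeIgniter:

        <?php
        // Sketch only: derive an extension from the URL path, falling back to the
        // Content-Type response header when the URL has none. The map below is
        // illustrative, not exhaustive.
        function get_image_ext($image_link)
        {
            $path = parse_url($image_link, PHP_URL_PATH);
            $ext  = strtolower(pathinfo($path, PATHINFO_EXTENSION));
            if ($ext !== '') {
                return '.' . $ext;
            }

            $headers = get_headers($image_link, 1);   // associative header array
            $type = isset($headers['Content-Type']) ? $headers['Content-Type'] : '';
            if (is_array($type)) {                    // redirects can yield several values
                $type = end($type);
            }

            $map = array('image/jpeg' => '.jpg', 'image/png' => '.png', 'image/gif' => '.gif');
            return isset($map[$type]) ? $map[$type] : '';
        }

        // Inside the existing get_image() function you could then use:
        // $ext = get_image_ext($image_link);
        // $file_name = time() . $ext;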

    Read the article

  • Stop Rewrite htaccess create random pages

    - by Vistol
    Recently I saw in my Webmaster Tools that some random sites are linking to my site. That by itself is not a big issue. The issue is that the pages being linked are not real pages, because of my .htaccess file. This is the .htaccess code that I'm running: <pre> #Options +FollowSymLinks RewriteEngine on RewriteRule ^([^/\.]+)/?$ index.php?id=$1 [L] RewriteRule ^([0-9]+)/(.*)$ index.php?id=$1 [L] </pre> So the real URLs would be: mysite.com/folder/999/TITLE-OR-NAME But because I only check the first folder ($1), which is a numeric id, this .htaccess file lets anyone link to my site with random URLs like: mysite.com/folder/999/TITLE-OR-NAME1 mysite.com/folder/999/TITLE-OR-NAME2 mysite.com/folder/999/TITLE-OR-NAME3 mysite.com/folder/999/TITLE-OR-NAME4 mysite.com/folder/999/TITLE-OR-NAME5 The worst part comes when Google tells me that I am duplicating content! Actually I am not duplicating content; the .htaccess is duplicating it for me. And yes, I know I'm a newbie programmer, but I'd really appreciate your help because I'm struggling to find a solution. Thank you very much for all your support to this newbie :)
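
    One common way out, sketched below rather than offered as a drop-in fix, is to keep the rewrite as-is and let index.php issue a 301 to the canonical URL whenever the requested title part doesn't match the real one. get_canonical_slug() is a hypothetical helper, and the /folder/id/title layout is taken from the example URLs above:

        <?php
        // Sketch only: canonical-redirect logic at the top of index.php.
        // get_canonical_slug($id) is a hypothetical helper returning the real
        // title slug for the given id (e.g. "TITLE-OR-NAME").
        $id   = isset($_GET['id']) ? (int) $_GET['id'] : 0;
        $slug = get_canonical_slug($id);

        $requested = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
        $canonical = "/folder/{$id}/{$slug}";

        if ($slug !== null && $requested !== $canonical) {
            // Collapse made-up URLs onto the real one (an absolute URL is preferable here).
            header('Location: ' . $canonical, true, 301);
            exit;
        }

    A rel="canonical" link in the page head accomplishes much the same thing for search engines without the redirect.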

    Read the article

  • trouble with custom 'Text Bubble' component (examples included)

    - by gmoniey
    I'm trying to use a custom Text component to show a series of comments. I got the original idea from: http://www.eonflex.com/?p=40 I've got the base case working, but I am stuck with 2 problems I can't seem to figure out: Since I am drawing around the text, the actual height of each bubble is greater than that of the Text field; as a result, the last bubble is always chopped off. I have tried explicitly overriding the height getter and adding some padding, but I can't seem to get it right. You can see an example here: http://test.lambandtunafish.com/bubbles/CommentTest.swf In my layout, I have 2 VBoxes (one nested inside the other). The first VBox shows a form where the user can enter a comment, and the second box has all the comments. In order to ensure that the scrollbars only show up on the second box, I set minHeight="0" on the nested VBox, but then for some reason some comments' text is shifted to the right. You can see an example here (look at the first comment): http://test.lambandtunafish.com/bubbles/CommentTest-minHeight.swf Rather than posting the code here, I've provided some links: Container: http://test.lambandtunafish.com/bubbles/CommentTest.mxml Bubble: http://test.lambandtunafish.com/bubbles/CommentBubble.as If anyone has any ideas, I would appreciate it. Thanks!

    Read the article

  • Trouble with go tour crawler exercise

    - by David Mason
    I'm going through the go tour and I feel like I have a pretty good understanding of the language except for concurrency. On slide 71 there is an exercise that asks the reader to parallelize a web crawler (and to make it not cover repeats but I haven't gotten there yet.) Here is what I have so far: func Crawl(url string, depth int, fetcher Fetcher, ch chan string) { if depth <= 0 { return } body, urls, err := fetcher.Fetch(url) if err != nil { ch <- fmt.Sprintln(err) return } ch <- fmt.Sprintf("found: %s %q\n", url, body) for _, u := range urls { go Crawl(u, depth-1, fetcher, ch) } } func main() { ch := make(chan string, 100) go Crawl("http://golang.org/", 4, fetcher, ch) for i := range ch { fmt.Println(i) } } The issue I have is where to put the close(ch) call. If I put a defer close(ch) somewhere in the Crawl method, then I end up writing to a closed channel in one of the spawned goroutines, since the method will finish execution before the spawned goroutines do. If I omit the call to close(ch), as is shown in my example code, the program deadlocks after all the goroutines finish executing but the main thread is still waiting on the channel in the for loop since the channel was never closed.
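
    One way to know when it is safe to close the channel, sketched below (it assumes the Fetcher interface and fetcher variable from the exercise skeleton, plus the fmt and sync imports), is to count outstanding Crawl calls with a sync.WaitGroup and close the channel from a helper goroutine once they have all returned:

        // Sketch only: track outstanding Crawl calls with a sync.WaitGroup and close
        // the channel once they have all finished, so the range loop in main can end.
        func Crawl(url string, depth int, fetcher Fetcher, ch chan string, wg *sync.WaitGroup) {
            defer wg.Done()
            if depth <= 0 {
                return
            }
            body, urls, err := fetcher.Fetch(url)
            if err != nil {
                ch <- fmt.Sprintln(err)
                return
            }
            ch <- fmt.Sprintf("found: %s %q", url, body)
            for _, u := range urls {
                wg.Add(1)
                go Crawl(u, depth-1, fetcher, ch, wg)
            }
        }

        func main() {
            ch := make(chan string, 100)
            var wg sync.WaitGroup

            wg.Add(1)
            go Crawl("http://golang.org/", 4, fetcher, ch, &wg)

            // Close the channel only after every Crawl goroutine has returned.
            go func() {
                wg.Wait()
                close(ch)
            }()

            for line := range ch {
                fmt.Println(line)
            }
        }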

    Read the article

  • How do you handle huge if-conditions?

    - by Teifion
    It's something that's bugged me in every language I've used: I have an if statement whose condition has so many checks that I have to split it over multiple lines, use a nested if statement, or just accept that it's ugly and move on with my life. Are there any other methods you've found that might be of use to me and anybody else who's hit the same problem? Example, all on one line: if (var1 == true && var2 == true && var3 == true && var4 == true && var5 == true && var6 == true){ Example, multi-line: if (var1 == true && var2 == true && var3 == true && var4 == true && var5 == true && var6 == true){ Example, nested: if (var1 == true && var2 == true && var3 == true){     if (var4 == true && var5 == true && var6 == true)     {
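
    Two common approaches, sketched here in Python for brevity (the variable names mirror the example above, and the idea carries over to most languages): gather the checks into a sequence and test them together, or push the whole condition behind a well-named helper.

        # Sketch only: the variables are stand-ins for the checks in the example above.
        var1 = var2 = var3 = var4 = var5 = var6 = True

        # 1. Collect the checks and test them together, one or a few per line:
        checks = [
            var1, var2, var3,
            var4, var5, var6,
        ]
        if all(checks):
            print("all checks passed")

        # 2. Or hide the condition behind a helper whose name says what it means:
        def is_ready_to_process(v1, v2, v3, v4, v5, v6):
            return v1 and v2 and v3 and v4 and v5 and v6

        if is_ready_to_process(var1, var2, var3, var4, var5, var6):
            print("ready to process")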

    Read the article

  • Subitems are not all added to a list view in C# using XmlNodeList

    - by tim
    I'm working on extracting data from an RSS feed. In my listview (rowNews), I've got two columns: Title and URL. When the button is clicked, all of the titles of the articles are showing up in the title column, but only one URL is added to the URL column. I switched them around so that the URLs would be added to the first column and all of the correct URLs appeared... leading me to think this is a problem with my listview source (it's my first time working with subitems). Here's the original, before I started experimenting with the order: private void button1_Click(object sender, EventArgs e) { XmlTextReader rssReader = new XmlTextReader(txtUrl.Text); XmlDocument rssDoc = new XmlDocument(); rssDoc.Load(rssReader); XmlNodeList titleList = rssDoc.GetElementsByTagName("title"); XmlNodeList urlList = rssDoc.GetElementsByTagName("link"); ListViewItem lvi = new ListViewItem(); for (int i = 0; i < titleList.Count; i++) { rowNews.Items.Add(titleList[i].InnerXml); } for (int i = 0; i < urlList.Count; i++) { lvi.SubItems.Add(urlList[i].InnerXml); } rowNews.Items.Add(lvi); }
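
    The likely culprit is that a single ListViewItem is created once, outside the loops, so every link becomes a subitem of that one row. A sketch of the usual fix (same control and field names as above) creates one item per title and attaches the matching link to it:

        // Sketch only: build one ListViewItem per title and attach the matching
        // <link> as that row's subitem, instead of piling every URL onto one item.
        private void button1_Click(object sender, EventArgs e)
        {
            XmlTextReader rssReader = new XmlTextReader(txtUrl.Text);
            XmlDocument rssDoc = new XmlDocument();
            rssDoc.Load(rssReader);

            XmlNodeList titleList = rssDoc.GetElementsByTagName("title");
            XmlNodeList urlList   = rssDoc.GetElementsByTagName("link");

            rowNews.Items.Clear();

            for (int i = 0; i < titleList.Count; i++)
            {
                // One row per title: the title is the first column...
                ListViewItem lvi = new ListViewItem(titleList[i].InnerXml);

                // ...and the matching <link> becomes the URL column's subitem.
                if (i < urlList.Count)
                {
                    lvi.SubItems.Add(urlList[i].InnerXml);
                }

                rowNews.Items.Add(lvi);
            }
        }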

    Read the article

  • mod_rewrite: no access to real files and directories

    - by tshabalala
    Hello. I use mod_rewrite/.htaccess for pretty URLs. I forward all the requests to my index.php, like this: RewriteRule ^/?([a-zA-Z0-9/-]+)/?$ /index.php [NC,L] The index.php then handles the requests. I'm also using this condition/rule to eliminate trailing slashes (or rather rewrite them to the URL without a trailing slash, with a 301 redirect; I'm doing this to avoid duplicate content and because I like no trailing slashes better): RewriteCond %{HTTP_HOST} !^\.localhost$ [NC] RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L] This works well, except that I now get an infinite loop when trying to access a (real) directory (the rewrite rule removes the trailing slash, the server adds it again, ...). I solved this by setting the DirectorySlash directive to Off: DirectorySlash Off I don't know how good this solution is, I don't feel too confident about it tbh. Anyway, what I'd like to do is completely ignore "real" files and directories, since I don't need them and I only use pretty URLs with "virtual" files/directories anyway. This would allow me to avoid the DirectorySlash workaround/hack too. Is this possible? Thanks!
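
    For the redirect loop specifically, one conventional fix (a sketch to adapt to the full ruleset) is to exclude real directories from the slash-stripping rule so mod_rewrite stops fighting mod_dir, which also makes the DirectorySlash Off workaround unnecessary; the trade-off is that the handful of real directories keep their trailing slash:

        RewriteEngine On

        # Strip the trailing slash with a 301, but only when the request does not
        # map to a real directory (the directory case is what caused the loop).
        RewriteCond %{HTTP_HOST} !^\.localhost$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L]

        # Everything else goes to the front controller, as before.
        RewriteRule ^/?([a-zA-Z0-9/-]+)/?$ /index.php [NC,L]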

    Read the article

  • md5_file() not working with IP addresses?

    - by Rob
    Here is my code relating to the question: $theurl = trim($_POST['url']); $md5file = md5_file($theurl); if ($md5file != '96a0cec80eb773687ca28840ecc67ca1') { echo 'Hash doesn\'t match. Incorrect file. Reupload it and try again'; When I run this script, it doesn't even output an error. It just stops. It loads for a bit, and then it just stops. Further down the script I implement it again, and it fails here, too: while($row=mysql_fetch_array($execquery, MYSQL_ASSOC)){ $hash = @md5_file($row['url']); $url = $row['url']; mysql_query("UPDATE urls SET hash='" . $hash . "' WHERE url='" . $url . "'") or die("MYSQL is indeed gay: ".mysql_error()); if ($hash != '96a0cec80eb773687ca28840ecc67ca1'){ $status = 'down'; }else{ $status = 'up'; } mysql_query("UPDATE urls SET status='" . $status . "' WHERE url='" . $url . "'") or die("MYSQL is indeed gay: ".mysql_error()); } And it checks all the URL's just fine, until it gets to one with an IP instead of a domain, such as: http://188.72.215.195/config.php In which, again, the script then just loads for a bit, and then stops. Any help would be much appreciated, if you need any more information just ask.
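
    The hang is consistent with md5_file() blocking on a host that responds slowly or not at all, since there is no timeout on the underlying HTTP fetch. A sketch of one workaround is to fetch the body with cURL under explicit timeouts and hash it with md5(); the hash constant and the $row['url'] usage are taken from the code above:

        <?php
        // Sketch only: fetch with cURL under explicit timeouts, then hash the body,
        // so an unreachable host cannot stall the script the way md5_file() can.
        function remote_md5($url)
        {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);   // give up connecting after 5s
            curl_setopt($ch, CURLOPT_TIMEOUT, 10);         // give up entirely after 10s
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

            $body = curl_exec($ch);
            curl_close($ch);

            return ($body === false) ? false : md5($body);
        }

        // Inside the existing while loop:
        // $hash   = remote_md5($row['url']);
        // $status = ($hash === '96a0cec80eb773687ca28840ecc67ca1') ? 'up' : 'down';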

    Read the article

  • Concatenate an each loop inside another

    - by Lothar
    I want to concatenate the results of a jQuery each loop inside of another but am not getting the results I expect. $.each(data, function () { counter++; var i = 0; var singlebar; var that = this; tableRow = '<tr>' + '<td>' + this.foo + '</td>' + $.each(this.bar, function(){ singlebar = '<td>' + that.bar[i].baz + '</td>'; tableRow + singlebar; }); '</tr>'; return tableRow; }); The portion inside the nested each does not get added to the string that is returned. I can console.log(singlebar) and get the expected results in the console but I cannot concatenate those results inside the primary each loop. I have also tried: $.each(this.bar, function(){ tableRow += '<td>' + that.bar[i].baz + '</td>'; }); Which also does not add the desired content. How do I iterate over this nested data and add it in the midst of the table that the primary each statement is building?
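
    A sketch of one way to restructure it (the #myTable selector at the end is a hypothetical target; data and the foo/bar/baz fields come from the snippet above). The key points are to append with += inside the inner loop and to use the inner callback's own index/value arguments rather than an i that never changes:

        // Sketch only: build each row locally, append with +=, and collect the rows.
        var tableRows = '';

        $.each(data, function () {
            var tableRow = '<tr><td>' + this.foo + '</td>';

            // The callback receives (index, value); no manual counter is needed.
            $.each(this.bar, function (i, barItem) {
                tableRow += '<td>' + barItem.baz + '</td>';   // += actually keeps the result
            });

            tableRow += '</tr>';
            tableRows += tableRow;   // a plain `return` inside $.each only ends the iteration
        });

        $('#myTable tbody').append(tableRows);   // '#myTable' is a hypothetical target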

    Read the article

  • Twitter Bootstrap: how to put unknown number of span* within a row-fluid?

    - by StackOverflowNewbie
    Assume I have the following nesting: <div class="container-fluid"> <div class="row-fluid"> <div class="span3"> <!-- left sidebar here --> </div> <div class="span9"> <!-- main content here --> </div> </div> </div> I'd like to put an unknown number of <div class="span3"></div> in the main content area. (Each span3 is supposed to contain a product photo, name, price, etc.) Of course, my aim is for this to be responsive. So, I might display 20 products, which I'd like to display 5 per "row" on a wide screen, then 4 per "row" on a slightly less wide screen, then 3, then 2, then 1. For example (each X represents a product): Wide Screen row 1: X X X X X row 2: X X X X X row 3: X X X X X row 4: X X X X X Less Wide Screen row 1: X X X X row 2: X X X X row 3: X X X X row 4: X X X X row 5: X X X X Even Less Wide Screen row 1: X X X row 2: X X X row 3: X X X row 4: X X X row 5: X X X row 6: X X X row 7: X X It seems like I need to do nested rows. However, if I do that, then I'll only be able to fit a certain number of products in each nested row. That'll cause problems as the screen width decreases, for example (each X represents a product): Wide Screen row 1: X X X X X row 2: X X X X X row 3: X X X X X Less Wide Screen row 1: X X X X X row 2: X X X X X row 3: X X X X X How do I do what I want to do in Twitter Bootstrap?

    Read the article

  • Apache's AuthDigestDomain and Rails Distributed Asset Hosts

    - by Jared
    I've got a server I'm in the process of setting up and I'm running into an Apache configuration problem that I can not get around. I've got Apache 2.2 and Passenger serving a Rails app with distributed asset hosting. This is the feature of Rails that lets you serve your static assets from assets0.example.com, assets1, assets2, and so on. The site needs to be passworded until launch. I've set up HTTP authentication on the site using Apache's mod_auth_digest. In my configuration I'm attempting to use the AuthDigestDomain directive to allow access to each of the asset URLs. The problem is, it doesn't seem to be working. I get the initial prompt for the password when I load the page, but then the first time it loads an asset from one of the asset URLs, I get prompted a 2nd, 3rd, or 4th time. In some browsers, I get prompted for every single resource on the page. I'm hoping that this is only a problem of how I'm specifying my directives and not a limitation of authorization in Apache itself. See the edited auth section below: <Location /> AuthType Digest AuthName "Restricted Site" AuthUserFile /etc/httpd/passwd/passwords AuthGroupFile /dev/null AuthDigestDomain / http://assets0.example.com/ http://assets1.example.com/ http://assets2.example.com/ http://assets3.example.com/ require valid-user order deny,allow allow from all </Location>

    Read the article

  • can't create partial objects with accepts_nested_attributes_for

    - by Isaac Cambron
    I'm trying to build a form that allows users to update some records. They can't update every field, though, so I'm going to do some explicit processing (in the controller for now) to update the model vis-a-vis the form. Here's how I'm trying to do it: Family model: class Family < ActiveRecord::Base has_many :people, dependent: :destroy accepts_nested_attributes_for :people, allow_destroy: true, reject_if: ->(p){p[:name].blank?} end In the controller def check edited_family = Family.new(params[:family]) #compare to the one we have in the db #update each person as needed/allowed #save it end Form: = form_for current_family, url: check_rsvp_path, method: :post do |f| = f.fields_for :people do |person_fields| - if person_fields.object.user_editable = person_fields.text_field :name, class: "person-label" - else %p.person-label= person_fields.object.name The problem is, I guess, that Family.new(params[:family]) tries to pull the people out of the database, and I get this: ActiveRecord::RecordNotFound in RsvpsController#check Couldn't find Person with ID=7 for Family with ID= That's, I guess, because I'm not adding a field for family id to the nested form, which I suppose I could do, but I don't actually need it to load anything from the database for this anyway, so I'd rather not. I could also hack around this by just digging through the params hash myself for the data I need, but that doesn't feel as slick. It seems nicest to just create an object out of the params hash and then work with it. Is there a better way? How can I just create the nested object?
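
    A sketch of one alternative (current_family is taken from the form above; assign_attributes assumes Rails 3.1+, while older versions would use attributes=): apply the nested attributes to the family that is already loaded, so the nested person ids resolve against an existing parent, and only save after the comparison step:

        # Sketch only: start from the already-loaded family instead of Family.new,
        # so nested person ids resolve against an existing parent record.
        def check
          edited_family = current_family
          edited_family.assign_attributes(params[:family])  # attributes= on Rails < 3.1

          # Compare edited_family.people with the persisted values here and
          # discard any changes the user isn't allowed to make, then:
          edited_family.save
        end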

    Read the article

  • How do I automatically update hundreds of images in an HTML page using jquery?

    - by Chris
    I have an HTML page where I want to refresh a lot of images every 30 seconds after the HTML page has been downloaded. I understand how to do this with Jquery and a single image, but I want to use about 200 custom urls to determine the current image to display for over 200 images. I need to find an efficient way to have jquery call the custom url associated with each image to download the url for the needed image as it changes, and then update the image in the page when it changes. Current hyperlink example to demonstrate the custom urls. <A href="/urlThatReturnsCurrentImageURL/1234/4567">link to url for image</A> Each custom url will return an image tag like this (or any other text that makes this simpler for jquery) <img src="/static/someImage.jpg"> What is the simplest way to have jquery call the custom url for each image to download the image url, image html, or some other text that jquery can use to download the right image every 30 seconds? Please keep in mind that I will have about 200 of these on a page.
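
    A sketch of one way to wire it up (the data-status-url attribute is an assumed convention; each status URL is expected to return an <img src="..."> snippet as described above):

        // Sketch only: every 30 seconds, ask each image's status URL for the current
        // <img> snippet and copy its src onto the real image if it changed.
        function refreshImages() {
            $('img[data-status-url]').each(function () {
                var $img = $(this);
                $.get($img.attr('data-status-url'), function (html) {
                    var newSrc = $(html).attr('src');   // parse the returned <img> tag
                    if (newSrc && newSrc !== $img.attr('src')) {
                        $img.attr('src', newSrc);
                    }
                });
            });
        }

        $(function () {
            refreshImages();
            setInterval(refreshImages, 30000);   // every 30 seconds
        });

    With around 200 images, it may be worth batching the lookups into a single endpoint that returns all current URLs at once; the per-image version above keeps the markup simple but fires 200 requests every cycle.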

    Read the article

  • Cleaner method for list comprehension clean-up

    - by Dan McGrath
    This relates to my previous question: Converting from nested lists to a delimited string I have an external service that sends data to us in a delimited string format. It is lists of items, up to 3 levels deep. Level 1 is delimited by '|'. Level 2 is delimited by ';' and level 3 is delimited by ','. Each level or element can have 0 or more items. A simplified example is: a,b;c,d|e||f,g|h;; We have a function that converts this to nested lists, which is how it is manipulated in Python. def dyn_to_lists(dyn): return [[[c for c in b.split(',')] for b in a.split(';')] for a in dyn.split('|')] For the example above, this function results in the following: >>> dyn = "a,b;c,d|e||f,g|h;;" >>> print (dyn_to_lists(dyn)) [[['a', 'b'], ['c', 'd']], [['e']], [['']], [['f', 'g']], [['h'], [''], ['']]] For lists, at any level, with only one item, we want the item as a scalar rather than a 1-item list. For lists that are empty, we want them to be just an empty string. I've come up with this function, which does work: def dyn_to_min_lists(dyn): def compress(x): return "" if len(x) == 0 else x if len(x) != 1 else x[0] return compress([compress([compress([item for item in mv.split(',')]) for mv in attr.split(';')]) for attr in dyn.split('|')]) Using this function and the example above, it returns: [[['a', 'b'], ['c', 'd']], 'e', '', ['f', 'g'], ['h', '', '']] Being new to Python, I'm not confident this is the best way to do it. Are there any cleaner ways to handle this? This will potentially have large amounts of data passing through it; are there any more efficient/scalable ways to achieve this?
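
    A sketch of essentially the same collapse rules with the special-casing pulled out into a named helper, which keeps the comprehension readable without changing the output (the assert reuses the example string above):

        # Sketch only: same collapse rules as above, written as a small helper.
        def compress(items):
            """Collapse an empty list to '' and a one-element list to its element."""
            if len(items) == 0:
                return ""
            if len(items) == 1:
                return items[0]
            return items

        def dyn_to_min_lists(dyn):
            return compress([
                compress([
                    compress(mv.split(','))
                    for mv in attr.split(';')
                ])
                for attr in dyn.split('|')
            ])

        assert dyn_to_min_lists("a,b;c,d|e||f,g|h;;") == \
            [[['a', 'b'], ['c', 'd']], 'e', '', ['f', 'g'], ['h', '', '']]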

    Read the article

  • Convert Markdown text to RTF, using Ruby and Pandoc?

    - by niteshade
    Playing with Ruby and Ruby-Pandoc. Seems like a nice tool, if I can get it to work. I'd like to convert some Markdown text (with embedded lists and other fanciness) to Rich Text. Here's the text I'm converting: Title === This is a paragraph. Hallelujah. Here comes a nested list. --- * List item 1 * List item 1.1 * List item 1.2 * List item 2 * List item 2.1 Here's my Ruby code... require 'pandoc-ruby' input = File.read(test.md) converter = PandocRuby.new(input, from: :markdown, to: :rtf) puts converter.convert ... which (after saving the output to a file) produces a document without anything but a title: Here's the code of the RTF file: {\pard \ql \f0 \sa180 \li0 \fi0 \b \fs36 Title\par} {\pard \ql \f0 \sa180 \li0 \fi0 This is a paragraph. Hallelujah.\par} {\pard \ql \f0 \sa180 \li0 \fi0 \b \fs32 Here comes a nested list.\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1.1\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 1.2\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 2\par} {\pard \ql \f0 \sa0 \li360 \fi-360 \bullet \tx360\tab List item 2.1\sa180\par} In addition, even if it did show up in my RTF viewer (Mac TextEdit), the RTF code seems to have lost all list nesting. I don't know how to diagnose this, whether I have not stated necessary header information or something in Ruby-Pandoc. Thanks in advance!

    Read the article

  • How can I pull multiple rows from a MySQL table and use all of them automatically for the same thing

    - by Rob
    Basically, I have multiple URLs stored in a MySQL table. I want to pull those URLs from the table and have cURL connect to all of them. Currently I've been storing the URLs in the local script, but I've added a new page where I can add and remove them from the database, and I'd like this script to reflect that appropriately. Here is what I currently have: $sites[0]['url'] = "http://example0.com "; $sites[1]['url'] = "http://example1.com"; $sites[2]['url'] = "http://example2.com"; $sites[3]['url'] = "http://example3.com"; foreach($sites as $s) { // Now for some cURL to run it. $ch = curl_init($s['url']); //load the urls and send GET data curl_setopt($ch, CURLOPT_TIMEOUT, 2); //No need to wait for it to load. Execute it and go. curl_exec($ch); //Execute curl_close($ch); //Close it off } Now I assume it can't be too amazingly difficult to do, I just don't know how. So if you could point me in the right direction, I'd be grateful. But if you supply me with some code, please comment it appropriately so that I can understand what each line is doing.
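
    A commented sketch using the same legacy mysql_* functions that appear elsewhere on this page; the table and column names (sites, url) are assumptions, so adjust them to the real schema:

        <?php
        // Sketch only: pull the URLs from the database instead of hard-coding them.
        // Assumes a `sites` table with a `url` column; adjust to your schema.
        $result = mysql_query("SELECT url FROM sites") or die(mysql_error());

        $sites = array();
        while ($row = mysql_fetch_assoc($result)) {   // one row per stored URL
            $sites[] = array('url' => $row['url']);
        }

        foreach ($sites as $s) {
            $ch = curl_init($s['url']);            // load the URL and send GET data
            curl_setopt($ch, CURLOPT_TIMEOUT, 2);  // no need to wait for it to load
            curl_exec($ch);                        // execute
            curl_close($ch);                       // close it off
        }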

    Read the article

  • Stopping Wordpress From Appending /index.html to everything

    - by user439796
    I have a spaghetti-code theme I inherited from someone, and for whatever reason Google Analytics shows that I keep getting hits to a variety of URLs on the site, but the URLs are all appended with /index.html. So an example would be something like http://www.mysite.com/category/storyname/index.html And it appears to be doing this to almost everything (despite my permalinks being set to be "tidy"). So... What in the hell could possibly be causing this? How do I fix it? When I visit all those pages I get 404 errors, so my visitors are not getting what they want. I have the Redirection plugin and have been manually trying to update some of these, but it is ridiculous. I'm sure there's a way to do it with .htaccess, but I know next to nothing about that. Here's what my .htaccess currently has (the default): # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress
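
    One common approach, sketched below, is a 301 rule placed above the WordPress block that strips a trailing /index.html from any request; back up the .htaccess before editing, and the manual Redirection entries can then be retired:

        # Sketch only: 301-redirect any request ending in /index.html to the same
        # URL without it. Place this above the "# BEGIN WordPress" block.
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^(.*/)?index\.html$ /$1 [R=301,L]
        </IfModule>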

    Read the article

  • Update paths of already-created Paperclip attachments

    - by Horace Loeb
    I used to have this buggy Paperclip config: class Photo < ActiveRecord::Base has_attached_file :image, :storage => :s3, :styles => { :medium => "600x600>", :small => "320x320>", :thumb => "100x100#" }, :s3_credentials => "#{RAILS_ROOT}/config/s3.yml", :path => "/:style/:filename" end This is buggy because two images cannot have the same size and filename. To fix this, I changed the config to: class Photo < ActiveRecord::Base has_attached_file :image, :storage => :s3, :styles => { :medium => "600x600>", :small => "320x320>", :thumb => "100x100#" }, :s3_credentials => "#{RAILS_ROOT}/config/s3.yml", :path => "/:style/:id_:filename" end Unfortunately this breaks all URLs to attachments I've already created. How can I update those file paths or otherwise get the URLs to work?

    Read the article

  • inserts 'Array' into mysql table

    - by Noah Smith
    I want to insert an array into a MySQL table. The array is produced by a script that scans all the links on a page, converts them into absolute links, and then collects them in an array. I decided to mysql_query the array into the table, but now I am stuck: it only inserts the literal string 'Array' instead of inserting each element of the array as its own row. Any ideas? <?php require_once('simplehtmldom_1_5/simple_html_dom.php'); require_once('url_to_absolute/url_to_absolute.php'); $connect = mysql_connect("xxxx", "xxxx", "xxx") or die('Couldn\'t connect to MySQL Server: ' . mysql_error()); mysql_select_db("xxxx", $connect ) or die('Couldn\'t Select the database: ' . mysql_error( $connect )); $links = Array(); $URL = 'http://www.theqlick.com'; // change it for urls to grab // grabs the urls from URL $file = file_get_html($URL); foreach ($file->find('a') as $theelement) { $links[] = url_to_absolute($URL, $theelement->href); } print_r($links); mysql_query("INSERT INTO pages (url) VALUES ('$links[]')"); mysql_close($connect);
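
    A sketch of the usual fix: loop over the array and insert one row per link, escaping each value before it goes into the query (the table and column names are taken from the snippet above):

        <?php
        // Sketch only: insert one row per link instead of interpolating the whole
        // array (PHP stringifies an array in a string context to the word "Array").
        foreach ($links as $link) {
            $safe = mysql_real_escape_string($link, $connect);
            mysql_query("INSERT INTO pages (url) VALUES ('$safe')", $connect)
                or die('Insert failed: ' . mysql_error($connect));
        }

        mysql_close($connect);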

    Read the article

  • Multiple collections tied to one base collection with filters and eventing

    - by damienc88
    I have a complex model served from my back end, which has a bunch of regular attributes, some nested models, and a couple of collections. My page has two tables, one for invalid items, and one for valid items. The items in question are from one of the nested collections. Let's call it baseModel.documentCollection, implementing DocumentsCollection. I don't want any filtration code in my Marionette.CompositeViews, so what I've done is the following (note, duplicated for the 'valid' case): var invalidDocsCollection = new DocumentsCollection( baseModel.documentCollection.filter(function(item) { return !item.isValidItem(); }) ); var invalidTableView = new BookIn.PendingBookInRequestItemsCollectionView({ collection: app.collections.invalidDocsCollection }); layout.invalidDocsRegion.show(invalidTableView); This is fine for actually populating two tables independently, from one base collection. But I'm not getting the whole event pipeline down to the base collection, obviously. This means when a document's validity is changed, there's no neat way of it shifting to the other collection, therefore the other view. What I'm after is a nice way of having a base collection that I can have filter collections sit on top of. Any suggestions?
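
    A sketch of one way to keep both views dumb while still reacting to validity changes: re-filter from the base collection whenever it changes (it assumes isValidItem() from the post; a more targeted event could replace the generic 'change' listener if the model exposes one):

        // Sketch only: keep two filtered collections in sync with the base collection
        // by re-filtering on any relevant event. Assumes each model implements
        // isValidItem() as described above.
        var base = baseModel.documentCollection;

        var validDocs   = new DocumentsCollection();
        var invalidDocs = new DocumentsCollection();

        function refilter() {
            validDocs.reset(base.filter(function (doc) { return doc.isValidItem(); }));
            invalidDocs.reset(base.filter(function (doc) { return !doc.isValidItem(); }));
        }

        refilter();
        base.on('add remove reset change', refilter);

        // Each CompositeView just renders the collection it is handed; the reset
        // events fired by refilter() drive the re-renders, so no filtering code
        // lives in the views.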

    Read the article

  • jQuery problem with toggle event only firing on the 2nd click...

    - by Ronedog
    Can anyone explain why the following jquery only fires the 2nd toggle event and how to fix it? Specifically, every time I click the nested < a element it brings up the alert "2nd click". I tested the selector to make sure it was selecting the element properly and it does, or at least it inserted a class without any problems. The selector is selecting the very last node in the unordered list that has an anchor tag. $("#nav li:not(:has(li)) a").toggle(function() { //1st click alert("1st Click"); }, function() { //2nd click alert("2nd Click"); }); Nested HTML structure that fails: <ul id="nav"> <li> <span>stuff</span> <a href="#">Cat 1</a> <ul> <li> <span>stuff</span> <a href="#">Subcat1</a> <ul> <li> <span>Stuff</span> <a href="#">Subcat Details</a> </li> </ul> </li> </ul> </li> </ul> However, this works right and fires both click events: <ul id="nav"> <li> <span>stuff</span> <a href="#">Cat 1</a> </li> </ul>
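
    If the deprecated .toggle(fn1, fn2) form is the culprit (it collides with the show/hide toggle and was removed in later jQuery versions), a sketch of the usual replacement is a single click handler that tracks its own state; the selector is the one from the question:

        // Sketch only: replace .toggle(fn1, fn2) with one click handler that
        // flips a flag stored on the element itself.
        $("#nav li:not(:has(li)) a").click(function (e) {
            e.preventDefault();
            var $a = $(this);

            if (!$a.data('toggled')) {
                alert("1st Click");
                $a.data('toggled', true);
            } else {
                alert("2nd Click");
                $a.data('toggled', false);
            }
        });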

    Read the article

  • Loading jQuery Consistently in a .NET Web App

    - by Rick Strahl
    One thing that frequently comes up in discussions when using jQuery is how to best load the jQuery library (as well as other commonly used and updated libraries) in a Web application. Specifically the issue is the one of versioning and making sure that you can easily update and switch versions of script files with application wide settings in one place and having your script usage reflect those settings in the entire application on all pages that use the script. Although I use jQuery as an example here, the same concepts can be applied to any script library - for example in my Web libraries I use the same approach for jQuery.ui and my own internal jQuery support library. The concepts used here can be applied both in WebForms and MVC. Loading jQuery Properly From CDN Before we look at a generic way to load jQuery via some server logic, let me first point out my preferred way to embed jQuery into the page. I use the Google CDN to load jQuery and then use a fallback URL to handle the offline or no Internet connection scenario. Why use a CDN? CDN links tend to be loaded more quickly since they are very likely to be cached in user's browsers already as jQuery CDN is used by many, many sites on the Web. Using a CDN also removes load from your Web server and puts the load bearing on the CDN provider - in this case Google - rather than on your Web site. On the downside, CDN links gives the provider (Google, Microsoft) yet another way to track users through their Web usage. Here's how I use jQuery CDN plus a fallback link on my WebLog for example: <!DOCTYPE HTML> <html> <head> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> <script> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript " + "src='/Weblog/wwSC.axd?r=Westwind.Web.Controls.Resources.jquery.js' %3E%3C/script%3E")); </script> <title>Rick Strahl's Web Log</title> ... </head>   You can see that the CDN is referenced first, followed by a small script block that checks to see whether jQuery was loaded (jQuery object exists). If it didn't load another script reference is added to the document dynamically pointing to a backup URL. In this case my backup URL points at a WebResource in my Westwind.Web  assembly, but the URL can also be local script like src="/scripts/jquery.min.js". Important: Use the proper Protocol/Scheme for  for CDN Urls [updated based on comments] If you're using a CDN to load an external script resource you should always make sure that the script is loaded with the same protocol as the parent page to avoid mixed content warnings by the browser. You don't want to load a script link to an http:// resource when you're on an https:// page. The easiest way to use this is by using a protocol relative URL: <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> which is an easy way to load resources from other domains. This URL syntax will automatically use the parent page's protocol (or more correctly scheme). As long as the remote domains support both http:// and https:// access this should work. BTW this also works in CSS (with some limitations) and links. BTW, I didn't know about this until it was pointed out in the comments. This is a very useful feature for many things - ah the benefits of my blog to myself :-) Version Numbers When you use a CDN you notice that you have to reference a specific version of jQuery. 
When using local files you may not have to do this as you can rename your private copy of jQuery.js, but for CDN the references are always versioned. The version number is of course very important to ensure you getting the version you have tested with, but it's also important to the provider because it ensures that cached content is always correct. If an existing file was updated the updates might take a very long time to get past the locally cached content and won't refresh properly. The version number ensures you get the right version and not some cached content that has been changed but not updated in your cache. On the other hand version numbers also mean that once you decide to use a new version of the script you now have to change all your script references in your pages. Depending on whether you use some sort of master/layout page or not this may or may not be easy in your application. Even if you do use master/layout pages, chances are that you probably have a few of them and at the very least all of those have to be updated for the scripts. If you use individual pages for all content this issue then spreads to all of your pages. Search and Replace in Files will do the trick, but it's still something that's easy to forget and worry about. Personaly I think it makes sense to have a single place where you can specify common script libraries that you want to load and more importantly which versions thereof and where they are loaded from. Loading Scripts via Server Code Script loading has always been important to me and as long as I can remember I've always built some custom script loading routines into my Web frameworks. WebForms makes this fairly easy because it has a reasonably useful script manager (ClientScriptManager and the ScriptManager) which allow injecting script into the page easily from anywhere in the Page cycle. What's nice about these components is that they allow scripts to be injected by controls so components can wrap up complex script/resource dependencies more easily without having to require long lists of CSS/Scripts/Image includes. In MVC or pure script driven applications like Razor WebPages  the process is more raw, requiring you to embed script references in the right place. But its also more immediate - it lets you know exactly which versions of scripts to use because you have to manually embed them. In WebForms with different controls loading resources this often can get confusing because it's quite possible to load multiple versions of the same script library into a page, the results of which are less than optimal… In this post I look a simple routine that embeds jQuery into the page based on a few application wide configuration settings. It returns only a string of the script tags that can be manually embedded into a Page template. It's a small function that merely a string of the script tags shown at the begging of this post along with some options on how that string is comprised. You'll be able to specify in one place which version loads and then all places where the help function is used will automatically reflect this selection. Options allow specification of the jQuery CDN Url, the fallback Url and where jQuery should be loaded from (script folder, Resource or CDN in my case). While this is specific to jQuery you can apply this to other resources as well. For example I use a similar approach with jQuery.ui as well using practically the same semantics. 
Providing Resources in ControlResources In my Westwind.Web Web utility library I have a class called ControlResources which is responsible for holding resource Urls, resource IDs and string contants that reference those resource IDs. The library also provides a few helper methods for loading common scriptscripts into a Web page. There are specific versions for WebForms which use the ClientScriptManager/ScriptManager and script link methods that can be used in any .NET technology that can embed an expression into the output template (or code for that matter). The ControlResources class contains mostly static content - references to resources mostly. But it also contains a few static properties that configure script loading: A Script LoadMode (CDN, Resource, or script url) A default CDN Url A fallback url They are  static properties in the ControlResources class: public class ControlResources { /// <summary> /// Determines what location jQuery is loaded from /// </summary> public static JQueryLoadModes jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryCdnUrl = "//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryUiCdnUrl = "//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"; /// <summary> /// jQuery UI fallback Url if CDN is unavailable or WebResource is used /// Note: The file needs to exist and hold the minimized version of jQuery ui /// </summary> public static string jQueryUiLocalFallbackUrl = "~/scripts/jquery-ui.min.js"; } These static properties are fixed values that can be changed at application startup to reflect your preferences. Since they're static they are application wide settings and respected across the entire Web application running. It's best to set these default in Application_Init or similar startup code if you need to change them for your application: protected void Application_Start(object sender, EventArgs e) { // Force jQuery to be loaded off Google Content Network ControlResources.jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; // Allow overriding of the Cdn url ControlResources.jQueryCdnUrl = "http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"; // Route to our own internal handler App.OnApplicationStart(); } With these basic settings in place you can then embed expressions into a page easily. In WebForms use: <!DOCTYPE html> <html> <head runat="server"> <%= ControlResources.jQueryLink() %> <script src="scripts/ww.jquery.min.js"></script> </head> In Razor use: <!DOCTYPE html> <html> <head> @Html.Raw(ControlResources.jQueryLink()) <script src="scripts/ww.jquery.min.js"></script> </head> Note that in Razor you need to use @Html.Raw() to force the string NOT to escape. Razor by default escapes string results and this ensures that the HTML content is properly expanded as raw HTML text. 
Both the WebForms and Razor output produce: <!DOCTYPE html> <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634535391996872492' type='text/javascript'%3E%3C/script%3E"));</script> <script src="scripts/ww.jquery.min.js"></script> </head> which produces the desired effect for both CDN load and fallback URL. The implementation of jQueryLink is pretty basic of course: /// <summary> /// Inserts a script link to load jQuery into the page based on the jQueryLoadModes settings /// of this class. Default load is by CDN plus WebResource fallback /// </summary> /// <param name="url"> /// An optional explicit URL to load jQuery from. Url is resolved. /// When specified no fallback is applied /// </param> /// <returns>full script tag and fallback script for jQuery to load</returns> public static string jQueryLink(JQueryLoadModes jQueryLoadMode = JQueryLoadModes.Default, string url = null) { string jQueryUrl = string.Empty; string fallbackScript = string.Empty; if (jQueryLoadMode == JQueryLoadModes.Default) jQueryLoadMode = ControlResources.jQueryLoadMode; if (!string.IsNullOrEmpty(url)) jQueryUrl = WebUtils.ResolveUrl(url); else if (jQueryLoadMode == JQueryLoadModes.WebResource) { Page page = new Page(); jQueryUrl = page.ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); } else if (jQueryLoadMode == JQueryLoadModes.ContentDeliveryNetwork) { jQueryUrl = ControlResources.jQueryCdnUrl; if (!string.IsNullOrEmpty(jQueryCdnUrl)) { // check if jquery loaded - if it didn't we're not online and use WebResource fallbackScript = @"<script type=""text/javascript"">if (typeof(jQuery) == 'undefined') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));</script>"; fallbackScript = string.Format(fallbackScript, WebUtils.ResolveUrl(ControlResources.jQueryCdnFallbackUrl)); } } string output = "<script src=\"" + jQueryUrl + "\" type=\"text/javascript\"></script>"; // add in the CDN fallback script code if (!string.IsNullOrEmpty(fallbackScript)) output += "\r\n" + fallbackScript + "\r\n"; return output; } There's one dependency here on WebUtils.ResolveUrl() which resolves Urls without access to a Page/Control (another one of those features that should be in the runtime, not in the WebForms or MVC engine). You can see there's only a little bit of logic in this code that deals with potentially different load modes. I can load scripts from a Url, WebResources or - my preferred way - from CDN. Based on the static settings the scripts to embed are composed to be returned as simple string <script> tag(s). I find this extremely useful especially when I'm not connected to the internet so that I can quickly swap in a local jQuery resource instead of loading from CDN. While CDN loading with the fallback works it can be a bit slow as the CDN is probed first before the fallback kicks in. Switching quickly in one place makes this trivial. It also makes it very easy once a new version of jQuery rolls around to move up to the new version and ensure that all pages are using the new version immediately. 
I'm not trying to make this out as 'the' definitive way to load your resources, but rather provide it here as a pointer so you can maybe apply your own logic to determine where scripts come from and how they load. You could even automate this some more by using configuration settings or reading the locations/preferences out of some sort of data/metadata store that can be dynamically updated instead of requiring recompilation. FWIW, I use a very similar approach for loading jQuery UI and my own ww.jquery library - the same concept can be applied to any kind of script you might be loading from different locations. Hopefully some of you find this a useful addition to your toolset. Resources: Google CDN for jQuery, Full ControlResources Source Code, ControlResource Documentation, Westwind.Web NuGet. This method is part of the Westwind.Web library of the West Wind Web Toolkit, or you can grab the Web library from NuGet and add it to your Visual Studio project. This package includes a host of Web related utilities and script support features. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery.

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l. Response.RedirectPermanent ASP.NET 4.0 includes features to issue a permanent redirect that issues as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect respond. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and Status code properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly in ASP.NET 4.0, you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serverAutoStartEnabled on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.  The configuration settings need to be made in applicationhost.config: <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders> Hooking up a warm up provider is optional so you can omit the provider definition and reference. If you do define it here's what it looks like: public class PreWarmupProvider System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } } This code fires and while it's running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag. 
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
This morning we released a major set of updates to Windows Azure. These updates included:

- Web Sites: General Availability release of Windows Azure Web Sites with SLA
- Mobile Services: General Availability release of Windows Azure Mobile Services with SLA
- Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines
- Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines)
- MSDN: No more credit card requirement for sign-up

All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.

Web Sites: General Availability Release of Windows Azure Web Sites

I'm incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.

Today's General Availability release means we are taking off the "preview" tag from the Free and Standard (formerly called Reserved) tiers of Windows Azure Web Sites. This means we are providing:

- A 99.9% monthly SLA (Service Level Agreement) for the Standard tier
- Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support)

The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost.

The Standard tier, which was called "Reserved" during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different web sites within them. You can easily scale your Standard instances on demand using the Windows Azure Management Portal. You can adjust VM instance sizes from a Small instance (1 core, 1.75GB of RAM), up to a Medium instance (2 cores, 3.5GB of RAM), or a Large instance (4 cores and 7GB of RAM). You can choose to run between 1 and 10 Standard instances, enabling you to easily scale your web backend up to 40 cores of CPU and 70GB of RAM.

Today's release also includes general availability support for custom domain SSL certificate bindings for web sites running in the Standard tier. Customers will be able to use certificates they purchase for their custom domains with either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address. SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of an SSL cert – but the fee is per cert and not per site, which means you pay once for it regardless of how many sites you use it with).

Today's release also includes the following new features:

Auto-Scale support

Today's Windows Azure release adds preview support for auto-scaling web sites. This enables you to set up automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases. See below for more details.

64-bit and 32-bit mode support

You can now choose to run your Standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).
This enables you to address even more memory within individual web applications.

Memory dumps

Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use to investigate issues in the Visual Studio Debugger, WinDbg, and other tools.

Scaling Sites Independently

Prior to today's release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary. Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to the Standard tier when they require the features, scalability, and SLA of the Standard tier.

Full pricing details on Windows Azure Web Sites can be found here. Note that the "Shared" tier of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).

Mobile Services: General Availability Release of Windows Azure Mobile Services

I'm incredibly excited to announce the General Availability release of Windows Azure Mobile Services. Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.

Customers

We've seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it. These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require. In today's Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be.

Partners

When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third-party services. The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich, modern apps:

- New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services.
- SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox.
- Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps.
- Xamarin provides a Mobile Services add-on to make it easy to build cross-platform connected mobile apps.
- Pusher lets you quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps.

Visual Studio 2013 and Windows 8.1

This week, during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever.
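Before walking through the tooling, here is a rough, hand-written C# sketch of the kind of client code the sections below describe. It is not code from this post: the service URL, application key and TodoItem type are placeholders, and it assumes the Windows Azure Mobile Services client SDK (Microsoft.WindowsAzure.MobileServices) has been referenced.

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.MobileServices;

    public class TodoItem
    {
        public string Id { get; set; }
        public string Text { get; set; }
        public bool Complete { get; set; }
    }

    public static class AzureBackend
    {
        // Placeholder URL and application key; the real values come from your
        // mobile service's dashboard in the Windows Azure Management Portal.
        public static readonly MobileServiceClient MobileService =
            new MobileServiceClient(
                "https://your-service.azure-mobile.net/",
                "YOUR-APPLICATION-KEY");

        // Insert a record and read back the incomplete items.
        public static async Task<List<TodoItem>> SaveAndQueryAsync()
        {
            IMobileServiceTable<TodoItem> table =
                MobileService.GetTable<TodoItem>();

            await table.InsertAsync(new TodoItem { Text = "Hello Mobile Services" });

            return await table.Where(item => !item.Complete).ToListAsync();
        }
    }

The Add Connected Service tooling described next generates roughly the client initialization portion of this for you.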
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking the project and choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically.

Add Push Notifications

Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking the Add a Push Notification item. The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app.

Server Explorer Integration

In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, and edit and save server-side scripts, without ever leaving Visual Studio.

Pricing

With today's general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium. Each tier is metered using a simple pricing model based on the number of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs. You can elastically scale the number of instances of each tier up or down to increase the number of API requests your service can support, allowing you to efficiently scale as your business grows. You can find the full details of the new pricing model here.

Build Conference Talks

The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can't be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks:

- Mobile Services – Soup to Nuts — Josh Twist
- Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner
- Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev
- Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris
- Who's that user? Identity in Mobile Apps — Dinesh Kulkarni
- Building REST Services with JavaScript — Nathan Totten
- Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum
- Protips for Windows Azure Mobile Services — Chris Risner

AutoScale: Dynamically scale up/down your app based on real-world usage

One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to scale it automatically. Today, we're announcing that AutoScale will be built into Windows Azure directly. With today's release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal balance of performance and cost. Once configured, it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics:

- CPU percentage
- Storage queue depth (Cloud Services and Virtual Machines only)

We'll enable automatic scaling on even more scale metrics in future updates.

When to use Auto-Scale

The following are good criteria for services/apps that will benefit from auto-scale:

- The service/app can scale horizontally (e.g. it can be duplicated to multiple instances)
- The service/app load changes over time

If your app meets these criteria, then you should look to leverage auto-scale.

How to Enable Auto-Scale

To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable. Within the Scale tab, turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale. Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain.

As an example of enabling Auto-Scale on a Windows Azure Web Site, I've configured a web site so that it will run using between 1 and 5 VM instances. The exact number used will depend on the aggregate CPU of the VMs, based on the 40-70% range I've configured. If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I've configured it to use). If the aggregate CPU drops below 40%, then Windows Azure will automatically start shutting down VMs to save me money.

Once you've turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances.

Using the Auto-Scale Preview

With today's update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During the preview, each subscription is limited to 10 separate auto-scale rules across all of the resources it has (Web Sites, Cloud Services or Virtual Machines). If you hit the 10-rule limit, you can disable auto-scale for any resource to enable it for another.

Alerts and Notifications

Starting today we are now providing the ability to configure threshold-based alerts on monitoring metrics. This feature is available for compute services (Cloud Services, Virtual Machines, Web Sites and Mobile Services). Alerts provide you the ability to get proactively notified of active or impending issues within your application. You can define alert rules for:

- Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec), and monitoring metrics from web endpoint URL monitoring (response time and uptime) that you have configured.
- Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM), and monitoring metrics from web endpoint URL monitoring (response time and uptime) that you have configured.
- For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from endpoint URL monitoring (response time and uptime) that you have configured.

Creating Alert Rules

You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description, then pick the service you want to define the alert rule on. The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected. Once created, the rule will show up in your alerts list within the Settings tab.

The rule just created is shown as "not activated" since it hasn't tripped over the CPU threshold we set. If the CPU on the machine goes over the limit, though, I'll get an email notifying me from a Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the Alerts tab I'll see it highlighted in red. Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past.

Alert Notifications

With today's initial preview you can now easily create alerting rules based on monitoring metrics and get notified of active or impending issues within your application that require attention. During the preview, each subscription is limited to 10 alert rules across all of the services that support alert rules.

No More Credit Card Requirement for MSDN Subscribers

Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post.

Today we are making further updates to enable an easier Windows Azure sign-up for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing payment information and removing the spending limit.

This enables a super easy, one-page sign-up experience for MSDN users. Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one-page sign-up form and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test. This makes it trivially easy for every MSDN customer to start using Windows Azure today. If you haven't signed up yet, I definitely recommend checking it out.

Summary

Today's release includes a ton of great features that enable you to build even better cloud solutions. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
