Search Results

Search found 26256 results on 1051 pages for 'information science'.


  • Retweet button in asp.net site

    - by Zerotoinfinite
    Hi all, I am using ASP.NET 3.5 and C# for my personal blog site. I want to include a retweet/tweet button on every post I have made, and I have a few questions about this. I have created a Twitter account under my website's name. Since I want to track the individual tweets for each of my posts, do I have to create a new account for each post, or add a new list for each post? Each post has a URL like www.mywebsite.net/myblog.aspx?id=9, and I identify the post by this id. How would I build the reference URL for the retweet button for each and every post? Thanks in advance. Please let me know if more details are needed.
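
    One common approach (a rough sketch, not from the question) is to point a single "Tweet" link at Twitter's Web Intent URL and encode each post's URL into the query string, so no extra accounts or lists are needed. The HyperLink control name below is an assumption:

        // Code-behind sketch (C#, ASP.NET 3.5). Assumes an <asp:HyperLink ID="lnkTweet" runat="server" />
        // on myblog.aspx; the domain and query-string format come from the question.
        using System;
        using System.Web;
        using System.Web.UI;

        public partial class MyBlog : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string postId = Request.QueryString["id"];                 // e.g. "9"
                string postUrl = "http://www.mywebsite.net/myblog.aspx?id=" + postId;

                // One tweet button per post: the post URL rides along in the intent link.
                lnkTweet.NavigateUrl = "https://twitter.com/intent/tweet?url=" +
                                       HttpUtility.UrlEncode(postUrl);
                lnkTweet.Text = "Tweet this post";
            }
        }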

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent ensuring that the bits that arrived matched was was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the affects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size) For some of the moderately-sized source files, small blocks (256KB) are best As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file.    (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

Related Resources:
- Source code for upload test application
- Source code for random file generator
- OData feed of raw data from non-optimized transfer tests (experiment metadata, experiment datasets, and raw data for the 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads)
- OData feeds of raw data from blocked/parallelized transfer tests (experiment metadata, experiment datasets, and raw data for the 256KB, 512KB, 1MB, 2MB, and 4MB blocks)
- Excel worksheet showing summarizations and comparisons
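
As a rough illustration of the blocking/parallel idea described above (not the author's linked source; names and sizes are placeholders), a minimal upload routine against the 1.x StorageClient API might look like this:

// Sketch only: splits a local file into fixed-size blocks and uploads them in
// parallel, then commits the block list. For brevity the blocks are buffered in
// memory; the real test harness streams them and also sends per-block MD5s.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.StorageClient;

static class BlockedUploader
{
    public static void Upload(CloudBlockBlob blob, string path, int blockSize)
    {
        var blockIds = new List<string>();
        var blocks = new List<byte[]>();

        using (var fs = File.OpenRead(path))
        {
            var buffer = new byte[blockSize];
            int read, index = 0;
            while ((read = fs.Read(buffer, 0, blockSize)) > 0)
            {
                var chunk = new byte[read];
                Array.Copy(buffer, chunk, read);
                blocks.Add(chunk);
                // Block IDs must be base64 strings of equal length within a blob.
                blockIds.Add(Convert.ToBase64String(BitConverter.GetBytes(index++)));
            }
        }

        // Upload the blocks in parallel, then assemble/commit the blob.
        Parallel.For(0, blocks.Count, i =>
            blob.PutBlock(blockIds[i], new MemoryStream(blocks[i]), null));
        blob.PutBlockList(blockIds);
    }
}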

    Read the article

  • how to match a regular expression like (%i1) in python pexpect

    - by mike
    I want to use Maxima from Python using pexpect. Whenever Maxima starts it will print a bunch of stuff of this form: $ maxima Maxima 5.27.0 http://maxima.sourceforge.net using Lisp SBCL 1.0.57-1.fc17 Distributed under the GNU Public License. See the file COPYING. Dedicated to the memory of William Schelter. The function bug_report() provides bug reporting information. (%i1) I would like to start up pexpect like so: import pexpect cmd = 'maxima' child = pexpect.spawn(cmd) child.expect('match all that stuff up to and including (%i1)') child.sendline('integrate(sin(x),x)') child.expect('match (%o1)') print child.before How do I match the starting banner up to the prompt (%i1)? Also, Maxima increments the (%i1) number by one as the session goes along, so the next expect would be: child.expect('match (%i2)') child.sendline('integrate(sin(x),x)') child.expect('match (%o2)') print child.before How do I match the (incrementing) integers?
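
    A minimal sketch of one way to do this with pexpect: pass a regular expression (not a literal) and escape the parentheses, so any prompt number matches. The trailing semicolon on the command is required by Maxima itself.

        # Python sketch: match "(%i<n>)" / "(%o<n>)" for any n.
        import pexpect

        child = pexpect.spawn('maxima')
        child.expect(r'\(%i\d+\) ')          # banner + first input prompt, whatever its number

        child.sendline('integrate(sin(x),x);')
        child.expect(r'\(%o\d+\)')           # the output label, e.g. (%o1), (%o2), ...
        print(child.before)

        child.expect(r'\(%i\d+\) ')          # next input prompt; \d+ absorbs the incrementing integer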

    Read the article

  • Entropy using Decision Trees

    - by Matt Clements
    Train a decision tree on the data represented by attributes A1, A2, A3 and outcome C described below:

    A1  A2  A3 | C
     1   0   1 | 0
     0   1   1 | 1
     0   0   1 | 0

    For log2(1/3) = 1.6 and log2(2/3) = 0.6, answer the following questions:
    a) What is the value of the entropy H for the given set of training examples?
    b) What proportion of the positive samples is split off by attribute A2?
    c) What is the value of the information gain, G(A2), of attribute A2?
    d) What are the IF-THEN rule(s) for the decision tree?
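
    For part (a), assuming the quoted logarithms are meant as magnitudes (log2(1/3) is actually about -1.58), the entropy of this three-example set (one positive, two negative outcomes) works out as follows; this is a standard calculation, not part of the original question:

        % Entropy of the training set C = {0, 1, 0}
        H(C) = -\tfrac{1}{3}\log_2\tfrac{1}{3} - \tfrac{2}{3}\log_2\tfrac{2}{3}
             \approx \tfrac{1}{3}(1.6) + \tfrac{2}{3}(0.6)
             \approx 0.93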

    Read the article

  • Getting browser culture using javascript

    - by The Sheek Geek
    Does anyone know how to obtain the browser culture from Firefox and Google Chrome using JavaScript? Note: this is an ASP.NET 3.5 web application. The requirement is to try to set the application's display culture based on the browser culture. I have found a few bits and pieces of information for the other browsers, but they do not seem to work. I am able to get it in IE with the following snippet of code: var browserCulture = this.clientInformation.browserLanguage; Any info would be great!
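
    A minimal sketch of the usual fallback chain (navigator.language is reported by Firefox/Chrome/Safari, userLanguage/browserLanguage by IE); the default value is an assumption:

        // JavaScript: read the browser's UI culture with vendor fallbacks.
        var browserCulture = navigator.language
            || navigator.userLanguage
            || navigator.browserLanguage
            || 'en-US';   // assumed fallback when nothing is reported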

    Read the article

  • perl split on empty file

    - by Casey
    I have basically the following Perl I'm working with: open I,$coupon_file or die "Error: File $coupon_file will not Open: $! \n"; while (<I>) { $lctr++; chomp; my @line = split/,/; if (!@line) { print E "Error: $coupon_file is empty!\n\n"; $processFile = 0; last; } } I'm having trouble determining what the split/,/ function returns if an empty file is given to it. The code block if (!@line) is never executed. If I change that to if (@line) then the code block is executed. I've read the information on the Perl split function over at http://perldoc.perl.org/functions/split.html and the discussion here about testing for an empty array, but I'm not sure what is going on here. I am new to Perl so am probably missing something straightforward here. Thanks.
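
    A sketch of one way to handle it (variable names are illustrative): an empty file never enters the while loop at all, so the emptiness test has to happen outside the loop, for example with the -z file test or by counting the lines read.

        # Perl sketch: detect an empty input file before/after the read loop.
        open my $in, '<', $coupon_file
            or die "Error: File $coupon_file will not open: $!\n";

        print "Error: $coupon_file is empty!\n" if -z $coupon_file;   # zero-byte file

        my $rows = 0;
        while (my $line = <$in>) {
            $rows++;
            chomp $line;
            my @fields = split /,/, $line;   # empty list only for a blank line
            # ... process @fields ...
        }
        print "Error: $coupon_file had no data rows!\n" if $rows == 0;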

    Read the article

  • Crashing the OS X Pasteboard

    - by Ben Packard
    I have an application that reads in text by emulating CMD-C copy commands and reading the pasteboard - unfortunately this is the only way to achieve what I need. Occasionally, something goes wrong during execution (not sure yet if it's related to the copy command or not) and the app crashes. Once in a while, this has a knock-on effect on the system-wide pasteboard - any other application that is running will crash if I attempt a copy, cut, or paste. Is there a robust way to handle this - something I should be doing with the NSPasteboard before exiting? Any information on what might be happening is appreciated. For completeness, here are the only snippets of code that access the pasteboard: Reading from the pasteboard: NSString *pBoardText = [[NSPasteboard generalPasteboard] stringForType:NSStringPboardType]; Initially clearing the pasteboard (I run this only once, at launch): [[NSPasteboard generalPasteboard] declareTypes: [NSArray arrayWithObject:NSStringPboardType] owner: self]; [[NSPasteboard generalPasteboard] setString: @"" forType: NSStringPboardType];
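
    It is hard to say what corrupts the pasteboard without more detail, but one defensive pattern (a sketch, and only a guess at the cause) is to declare types with a nil owner immediately before each write, so no other process ever waits on this app to furnish promised data after a crash, and to nil-check every read:

        // Objective-C sketch: defensive pasteboard access.
        NSPasteboard *pb = [NSPasteboard generalPasteboard];

        // Reading: tolerate an empty or foreign-typed pasteboard.
        NSString *pBoardText = [pb stringForType:NSStringPboardType];
        if (pBoardText == nil) {
            // nothing usable was copied; skip this iteration instead of assuming success
        }

        // Writing/clearing: re-declare with a nil owner right before each write,
        // rather than declaring once at launch with owner:self.
        [pb declareTypes:[NSArray arrayWithObject:NSStringPboardType] owner:nil];
        [pb setString:@"" forType:NSStringPboardType];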

    Read the article

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTs), but mostly queries for one or more keywords. I have read "Tagsystems: performance tests", which is MySQL-based, and 2NF appears to be a good method for implementing this; however, I was wondering if anyone had experience doing this with SQL Server 2008 and very large datasets. I am likely to initially have 1 million key fields, each of which could have up to 50 keywords. Would a single wide structure of keyfield, keyword1, keyword2, ..., keyword50 be the best solution, or would two tables - KeyFields (keyid, keyfield) related one-to-many to Keywords (keyid, keyword) - be a better idea if my queries are mostly going to be looking for results that have one or more keywords?
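
    A minimal sketch of the normalized two-table layout with a keyword lookup (names and types are illustrative, SQL Server 2008 syntax):

        CREATE TABLE KeyFields (
            KeyId    INT IDENTITY(1,1) PRIMARY KEY,
            KeyField NVARCHAR(200) NOT NULL
        );

        CREATE TABLE Keywords (
            KeyId   INT NOT NULL REFERENCES KeyFields(KeyId),
            Keyword NVARCHAR(100) NOT NULL
        );

        -- Keyword-first index so "find key fields by keyword" is a seek.
        CREATE INDEX IX_Keywords_Keyword ON Keywords (Keyword, KeyId);

        -- Key fields that carry any of the requested keywords.
        SELECT DISTINCT kf.KeyField
        FROM KeyFields kf
        JOIN Keywords  kw ON kw.KeyId = kf.KeyId
        WHERE kw.Keyword IN ('alpha', 'beta');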

    Read the article

  • In C#, how do you send a refresh/repaint message to a WPF grid or canvas?

    - by xarzu
    How do you send a refresh message to a WPF grid or canvas? In other words, I have noticed while in debug mode, I can write code that sends a line to the display and then, if that line is not right, I can adjust it -- but the previous line is still there. Now, the code I am writing sends information to the display based on what the user clicks. So this must mean that the display is not refreshed each time a new set of lines and boxes and text goes to the grid or canvas in WPF. Using C# code, how do you send a refresh/repaint message to a WPF grid or canvas?
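
    A minimal sketch of one common fix: the old shapes stay on screen because they are never removed, so clear the panel's Children collection before re-drawing (the Canvas name is an assumption); an explicit InvalidateVisual() is rarely needed after that.

        // C# sketch: redraw a Canvas from scratch on each user click.
        private void Redraw()
        {
            drawingCanvas.Children.Clear();              // drop the stale lines/boxes/text

            var line = new System.Windows.Shapes.Line
            {
                X1 = 0, Y1 = 0, X2 = 100, Y2 = 100,
                Stroke = System.Windows.Media.Brushes.Black,
                StrokeThickness = 1
            };
            drawingCanvas.Children.Add(line);            // WPF re-renders automatically

            // drawingCanvas.InvalidateVisual();         // only if you need to force a repaint
        }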

    Read the article

  • SQL Server 2008 Pivot

    - by Mitch
    I need to show some information in a graph; the data is held in a SQL Server 2008 table. The graph expects 2 columns, one for QuestionNumber and the other for Score. The table containing the data has column names that correspond to the question numbers, i.e. A1, A2, A3, A4, B1, B2, B3, B4, C1, C2. Each question is given a score of 1 to 5. I need to show a graph where the X axis shows A1, A2, A3, etc. and the Y axis shows the score. I'm thinking I somehow need to rotate the data to achieve this, but I'm not sure how. Maybe a different technique can achieve this rather than a rotate, so I'm open to any ideas.
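
    A minimal sketch using UNPIVOT (available in SQL Server 2005 and later); the table name is an assumption:

        SELECT QuestionNumber, Score
        FROM (
            SELECT A1, A2, A3, A4, B1, B2, B3, B4, C1, C2
            FROM dbo.SurveyResults            -- assumed table name
        ) AS src
        UNPIVOT (
            Score FOR QuestionNumber IN (A1, A2, A3, A4, B1, B2, B3, B4, C1, C2)
        ) AS upvt
        ORDER BY QuestionNumber;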

    Read the article

  • How to add dimensions to dynamic img elements

    - by Mohammad
    I use a JSON call to get a list of image addresses, then I add them individually to a div like this: <div id="container"> <img src="A.jpg" alt="" /> <img src="B.jpg" alt="" /> ... </div> Unfortunately the image dimensions are not part of the JSON data, but I do need them for later jQuery DOM interactivity. Do any of you jQuery geniuses know of code that would reliably add the width and height to the individual image elements in the container after they load? I was thinking the code could wait for the images to have a width bigger than 5px and then add the new width and height to the element, but I wouldn't know how to go about that and make it work reliably. Thank you so much!
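
    A minimal sketch of one approach: bind a load handler per image and copy the natural size onto the element, also covering images that were already cached before the handler was attached:

        // jQuery sketch (1.x-era .load() event binding; use .on('load', ...) in newer versions).
        $('#container img').each(function () {
            var img = this;
            function record() {
                $(img).attr('width', img.width).attr('height', img.height);
            }
            $(img).load(record);
            if (img.complete && img.width > 0) {   // already loaded from cache
                record();
            }
        });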

    Read the article

  • Sharepoint checkin/checkout

    - by Prashanth
    We have a SharePoint-based application that uses a custom database for storing metadata/files (which could also be on a file share). My question is: how can the standard file check-in/check-out option in a document library be customized? The JavaScript file ows.js in the layouts folder contains the functions that provide the check-in/check-out/open-file functionality. Behind the scenes it relies on a combination of HTTP POST/GET requests, SOAP, and an ActiveX control to achieve the desired functionality. Customizing these JavaScript functions seems tedious and error-prone. Note that we have a web service that exposes endpoints for retrieving the necessary file information/data from the backend. The difficulty is in integrating it with the SharePoint JS functions, due to the lack of proper documentation. (Also, the JS functions might change across different versions of SharePoint.) Is it also possible to create/open files in the cache area on the client machine from server-side code?

    Read the article

  • How to tune ASP.NET CreateUserWizard?

    - by Max
    I have created an ASP.NET WebForms site on IIS 7.5 and want to build a step-by-step user registration. I want to store the basic and detailed information about registered users in a specially created database table (not in the aspnet_Users table). I want to validate the email first and prevent the next registration step if the email address already exists in the database. At the last registration step I want to present a summary form, in which all previous input and select fields are duplicated with the "disabled" attribute. How should I adjust the ASP.NET CreateUserWizard control and the web.config file for these needs?
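
    For the early e-mail check, one possible hook (a sketch; the control ID, connection string name, table, and error Label are assumptions) is the wizard's CreatingUser event, which can cancel the step when the address already exists in the custom table:

        // C# code-behind sketch.
        protected void CreateUserWizard1_CreatingUser(object sender, LoginCancelEventArgs e)
        {
            string email = CreateUserWizard1.Email;

            using (var conn = new SqlConnection(
                       ConfigurationManager.ConnectionStrings["SiteDb"].ConnectionString))
            using (var cmd = new SqlCommand(
                       "SELECT COUNT(*) FROM RegisteredUsers WHERE Email = @Email", conn))
            {
                cmd.Parameters.AddWithValue("@Email", email);
                conn.Open();
                if ((int)cmd.ExecuteScalar() > 0)
                {
                    lblEmailError.Text = "That e-mail address is already registered.";  // assumed Label
                    e.Cancel = true;   // stay on the current wizard step
                }
            }
        }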

    Read the article

  • Redirect and parse in real time the stdout of a long-running process in vb.net

    - by Richard
    Hello there, this code executes "handbrakecli" (a command-line application) and places the output into a string: Dim p As Process = New Process p.StartInfo.FileName = "handbrakecli" p.StartInfo.Arguments = "-i [source] -o [destination]" p.StartInfo.UseShellExecute = False p.StartInfo.RedirectStandardOutput = True p.Start Dim output As String = p.StandardOutput.ReadToEnd p.WaitForExit The problem is that this can take up to 20 minutes to complete, during which nothing is reported back to the user. Once it has completed, they see all the output from the application, including the progress details. Not very useful. Therefore I'm trying to find a sample that shows the best way to:
      - Start an external application (hidden)
      - Monitor its output periodically as it displays information about its progress (so I can extract this and present a nice percentage bar to the user)
      - Determine when the external application has finished (so I can continue with my own application's execution)
      - Kill the external application if necessary and detect when this has happened (so that if the user hits "cancel", I can take the appropriate steps)
    Does anyone have any recommended code snippets?
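
    A minimal sketch of the usual pattern: redirect stdout but read it asynchronously with OutputDataReceived instead of blocking on ReadToEnd, and use the Exited event plus Kill() for completion and cancellation. The progress-line format is an assumption; adjust the parsing to HandBrakeCLI's real output.

        ' VB.NET sketch.
        Imports System.Diagnostics

        Public Class HandbrakeRunner
            Private p As Process

            Public Sub StartConversion(ByVal args As String)
                p = New Process()
                p.StartInfo.FileName = "handbrakecli"
                p.StartInfo.Arguments = args
                p.StartInfo.UseShellExecute = False
                p.StartInfo.RedirectStandardOutput = True
                p.StartInfo.CreateNoWindow = True          ' run hidden
                p.EnableRaisingEvents = True
                AddHandler p.OutputDataReceived, AddressOf OnOutputLine
                AddHandler p.Exited, AddressOf OnExited
                p.Start()
                p.BeginOutputReadLine()                    ' lines now arrive via OnOutputLine
            End Sub

            Public Sub Cancel()
                If p IsNot Nothing AndAlso Not p.HasExited Then p.Kill()
            End Sub

            Private Sub OnOutputLine(ByVal sender As Object, ByVal e As DataReceivedEventArgs)
                If e.Data IsNot Nothing Then
                    ' Look for a percentage in e.Data and update the progress bar
                    ' (marshal to the UI thread if this is a WinForms/WPF app).
                End If
            End Sub

            Private Sub OnExited(ByVal sender As Object, ByVal e As EventArgs)
                ' The external process finished, normally or via Kill().
            End Sub
        End Class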

    Read the article

  • VS2010 patch: why does it take so much time to install?

    - by Mendy
    Visual Studio 2010 RC has a few patch releases. For more information about them take a look here. What I expect from a patch program is to replace a few DLLs of the program with new, fixed versions. But when I run each of these three patches, they take a lot of time (5 minutes each), and you would think the program had frozen because the progress bar stays at the beginning. This question may not be very important, but I'm really interested to know why this happens. It's confusing to see each VS2010 patch (or Microsoft installer in general) appear frozen for 4-5 minutes.

    Read the article

  • OAuth2 Flow for Mobile Devices

    - by Bart Jedrocha
    We're currently working on an API that will be consumed by a variety of different devices. We want to use the OAuth2 spec as it defines several flows which were not available in the original OAuth spec. My question is, what flow would work best for a mobile device such as the iPhone or iPad? What flow does an application like TweetDeck use? Looking around the web it seems clients like TweetDeck use the 'Username and Password Credentials Flow" (browserless token exchange). Can anyone provide more information on this topic?

    Read the article

  • Where does form processing logic belong in a MVC web application?

    - by AdamTheHutt
    In a web-based application that uses the Model-View-Controller design pattern, the logic relating to processing form submissions seems to belong somewhere in between the Model layer and the Controller layer. This is especially true in the case of a complex form (i.e. where form processing goes well beyond simple CRUD operations). What's the best way to conceptualize this? Are forms simply a kind of glue between models and controllers? Or does form logic belong squarely in the M or C camp? EDIT: I understand the basic flow of information in an MVC application (see chills42's answer for a summary). My question is where the form processing logic belongs - in the controller, in the model, or somewhere else?

    Read the article

  • Multiple Tables or Multiple Schema

    - by Yan Cheng CHEOK
    http://stackoverflow.com/questions/1152405/postgresql-is-better-using-multiple-databases-with-1-schema-each-or-1-database I am new to the schema concept in PostgreSQL. For the above-mentioned scenario, I was wondering: why don't we use a single database (with the default schema named public)? Why don't we have a single table to store the rows of multiple users, plus other tables that hold user-related information with foreign keys pointing to the user table? Can anyone provide a real-world scenario in which a single database with multiple schemas is extremely useful and cannot be handled by a conventional single database with a single schema?
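
    One concrete case (an illustration, not from the linked question) is multi-tenant hosting: one database, one schema per tenant with an identical table layout, so each customer's data is isolated and can be backed up or dropped on its own, while roles, extensions, and cross-tenant admin queries stay shared:

        -- PostgreSQL sketch.
        CREATE SCHEMA tenant_a;
        CREATE SCHEMA tenant_b;

        CREATE TABLE tenant_a.users (id serial PRIMARY KEY, name text);
        CREATE TABLE tenant_b.users (id serial PRIMARY KEY, name text);

        -- Point a connection at one tenant without changing any query text:
        SET search_path TO tenant_a, public;
        SELECT * FROM users;   -- resolves to tenant_a.users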

    Read the article

  • Zend_Soap_Client doesn't work with proxy

    - by understack
    I'm accessing a SOAP web service like : $wsdl_url = 'http://abslive3.timesgroup.com:8888/clsRSchedule.soap?wsdl' ; $client = new Zend_Soap_Client($wsdl_url, array('proxy_host'=>"http://virtual-browser.25u.com" , 'proxy_port'=>80)); Since my shared server blocks port 8888, I'm using this proxy server. But Zend Soap Client tries to directly connect it. Exception information: Message: SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://abslive3.timesgroup.com:8888/clsRSchedule.soap?wsdl' : failed to load external entity "http://abslive3.timesgroup.com:8888/clsRSchedule.soap?wsdl" Stack trace: #0 /home/..../library/Zend/Soap/Client/Common.php(51): SoapClient->SoapClient('http://abslive3...', Array) #1 /home/..../library/Zend/Soap/Client.php(1024): Zend_Soap_Client_Common->__construct(Array, 'http://abslive3...', Array) #2 /home/..../library/Zend/Soap/Client.php(1180): Zend_Soap_Client->_initSoapClientObject() #3 /home/..../library/Zend/Soap/Client.php(1104): Zend_Soap_Client->getSoapClient() #4 [internal function]: Zend_Soap_Client->__call('ReturnDataSet', Array) What am I doing wrong?

    Read the article

  • Why doesn't Maven's mvn clean ever work the first time?

    - by hoffmandirt
    Nine times out of ten when I run mvn clean on my projects I experience a build error. I have to execute mvn clean multiple times until the build error goes away. Does anyone else experience this? Is there any way to fix this within Maven? If not, how do you get around it? I wrote a bat file that deletes the target folders and that works well, but it's not practical when you are working on multiple projects. I am using Maven 2.2.1. [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to delete directory: C:\Documents and Settings\user\My Documents\software-developm ent\a\b\c\application-domain\target. Reason: Unable to delete directory C:\Documen ts and Settings\user\My Documents\software-development\a\b\c\application-domai n\target\classes\com\a\b [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 6 seconds [INFO] Finished at: Fri Oct 23 15:22:48 EDT 2009 [INFO] Final Memory: 11M/254M [INFO] ------------------------------------------------------------------------

    Read the article

  • Ensure connection to a POSPrinter connected via COM

    - by Alexander
    Hi, I need to make sure that the connection to a POS printer is successful before writing data to the database and then printing a receipt. The POSprinter is normally of type BTP 2002NP but may differ. The common thing is that they are all connected via COM-port and NOT usb, so no drivers installed at all on the client. Can I send some kind of "ping" on a COM-port and check if a device is connected and turned on? Any help or suggestions are very much appreciated. Additional information, the application is developed in VB.net and Visual Studio 2008
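
    A serial port will usually open even when nothing is attached, so a bare Open() only proves the COM port exists. One hedged approach (VB.NET sketch; whether the BTP 2002NP answers the ESC/POS "transmit status" request DLE EOT 1 is an assumption, so check the printer manual) is to send a status query and wait briefly for a reply:

        Imports System.IO.Ports

        Public Function PrinterSeemsPresent(ByVal portName As String) As Boolean
            Try
                Using port As New SerialPort(portName, 9600, Parity.None, 8, StopBits.One)
                    port.ReadTimeout = 1000
                    port.WriteTimeout = 1000
                    port.Open()

                    ' ESC/POS real-time status request: DLE EOT 1.
                    port.Write(New Byte() {&H10, &H4, &H1}, 0, 3)
                    Dim status As Integer = port.ReadByte()   ' TimeoutException if the printer is silent/off
                    Return True
                End Using
            Catch ex As Exception
                Return False      ' port missing, in use, or printer not responding
            End Try
        End Function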

    Read the article

  • How to determine if an application is using the GPU

    - by Andrew
    I'm looking for a way to determine how to know whether an application is using the GPU with Objective-C. I want to be able to determine if any applications currently running on the system have work going on on the GPU (ie: a reason why the latest MacBook Pros would switch to the discrete graphics over the Intel HD graphics). I've tried getting the information by crossing the list of active windows with the list of windows that have their backing location stored in video memory using Quartz Window Services, but all that does is return the Dock application and I have other applications open that I know are using the GPU (Photoshop CS5, Interface Builder), that and the Dock doesn't require the 330m.

    Read the article

  • Server authorization with MD5 and SQL.

    - by Charles
    I currently have a SQL database of passwords stored as MD5 hashes. The server needs to generate a unique key and send it to the client. The client will use the key as a salt, hash it together with the password, and send the result back to the server. The only problem is that the SQL DB already has the passwords in MD5. Therefore, for this to work, I would have to MD5 the password client-side and then MD5 it again with the salt. Am I doing this wrong? It doesn't seem like a proper solution. Any information is appreciated.
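
    This scheme can work as a challenge-response: because only MD5(password) is stored, the client sends MD5(salt + MD5(password)) and the server recomputes the same value from the stored hash. A C# sketch (helper names are illustrative; note that MD5 is weak and this only guards against replaying the raw hash, not against a stolen database):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class LoginCheck
        {
            static string Md5Hex(string input)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }

            // Server side, after handing the client a one-time salt.
            public static bool Verify(string salt, string hashFromClient, string storedMd5Password)
            {
                string expected = Md5Hex(salt + storedMd5Password);
                return string.Equals(expected, hashFromClient, StringComparison.OrdinalIgnoreCase);
            }
        }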

    Read the article

  • Hex to Decimal conversion in C

    - by darkie15
    Hi all, here is my code which does the conversion from hex to decimal. The hex values are stored in an unsigned char array: int liIndex ; long hexToDec ; unsigned char length[4]; for (liIndex = 0; liIndex < 4 ; liIndex++) { length[liIndex]= (unsigned char) *content; printf("\n Hex value is %.2x", length[liIndex]); content++; } hexToDec = strtol(length, NULL, 16); Each array element contains 1 byte of information and I have read 4 bytes. When I execute it, here is the output that I get: Hex value is 00 Hex value is 00 Hex value is 00 Hex value is 01 Chunk length is 0 Can anyone please help me understand the error here? The decimal value should have come out as 1 instead of 0. Regards, darkie
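
    The four bytes already hold the binary value (00 00 00 01), not ASCII hex digits, so strtol() sees the leading zero byte as an empty string and returns 0. A minimal sketch of assembling the bytes directly (big-endian order, as the output suggests):

        /* C sketch: combine four raw bytes into an unsigned value with shifts. */
        #include <stdio.h>

        int main(void)
        {
            unsigned char length[4] = {0x00, 0x00, 0x00, 0x01};   /* bytes read from the stream */

            unsigned long value = ((unsigned long)length[0] << 24) |
                                  ((unsigned long)length[1] << 16) |
                                  ((unsigned long)length[2] << 8)  |
                                   (unsigned long)length[3];

            printf("Chunk length is %lu\n", value);   /* prints 1 */
            return 0;
        }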

    Read the article

  • Detecting HTML5/CSS3 Features using Modernizr

    - by dwahlin
    HTML5, CSS3, and related technologies such as canvas and web sockets bring a lot of useful new features to the table that can take Web applications to the next level. These new technologies allow applications to be built using only HTML, CSS, and JavaScript allowing them to be viewed on a variety of form factors including tablets and phones. Although HTML5 features offer a lot of promise, it’s not realistic to develop applications using the latest technologies without worrying about supporting older browsers in the process. If history has taught us anything it’s that old browsers stick around for years and years which means developers have to deal with backward compatibility issues. This is especially true when deploying applications to the Internet that target the general public. This begs the question, “How do you move forward with HTML5 and CSS3 technologies while gracefully handling unsupported features in older browsers?” Although you can write code by hand to detect different HTML5 and CSS3 features, it’s not always straightforward. For example, to check for canvas support you need to write code similar to the following:   <script> window.onload = function () { if (canvasSupported()) { alert('canvas supported'); } }; function canvasSupported() { var canvas = document.createElement('canvas'); return (canvas.getContext && canvas.getContext('2d')); } </script> If you want to check for local storage support the following check can be made. It’s more involved than it should be due to a bug in older versions of Firefox. <script> window.onload = function () { if (localStorageSupported()) { alert('local storage supported'); } }; function localStorageSupported() { try { return ('localStorage' in window && window['localStorage'] != null); } catch(e) {} return false; } </script> Looking through the previous examples you can see that there’s more than meets the eye when it comes to checking browsers for HTML5 and CSS3 features. It takes a lot of work to test every possible scenario and every version of a given browser. Fortunately, you don’t have to resort to writing custom code to test what HTML5/CSS3 features a given browser supports. By using a script library called Modernizr you can add checks for different HTML5/CSS3 features into your pages with a minimal amount of code on your part. Let’s take a look at some of the key features Modernizr offers.   Getting Started with Modernizr The first time I heard the name “Modernizr” I thought it “modernized” older browsers by added missing functionality. In reality, Modernizr doesn’t actually handle adding missing features or “modernizing” older browsers. The Modernizr website states, “The name Modernizr actually stems from the goal of modernizing our development practices (and ourselves)”. Because it relies on feature detection rather than browser sniffing (a common technique used in the past – that never worked that great), Modernizr definitely provides a more modern way to test features that a browser supports and can even handle loading additional scripts called shims or polyfills that fill in holes that older browsers may have. It’s a great tool to have in your arsenal if you’re a web developer. Modernizr is available at http://modernizr.com. Two different types of scripts are available including a development script and custom production script. To generate a production script, the site provides a custom script generation tool rather than providing a single script that has everything under the sun for HTML5/CSS3 feature detection. 
Using the script generation tool you can pick the specific test functionality that you need and ignore everything that you don’t need. That way the script is kept as small as possible. An example of the custom script download screen is shown next. Notice that specific CSS3, HTML5, and related feature tests can be selected. Once you’ve downloaded your custom script you can add it into your web page using the standard <script> element and you’re ready to start using Modernizr. <script src="Scripts/Modernizr.js" type="text/javascript"></script>   Modernizr and the HTML Element Once you’ve add a script reference to Modernizr in a page it’ll go to work for you immediately. In fact, by adding the script several different CSS classes will be added to the page’s <html> element at runtime. These classes define what features the browser supports and what features it doesn’t support. Features that aren’t supported get a class name of “no-FeatureName”, for example “no-flexbox”. Features that are supported get a CSS class name based on the feature such as “canvas” or “websockets”. An example of classes added when running a page in Chrome is shown next:   <html class=" js flexbox canvas canvastext webgl no-touch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths"> Here’s an example of what the <html> element looks like at runtime with Internet Explorer 9:   <html class=" js no-flexbox canvas canvastext no-webgl no-touch geolocation postmessage no-websqldatabase no-indexeddb hashchange no-history draganddrop no-websockets rgba hsla multiplebgs backgroundsize no-borderimage borderradius boxshadow no-textshadow opacity no-cssanimations no-csscolumns no-cssgradients no-cssreflections csstransforms no-csstransforms3d no-csstransitions fontface generatedcontent video audio localstorage sessionstorage no-webworkers no-applicationcache svg inlinesvg smil svgclippaths">   When using Modernizr it’s a common practice to define an <html> element in your page with a no-js class added as shown next:   <html class="no-js">   You’ll see starter projects such as HTML5 Boilerplate (http://html5boilerplate.com) or Initializr (http://initializr.com) follow this approach (see my previous post for more information on HTML5 Boilerplate). By adding the no-js class it’s easy to tell if a browser has JavaScript enabled or not. If JavaScript is disabled then no-js will stay on the <html> element. If JavaScript is enabled, no-js will be removed by Modernizr and a js class will be added along with other classes that define supported/unsupported features. Working with HTML5 and CSS3 Features You can use the CSS classes added to the <html> element directly in your CSS files to determine what style properties to use based upon the features supported by a given browser. 
For example, the following CSS can be used to render a box shadow for browsers that support that feature and a simple border for browsers that don’t support the feature: .boxshadow #MyContainer { border: none; -webkit-box-shadow: #666 1px 1px 1px; -moz-box-shadow: #666 1px 1px 1px; } .no-boxshadow #MyContainer { border: 2px solid black; }   If a browser supports box-shadows the boxshadow CSS class will be added to the <html> element by Modernizr. It can then be associated with a given element. This example associates the boxshadow class with a div with an id of MyContainer. If the browser doesn’t support box shadows then the no-boxshadow class will be added to the <html> element and it can be used to render a standard border around the div. This provides a great way to leverage new CSS3 features in supported browsers while providing a graceful fallback for older browsers. In addition to using the CSS classes that Modernizr provides on the <html> element, you also use a global Modernizr object that’s created. This object exposes different properties that can be used to detect the availability of specific HTML5 or CSS3 features. For example, the following code can be used to detect canvas and local storage support. You can see that the code is much simpler than the code shown at the beginning of this post. It also has the added benefit of being tested by a large community of web developers around the world running a variety of browsers.   $(document).ready(function () { if (Modernizr.canvas) { //Add canvas code } if (Modernizr.localstorage) { //Add local storage code } }); The global Modernizr object can also be used to test for the presence of CSS3 features. The following code shows how to test support for border-radius and CSS transforms:   $(document).ready(function () { if (Modernizr.borderradius) { $('#MyDiv').addClass('borderRadiusStyle'); } if (Modernizr.csstransforms) { $('#MyDiv').addClass('transformsStyle'); } });   Several other CSS3 feature tests can be performed such as support for opacity, rgba, text-shadow, CSS animations, CSS transitions, multiple backgrounds, and more. A complete list of supported HTML5 and CSS3 tests that Modernizr supports can be found at http://www.modernizr.com/docs.   Loading Scripts using Modernizr In cases where a browser doesn’t support a specific feature you can either provide a graceful fallback or load a shim/polyfill script to fill in missing functionality where appropriate (more information about shims/polyfills can be found at https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills). Modernizr has a built-in script loader that can be used to test for a feature and then load a script if the feature isn’t available. The script loader is built-into Modernizr and is also available as a standalone yepnope script (http://yepnopejs.com). It’s extremely easy to get started using the script loader and it can really simplify the process of loading scripts based on the availability of a particular browser feature. To load scripts dynamically you can use Modernizr’s load() function which accepts properties defining the feature to test (test property), the script to load if the test succeeds (yep property), the script to load if the test fails (nope property), and a script to load regardless of if the test succeeds or fails (both property). 
An example of using load() with these properties is show next: Modernizr.load({ test: Modernizr.canvas, yep: 'html5CanvasAvailable.js’, nope: 'excanvas.js’, both: 'myCustomScript.js' }); In this example Modernizr is used to not only load scripts but also to test for the presence of the canvas feature. If the target browser supports the HTML5 canvas then the html5CanvasAvailable.js script will be loaded along with the myCustomScript.js script (use of the yep property in this example is a bit contrived – it was added simply to demonstrate how the property can be used in the load() function). Otherwise, a polyfill script named excanvas.js will be loaded to add missing canvas functionality for Internet Explorer versions prior to 9. Once excanvas.js is loaded the myCustomScript.js script will be loaded. Because Modernizr handles loading scripts, you can also use it in creative ways. For example, you can use it to load local scripts when a 3rd party Content Delivery Network (CDN) such as one provided by Google or Microsoft is unavailable for whatever reason. The Modernizr documentation provides the following example that demonstrates the process for providing a local fallback for jQuery when a CDN is down:   Modernizr.load([ { load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.js', complete: function () { if (!window.jQuery) { Modernizr.load('js/libs/jquery-1.6.4.min.js'); } } }, { // This will wait for the fallback to load and // execute if it needs to. load: 'needs-jQuery.js' } ]); This code attempts to load jQuery from the Google CDN first. Once the script is downloaded (or if it fails) the function associated with complete will be called. The function checks to make sure that the jQuery object is available and if it’s not Modernizr is used to load a local jQuery script. After all of that occurs a script named needs-jQuery.js will be loaded. Conclusion If you’re building applications that use some of the latest and greatest features available in HTML5 and CSS3 then Modernizr is an essential tool. By using it you can reduce the amount of custom code required to test for browser features and provide graceful fallbacks or even load shim/polyfill scripts for older browsers to help fill in missing functionality. 

    Read the article
