Search Results

Search found 29712 results on 1189 pages for 'css content'.


  • Downloading specific video renditions in WebCenter Content

    - by Kyle Hatlestad
    I recently had a question come up on one of my previous blog articles about downloading a specific video rendition.  When accessing image renditions, you simply need to pass in the 'Rendition=<rendition name>' parameter on the GET_FILE service and it will be returned.  But when you try that with videos, you get the error message, "Unable to download '<Content ID>'. The rendition or attachment '<Rendition Name>' could not be found in the list manifest of the revision with internal revision ID '<dID>'. [Read More] 
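    For reference, an image rendition request of the kind described above might look like the following - a hypothetical URL where the host, content ID, and rendition name are all placeholders:

        http://host/cs/idcplg?IdcService=GET_FILE&dDocName=MY_CONTENT_ID&RevisionSelectionMethod=LatestReleased&Rendition=Web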

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things that makes it different from some other WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM, when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured; if they aren't followed, you may run into some unwanted behavior. This blog post will go into the details on exactly how the file systems should be split and what options are required. [Read More]

    Read the article

  • Will my site containing duplicate content be accepted in Adsense

    - by user5858
    I have a new site, just over six months old, with 50 unique visitors daily. It has a good amount of duplicate pages whose content is not copyrighted. For example, I've copied related companies' product FAQs "as is" into the site; moreover, I'm not supposed to modify a company's product FAQs. I fear my account may be banned by AdSense if I submit it. So I want to know: 1) whether I can submit the site for an AdSense account; 2) whether Google can penalize me, and in what way; 3) how would Google come to know that the duplicate content on my site is not copyrighted?

    Read the article

  • Content Optimization only?

    - by danie7L T
    There are tons of discussions around tips and tricks to improve search engine rankings and SEO. But what if the webmaster's or client's focus is set 100% on the quality of the content, with precise keywords in meta tags, clean design, regular article updates, clean URLs, and highly filtered external links leading to pages on websites dealing with the same or related subjects? Isn't it the job of a good search engine like Google to catch such a website and show it on its front page? Or do search engines count on us to help them find us, so that webmasters will always have to stay up to date on SEO tools and rule changes, on top of website design, browser customization, progressive enhancement, etc.?

    Read the article

  • Alternating links and slide down content on a list of sub sections

    - by user27291
    I have a page for a doctor's practice. The summary page for the practice lists subsections such as Women's, Men's, Children, Sport, etc. Some of these subsections are very large; others can be a paragraph or so with a short unordered list. In terms of content volume, the large subsections warrant their own separate pages; the smaller ones, not so much. I created a little plugin which enables me to use the list of subsections in two ways: when clicking the title of a larger section, you're sent through to its own page, while for the smaller sections a slide-down box opens with the information. Is this a good way to handle my information architecture? Should I be giving the smaller subsections their own pages for SEO purposes?
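    A minimal sketch of the slide-down half of such a plugin, assuming hypothetical markup where each small subsection keeps its summary in a hidden sibling element (none of this is the asker's actual code):

        // Delegate clicks from the subsection list: links for large sections
        // navigate normally; links flagged as inline toggle a summary box.
        $(".subsection-list").on("click", ".has-inline-content > a", function (e) {
            e.preventDefault();                                // don't navigate away
            $(this).siblings(".inline-content").slideToggle(); // open/close the summary
        });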

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM.

    Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper)

    Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost-effectively validate this world-class solution.

    With the Part 11 certification, Oracle's WebCenter now provides regulated life science organizations the ability to manage regulatory content in WebCenter, as well as the ability to take advantage of all of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management, and social networking technology. Here are a few screenshot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting.

    Gone are the days when life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of its ability to develop cost-effective, easy-to-use, scalable solutions that help increase insight and efficiency to drive growth for its customers. Adding a world-class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.

    USDM provides:
    • Expertise in life science ECM business processes
    • Prebuilt life science configuration in WebCenter
    • Validation Accelerator Packs for WebCenter

    USDM is very proud to support Oracle's expanding commitment to Life Sciences. For more information please contact: [email protected]

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Using XNA's XML content pipeline to read arrays of objects with different subtypes

    - by Mcguirk
    Using XNA's XML content importer, is it possible to read in an array of objects with different subtypes? For instance, assume these are my class definitions:

        public abstract class MyBaseClass
        {
            public string MyBaseData;
        }

        public class MySubClass0 : MyBaseClass
        {
            public int MySubData0;
        }

        public class MySubClass1 : MyBaseClass
        {
            public bool MySubData1;
        }

    And this is my XML file:

        <XnaContent>
          <Asset Type="MyBaseClass[]">
            <Item> <!-- I want this to be an instance of MySubClass0 -->
              <MyBaseData>alpha</MyBaseData>
              <MySubData0>314</MySubData0>
            </Item>
            <Item> <!-- I want this to be an instance of MySubClass1 -->
              <MyBaseData>bravo</MyBaseData>
              <MySubData1>true</MySubData1>
            </Item>
          </Asset>
        </XnaContent>

    How do I specify that I want the first Item to be an instance of MySubClass0 and the second Item to be an instance of MySubClass1?

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content be hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website—on which Google Analytics sets a few cookies—that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free. The cdn.dj domain was new, and was never used for anything other than hosting these static files. Running HttpFox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files listed above—despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code:

        // Google Analytics tracking code
        var _gaq = [['_setAccount', 'UA-5245947-5'], ['_trackPageview']];
        (function(d, t) {
            var g = d.createElement(t), s = d.getElementsByTagName(t)[0];
            g.src = ('https:' == location.protocol ? '//ssl' : '//www') + '.google-analytics.com/ga.js';
            s.parentNode.insertBefore(g, s)
        }(document, 'script'));
        // [END] Google Analytics tracking code

    I'm not obsessing about this issue—I know it's not really affecting server performance—but I'd like to understand what is causing it not to go away...

    Read the article

  • Webmaster Tools, www and no-www, duplicate content and subdomains

    - by Jay
    After many hours of research on many websites, I have not come to any conclusive answer on the specific issue I am trying to figure out. My company has two websites: a main one at www.example.com, and one at subdomain.example.com, which is a subdomain of the first and is our self-hosted blog. The way Google sees these, the www version and the no-www version (called "naked" from now on) are actually different sites. I completely understand this. It is also advised that both should be set up in Google Webmaster Tools, which I have done. Correct me if I am wrong about having both set up. Now, it appears that we can set a preferred domain in Webmaster Tools only at the root domain level. The subdomain cannot have this, and the tool actually says "Restricted to root level domains only". So it appears that the subdomain should follow what the root domain says, which for our preferred setting means displaying www.example.com and not the naked version. That is one issue I have: one displays one way and the other displays another. Is it that we have the wrong redirects in place for the subdomain? Another question: does this have any effect on SEO with regard to duplicate content on the web, given how we have set this up?

    Read the article

  • Want to See the Future?

    - by Oracle Staff
    Let the new Oracle OpenWorld 2010, JavaOne, and Oracle Develop 2010 Content Catalog be your guide to what's happening in September. You can quickly get information on more than 2,000 sessions and 400 partner exhibitors at the September conferences. Learn about speakers, demos, preconference programs, and much more. View the Content Catalog today. To register, visit the Oracle OpenWorld 2010, JavaOne, and Oracle Develop 2010 websites.

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.

    For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences.

    The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.

    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and the department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize the end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.

    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect the request and workflow it to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.

    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed, the probability of its success is going to be low. By outlining the resources needed - in both skill set and duration - it will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.

    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in helping to determine whether the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time.
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to relevant materials for their jobs.

    Request Phase Summary
    With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make the request straightforward and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • The Minimalist Approach to Content Governance - Manage Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.

    Most people would probably agree that creating content is the enjoyable part of the content life cycle. Management, on the other hand, generally is not. This is why we thankfully have an opportunity to leverage metadata, security and other settings that have been applied or inherited in the prior parts of our governance process. In the interest of keeping this process pragmatic, there is little day-to-day activity that needs to happen here. Most of the activity that happens post-creation will occur in the final "Retire" phase, in which content may be archived or removed. The Manage Phase will focus on updating content and the metadata associated with it - specifically around ownership. Oftentimes the largest issues with content ownership occur when a content creator leaves an organization or changes roles within an organization.

    1. Update Content Ownership (as needed and on a quarterly basis)
    Why - Without updating content to reflect ownership changes, it will be impossible to continue to meaningfully govern the content.
    How - Run reports against the metadata (creator and the department to which the creator belongs) on content items for a particular user that may have left the organization or changed job roles. If the content is without an owner, connect with the department marked as responsible. The content's ownership should be reassigned, or, if the content is no longer needed by that department for some reason, it can be archived and/or deleted. With a minimal investment it is possible to create reports that use an LDAP or Active Directory system to contrast all noted content owners against the users that exist in the directory. The delta will indicate which content will need new ownership assigned.
    Impact - This implicitly keeps your repository and search collection clean. A very large percentage of content that ends up no longer being useful to an organization falls into this category. This management phase (automated if possible) should be completed every quarter or as needed. The impact of actually following through with this phase is substantial and will provide end users with a better experience when browsing and searching for content.
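    The LDAP/Active Directory contrast described in the How step above boils down to a set difference. A minimal sketch in JavaScript - the owner list and directory feed are hypothetical placeholders, not a WebCenter or directory API:

        // Hypothetical inputs: owners harvested from content metadata reports,
        // and the current user list pulled from LDAP / Active Directory.
        const contentOwners = ["jsmith", "mjones", "rdavis"];
        const directoryUsers = new Set(["mjones", "rdavis", "klee"]);

        // The delta: noted owners who no longer exist in the directory;
        // their content needs new ownership assigned.
        const orphanedOwners = contentOwners.filter(u => !directoryUsers.has(u));
        console.log(orphanedOwners); // ["jsmith"]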

    Read the article

  • Remove redundant CSS rules

    - by ken
    I have several CSS files all being used in a large application; most are legacy, with one "main" CSS file that is the most current. This setup came about after a rebranding: instead of redoing all of the CSS, we simply added a new CSS file that would override the rules from the legacy files. So now, I'd like to distill all of this CSS (thousands of lines) into one file. NOTE: I don't want to simply combine the files (as in copy/pasting them all into one file); I need an application to semi-intelligently look at the rules and reduce them down to only the effective rules (after the cascade is applied), but I can't find anything like that. Example: given

        /* legacy.css */
        .foo {
            color: red;
            margin: 5px;
        }

    and:

        /* current.css */
        .foo {
            margin: 10px;
        }

    I need a tool to output:

        /* result.css */
        .foo {
            color: red;
            margin: 10px;
        }

    Does such an application exist?
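    For what it's worth, the core merge logic is easy to sketch once the rules are parsed. A toy illustration in JavaScript, assuming a hypothetical { selector: { property: value } } map per file in load order (a real tool would also need to handle specificity, shorthand properties, and media queries):

        // Later files win per property, mimicking the cascade for
        // identical selectors only.
        function mergeStylesheets(...sheets) {
            const result = {};
            for (const sheet of sheets) {
                for (const [selector, decls] of Object.entries(sheet)) {
                    result[selector] = { ...result[selector], ...decls };
                }
            }
            return result;
        }

        const legacy  = { ".foo": { color: "red", margin: "5px" } };
        const current = { ".foo": { margin: "10px" } };
        console.log(mergeStylesheets(legacy, current));
        // { ".foo": { color: "red", margin: "10px" } }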

    Read the article

  • I can't load the css files

    - by mfalcon
    I want to load some .css files in my Django project, but they aren't being loaded and I don't know why. The CSS files are located at "/myproject/media/css".

    settings.py:

        import os.path
        PROJECT_DIR = os.path.dirname(__file__)
        MEDIA_ROOT = os.path.join(PROJECT_DIR, 'media')

    urls.py:

        from django.conf import settings
        ...
        (r'^media/(?P<path>.*)$', 'django.views.static.serve',
            {'document_root': settings.MEDIA_ROOT, 'show_indexes': True}),
        )

    base.html:

        <link rel="stylesheet" type="text/css" media="all" href="{{ MEDIA_ROOT }}css/myStyle.css" />

    Read the article

  • CSS three column layout, liquid center, no left-margin!

    - by moontear
    Hi, I am all in favor of CSS-based layouts, but this one I just can't figure out. With a table it is oh-so-easy:

        <html>
        <head><title>Three Column</title></head>
        <body>
        <p>Test</p>
        <table style="width: 100%; border: 1px solid black; min-height: 300px;">
          <tr>
            <td style="border: 1px solid green;" colspan="3">Header</td>
          </tr>
          <tr>
            <td style="border: 1px solid green; width: 150px;" rowspan="2">Left</td>
            <td style="border: 1px solid yellow;">Content</td>
            <td style="border: 1px solid blue; width: 200px;" rowspan="2">Right</td>
          </tr>
          <tr>
            <td style="border: 1px solid fuchsia;">Additional stuff</td>
          </tr>
          <tr><td style="border: 1px solid green;" colspan="3">Footer</td></tr>
        </table>
        </body>
        </html>

    • Left is fixed width
    • Right is fixed width
    • Content is liquid
    • Additional stuff sits beneath Content

    Now here is the important part: "Left" may not exist. Again, this is easy with the table: delete the column and "Content" expands. Beautiful. I have looked through many examples (and "holy grails") of liquid, table-less, three-column CSS layouts, but I have not found one that doesn't use some kind of margin-left for the middle column ("Content"). Any margin-left will suck once "Left" is gone, as "Content" will just stay in its place. I'm just about to switch to old-school table layout for this problem, so I'm hoping someone has some idea - I don't care about excess markup, wrappers and the like; I would just like to know how to solve this with plain CSS. Btw: look at how easy equal-height columns are... Cheers. PS: No CSS3, please.

    Read the article

  • css file not getting updated

    - by fusion
    The font of the content of my Facebook app keeps getting italicized, even though I've removed the italics from the CSS file. If I make minor changes in the CSS file and upload it to the server, Firebug shows the unedited previous CSS file, and hence the app keeps showing unformatted content. What exactly is going wrong here? I made a new CSS file and copied the contents of the previous CSS exactly as they were, and I linked it in all the files which require CSS. But when I upload these files to the server, the Facebook canvas doesn't show any CSS at all. When I replaced the CSS filename with the previous one, it works. Why is this?

    Read the article

  • Django template can't see CSS files

    - by Technical Bard
    I'm building a Django app and I can't get the templates to see the CSS files... My settings.py file looks like:

        MEDIA_ROOT = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'media')
        MEDIA_URL = '/media/'

    I've got the CSS files in /mysite/media/css/ and the template code contains:

        <link rel="stylesheet" type="text/css" href="/media/css/site_base.css" />

    Then, in the urls.py file I have:

        # DEVELOPMENT ONLY
        (r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': '/media'}),

    but the development server serves the plain HTML (without styles). What am I doing wrong?

    Read the article

  • GZipping CSS and JS files

    - by Ryan Giglio
    I'm using YSlow to improve the speed of my site, and I'm having trouble with the "compress components with gzip" grade. I have this in my .htaccess file:

        SetOutputFilter DEFLATE
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript

    But YSlow is saying:

        There are 4 plain text components that should be sent compressed
        * http://crewinyourcode.com/css/reset.css
        * http://crewinyourcode.com/css/inner-pages/index.css
        * http://crewinyourcode.com/script/css/jquery-ui-1.8.custom.css
        * http://crewinyourcode.com/js/inner-pages/index.js

    How can I gzip the CSS and JS files? Also... I don't have access to the httpd.conf file.

    Read the article

  • CSS refuse to obey

    - by Frank
    Hi, I'll make this short. I have this webpage:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <link rel="Stylesheet" type="text/css" href="css/style.css" />
          <link rel="Stylesheet" type="text/css" href="css/nav.css" />
        </head>
        <body>
          <div id="header">
            <ul id="navList">
              <li><a href="#" id="navActive">foo</a></li>
              <li><a href="#">bar</a></li>
            </ul>
          </div>
        </body>
        </html>

    With this CSS:

    style.css:

        body {
            padding: 0;
            margin: 0;
            background-color: #000000;
            background-repeat: no-repeat;
            background-image: url('../img/bg.jpg');
        }

    nav.css:

        #navList {
            padding: 0;
            margin: 0;
            background-image: url('../img/menu.png');
            list-style-type: none;
            padding: 12px 150px;
        }

        #navList li {
            display: inline;
        }

        #navList li a {
            color: #bfbfbf;
            padding: 14px 25px;
            text-decoration: none;
        }

        #navList li a:hover {
            color: #000000;
            background-color: #bfbfbf;
            text-decoration: none;
        }

        #navActive {
            color: #000000;
            background-color: #bfbfbf;
        }

    It looks like the CSS from the navActive id is never being applied... Could someone tell me why and/or suggest a way to correct this? Thanks.

    Read the article

  • Setting Up IRM Test Content

    - by martin.abrahams
    A feature of the 11g IRM Server that sometimes gets overlooked is the ability to set up some test content that any IRM user can access to verify that their IRM Desktop can reach the server, authenticate successfully, and render protected content successfully. Such test content is useful for new users and in troubleshooting scenarios. Here's how to set up some test content... In the management console, go to IRM - Administration - Test Content. The console will display a list of test content - initially an empty list. Use the Add option to specify the URL of a document or image, and define one or more labels for the test content in whichever languages your users favour. Note that you do not need to seal the image or document in order to use it as test content, nor do you need to set up any rights for the test content. The IRM Server will handle the sealing and rights assignment automatically, such that all authenticated users are authorised to view the test content. Repeat this process for as many different types of content as you would like to offer for test purposes - perhaps a Word document, a PDF document, and an image. To keep things simple the first time I did this, I used the URL of one of the images in the IRM Server's UI - so there was no problem with the IRM Server being able to reach that image. Whatever content you want to use, the IRM Server needs to be able to reach it at the URL you specify.

    Using Test Content
    Open a browser and browse to the URL that the IRM Desktop normally uses to access the IRM Server, for example:

        http://irm11g.oracle.com/irm_desktop

    If you are not sure, you can find this URL in the Servers tab of the IRM Options dialog. Go to the Test tab, and you will see your test content listed. By opening one of the items, you can verify that your IRM Desktop is healthy and that you can authenticate to the IRM Server.

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

    SessionState and LocalStorage are easy
    The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store that has getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array, with a length property, and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

        // set
        var lastAccess = new Date().getTime();
        if (sessionStorage)
            sessionStorage.setItem("myapp_time", lastAccess.toString());

        // retrieve in another page or on a refresh
        var time = null;
        if (sessionStorage)
            time = sessionStorage.getItem("myapp_time");
        if (time)
            time = new Date(time * 1);
        else
            time = new Date();

    sessionStorage stores data that is browser session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the lifetime of the data is permanently stored in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way.

    The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview

    Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in session and local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
    The code to deal with the SessionStorage (and LocalStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

        function getSessionCount() {
            var count = 0;
            if (sessionStorage) {
                var count = sessionStorage.getItem("ss_count");
                count = !count ? 0 : count * 1;
            }
            $("#txtSession").val(count);
            return count;
        }

        function setSessionCount(count) {
            if (sessionStorage)
                sessionStorage.setItem("ss_count", count.toString());
        }

    These two functions essentially load and store a session counter value. The two key methods used here are:

        sessionStorage.getItem(key);
        sessionStorage.setItem(key, stringVal);

    Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable, so it's quite possible to pass complex objects and store them into a single sessionStorage value:

        var user = { name: "Rick", id: "ricks", level: 8 };
        sessionStorage.setItem("app_user", JSON.stringify(user));

    To retrieve it:

        var user = sessionStorage.getItem("app_user");
        if (user)
            user = JSON.parse(user);

    Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab. You can also use this tool to refresh or remove entries from storage.

    What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (i.e. store a JSON object and carry it forth between server rendered HTML requests), or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

    Why do I need Client-side Page Caching for Server Rendered HTML?
    I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverflow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
    On StackOverflow that sort of works, because content is turning around so quickly you probably want to actually look at the top posts. Not always, though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disrupting.

    Content Caching
    If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

    A real World Use Case
    Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout, as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/

    However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this, of course, was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly interface to the site, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base URL with your browser width under 800px)

    This means that the list of classifieds posts now is a plain list, and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally, in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis, the full list is never fully loaded; rather, there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there, because the list was now rendering as if it was the first page hit. The additional listings, any filters, the selection of an item - all were lost. Major Suckage!
    Using Client SessionStorage to cache Server Rendered Content
    To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in session storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead, the client side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL used when returning to the list page from the display page (or a host of other pages) looks like this:

        https://classifieds.gorge.net/list?back=True

    The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

        @model MessageListViewModel
        @{
            ViewBag.Title = "Classified Listing";
            bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
        }
        <form method="post" action="@Url.Action("list")">
            <div id="SizingContainer">
                @if (!isBack)
                {
                    @Html.Partial("List_CommandBar_Partial", Model)
                    <div id="PostItemContainer" class="scrollbox"
                         xstyle="-webkit-overflow-scrolling: touch;">
                        @Html.Partial("List_Items_Partial", Model)
                        @if (Model.RequireLoadEntry)
                        {
                            <div class="postitem loadpostitems" style="padding: 15px;">
                                <div id="LoadProgress" class="smallprogressright"></div>
                                <div class="control-progress">
                                    Load additional listings...
                                </div>
                            </div>
                        }
                    </div>
                }
            </div>
        </form>

    As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

    OK, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionStorage into separate functions, because they are almost always used in several places:

        page.saveData = function(id) {
            if (!sessionStorage)
                return;
            var data = {
                id: id,
                scroll: $("#PostItemContainer").scrollTop(),
                html: $("#SizingContainer").html()
            };
            sessionStorage.setItem("list_html", JSON.stringify(data));
        };

        page.restoreData = function() {
            if (!sessionStorage)
                return;
            var data = sessionStorage.getItem("list_html");
            if (!data)
                return null;
            return JSON.parse(data);
        };

    The data that is saved is an object which contains an id - the selected element when the user clicks - and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
    Finally, the HTML from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization, that's OK. Since I'm using SessionStorage, the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

        $(function() {
            …
            var isBack = getUrlEncodedKey("back", location.href);
            if (isBack) {
                // remove the back key from URL
                setUrlEncodedKey("back", "", location.href);

                var data = page.restoreData();  // restore from sessionState
                if (!data) {
                    // no data - force redisplay of the server side default list
                    window.location = "list";
                    return;
                }
                $("#SizingContainer").html(data.html);

                var el = $(".postitem[data-id=" + data.id + "]");
                $(".postitem").removeClass("highlight");
                el.addClass("highlight");

                $("#PostItemContainer").scrollTop(data.scroll);

                setTimeout(function() {
                    el.removeClass("highlight");
                }, 2500);
            }
            else if (window.noFrames)
                page.saveData(null);  // save when page loads

            $("#SizingContainer").on("click", ".postitem", function() {
                var id = $(this).attr("data-id");
                if (!id)
                    return true;

                if (window.noFrames)
                    page.saveData(id);

                var contentFrame = window.parent.frames["Content"];
                if (contentFrame)
                    contentFrame.location.href = "show/" + id;
                else
                    window.location.href = "show/" + id;
                return false;
            });
            …

    The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If cached, the cached data structure is read from sessionStorage. It's important here to check whether data was returned. If the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data, it's loaded and the data.html content is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:

        $("#SizingContainer").html(data.html);

    It's that simple, and it's quite quick even with a fully loaded list of additional items, and on a phone. The actual HTML data is stored to the cache on every page load initially, and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as being stored in the cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
    The overall save/restore process is pretty straightforward and doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

    Some things to watch out for
    As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

    • Event handling logic
    • Timing of manipulating DOM elements
    • Inline script code
    • Bookmarking the cache URL when no cache exists

    Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign it, or it may not run at the time you think it would normally run in the DOM rendering cycle.

    JavaScript Event Hookups
    The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

        $("#btnSearch").click(function() {…});

    that works fine when the page loads with server rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround by using deferred events. With jQuery you can use the .on() event handler instead:

        $("#SelectionContainer").on("click", "#btnSearch", function() {…});

    which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

    Timing of manipulating DOM Elements
    Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

    Inline Script Code
    Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker, and I also created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet, it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

    Bookmarking Cached Urls
    When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server.
    In the Classifieds app I didn't render server side content, so if the user comes to the page with back=True and there is no cached content, I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists, and if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

    Don't use <body> to replace Content
    Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements, and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content, and wrap it just around the actual content you want to replace. In the app above, #SizingContainer is that container.

    Other Approaches
    The approach I've used for this application is kind of specific to the existing server rendered application we're running, and so it's just one approach you can take with caching. However, for server rendered content caching, this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

    Using Partial HTML Rendering via AJAX
    Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View (see the sketch after this section). This effectively makes the initial rendering and the cached rendering logic identical, and removes the need for the server to decide whether this request needs to be rendered or not (i.e. no checking for a back=true switch). All the logic related to caching is handled on the client in this case.

    Using JSON Data and Client Rendering
    The hardcore client option is to do the whole UI SPA style and pull data from the server, then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

    I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization, since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.
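    A rough sketch of the partial-rendering alternative described above, reusing the post's page.saveData/page.restoreData helpers (the "list/partial" URL is a hypothetical endpoint, not part of the actual Classifieds app):

        // First render and cache-restore share one code path: try the
        // sessionStorage cache, otherwise fetch the server rendered partial.
        function renderList() {
            var data = page.restoreData();
            if (data) {
                $("#SizingContainer").html(data.html);
                return;
            }
            $.get("list/partial", function(html) {
                $("#SizingContainer").html(html);
                page.saveData(null); // prime the cache for the next visit
            });
        }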
    State of the Session
    SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

    Resources
    • Overview of localStorage (also applies to sessionStorage)
    • Web Storage Compatibility
    • Modernizr Test Suite

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript, HTML5, ASP.NET, MVC

    Read the article

  • How do the performance characteristics of jQuery selectors differ from those of CSS selectors?

    - by Moss
    I came across Google's Page Speed add-on for Firebug yesterday. The page about using efficient CSS selectors says not to use overqualified selectors, i.e. use #foo instead of div#foo. I thought the latter would be faster, but Google is saying otherwise, and who am I to go against that? So that got me wondering whether the same applies to jQuery selectors. This page I found the link to on SO says I should use $("div#foo"), which is what I was doing all along, since I thought that things would speed up by limiting the selector to match div elements only. But is it really better to write $("#foo"), as Google says for CSS selectors, or do CSS and jQuery element matching work in different ways, meaning I should stick with $("div#foo")?
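    For what it's worth, a hedged sketch of why the two selectors can behave differently in jQuery (the element IDs here are hypothetical): a bare ID selector can be routed to the browser's fast native ID lookup, while an overqualified one generally goes through full selector matching and then still has to verify the tag name.

        // Bare ID selector: jQuery can short-circuit to the native lookup,
        // roughly document.getElementById("foo").
        var a = $("#foo");

        // Overqualified selector: handed to the general selector engine,
        // which must also confirm the element is a <div>.
        var b = $("div#foo");

        // If the tag check genuinely matters, filtering afterwards keeps
        // the fast ID lookup:
        var c = $("#foo").filter("div");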

    Read the article

  • CSS. Placing footer at bottom of the webpage (not in the bottom of screen) in ASP Pages

    - by strakastroukas
    I have read a lot of approaches regarding placement of the footer in a webpage with CSS. Among others, I found solutions within SO too. The problem is (I think) that most of them do not apply to ASP pages. So the question is: how can I place the footer with pure CSS in ASP pages? Before you post your answer, bear in mind the following. I use a master page (if that has anything to do with it). The webpage contains the form element, which I believe destroys the placement of the footer at the bottom of the webpage:

        <form name="aspnetForm" method="post" action="Default.aspx" id="aspnetForm">

    So you may start down-voting, but I think a different approach exists regarding footer placement with CSS in ASP.NET pages.

    Read the article

  • Change Firefox theme locally with CSS (without any extension)

    - by Email
    I have some modifications I want to make WITHOUT installing extensions, e.g. the max width of tabs and the height of the menu. In previous versions you could do this via about:config -> browser.tabs.tab.maxWidth. Since Firefox 4 it should be done with CSS. I searched the default theme in C:\Users\wetcat\AppData\Roaming\Mozilla\Firefox\Profiles\feriz.default and in C:\Program Files (x86)\Mozilla Firefox. In the second location I could find the extension/theme, but without any CSS. In AppData I couldn't find any CSS or theme folder. WHERE is the file to create or alter the CSS? I'm using Firefox 11.

    Read the article
