Search Results

Search found 3366 results on 135 pages for 'deferred rendering'.


  • How can I return JSON from node.js backend to frontend without reloading or re-rendering the page?

    - by poleapple
    I am working with node.js. I want to press a search button, make some REST API calls to another server in the backend, and return the JSON to the front end, reloading only a div there so that I won't have to refresh the page. Now I know I can reload just a div with jQuery or plain JavaScript DOM manipulation. But how do I make it call a method on the server side? I could have a submit button that makes a POST request, catch it, and make my API calls from there; however, on the node.js side, when I return, I will have to render the page again. How do I go about returning JSON from the back end to the front end without re-rendering or refreshing my page? Thanks.
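
    A minimal sketch of the pattern the question is after, assuming an Express backend (the framework, route, and helper names here are illustrative, not taken from the original post): expose a route that responds with JSON only, and call it from the page with AJAX so that only the div is updated and nothing is re-rendered on the server.

        // server.js - hypothetical Express route that proxies the REST API calls
        var express = require('express');
        var app = express();

        app.get('/api/search', function (req, res) {
            // call the other server's REST API here (request/axios/etc.); stubbed for this sketch
            callUpstreamApi(req.query.q, function (err, results) {   // hypothetical helper
                if (err) return res.status(500).json({ error: 'upstream call failed' });
                res.json(results);   // respond with JSON only - no res.render(), no page reload
            });
        });

        app.listen(3000);

        // client side - fetch the JSON with jQuery and update just the div
        $('#searchButton').on('click', function () {
            $.getJSON('/api/search', { q: $('#searchBox').val() }, function (data) {
                $('#results').text(JSON.stringify(data));   // replace only the div's content
            });
        });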

    Read the article

  • Why is the iPhone Simulator not rendering an HTML5 page correctly?

    - by user364978
    hello all, I have a page I am developing in .NET using HTML5, intended for a WebView in an iPhone app. The page looks just fine in Safari. When I load it in the iPhone Simulator it renders as plain text, with no styles or JS loading. I thought it might be an issue with .NET, but seeing as it works in Safari I am stumped. When I use the XHTML doctype it works just fine in the Simulator. Any ideas why this is occurring and what the fix may be? Thanks!
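
    For reference, these are the two doctypes the question is contrasting (a sketch; the poster's actual markup is not shown in the post):

        <!-- HTML5 doctype: the page comes up as unstyled plain text in the Simulator -->
        <!DOCTYPE html>

        <!-- XHTML 1.0 Strict doctype: the same page renders correctly -->
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">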

    Read the article

  • UIView using the Quartz rendering engine to display a PDF has poor quality compared to the original.

    - by Josh Kerr
    I'm using the Quartz rendering engine to display a PDF file on the iPhone using the 3.0 SDK. The result is a bit blurry compared to a PDF being shown in a UIWebView. How can I improve the quality in the UIView so that I don't need to rewrite my app to use UIWebView? I'm using pretty much the example code that Apple provides. Here is some of my sample code:

        CGContextRef gc = UIGraphicsGetCurrentContext();
        CGContextSaveGState(gc);
        CGContextTranslateCTM(gc, 0.0, rect.size.height);
        CGContextScaleCTM(gc, 1.0, -1.0);
        CGAffineTransform m = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, rect, 0, false);
        CGContextConcatCTM(gc, m);
        CGContextSetGrayFillColor(gc, 1.0, 1.0);
        CGContextFillRect(gc, rect);
        CGContextDrawPDFPage(gc, page);
        CGContextRestoreGState(gc);

    Apple's tutorial code actually results in a blurry PDF view as well. If you drop the same PDF into a UIWebView you'll see it is actually sharper. Anyone have any ideas? This one issue is holding a two-year development project from launching. :(
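
    One commonly suggested direction (a sketch only, not a confirmed fix for this app): UIWebView redraws PDF content through a tiled layer as the user zooms, so backing the custom view with a CATiledLayer and letting Quartz re-render at each level of detail can noticeably sharpen the result. The property values below are illustrative.

        // In the UIView subclass that draws the PDF (requires QuartzCore)
        #import <QuartzCore/QuartzCore.h>

        + (Class)layerClass {
            return [CATiledLayer class];   // tile-based backing layer, similar to what UIWebView uses
        }

        - (void)awakeFromNib {
            CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
            tiledLayer.levelsOfDetail = 4;       // cached zoom-out levels
            tiledLayer.levelsOfDetailBias = 4;   // extra sharp levels when zooming in
            tiledLayer.tileSize = CGSizeMake(512.0, 512.0);
        }

        // drawRect: (or drawLayer:inContext:) then draws the PDF page exactly as in the code above.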

    Read the article

  • JSF: How to get the selected item from selectOneMenu if its rendering is dynamic?

    - by Dzmitry Zhaleznichenka
    In my view I have two menus that I want to make dependent: if the first menu holds the values "render second menu" and "don't render second menu", I want the second menu to be rendered only if the user selects the "render second menu" option in the first menu. After the second menu renders on the same page as the first one, the user has to select the current item from it, fill in other fields, and submit the form to store the values in the database. Both lists of options are static; they are obtained from the database once and stay the same. My problem is that I always get null as the value of the selected item from the second menu. How do I get the proper value? The sample view code that holds the problematic elements is:

        <h:selectOneMenu id="renderSecond" value="#{Bean.renderSecondValue}"
                         valueChangeListener="#{Bean.updateDependentMenus}"
                         immediate="true" onchange="this.form.submit();" >
            <f:selectItems value="#{Bean.typesOfRendering}" />
        </h:selectOneMenu><br />
        <h:selectOneMenu id="iWillReturnYouZeroAnyway" value="#{Bean.currentItem}"
                         rendered="#{Bean.rendered}" >
            <f:selectItems value="#{Bean.items}" />
        </h:selectOneMenu><br />
        <h:commandButton action="#{Bean.store}" value="#Store" />

    However, if I remove the "rendered" attribute from the second menu, everything works properly, except that the menu is displayed all the time, which is exactly what I am trying to prevent, so I suspect the problem is in the behavior of dynamic rendering. The initial value of isRendered is false because the default item in the first menu is "don't render second menu". When I change the value in the first menu and update isRendered with the valueChangeListener, the second menu displays, but currentItem is not initialized on submit. Some code from my backing bean is below:

        public void updateDependentMenus(ValueChangeEvent value) {
            String newValue = (String) value.getNewValue();
            if ("render second menu" == newValue) {
                isRendered = true;
            } else {
                isRendered = false;
            }
        }

        public String store() {
            System.out.println(currentItem);
            return "stored";
        }
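
    A common explanation for this symptom (offered as a hedged guess, not a confirmed diagnosis): the rendered expression is re-evaluated during the postback, and with a request-scoped bean isRendered starts out false again, so the second menu is skipped during Apply Request Values and currentItem never gets populated. Keeping the flag alive across requests, for example by moving the bean to session scope, is one way to test that theory. The class name below is hypothetical.

        <!-- faces-config.xml sketch: session scope so the rendered flag survives the postback -->
        <managed-bean>
            <managed-bean-name>Bean</managed-bean-name>
            <managed-bean-class>com.example.Bean</managed-bean-class> <!-- hypothetical class -->
            <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>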

    Read the article

  • Entire Table is pushed to the next page when rendering an SSRS 2005 Report (as .pdf) in SSRS 2008

    - by Pwninstein
    I have an SSRS 2005 report that I'm rendering in SSRS 2008 as a .pdf. The report contains (among other things) a table that's very simple: header row, details, no footer, no aggregation, no grouping, keep together = false, pageBreakAtStart = false, pageBreakAtEnd = false, repeatHeaderOnNewPage = true. I resized the table to be much narrower than the body of the report just to be sure it wasn't extending beyond the bounds of the report, pushing everything down. But, no matter what I try, if some of the detail rows in that table would need to be pushed to the next page, then the ENTIRE TABLE is pushed to the next page, not just the extra rows. So my question is: Is there a workaround for this problem, is this a known issue, or is it even possible to get this 2005 report to render properly in 2008? NOTE: this is related to a question that I previously asked here, and is based on this MSDN forum post started by a coworker. This question is not the same as my previous question, as I'd like to see things work properly with a 2005 report. If it's not possible, that would be good to know, as it would indicate that we need to upgrade one of our servers to SQL 2008. Thanks!

    Read the article

  • Using Templated Helpers in MVC 2.0: How can I use the name of the property that I'm rendering inside the template?

    - by Andrey Tagaew
    Hi. I'm reviewing the new features of ASP.NET MVC 2.0. During the review I found Templated Helpers really interesting. As described, the primary reason for using them is to provide a common way of rendering certain data types. Now I want to use this approach in my project for the DateTime data type. My project was written for MVC 1.0, so generating the edit box currently looks like this:

        <%= Html.TextBox("BirthDate", Model.BirthDate, new { maxlength = 10, size = 10, @class = "BirthDate-date" })%>
        <script type="text/javascript">
            $(document).ready(function() {
                $(".BirthDate-date").datepicker({
                    showOn: 'button',
                    buttonImage: '<%=Url.Content("~/images/i_calendar.gif") %>',
                    buttonImageOnly: true
                });
            });
        </script>

    Now I want to use a templated helper, so I want to get the above markup once I type the following:

        <%=Html.EditorFor(f=>f.BirthDate) %>

    According to the manual I created a DateTime.ascx partial view inside the Shared/EditorTemplates folder. I put the above code there and got stuck with a problem: how can I get the name of the property that I'm rendering with the template helper? As you can see from my example, I really need it, since I'm using the name of the property to specify the data value and the parameter name that will be sent during the POST request. Also, I'm using it to generate the class name for building the JS calendar. I tried removing my partial view for the template helper and letting MVC generate its default output. Here is what it generated for me:

        <input type="text" value="04/29/2010" name="LoanApplicationDays" id="LoanApplicationDays" class="text-box single-line">

    As you can see, it used the name of the property for the "name" and "id" attributes. This lets me presume that the template helper knows the name of the property, so there should be some way to use it in a custom implementation. Thanks for your help!
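
    Inside an editor template, MVC 2 exposes the field name through ViewData.TemplateInfo, which is the usual way to get at the name of the property being rendered. A rough sketch of what DateTime.ascx could look like under those assumptions (a starting point, not a verified drop-in solution):

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.DateTime>" %>
        <%-- The property name ("BirthDate", "LoanApplicationDays", ...) comes from TemplateInfo --%>
        <% string field = ViewData.TemplateInfo.HtmlFieldPrefix; %>
        <%-- An empty name lets the helper prefix it with the field name automatically --%>
        <%= Html.TextBox("", Model.ToShortDateString(), new { maxlength = 10, size = 10, @class = field + "-date" }) %>
        <script type="text/javascript">
            $(document).ready(function () {
                $(".<%= field %>-date").datepicker({
                    showOn: 'button',
                    buttonImage: '<%= Url.Content("~/images/i_calendar.gif") %>',
                    buttonImageOnly: true
                });
            });
        </script>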

    Read the article

  • Is there any possibility of rendering differences between Firefox 3 and 3.5 and IE 7 and 8, even if I'm not using any new CSS properties or selectors?

    - by metal-gear-solid
    I'm making a site for a European client, and he said Firefox 3 and IE 7 and 8 have more desktop users than other browsers in Europe: http://gs.statcounter.com/#browser_version-eu-monthly-200812-201001-bar I only have IE 7 and Firefox 3.5.7 installed on my PC. Should I download portable Firefox 3.0 and test in it too, even if I'm not using any new CSS property/selector that is only supported in Firefox 3.5, or would testing in 3.5.7 be enough? And for IE, would testing in IE 7 be enough, or should I check my site in IE8 (downloading the VPC image of IE8 and testing in a VM), even if I'm not using any new CSS property/selector that is only supported in IE8? Or is it necessary to use <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" /> in the <head>? But then what will happen when the user switches compatibility mode to IE8's default rendering mode? Can we make the site compatible with both IE 7 and 8 without using <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />? If yes, what in particular do we need to do, care about, or consider in the CSS to make the site identical in both?

    Read the article

  • SDL2 sprite batching and texture atlases

    - by jms
    I have been programming a 2D game in C++, using the SDL2 graphics API for rendering. My game concept currently features effects that could result in even tens of thousands of sprites being drawn simultaneously to the screen. I'd like to know what can be done to increase rendering efficiency if the need arises, preferably using the SDL2 API only. I have previously given a quick look at OpenGL-based 2D rendering, and noticed that SDL2 lacks a command like

        int SDL_RenderCopyMulti(SDL_Renderer* renderer, SDL_Texture* texture,
                                const SDL_Rect* srcrects, SDL_Rect* dstrects, int count)

    which would permit SDL to benefit from two common techniques used for efficient 2D graphics:

    Texture batching: Sorting sprites by the texture used, and then simultaneously rendering as many sprites that use the same texture as possible, changing only the source area on the texture and the destination area on the render target between sprites. This allows the encapsulation of the whole operation in a single GPU command, reducing the overhead drastically compared to multiple distinct calls.

    Texture atlases: Instead of creating one texture for each frame of each animation of each sprite, combining multiple animations and even multiple sprites into a single large texture. This lessens the impact of changing the current texture when switching between sprites, as the correct texture is often ready to be used from the previous draw call. Furthermore, the GPU is optimized for handling large textures, in contrast to the many tiny textures typically used for sprites.

    My question: Would SDL2 still get somewhat faster from any rudimentary sprite sorting or from combining multiple images into one texture, thanks to automatic video driver optimizations? If I encounter performance issues related to 2D rendering in the future, will I be forced to switch to OpenGL for lower level control over the GPU? Edit: Are there any plans to include such functionality in the near future?
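
    Even without a batched copy call, sorting the draw requests by texture before issuing the individual SDL_RenderCopy calls keeps texture switches to a minimum, which is where the driver typically has the best chance to help. A minimal sketch of that idea (the types and structure are illustrative, not from the original post):

        #include <SDL.h>          // or <SDL2/SDL.h>, depending on the include path
        #include <algorithm>
        #include <vector>

        struct Sprite {
            SDL_Texture* texture;  // ideally a region of a shared atlas texture
            SDL_Rect src;          // source rectangle inside the (atlas) texture
            SDL_Rect dst;          // destination rectangle on the render target
        };

        // Sort by texture, then draw: consecutive sprites that share a texture
        // avoid a texture switch between SDL_RenderCopy calls.
        void drawSprites(SDL_Renderer* renderer, std::vector<Sprite>& sprites) {
            std::sort(sprites.begin(), sprites.end(),
                      [](const Sprite& a, const Sprite& b) { return a.texture < b.texture; });
            for (const Sprite& s : sprites) {
                SDL_RenderCopy(renderer, s.texture, &s.src, &s.dst);
            }
        }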

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications.

    SessionState and LocalStorage are easy
    The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store that has getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array with a length property, and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

        // set
        var lastAccess = new Date().getTime();
        if (sessionStorage)
            sessionStorage.setItem("myapp_time", lastAccess.toString());

        // retrieve in another page or on a refresh
        var time = null;
        if (sessionStorage)
            time = sessionStorage.getItem("myapp_time");
        if (time)
            time = new Date(time * 1);
        else
            time = new Date();

    sessionStorage stores data that is browser session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is permanently stored in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way.

    The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
The code to deal with the SessionStorage (and LocalStorage not shown here) in the example is isolated into a couple of wrapper methods to simplify the code: function getSessionCount() { var count = 0; if (sessionStorage) { var count = sessionStorage.getItem("ss_count"); count = !count ? 0 : count * 1; } $("#txtSession").val(count); return count; } function setSessionCount(count) { if (sessionStorage) sessionStorage.setItem("ss_count", count.toString()); } These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key,stringVal); Note that the value given to setItem and return by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable so it's quite possible to pass complex objects and store them into a single sessionStorage value:var user = { name: "Rick", id="ricks", level=8 } sessionStorage.setItem("app_user",JSON.stringify(user)); to retrieve it:var user = sessionStorage.getItem("app_user"); if (user) user = JSON.parse(user); Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab:   You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection in some cases even filters conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow to see what I mean here. Scroll down a bit to look at a post or entry, drill in then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal. 
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive.

Content Caching
If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

A Real World Use Case
Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base URL with your browser width under 800px). In the mobile, non-frames view the list of classifieds posts is now a plain list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering as if it was the first page hit. The additional listings, any filters, and the selection of an item all were lost. Major Suckage!
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned the server renders data from an ASP.NET MVC view. On the list page when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:@model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see the query string triggers a conditional block that if set is simply not rendered. The content inside of #SizingContainer basically holds  the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list along with any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains an ID which is the selected element when the user clicks and a scroll position. These two values are used to reset the scroll position when the data is used from the cache. 
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:$( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If cached the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data the it's loaded and the data.html data is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:$("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements. 
The overall process of this save/restore process is pretty straight forward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware what type of content you are caching. The code above is all that's specific to cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: Event Handling Logic Timing of manipulating DOM events Inline Script Code Bookmarking to the Cache Url when no cache exists Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign or it may not run at the time you think it would normally in the DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:$("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:$("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because MVC's limited 'object model' the only way to embed widget content into the page is through inline script. This code broken when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As the last bullet - it's a matter of timing. There's no good work around for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant to be primarily to serve mobile devices which actually support date input through the browser (unlike desktop browsers of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. 
In the Classifieds app I didn't render server side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and, if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content
Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above the #SizingContainer is that container.

Other Approaches
The approach I've used for this application is kind of specific to the existing server rendered application we're running, and so it's just one approach you can take with caching. However, for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

Using Partial HTML Rendering via AJAX
Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is handled on the client in this case.

Using JSON Data and Client Rendering
The hardcore client option is to do the whole UI SPA style and pull data from the server, then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course if the app is a full on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.

State of the Session
SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data.
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources:
- Overview of localStorage (also applies to sessionStorage)
- Web Storage Compatibility
- Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript, HTML5, ASP.NET, MVC.

    Read the article

  • Should I include HTML markup in my JSON response?

    - by Mike M. Lin
    In an e-commerce site, when adding an item to a cart, I'd like to show a popup window with the options you can choose. Imagine you're ordering an iPod Shuffle and now you have to choose the color and the text to engrave. I'd like the window to be modal, so I'm using a lightbox populated by an Ajax call. Now I have two options:

    Option 1: Send only the data, and generate the HTML markup using JavaScript. What's nice about this is that it trims down the Ajax request to the bare minimum and doesn't mix the data with the markup. What's not so great is that now I need to use JavaScript to do my rendering, instead of having a template engine on the server side do it. I might be able to clean up the approach a bit by using a client-side templating solution.

    Option 2: Send the HTML markup. What's good about this is that I can have the same server-side templating engine I'm using for the rest of my rendering tasks (Django) do the rendering of the lightbox. JavaScript is only used to insert the HTML fragment into the page. So it clearly leaves the rendering to the rendering engine. Makes sense to me. But I don't feel comfortable mixing data and markup in an Ajax call for some reason. I'm not sure what makes me feel uneasy about it. I mean, it's the same way every web page is served up -- data plus markup -- right?
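
    To make the two options concrete, here is a rough sketch of what each might look like as a Django view (the view names, template path, and data are illustrative; the question doesn't include any code):

        # Option 1: send only the data; client-side JS/templating builds the markup
        from django.http import HttpResponse, JsonResponse
        from django.template.loader import render_to_string

        def cart_options_json(request, product_id):
            options = {"colors": ["silver", "blue"], "engraving_max_length": 30}  # illustrative data
            return JsonResponse(options)

        # Option 2: let the server-side template engine render the fragment;
        # the client only inserts the returned HTML into the lightbox
        def cart_options_html(request, product_id):
            options = {"colors": ["silver", "blue"], "engraving_max_length": 30}
            html = render_to_string("cart/options_fragment.html", {"options": options})
            return HttpResponse(html)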

    Read the article

  • Why does a bash-zenity script have that title on the Unity Panel and that icon on the Unity Launcher?

    - by Sadi
    I have this small bash script which helps use Infinality font rendering options via a more user-friendly Zenity window. But whenever I launch it I have this "Color Picker" title on Unity Panel together with the icon assigned for "Color Picker" utility. I wonder why and how this is happening and how I can change it? #!/bin/bash # A simple script to provide a basic, zenity-based GUI to change Infinality Style. # v.1.2 # infinality_current=`cat /etc/profile.d/infinality-settings.sh | grep "USE_STYLE=" | awk -F'"' '{print $2}'` sudo_password="$( gksudo --print-pass --message 'Provide permission to make system changes: Enter your password to start or press Cancel to quit.' -- : 2>/dev/null )" # Check for null entry or cancellation. if [[ ${?} != 0 || -z ${sudo_password} ]] then # Add a zenity message here if you want. exit 4 fi # Check that the password is valid. if ! sudo -kSp '' [ 1 ] <<<"${sudo_password}" 2>/dev/null then # Add a zenity message here if you want. exit 4 fi # menu(){ im="zenity --width=500 --height=490 --list --radiolist --title=\"Change Infinality Style\" --text=\"Current <i>Infinality Style</i> is\: <b>$infinality_current</b>\n? To <i>change</i> it, select any other option below and press <b>OK</b>\n? To <i>quit without changing</i>, press <b>Cancel</b>\" " im=$im" --column=\" \" --column \"Options\" --column \"Description\" " im=$im"FALSE \"DEFAULT\" \"Use default settings - a compromise that should please most people\" " im=$im"FALSE \"OSX\" \"Simulate OSX rendering\" " im=$im"FALSE \"IPAD\" \"Simulate iPad rendering\" " im=$im"FALSE \"UBUNTU\" \"Simulate Ubuntu rendering\" " im=$im"FALSE \"LINUX\" \"Generic Linux style - no snapping or certain other tweaks\" " im=$im"FALSE \"WINDOWS\" \"Simulate Windows rendering\" " im=$im"FALSE \"WIN7\" \"Simulate Windows 7 rendering with normal glyphs\" " im=$im"FALSE \"WINLIGHT\" \"Simulate Windows 7 rendering with lighter glyphs\" " im=$im"FALSE \"VANILLA\" \"Just subpixel hinting\" " im=$im"FALSE \"CLASSIC\" \"Infinality rendering circa 2010 - No snapping.\" " im=$im"FALSE \"NUDGE\" \"Infinality - Classic with lightly stem snapping and tweaks\" " im=$im"FALSE \"PUSH\" \"Infinality - Classic with medium stem snapping and tweaks\" " im=$im"FALSE \"SHOVE\" \"Infinality - Full stem snapping and tweaks without sharpening\" " im=$im"FALSE \"SHARPENED\" \"Infinality - Full stem snapping, tweaks, and Windows-style sharpening\" " im=$im"FALSE \"INFINALITY\" \"Infinality - Standard\" " im=$im"FALSE \"DISABLED\" \"Act without extra infinality enhancements - just subpixel hinting\" " } # option(){ choice=`echo $im | sh -` # if echo $choice | grep "DEFAULT" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"DEFAULT\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "OSX" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"OSX\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "IPAD" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"IPAD\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "UBUNTU" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"UBUNTU\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "LINUX" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"LINUX\"/g" 
'/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WINDOWS" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WIN7" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS7\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WINLIGHT" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS7LIGHT\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "VANILLA" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"VANILLA\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "CLASSIC" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"CLASSIC\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "NUDGE" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"NUDGE\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "PUSH" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"PUSH\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "SHOVE" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"SHOVE\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "SHARPENED" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"SHARPENED\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "INFINALITY" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"INFINALITY\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "DISABLED" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"DISABLED\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # } # menu option # if test ${#choice} -gt 0; then echo "Operation completed" fi # exit 0

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU. What I've tried so far: All the latest stable Ubuntu releases in the period, plus one Linux Mint release. All the latest stable AMD Catalyst Proprietary Display Drivers, and currently 13.1. The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions. All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember). Results: Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version). Around 10 FPS in Team Fortress 2 using the official Steam client. Choppy video playback, in Flash and with VLC. CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC). glxgears shows 12.5 FPS when maximized. fgl_glxgears shows 10 FPS when maximized. Hardware details from lshw: Motherboard ASUS P6X58D-E CPU Intel Core i7 CPU 950 @ 3.07GHz (never overclocked; 64 bit) 6 GB RAM Video card product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)" 2 x 1920x1200 monitors, both connected with HDMI. I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual monitor completely mess up the driver? $ fglrxinfo display: :0 screen: 0 OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: AMD Radeon HD 6900 Series OpenGL version string: 4.2.11995 Compatibility Profile Context $ glxinfo | grep 'direct rendering' direct rendering: Yes I am currently using the open source driver, with the following results: Full frame rate and low CPU load when playing 1080p video. Black screen (but music in the background) in Team Fortress 2. Similar performance in Minecraft as the Catalyst driver. In hindsight obvious, since both end up offloading the rendering to the CPU. My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1. Some possibly important lines: (WW) Falling back to old probe method for fglrx (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found The generated xorg.conf. The disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI. Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall: Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%. Still 150% CPU use for 1080p video rendering on YouTube in Chromium. Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter glxgears performance about doubled to 25-30 FPS when maximized. fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • Why is Safari on my computer rendering all of the colors -- not just for images -- incorrectly?

    - by richardhenry
    I’m not just talking about image color profile issues; every single color that the browser renders is incorrect. It’s like it’s in its own color space (or something!). Screenshot: http://drp.ly/DJk1O (Opera, Safari, Chrome, Firefox) Spot the odd one out? Open this up in Photoshop or similar and try using the eyedropper to select the colour. Safari renders the same hex color completely differently. That color is set using a background-color declaration in CSS, so it should be identical in all four of those browsers. Here’s the HTML I was using:

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html lang="en">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
            <title>untitled</title>
            <style type="text/css">
                body { background-color: #114742; }
            </style>
        </head>
        <body>
        </body>
        </html>

    Literally every website I’m viewing with my install of Safari is displaying colors incorrectly. The blue of the bar on Facebook is slightly less rich. This doesn’t occur on any other Macs I’ve tried. Any idea what’s happened to my Safari install?

    Read the article

  • PDF rendering of images seems to vary from viewer to viewer with blurring?

    - by AndyL
    I'm generating PDF figures in Adobe Illustrator CS5 that include embedded images. I've noticed that the images look dramatically different when I display the same PDF in Preview, Skim or Adobe Reader (I'm on OS X). See screenshots. Adobe Reader displays them "correctly" while Skim and Preview blurs the image out each in a different way. Is there a setting I can set when saving my PDF from Illustrator so that the images are displayed correctly in Skim and Preview? The PDF was generated in Illustrator and saved without any compression or downsampling. The original PDF is here: http://ge.tt/8iZMR2A Adobe Reader 9 Skim 1.3.18 Preview 4.2 Super User's client-side PDF renderer

    Read the article

  • MVC and individual elements of the model under a common base class

    - by Stewart
    Admittedly my experience of using the MVC pattern is limited. It might be argued that I don't really separate the V from the C, though I keep the M separate from the VC to the extent I can manage. I'm considering the scenario in which the application's model includes a number of elements that have a common base class. For example, enemy characters in a video game, or shape types in a vector graphics app. The view wants to render these elements. Of course, the different subclasses call for different rendering. The problem is that the elements are part of the model. Rendering them is conceptually part of the view. But how they are to be rendered depends on parameters of both:

    - Attributes and state of the element are parameters of the model
    - User settings are parameters of the view - and to support multiple platforms and/or view modes, different views may be used

    What's your preferred way of dealing with this?

    - Put the rendering code in the model classes, passing in any view parameters?
    - Put the rendering code in the view, using a switch or similar to select the right rendering for the model element type?
    - Have some intermediate classes as a model-view interface, of which the model will create objects on demand and the view will then render them?
    - Something else?
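
    One way to make the third option concrete (a sketch only; the class names are invented for illustration): the model elements carry only data, and the view owns a renderer per element type, so platform- or settings-specific drawing stays on the view side and the model never sees view parameters.

        using System;
        using System.Collections.Generic;

        // Model side: elements carry state only, no drawing code.
        abstract class Enemy { public float X, Y, Health; }
        class Grunt : Enemy { }
        class Boss  : Enemy { public int Phase; }

        // View side: settings plus one renderer per concrete model type.
        class ViewSettings { public bool HighDetail; }

        interface IEnemyRenderer { void Render(Enemy enemy, ViewSettings settings); }
        class GruntRenderer : IEnemyRenderer { public void Render(Enemy e, ViewSettings s) { /* draw a grunt */ } }
        class BossRenderer  : IEnemyRenderer { public void Render(Enemy e, ViewSettings s) { /* draw a boss */ } }

        class GameView
        {
            // The view owns the mapping, so swapping views (or platforms) swaps
            // renderers without touching the model classes.
            private readonly Dictionary<Type, IEnemyRenderer> renderers = new Dictionary<Type, IEnemyRenderer>
            {
                { typeof(Grunt), new GruntRenderer() },
                { typeof(Boss),  new BossRenderer()  },
            };

            public void Render(IEnumerable<Enemy> enemies, ViewSettings settings)
            {
                foreach (var enemy in enemies)
                    renderers[enemy.GetType()].Render(enemy, settings);
            }
        }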

    Read the article

  • How does a winkydink Teradici offer high res, full FPS, 3D rendering on ESXi 5 VDIs for AutoCAD/SolidWorks/1080p YouTube applications?

    - by BlueToast
    How does such a small Teradici card offer high resolution, full-FPS 3D graphics (at 1:38 in http://www.youtube.com/watch?v=eXA4QMmfY5Y&feature=player_detailpage#t=97s) for ESXi 5.0/5.1 VDI environments? We're shooting for an AutoCAD/SolidWorks/YouTube 1080p capable environment. I can't see how such a small, low profile card could possibly have the horsepower to handle that kind of GPU computation for a big environment like that. We're going to have up to 64 VDIs per server, and we are a 500-1000 employee sized company. Someone enlighten me please! We're determining which route to go (RemoteFX vs. VMware View/PCoIP) and which hardware (NVIDIA 4GB non-Quadro/Tesla GPUs vs. the Teradici card). The servers have three 4x, three 8x, and one 16x PCI-E slots. Two of the 8x slots will be occupied by SAS RAID cards.

    Read the article

  • How can I improve the rendering performance of this old DOS application?

    - by MicTech
    I have a very old DOS application (CadSoft Eagle - PCB Designer) and I want to work with it on my workstation with Windows 7. So I installed Windows 98 and that software in VMware Player, but the software has a serious problem with redrawing the screen: it's very slow in comparison with my Intel Celeron 333 MHz running Windows 98. I have the same problem if I try to use DOSBox on Windows XP (same Celeron 333 MHz). I also tried running this application directly on Windows XP (same Celeron 333 MHz) with compatibility mode set to "Windows 98", but I get "(0Dh): General Protection Fault". Can someone give me good advice on how to solve this?
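
    If the DOSBox route is still an option, redraw speed there is usually governed by a handful of dosbox.conf settings. A sketch of the values commonly suggested for slow screen updates (worth experimenting with rather than taking as a guaranteed fix for Eagle):

        # dosbox.conf (sketch) - settings that commonly affect redraw speed
        [sdl]
        output=overlay        # or ddraw/opengl; the fastest choice varies by machine

        [cpu]
        core=dynamic          # dynamic recompilation instead of pure interpretation
        cycles=max            # let DOSBox use as much host CPU as it can

        [render]
        frameskip=0
        scaler=none           # skip scaling overhead; use normal2x only if you need it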

    Read the article

  • Getting a null reference exception exporting a slightly complex RDLC report to Excel

    - by George Handlin
    Have an issue in exporting a slightly complex report to excel from an rdlc. Exporting a simple tabular report works fine, but getting a null reference exception with a more complex one. As is usually the case, it works on the server in the development environment, but not in the test environment. The server in the development environment has iis, sql and sql reporting services on it and test only has iis (I didn't set them up). I'm guessing there is something that gets installed with SSRS that is not installed with just the report viewer. The report has several parameters going into it and a few generic lists going in for datasources. Some tables for display as well as a list. Working with the 2005 version. Exception follows: [NullReferenceException: Object reference not set to an instance of an object.] Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderPageLayout(PageLayout pageLayout, Int32& currentPageNumber, Stack& stack) +91 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderPageCollection(PageCollection pageCollection, Int32& currentPageNumber, Stack& stack, PageLayout& lastPageLayout) +607 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.GenerateWorkSheets() +150 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.GenerateMainSheet() +358 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.RenderExcelWorkBook(CreateAndRegisterStream createAndRegisterStream) +158 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.ProcessReport(CreateAndRegisterStream createAndRegisterStream) +385 Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.Render(Report report, NameValueCollection reportServerParameters, NameValueCollection deviceInfo, NameValueCollection clientCapabilities, EvaluateHeaderFooterExpressions evaluateHeaderFooterExpressions, CreateAndRegisterStream createAndRegisterStream) +230 [ReportRenderingException: An error occurred during rendering of the report.] Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.Render(Report report, NameValueCollection reportServerParameters, NameValueCollection deviceInfo, NameValueCollection clientCapabilities, EvaluateHeaderFooterExpressions evaluateHeaderFooterExpressions, CreateAndRegisterStream createAndRegisterStream) +296 Microsoft.ReportingServices.ReportProcessing.ReportProcessing.RenderSnapshot(IRenderingExtension renderer, CreateReportChunk createChunkCallback, RenderingContext rc, GetResource getResourceCallback) +1124 [WrapperReportRenderingException: An error occurred during rendering of the report.] 
Microsoft.ReportingServices.ReportProcessing.ReportProcessing.RenderSnapshot(IRenderingExtension renderer, CreateReportChunk createChunkCallback, RenderingContext rc, GetResource getResourceCallback) +1681 Microsoft.Reporting.LocalService.RenderWithDataCache(PreviewItemContext itemContext, ParameterInfoCollection reportParameters, IEnumerable dataSources, DatasourceCredentialsCollection credentials, IRenderingExtension renderer, ReportProcessing repProc, CreateAndRegisterStream createStreamCallback, ReportRuntimeSetup runtimeSetup) +1693 Microsoft.Reporting.LocalService.Render(PreviewItemContext itemContext, Boolean allowInternalRenderers, ParameterInfoCollection reportParameters, IEnumerable dataSources, DatasourceCredentialsCollection credentials, CreateAndRegisterStream createStreamCallback, ReportRuntimeSetup runtimeSetup, ProcessingMessageList& warnings) +227 Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings) +235 [LocalProcessingException: An error occurred during local report processing.] Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings) +276 Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings) +189 Microsoft.Reporting.WebForms.LocalReport.Render(String format, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings) +28 Microsoft.Reporting.WebForms.LocalReportControlSource.RenderReport(String format, String deviceInfo, NameValueCollection additionalParams, String& mimeType, String& fileNameExtension) +52 Microsoft.Reporting.WebForms.ExportOperation.PerformOperation(NameValueCollection urlQuery, HttpResponse response) +153 Microsoft.Reporting.WebForms.HttpHandler.ProcessRequest(HttpContext context) +202 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +303 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +64

    Read the article

  • How do I work around an HTML Table rendering bug in IE 7?

    - by osmaniac
    I have a table. Some cells span multiple columns. Some cells span multiple rows and columns. But one row (which spans all columns but the rightmost one) creates an artifact: part of the text in the cell is erroneously repeated, left-justified, on a new row just below the table. I'm baffled. I tried rendering with and without "table-layout: fixed;" - same result. When I originally composed the design using just HTML and CSS, it looked great. But then I worked it into a page and had to add more columns on the right of my master table to hold buttons. These buttons are in three groups, each having its own div to control floating and rewrapping when the window gets narrower. One div has another table inside it that groups a single row of buttons. Thus I have a table inside a div inside a td inside the outer table. I would prefer a simpler design, but how? This is what I want to have:
    .................................................................................
    .                             .                                                 .
    . Four rows of data           . Three groups of buttons that can reflow        .
    . With several columns        . if window gets narrower                        .
    . meticulously laid out,      .                                                .
    . That should not resize      .                                                .
    . when window gets narrower   .                                                .
    .................................................................................
    . One more row of data spanning the whole screen which stays below the buttons .
    .................................................................................
    What I was doing was putting the three divs with the buttons in the upper right in a single cell that spanned four rows. What other opportunities does CSS offer? The buttons are not allowed to overlap the data on the left or go past the data line below. The original design had the divs with the buttons NOT in a table with the data, but when the window gets narrow, some of the buttons flow such that they go underneath the data, which looks bad. I would post actual HTML, except it is generated by ASP, it is huge, with lots of CSS styling, and the feature that lets me view the final HTML is not working at the moment. (Built-in security in the application.)
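    As a hedged sketch of one non-nested alternative (class names, widths and inline styles here are invented for illustration): keep the data in its own fixed-width table, give the button area a left margin so it sits beside the float and reflows only inside its own space, and clear the full-width bottom table so it always stays below both.

        <!-- Sketch only: placeholder names and sizes, not the real page's markup. -->
        <div class="layout">
          <table class="data" style="float: left; width: 560px; table-layout: fixed;">
            <!-- four rows of data, several meticulously laid out columns -->
          </table>
          <div class="buttons" style="margin-left: 580px;">
            <!-- the three button-group divs live here; they can rewrap inside this
                 area when the window narrows, but can never slide under the table -->
          </div>
          <table class="bottom" style="clear: both; width: 100%;">
            <!-- one more row of data spanning the whole screen, below the buttons -->
          </table>
        </div>

    Whether this removes the repeated text is a guess, but the artifact looks like one of IE7's float/duplicate-content bugs, and those tend to disappear once the rowspan/colspan nesting around the floated divs is simplified.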

    Read the article

  • IE6 rendering bug. Some parsed <li> elements are losing their closing tags.

    - by Jeff Fohl
    I have been working with IE6 for many years [sob], but have never seen this particular bug before, and I can't seem to find a reference to it on the Web. The problem appears to be with how IE6 is parsing the HTML of a nested list. Even though the markup is correct, IE6 somehow munges the code when it is parsed, and drops the closing tags of some of the <li> elements. For example, take the following code: <!DOCTYPE html> <head> <title>My Page</title> </head> <body> <div> <ul> <li><a href=''>Child A</a> <div> <ul> <li><a href=''>Grandchild A</a></li> </ul> </div> </li> <li><a href=''>The Child B Which Is Not A</a> <div> <ul> <li><a href=''>Grandchild B</a></li> <li><a href=''>Grandchild C</a></li> </ul> </div> </li> <li><a href=''>Deep Purple</a></li> <li><a href=''>Led Zeppelin</a></li> </ul> </div> </body> </html> Now take a look at how IE6 renders this code, after it has run it through the IE6 rendering engine: <HTML> <HEAD> <TITLE>My Page</TITLE></HEAD> <BODY> <DIV> <UL> <LI><A href="">Child A</A> <DIV> <UL> <LI><A href="">Grandchild A</A> </LI> </UL> </DIV> <LI><A href="">The Child B Which Is Not A</A> <DIV> <UL> <LI><A href="">Grandchild B</A> <LI><A href="">Grandchild C</A> </LI> </UL> </DIV> <LI><A href="">Deep Purple</A> <LI><A href="">Led Zeppelin</A> </LI> </UL> </DIV> </BODY> </HTML> Note how on some of the <li> elements there are no longer any closing tags, even though it existed in the source HTML. Does anyone have any idea what could be triggering this bug, and if it is possible to avoid it? It seems to be the source of some visual display problems in IE6. Many thanks for any advice.

    Read the article

  • Understanding "this" keyword

    - by Raffaele
    In this commit there is a change I cannot explain: deferred.done.apply( deferred, arguments ).fail.apply( deferred, arguments ); becomes deferred.done( arguments ).fail( arguments ); AFAIK, when you invoke a function as a member of some object, like obj.func(), this is bound to obj inside the function, so there would be no point in invoking the function through apply() just to bind this to obj. Yet, according to the comments, this was required because of the preceding $.Callbacks.add implementation. My question is not about jQuery, but about the JavaScript language itself: when you invoke a function like obj.func(), how can it be that inside func() the this keyword is not bound to obj?
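    A minimal standalone illustration of the rule being asked about (the object and function names are invented for the example): this is bound at call time by the form of the call, so obj.func() binds it to obj, but the moment the function value is detached from the object, that binding is gone.

        // Sketch only - names invented for illustration.
        var obj = {
            name: "obj",
            getName: function () {
                return this.name;
            }
        };

        obj.getName();         // "obj": invoked as obj.getName(), so this === obj

        var fn = obj.getName;  // the function value itself carries no binding
        fn();                  // not "obj": this is now the global object
                               // (or undefined in strict mode, where this line throws)

        fn.apply(obj);         // "obj" again: apply() supplies this explicitly

    So in the jQuery snippet, this inside done/fail is indeed the deferred when they are called as deferred.done(...); the apply() in the old code was presumably there to spread the array-like arguments object into separate parameters, which stopped being necessary once $.Callbacks.add accepted arrays - but that is an inference about the jQuery internals, not something the language rule above dictates.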

    Read the article

  • Why is this PHP loop rendering every row twice?

    - by Christopher
    I'm working on a real frankensite here not of my own design. There's a rudimentary CMS and one of the pages shows customer records from a MySQL DB. For some reason, it has no probs picking up the data from the DB - there's no duplicate records - but it renders each row twice. <?php $limit = 500; $area = 'customers_list'; $prc = 'customer_list.php'; if($_GET['page']) { include('inc/functions.php'); $page = $_GET['page']; } else { $page = 1; } $limitvalue = $page * $limit - ($limit); $customers_check = get_customers(); $customers = get_customers($limitvalue, $limit); $totalrows = count($customers_check); ?> <!-- pid: customer_list --> <table border="0" width="100%" cellpadding="0" cellspacing="0" style="float: left; margin-bottom: 20px;"> <tr> <td class="col_title" width="200">Name</td> <td></td> <td class="col_title" width="200">Town/City</td> <td></td> <td class="col_title">Telephone</td> <td></td> </tr> <?php for ($i = 0; $i < count($customers); $i++) { ?> <tr> <td colspan="2" class="cus_col_1"><a href="customer_details.php?id=<?php echo $customers[$i]['customer_id']; ?>"><?php echo $customers[$i]['surname'].', '.$customers[$i]['first_name']; ?></a></td> <td colspan="2" class="cus_col_2"><?php echo $customers[$i]['town']; ?></td> <td class="cus_col_1"><?php echo $customers[$i]['telephone']; ?></td> <td class="cus_col_2"> <a href="javascript: single_execute('prc/customers.prc.php?delete=yes&id=<?php echo $customers[$i]['customer_id']; ?>')" onClick="return confirmdel();" class="btn_maroon_small" style="margin: 0px; float: right; margin-right: 10px;"><div class="btn_maroon_small_left"> <div class="btn_maroon_small_right">Delete Account</div> </div></a> <a href="customer_edit.php?id=<?php echo $customers[$i]['customer_id']; ?>" class="btn_black" style="margin: 0px; float: right; margin-right: 10px;"><div class="btn_black_left"> <div class="btn_black_right">Edit Account</div> </div></a> <a href="mailto: <?php echo $customers[$i]['email']; ?>" class="btn_black" style="margin: 0px; float: right; margin-right: 10px;"><div class="btn_black_left"> <div class="btn_black_right">Email Customer</div> </div></a> </td> </tr> <tr><td class="col_divider" colspan="6"></td></tr> <?php }; ?> </table> <!--///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////--> <!--// PAGINATION--> <!--///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////--> <div class="pagination_holder"> <?php if($page != 1) { $pageprev = $page-1; ?> <a href="javascript: change('<?php echo $area; ?>', '<?php echo $prc; ?>?page=<?php echo $pageprev; ?>');" class="pagination_left">Previous</a> <?php } else { ?> <div class="pagination_left, page_grey">Previous</div> <?php } ?> <div class="pagination_middle"> <?php $numofpages = $totalrows / $limit; for($i = 1; $i <= $numofpages; $i++) { if($i == $page) { ?> <div class="page_number_selected"><?php echo $i; ?></div> <?php } else { ?> <a href="javascript: change('<?php echo $area; ?>', '<?php echo $prc; ?>?page=<?php echo $i; ?>');" class="page_number"><?php echo $i; ?></a> <?php } } if(($totalrows % $limit) != 0) { if($i == $page) { ?> <div class="page_number_selected"><?php echo $i; ?></div> <?php } else { ?> <a href="javascript: change('<?php echo $area; ?>', '<?php echo $prc; ?>?page=<?php echo $i; ?>');" class="page_number"><?php echo $i; ?></a> <?php } } ?> </div> <?php if(($totalrows - ($limit * 
$page)) > 0) { $pagenext = $page+1; ?> <a href="javascript: change('<?php echo $area; ?>', '<?php echo $prc; ?>?page=<?php echo $pagenext; ?>');" class="pagination_right">Next</a> <?php } else { ?> <div class="pagination_right, page_grey">Next</div> <?php } ?> </div> <!--///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////--> <!--// END PAGINATION--> <!--///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////--> I'm not the world's best PHP expert but I think I can see an error in a for loop when there is one... But everything looks ok to me. You'll notice that the customer name is clickable; clicking takes you to another page where you can view their full info as held in the DB - and for both rows, the customer ID is identical, and manually checking the DB shows there's no duplicate entries. The code is definitely rendering each row twice, but for what reason I have no idea. All pointers / advice appreciated.
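    One hedged way to narrow this down (the function and column names follow the snippet above, but where the duplication creeps in is an assumption): dump what get_customers() actually returns before the loop runs, so you can tell whether the rows arrive doubled from the data layer or are emitted twice by the template.

        <?php
        // Debugging sketch: checks whether get_customers() itself returns each
        // customer twice, or whether the duplication happens during rendering.
        $customers = get_customers($limitvalue, $limit);

        $ids = array();
        foreach ($customers as $row) {
            $ids[] = $row['customer_id'];
        }

        echo 'rows returned: ' . count($customers) . "<br />";
        echo 'distinct ids:  ' . count(array_unique($ids)) . "<br />";

        // If the two numbers differ, get_customers() is appending each record twice
        // (a classic cause is fetching a row once to "check" the result and then
        // again inside the while loop). If they match, the markup itself is being
        // included or echoed twice - e.g. the prc page rendering the table on both
        // the full page load and the AJAX change() call.
        ?>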

    Read the article

  • Another design-related C++ question

    - by Kotti
    Hi! I am trying to find some optimal solutions in C++ coding patterns, and this is one of my game engine - related questions. Take a look at the game object declaration (I removed almost everything, that has no connection with the question). // Abstract representation of a game object class Object : public Entity, IRenderable, ISerializable { // Object parameters // Other not really important stuff public: // @note Rendering template will never change while // the object 'lives' Object(RenderTemplate& render_template, /* params */) : /*...*/ { } private: // Object rendering template RenderTemplate render_template; public: /** * Default object render method * Draws rendering template data at (X, Y) with (Width, Height) dimensions * * @note If no appropriate rendering method overload is specified * for any derived class, this method is called * * @param Backend & b * @return void * @see */ virtual void Render(Backend& backend) const { // Render sprite from object's // rendering template structure backend.RenderFromTemplate( render_template, x, y, width, height ); } }; Here is also the IRenderable interface declaration: // Objects that can be rendered interface IRenderable { /** * Abstract method to render current object * * @param Backend & b * @return void * @see */ virtual void Render(Backend& b) const = 0; } and a sample of a real object that is derived from Object (with severe simplifications :) // Ball object class Ball : public Object { // Ball params public: virtual void Render(Backend& b) const { b.RenderEllipse(/*params*/); } }; What I wanted to get is the ability to have some sort of standard function, that would draw sprite for an object (this is Object::Render) if there is no appropriate overload. So, one can have objects without Render(...) method, and if you try to render them, this default sprite-rendering stuff is invoked. And, one can have specialized objects, that define their own way of being rendered. I think, this way of doing things is quite good, but what I can't figure out - is there any way to split the objects' "normal" methods (like Resize(...) or Rotate(...)) implementation from their rendering implementation? Because if everything is done the way described earlier, a common .cpp file, that implements any type of object would generally mix the Resize(...), etc methods implementation and this virtual Render(...) method and this seems to be a mess. I actually want to have rendering procedures for the objects in one place and their "logic implementation" - in another. Is there a way this can be done (maybe alternative pattern or trick or hint) or this is where all this polymorphic and virtual stuff sucks in terms of code placement?
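    One hedged way to get that separation without giving up the virtual Render() dispatch (the file layout and the Ball members are invented for the sketch, reusing the Object, Backend and RenderTemplate types from the question): keep the declaration in one header and let each concern live in its own translation unit - C++ is happy to have a class's member functions defined across several .cpp files.

        // ball.h - declaration only, shared by both implementation files
        class Ball : public Object {
        public:
            explicit Ball(RenderTemplate& render_template);

            // "logic" interface
            void Resize(float radius);
            void Rotate(float angle);

            // rendering interface (IRenderable, via Object)
            virtual void Render(Backend& b) const;

        private:
            float radius_;
            float angle_;
        };

        // ball_logic.cpp - nothing but game logic
        Ball::Ball(RenderTemplate& render_template)
            : Object(render_template), radius_(0.0f), angle_(0.0f) {}

        void Ball::Resize(float radius) { radius_ = radius; }
        void Ball::Rotate(float angle)  { angle_ = angle; }

        // ball_render.cpp - nothing but drawing; the linker stitches the class together
        void Ball::Render(Backend& b) const {
            b.RenderEllipse(/* position, radius_, angle_, ... */);
        }

    If you later want rendering pulled out of the hierarchy entirely, the usual next step is a separate visitor-style Renderer that receives a const reference to each object, but the per-file split above already keeps Resize()/Rotate() and Render() implementations apart while preserving the default sprite-drawing fallback in Object::Render().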

    Read the article

  • WinForms TabControl: How to avoid tab-rendering on DrawMode=OwnerDrawFixed?

    - by splattne
    I extended the built-in (Windows Forms) TabControl in order to implement closing tabs right on the tab itself (an "x" image on the right, like in web browsers). The inherited control renders the text and images, and it uses visual styles on hover etc. All of this works very well, but I have one problem I can't solve: when the tabs are rendered, I cannot prevent the control from painting the tabs a second time, as seen in this screenshot: That's a problem for two reasons:
    1. It looks ugly on the selected tab
    2. I'd like to increase the tabs' width because the current width doesn't take the "x" image on the right into account
    Any idea how to solve this?
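    A hedged sketch of one direction (it assumes nothing beyond the stock TabControl members; the sizes and the close-glyph handling are placeholders for whatever the derived control already does): switch to a fixed tab size so the reserved width includes the "x" image, and paint the full tab rectangle yourself before drawing text, so anything the base control painted underneath is covered.

        // Sketch only: sizes and drawing details are placeholders.
        using System.Drawing;
        using System.Windows.Forms;

        public class ClosableTabControl : TabControl
        {
            public ClosableTabControl()
            {
                DrawMode = TabDrawMode.OwnerDrawFixed;

                // Problem 2: with a fixed size mode the tab width is no longer measured
                // from the text alone, so there is room reserved for the "x" glyph.
                SizeMode = TabSizeMode.Fixed;
                ItemSize = new Size(140, 22);   // made-up width that includes the glyph

                // Reduce flicker from the base control repainting underneath.
                SetStyle(ControlStyles.OptimizedDoubleBuffer
                       | ControlStyles.AllPaintingInWmPaint, true);
                UpdateStyles();
            }

            protected override void OnDrawItem(DrawItemEventArgs e)
            {
                Rectangle tab = GetTabRect(e.Index);

                // Problem 1: fill the whole tab rectangle first, so whatever was painted
                // there before is covered before the text and the glyph go on top.
                using (var back = new SolidBrush(SystemColors.Control))
                {
                    e.Graphics.FillRectangle(back, tab);
                }

                TextRenderer.DrawText(e.Graphics, TabPages[e.Index].Text, Font,
                    new Rectangle(tab.X + 4, tab.Y, tab.Width - 24, tab.Height),
                    SystemColors.ControlText,
                    TextFormatFlags.Left | TextFormatFlags.VerticalCenter);

                // ... draw the "x" close image in the remaining ~20px on the right ...

                base.OnDrawItem(e);
            }
        }

    Whether this removes the double-painted artifact completely depends on what the existing override already paints, so treat it as a direction rather than a fix; if the stray rendering comes from the non-client area, intercepting the paint message is the heavier follow-up, but that is beyond this sketch.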

    Read the article
