Search Results

Search found 664 results on 27 pages for 'reducing'.

Page 20/27 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • jQuery: select checkbox when upload field is clicked

    - by Arne Stephensson
    I'm working on a CMS. When a user wants to replace an existing thumbnail image, they can upload a replacement, and check a checkbox to certify that they are indeed replacing the image. The checkbox seems necessary because otherwise the form submits a blank value (due to the presence of the upload field with no value) and wipes out the existing image without a replacement image being specified. The problem is that I keep forgetting to manually activate the checkbox and the uploaded image does not actually replace the existing one because without the checkbox being clicked, the receiving PHP file does not (by design) process the incoming filename. I suspect that users will have the same problem and I want to keep this from happening by reducing the number of steps required. One solution would be to check the checkbox with jQuery whenever the upload field is clicked. So why doesn't this work:

        $('#upload_field').click(function () {
            if ($('#replace_image').attr('checked',true)) {
                $('#replace_image').removeAttr('checked');
                alert('Unchecked');
            } else if ($('#replace_image').attr('checked',false)) {
                $('#replace_image').attr('checked','checked');
                alert('Checked');
            }
        });
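    For what it's worth, the usual explanation for this snippet: .attr('checked', true) with two arguments sets the attribute rather than testing it, and the setter returns a jQuery object (always truthy), so the first branch always runs. A hedged sketch of the same handler with the state tested via .is(':checked') instead (IDs taken from the question); if the box should simply always end up ticked once a file is chosen, the body can be reduced to the single .attr() call:

        $('#upload_field').click(function () {
            if ($('#replace_image').is(':checked')) {
                $('#replace_image').removeAttr('checked');
                alert('Unchecked');
            } else {
                // force the box on so the PHP side will process the new filename
                $('#replace_image').attr('checked', 'checked');
                alert('Checked');
            }
        });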

    Read the article

  • TicTacToe strategic reduction

    - by NickLarsen
    I decided to write a small program that solves TicTacToe in order to try out the effect of some pruning techniques on a trivial game. The full game tree using minimax to solve it only ends up with 549,946 possible games. With alpha-beta pruning, the number of states required to evaluate was reduced to 18,297. Then I applied a transposition table that brings the number down to 2,592. Now I want to see how low that number can go. The next enhancement I want to apply is a strategic reduction. The basic idea is to combine states that have equivalent strategic value. For instance, on the first move, if X plays first, there is nothing strategically different (assuming your opponent plays optimally) about choosing one corner instead of another. In the same situation, the same is true of the middles of the edges of the board, and the center is also significant. By reducing to significant states only, you end up with only 3 states for evaluation on the first move instead of 9. This technique should be very useful since it prunes states near the top of the game tree. This idea came from the GameShrink method created by a group at CMU, only I am trying to avoid writing the general form, and just doing what is needed to apply the technique to TicTacToe. In order to achieve this, I modified my hash function (for the transposition table) to enumerate all strategically equivalent positions (using rotation and flipping functions), and to only return the lowest of the values for each board. Unfortunately, my program now thinks X can force a win in 5 moves from an empty board when going first. After a long debugging session, it became apparent to me that the program was always returning the move stored for the lowest-valued strategically equivalent state (I store the last move in the transposition table as part of my state). Is there a better way I can go about adding this feature, or a simple method for determining the correct move applicable to the current situation with what I have already done?
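    One common way to keep symmetry reduction from corrupting the move bookkeeping is to remember which transform mapped the real board onto its canonical form, and to map the stored move back through that permutation before returning it. A rough Python sketch of the idea (board as a 9-element list, indices 0-8; all names here are illustrative, not taken from the original program):

        ROT  = [6, 3, 0, 7, 4, 1, 8, 5, 2]   # rotate 90 degrees clockwise
        FLIP = [2, 1, 0, 5, 4, 3, 8, 7, 6]   # mirror left-right

        def transform(perm, cells):
            # new_cells[i] = cells[perm[i]]
            return [cells[perm[i]] for i in range(9)]

        def symmetries():
            # the 8 rotations/reflections of the board, as index permutations
            syms, cur = [], list(range(9))
            for _ in range(4):
                syms.append(cur)
                syms.append(transform(FLIP, cur))
                cur = transform(ROT, cur)
            return syms

        def canonical(board):
            # lowest-sorting equivalent board, plus the permutation that produced it
            best, best_perm = None, None
            for perm in symmetries():
                cand = transform(perm, board)
                if best is None or cand < best:
                    best, best_perm = cand, perm
            return tuple(best), best_perm

        def to_real_move(canonical_move, perm):
            # canonical cell i came from real cell perm[i]
            return perm[canonical_move]

        board = [1, 0, 0,  0, 0, 0,  0, 0, 0]    # X in the top-left corner
        key, perm = canonical(board)
        stored_move = 4                          # pretend the table says "centre" for this key
        print(to_real_move(stored_move, perm))   # 4 - the centre is fixed under every symmetry

    Storing moves in canonical coordinates and converting them on the way out keeps the table small without ever handing back a move for the wrong orientation of the board.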

    Read the article

  • Any useful suggestions to figure out where memory is being free'd in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour: During a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5GB (Note: A registry key has been set to allow this, as usually there is a maximum of 2GB for a process under 32-bit Windows) After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second. I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being free'd. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (IOW, I want to do this without re-compiling my code). Can anyone offer any useful suggestions of how I could do this via the Visual Studio debugger? Update I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this will cause the application to crash.

    Read the article

  • Word forms with too many ActiveX checkboxes load slowly.

    - by Luke
    Hi there, my company's software product has a feature that allows users to generate forms from Word templates. The program auto-fills some fields from the SQL database and the user can fill in other data that they desire. So we have a .dotx template that holds the design of the form, and then the user gets the .docx file to fill out when they call it from our program. The problem we're having is that some of our users have been finding that the forms take an exceptionally long time to open up and then, once open, are so slow to respond (scroll around, etc.) that they're unusable. So in my investigations so far, I've found out that the problem systems are ones with lower-powered CPUs (unfortunately it happens for systems above our system requirements) and the Word forms that cause the problems are ones with a large number of ActiveX-style checkboxes on them. I verified that reducing the ActiveX checkboxes fixes the form loading problems. So I have the following questions about solutions (we're using Word 2007):
    1) Is there any way to configure Word, or some other settings, so that there won't be such a strain opening a Word form with lots of ActiveX checkboxes? Any way of speeding up Word's opening?
    2) Using legacy-style checkboxes instead of the ActiveX ones makes the forms load fine, but it looks like the user has to double-click the checkbox and change Default Value to Checked. Is there a way to configure it so that they can simply click on the checkbox to tick it? "Legacy Forms" as a checkbox name kind of worries me (Legacy…); does that mean a future version of Word at some point wouldn't load the checkboxes because they're "legacy"?
    3) Yes, it became clear to me after a little bit of research into solutions that Word is not the tool for the job for forms like I'm describing. InfoPath seems to be exactly what we should have been using all along, but unfortunately I wasn't involved in the decision making or development of these forms, just tasked with coming up with a solution.
    I'd appreciate answers to any of these, or if anyone has any other ideas for solutions to this problem. Thanks

    Read the article

  • What different terms mean the same thing (or don't, but people think they do)?

    - by Matthew Jones
    One of the pitfalls I run into on a daily basis is customers saying one thing while meaning another. Usually, this is just due to a miscommunication somewhere, but occasionally they are, in fact, saying the same thing I am just using a different term. For example, one of my customers the other day mentioned a feature he called, "find as you type." Being a little confused, I asked him what he meant, and he described the feature in Google where, once you start typing a search query, Google suggests other, popular queries that match the letters you have typed. Click! He meant AutoComplete! He was not wrong, it is just that I had never heard that term before. In the spirit of reducing confusion, what terms can you think of that are different but mean, essentially, the same thing? Also, what terms do people think mean the same thing, but don't. Please differentiate between the two. Please only one set of terms per answer, so we can vote on the best ones.

    Read the article

  • Idiomatic PHP web page creation

    - by GreenMatt
    My PHP experience is rather limited. I've just inherited some stuff that looks odd to me, and I'd like to know if this is a standard way to do things. The page which shows up in the browser location (e.g. www.example.com/example_page) has something like:

        <? $title = "Page Title";
           $meta = "Some metadata";
           require("pageheader.inc");
        ?>
        <!-- Content -->

    Then pageheader.inc has stuff like:

        <? @$title = ($title) ? $title : "";
           @$meta = ($meta) ? $meta : "";
        ?>
        <html>
        <head>
        <title><?=$title?></title>
        </head>
        <!-- and so forth -->

    Maybe others find this style useful, but it confuses me. I suppose this could be a step toward a rudimentary content management system, but the way it works here I'd think it adds to the processing the server has to do without reducing the load on the web developer enough to make it worth the effort. So, is this a normal way to create pages with PHP? Or should I pull all this in favor of a better approach? Also, I know that "<?" (vs. "<?php") is undesirable; I'm just reproducing what is in the code.
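    For what it's worth, the pattern itself (set a couple of variables, then require a shared header) is common in plain PHP sites; the parts most people would change are the error-suppressed @$title lines and the short open tags. A small sketch of the same idea with full tags, isset() defaults and output escaping (file names are just examples, not from the inherited code):

        <?php
        // example_page.php
        $title = "Page Title";
        $meta  = "Some metadata";
        require "pageheader.inc";
        ?>
        <!-- Content -->

        <?php
        // pageheader.inc
        $title = isset($title) ? $title : "";
        $meta  = isset($meta)  ? $meta  : "";
        ?>
        <html>
        <head>
            <title><?php echo htmlspecialchars($title); ?></title>
            <meta name="description" content="<?php echo htmlspecialchars($meta); ?>">
        </head>
        <!-- and so forth -->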

    Read the article

  • How do I stop a bouncy JQuery animation?

    - by Miguel
    In a webapp I'm working on, I want to create some slider divs that will move up and down with mouseover & mouseout (respectively). I currently have it implemented with JQuery's hover() function, by using animate() and reducing/increasing its top css value as needed. This works fairly well, actually. The problem is that it tends to get stuck. If you move the mouse over it (especially near the bottom), and quickly remove it, it will slide up & down continuously and won't stop until it's completed 3-5 cycles. To me, it seems that the issue might have to do with one animation starting before another is done (e.g. the two are trying to run, so they slide back and forth). Okay, now for the code. Here's the basic JQuery that I'm using:

        $('.slider').hover(
            /* mouseover */
            function(){
                $(this).animate({ top : '-=120' }, 300);
            },
            /* mouseout */
            function(){
                $(this).animate({ top : '+=120' }, 300);
            }
        );

    I've also recreated the behavior in a JSFiddle. Any ideas on what's going on? :)
    ==EDIT== UPDATED JSFiddle
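    The queueing described is usually what causes this: each hover in/out appends another animation to the element's queue. A small sketch of the common remedy - clearing the queue with .stop() before starting the next animation (same selectors and distances as in the question):

        $('.slider').hover(
            function () {
                // stop(clearQueue, jumpToEnd) drops any queued slides before animating
                $(this).stop(true, true).animate({ top: '-=120' }, 300);
            },
            function () {
                $(this).stop(true, true).animate({ top: '+=120' }, 300);
            }
        );

    Animating to absolute top values instead of the relative '+='/'-=' pair also avoids any drift if an animation is ever interrupted part-way.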

    Read the article

  • unexpected behaviour of object stored in web service Session

    - by draconis
    Hi. I'm using Session variables inside a web service to maintain state between successive method calls by an external application called QBWC. I set this up by decorating my web service methods with this attribute: [WebMethod(EnableSession = true)]. I'm using the Session variable to store an instance of a custom object called QueueManager. The QueueManager has a property called ChangeQueue which looks like this:

        [Serializable]
        public class QueueManager
        {
            ...
            public Queue<QBChange> ChangeQueue { get; set; }
            ...

    where QBChange is a custom business object belonging to my web service. Now, every time I get a call to a method in my web service, I use this code to retrieve my QueueManager object and access my queue: QueueManager qm = (QueueManager)Session[ticket]; then I remove an object from the queue, using qm.dequeue(), and then I save the modified QueueManager object (modified because it contains one less object in the queue) back to the Session variable, like so: Session[ticket] = qm; ready for the next web service method call using the same ticket. Now here's the thing: if I comment out this last line //Session[ticket] = qm; then the web service behaves exactly the same way, reducing the size of the queue between method calls. Now why is that? The web service seems to be updating a class contained in serialized form in a Session variable without being asked to. Why would it do that? When I deserialize my QueueManager object, does the qm variable hold a reference to the serialized object inside the Session[ticket] variable? This seems very unlikely.
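    This is, in fact, the expected behaviour with the default in-process session state: Session[ticket] hands back a live reference to the same QueueManager instance, not a deserialized copy, so mutating it mutates what the session already holds (the [Serializable] attribute only comes into play for out-of-process modes such as StateServer or SQL Server). A small illustration, assuming the types from the question:

        [WebMethod(EnableSession = true)]
        public int PendingChanges(string ticket)
        {
            // With InProc session state this is a live reference,
            // not a deserialized copy of the stored QueueManager.
            QueueManager qm = (QueueManager)Session[ticket];

            qm.ChangeQueue.Dequeue();   // mutates the object the session already holds

            // Redundant for InProc (qm already *is* the stored instance), but harmless,
            // and required if the app ever moves to StateServer or SQL Server mode.
            Session[ticket] = qm;

            return qm.ChangeQueue.Count;
        }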

    Read the article

  • MATLAB: svds() not converging

    - by Paul
    So using MATLAB's svds() function on some input data as such: [U, S, V, flag] = svds(data, nSVDs, 'L') I noticed that from run to run with the same data, I'd get drastically different output SVD sizes. When I checked whether 'flag' was set, I found that it was, indicating that the SVDs had not converged. My normal approach here would be that if it really needs to converge, I'd do something like this:

        flag = 1;
        svdOpts = struct('tol', 1e-10, 'maxit', 600, 'disp', 0);
        while flag
            if svdOpts.maxit > 1e6
                error('There''s a real problem here.')
            end
            [U, S, V, flag] = svds(data, nSVDs, 'L', svdOpts);
            svdOpts.maxit = svdOpts.maxit*2;
        end

    But from what I can tell, when you use 'L' as the third argument, the fourth argument is ignored, meaning I just have to deal with the fact that it's not converging? I'm not even really sure how to use the 'sigma' argument in place of the 'L' argument. I've also tried reducing the number of SVDs calculated, to no avail. Any help on this matter would be much appreciated.
    EDIT: While following up on the comments below, I found that the problem had to do with the way I was building my data matrices. It turned out I had accidentally inverted a matrix and had an input of size (4000x1) rather than (20x200), which was what was refusing to converge. I also did some more timing tests and found that the fourth argument was not, in fact, being ignored, so that's on me. Thanks for the help guys.

    Read the article

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application that is running on the site. If we put a few static file maps (for .js, .css, .png, etc) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching the integrated mode in terms of performance? The thing i'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow for the output cache to handle non ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically i've found that the two configurations operate on par when things go well, but if the page references resources that are not available, the integrated mode tends to fail much more quickly than the classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.

    Read the article

  • iPhone OpenGLES: Textures are consuming too much memory and the program crashes with signal "0"

    - by CustomAppsMan
    I am not sure what the problem is. My app is running fine on the simulator, but when I try to run it on the iPhone it crashes during debugging, or without debugging with signal "0". I am using the Texture2D.m and OpenGLES2DView.m from the examples provided by Apple. I profiled the app on the iPhone with Instruments using the Memory tracer from the Library, and when the app died the final memory consumed was about 60Mb real and 90+Mb virtual. Is there some other problem, or is the iPhone just killing the application because it has consumed too much memory? If you need any information please state it and I will try to provide it. I am creating thousands of textures at load time, which is why the memory consumption is so high. I really can't do anything about reducing the number of pics being loaded. I was running before on just UIImage but it was giving me really low frame rates. I read on this site that I should use OpenGLES for higher frame rates. Also, as a sub-question: is there any way not to use UIImage to load the png file and then use the Texture class provided to create the texture for OpenGLES functions to use it for drawing? Is there some function in OpenGLES which will create a texture straight from a png file?

    Read the article

  • Reduce file size for charts pasted from excel into word

    - by Steve Clanton
    I have been creating reports by copying some charts and data from an Excel document into a Word document. I am pasting into a content control, so I use ChartObject.CopyPicture in Excel and ContentControl.Range.Paste in Word. This is done in a loop:

        Set ws = ThisWorkbook.Worksheets("Charts")
        With ws
            For Each cc In wordDocument.ContentControls
                If cc.Range.InlineShapes.Count > 0 Then
                    scaleHeight = cc.Range.InlineShapes(1).scaleHeight
                    scaleWidth = cc.Range.InlineShapes(1).scaleWidth
                    cc.Range.InlineShapes(1).Delete
                    .ChartObjects(cc.Tag).CopyPicture Appearance:=xlScreen, Format:=xlPicture
                    cc.Range.Paste
                    cc.Range.InlineShapes(1).scaleHeight = scaleHeight
                    cc.Range.InlineShapes(1).scaleWidth = scaleWidth
                ElseIf ...
            Next cc
        End With

    Creating these reports using Office 2007 yielded files that were around 6MB, but creating them (using the same worksheet and document) in Office 2010 yields a file that is around 10 times as large. After unzipping the docx, I found that the extra size comes from emf files that correspond to charts that are pasted in using VBA. Where they ranged from 360 to 900 KB before, they are now 5-18 MB. And the graphics are not visibly better. I am able to CopyPicture with the format xlBitmap, and while that is somewhat smaller, it is larger than the emf generated by Office 2007 and noticeably poorer quality. Are there any other options for reducing the file size? Ideally, I would like to produce a file with the same resolution for the charts as I did using Office 2007. Is there any way that uses VBA only (without modifying the charts in the spreadsheet)?

    Read the article

  • Hashing 11 byte unique ID to 32 bits or less

    - by MoJo
    I am looking for a way to reduce a 11 byte unique ID to 32 bits or fewer. I am using an Atmel AVR microcontroller that has the ID number burned in at the factory, but because it has to be transmitted very often in a very low power system I want to reduce the length down to 4 bytes or fewer. The ID is guaranteed unique for every microcontroller. It is made up of data from the manufacturing process, basically the coordinates of the silicone on the wafer and the production line that was used. They look like this: 304A34393334-16-11001000 314832383431-0F-09000C00 Obviously the main danger is that by reducing these IDs they become non-unique. Unfortunately I don't have a large enough sample size to test how unique these numbers are. Having said that because there will only be tens of thousands of devices in use and there is secondary information that can be used to help identify them (such as their approximate location, known at the time of communication) collisions might not be too much of an issue if they are few and far between. Is something like MD5 suitable for this? My concern is that the data being hashed is very short, just 11 bytes. Do hash functions work reliably on such short data?
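    As a rough sketch (Python here just for the arithmetic): cryptographic hashes behave fine on inputs this short, and any decent hash truncated to 4 bytes works; the real question is the birthday bound. With tens of thousands of devices in a 32-bit space, the expected number of colliding pairs stays well below one, which fits the "secondary information can disambiguate" plan:

        import hashlib

        def short_id(unique_id: bytes) -> int:
            """Fold the 11-byte factory ID down to 32 bits (first 4 bytes of SHA-256)."""
            digest = hashlib.sha256(unique_id).digest()
            return int.from_bytes(digest[:4], "big")

        device = bytes.fromhex("304A34393334" "16" "11001000")   # 11 bytes, first example ID
        print(hex(short_id(device)))

        # Birthday approximation: expected colliding pairs = n*(n-1) / (2 * 2**32)
        n = 50_000
        print(n * (n - 1) / (2 * 2**32))   # roughly 0.29 expected collisions for 50k devices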

    Read the article

  • What could the negative effects be of attaching to a process as a debugger?

    - by I_like_traffic_lights
    Background: A client of mine has a major problem. They have a CRM system, which was created by a single person over a period of 9 years. Unfortunately, a few weeks ago, this person died. I believe the company has learned their lesson, and they have started a project of rewriting the CRM system on a modern platform. I have been hired to create a solution in the meantime to make adaptations to the CRM system. I have given up understanding the code, as this would take too long. My solution, therefore, is to make a window and show this on top of the CRM system whenever the CRM system is showing. This part works fine, but my major problem is extracting the data from the CRM system.
    Proposed solution: After excluding 6 approaches, including runtime code injection, memory searching, and database integration, I have arrived at attaching to the process as a debugger, so I get notified about events, and using this in combination with reading from process memory. This approach seems to work, but I am worried about possible side-effects of this approach.
    Question: What are the dangers of using this in a production environment where there are 250 employees utilizing the system? Needless to say, I cannot risk reducing the already shaky stability of the system.

    Read the article

  • JS file called twice in Wordpress

    - by Dxr Tw
    I'm trying to reduce the number of requests by reducing the number of JS files on a WP site. I successfully combined around 7 javascript files into one (site.js). Now, I'm using a plugin which has its own JS file (pluginA.js), and I want to include pluginA.js in that site.js. However, if I simply copy pluginA's JS content into site.js and then change the location to /files/site.js, Firebug's NET tab shows that site.js is requested/called twice. I presume this is due to wp_enqueue_script. How can I make it not call site.js a second time but just look into the already loaded site.js? Maybe there's an alternative to wp_enqueue_script? Plugin's php file:

        add_action( 'wp_enqueue_scripts', 'testplugin_scripts');
        function testplugin_scripts() {
            /*global $testplugin_version; */
            $default_selector = 'li:has(ul) > a';
            $default_selector_leaf = 'li li li:not(:has(ul)) > a';
            wp_enqueue_scripts('test-plugin', site_url('/files/site.js', __FILE__), array('jquery'), $testplugin_version);
            $params = array(
                'selector' => apply_filters('testplugin_selector', $default_selector),
                'selector_leaf' => apply_filters('testplugin_selector_leaf', $default_selector_leaf)
            );
            wp_localize_script('test-plugin', 'testplugin_params', $params);
        }
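    One hedged way to avoid the second request, assuming the combined site.js really does contain the plugin code: deregister the plugin's handle and re-register the combined file under that same handle, from the theme. Handle, path and priority below are guesses based on the question, and the plugin's wp_localize_script data may need to be re-added after deregistering, since deregistering drops it:

        // In the theme's functions.php - priority 20 so this runs after the plugin's callback
        add_action('wp_enqueue_scripts', 'use_combined_site_js', 20);
        function use_combined_site_js() {
            // Drop the plugin's registration so pluginA.js is never requested on its own...
            wp_deregister_script('test-plugin');
            // ...and serve the already combined file under the same handle instead.
            wp_register_script('test-plugin', site_url('/files/site.js'), array('jquery'), null, false);
            wp_enqueue_script('test-plugin');
        }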

    Read the article

  • Creation of model in core data on the fly

    - by user1740045
    How can we create a model in Core Data on the fly? I.e., getting the schema of the database from somewhere and then creating a Core Data object graph?
    Question: Yes, that's fine, agreed with all the advantages. But can anybody tell me, practically, what the benefit is of integrating Core Data into a project instead of using SQL directly?
    1. No need to write SQL boilerplate code [but need to learn the Core Data model (steep curve)]
    2. We can undo and redo changes [but practically who needs it]
    3. We can migrate to another schema [that can be done with SQLite as well; just need to add another field to the table]
    4. For, say, aggregation on some field in a table, in Core Data we need to loop through Core Data objects, whereas in SQLite we first write the SQLite boilerplate code and then the basic aggregation SQL query, which is easy to write; only the length of code will increase... But in the case of Core Data (need to learn a lot).
    So apart from reducing the length of code, does it actually add value to the project? Or in terms of memory efficiency, performance, etc.?
    PS: If anybody has actually worked on Core Data (model creation on the fly), if possible share and give pointers.. thanks!
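    On the "on the fly" part: a managed object model does not have to come from an .xcdatamodel file; it can be built in code from whatever schema description you receive. A minimal Objective-C sketch for one entity with one string attribute (entity and attribute names are made up for illustration):

        // Build an NSManagedObjectModel entirely in code.
        NSAttributeDescription *nameAttr = [[NSAttributeDescription alloc] init];
        nameAttr.name = @"name";
        nameAttr.attributeType = NSStringAttributeType;
        nameAttr.optional = YES;

        NSEntityDescription *person = [[NSEntityDescription alloc] init];
        person.name = @"Person";
        person.managedObjectClassName = @"NSManagedObject";
        person.properties = [NSArray arrayWithObject:nameAttr];

        NSManagedObjectModel *model = [[NSManagedObjectModel alloc] init];
        model.entities = [NSArray arrayWithObject:person];
        // The model can now back an NSPersistentStoreCoordinator as usual.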

    Read the article

  • Python : How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.
    Problem: Profiling the application gives me the time taken by functions. This time increases under high load. Time consumed may be due to complex calculation or due to waiting for CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!
    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec [100 users]. I get an average time of 36 seconds over 100 requests vs 1.25 sec on 1 request.
    More info: Configuration: Nginx + uWSGI with 4 workers. No database used; using responses from a REST API. On the 1st hit the response of the REST API gets cached, so it doesn't make a difference. Using ujson for JSON parsing.
    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
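    One hedged way to separate "burning CPU" from "waiting for CPU": compare process CPU time against wall-clock time around the suspect block, then let cProfile attribute the CPU part. Standard library only; handle_request here is a stand-in, not code from the app:

        import cProfile
        import time

        def handle_request():
            # stand-in for the view / API-parsing code being measured
            sum(i * i for i in range(200000))

        wall_start = time.perf_counter()
        cpu_start = time.process_time()        # CPU seconds used by this process
        handle_request()
        cpu_used = time.process_time() - cpu_start
        wall_used = time.perf_counter() - wall_start
        print("CPU %.3fs vs wall %.3fs" % (cpu_used, wall_used))
        # wall >> CPU means the time went to waiting (I/O, scheduling), not computing

        cProfile.run("handle_request()", sort="cumulative")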

    Read the article

  • Supersized, show a div depending on the slide

    - by Dlacrem
    EDIT: So, I want to take a different approach to try and clarify what I'm looking for. I've been reading as much as I could these last hours and I feel I have a better grasp of how the system works, but I still don't know how to accomplish my goal. Reference links: -SUPERSIZED API. So, I have the following script:

        <script type="text/javascript">
        $(document.body).ready(function () {
            if (vars.current_slide == 5){ **-I want it to display a div if the slider is the number 5-** }
        });
        </script>

    My questions are these: 1) What do I add in the marked area to display a div that only shows up when the slider is at the 5th slide? 2) Am I doing the rest right or am I missing something? I created that code so it could be terribly wrong lol, hopefully I got something right :P That's all, hope this clarifies it. Thank you all for the help! I'm really excited to start creating my own scripts :D Alright, so I've uploaded the site with the code you posted and it still doesn't seem to work. I also tried reducing the code to the conditional and the result only, but that didn't work either; mind checking it out? Click Here. Thanks for giving it a thought!

    Read the article

  • Given a 2d array sorted in increasing order from left to right and top to bottom, what is the best way to search it?

    - by Phukab
    I was recently given this interview question and I'm curious what a good solution to it would be. Say I'm given a 2d array where all the numbers in the array are in increasing order from left to right and top to bottom. What is the best way to search and determine if a target number is in the array? Now, my first inclination is to utilize a binary search since my data is sorted. I can determine if a number is in a single row in O(log N) time. However, it is the 2 directions that throw me off. Another solution I could use, if I could be sure the matrix is n x n, is to start at the middle. If the middle value is less than my target, then I can be sure the target is not in the upper-left square portion of the matrix up to the middle. I then move diagonally and check again, reducing the size of the square that the target could potentially be in until I have honed in on the target number. Does anyone have any good ideas on solving this problem? Example array, sorted left to right, top to bottom:

        1  2  4  5  6
        2  3  5  7  8
        4  6  8  9 10
        5  8  9 10 11
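    For reference, the answer interviewers are usually after: start at the top-right (or bottom-left) corner and discard either a row or a column at every step, which runs in O(rows + cols) and needs no assumption that the matrix is square. A quick Python sketch using the example array above:

        def search_sorted_matrix(matrix, target):
            """Rows and columns are both sorted ascending. Returns (row, col) or None."""
            if not matrix or not matrix[0]:
                return None
            row, col = 0, len(matrix[0]) - 1          # start at the top-right corner
            while row < len(matrix) and col >= 0:
                value = matrix[row][col]
                if value == target:
                    return (row, col)
                if value > target:
                    col -= 1      # everything below in this column is even larger
                else:
                    row += 1      # everything to the left in this row is even smaller
            return None

        grid = [[1, 2, 4, 5, 6],
                [2, 3, 5, 7, 8],
                [4, 6, 8, 9, 10],
                [5, 8, 9, 10, 11]]
        print(search_sorted_matrix(grid, 7))    # (1, 3)
        print(search_sorted_matrix(grid, 12))   # None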

    Read the article

  • Replacing repetitively occuring loops with eval in Javascript - good or bad?

    - by Herc
    Hello stackoverflow! I have a certain loop occurring several times in various functions in my code. To illustrate with an example, it's pretty much along the lines of the following:

        for (var i = 0; i <= 5; i++) {
            function1(function2(arr[i], i), $('div' + i));
            $('span' + i).value = function3(arr[i]);
        }

    Where i is the loop counter, of course. For the sake of reducing my code size and avoiding repeating the loop declaration, I thought I should replace it with the following:

        function loop(s) {
            for (var i = 0; i <= 5; i++) {
                eval(s);
            }
        }
        [...]
        loop("function1(function2(arr[i],i),$('div'+i));$('span'+i).value = function3(arr[i]);");

    Or should I? I've heard a lot about eval() slowing code execution and I'd like it to work as fast as a proper loop even in the Nintendo DSi browser, but I'd also like to cut down on code. What would you suggest? Thank you in advance!
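    A hedged alternative: pass the loop body as a function instead of a string, which removes the eval cost entirely and keeps the call site just as short (function1/function2/function3 and arr are the names from the question):

        function loop(body) {
            for (var i = 0; i <= 5; i++) {
                body(i);            // plain function call - no parsing, no eval
            }
        }

        loop(function (i) {
            function1(function2(arr[i], i), $('div' + i));
            $('span' + i).value = function3(arr[i]);
        });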

    Read the article

  • Improved appointment rendering in RadScheduler for ASP.NET AJAX, Q1 2010

    Now that the Q1 2010 release is out in the wild, we can sit down and discuss some of the changes we decided to make in the new release. One of them is the new appointment rendering of RadScheduler - a potentially breaking change, but a much needed one. If you have problems with your old custom skins, include the old base stylesheet along with your RadScheduler and set EnableEmbeddedBaseStylesheet=false in your RadScheduler. You can find the said base stylesheet attached to this post.
    While trying to improve the performance of RadScheduler, I noticed that the number of resources slows down the rendering and overall performance considerably. This had to be expected - the images to support the appointment rounded corners (and the predefined resources) were quite large. However, I didn't take into account that all browsers keep, for performance reasons, their images uncompressed in memory and with the color depth of the current desktop. A simple calculation later, I discovered that the appointment sprite itself takes 25MB of memory when loaded. Add 5 resources to the fray and you have 150MB of memory down with a single blow. As it turns out, a sprite image is not a panacea - if it gets too big, don't be afraid to break it in two. The loading time may suffer, but your browser suffers more while rendering a 25MB monster.
    First I thought of undertaking the aforementioned solution - breaking the appointment sprite in two and thus reducing the two appointment sprites to a mere 2MB uncompressed. Then I thought - the rounded corners are small - I can use borders and backgrounds to simulate rounded appointment borders while still keeping the same HTML structure. The gradients can be done with a single 10x50px image, plus we have a gain - border colors and backgrounds can be changed on the fly. I started with five rendering elements at first, then tried with four, and finally I settled on only three elements. Behold the new appointment rendering (quite simple really):
    On the left you can see that the first container has only top and bottom borders and a background. In fact, the background isn't even needed since it will be obscured by the elements on top of it. The whole first container is only needed for the four dots that reside in the four corners of the appointment. On top of this container is another one that holds the left and right borders and a slightly lighter background to create the illusion of a second lighter border beside the other two. At last, on top of all others, is placed the text container that also holds the top and bottom borders and the gradient background. On the right you can see the final result - I'm quite happy with it and I hope you will be too.
    After creating the new rendering we took another step further - we decided to use alpha gradients for the resource rendering, thus supporting appointments of any color with rounded corners and gradients. You can see some examples below. We plan on adding BorderColor and BackColor properties to the ResourceStyles definitions for Q1 SP1. However, with the new rendering in Q1 2010 we do support BackColor and BorderColor appointment properties - you only need to set AppointmentStyleMode=Default to keep RadScheduler from switching to Simple appointment rendering. Here is one screenshot of RadScheduler with appointments set to different colors. I hope that you will enjoy working with the new appointments in RadScheduler. RadScheduler base stylesheet

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
    After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table on Statistics, I have received another question from a blog reader. The question is as follows:
    Question: Are the statistics sampled by default?
    Answer: Yes. The sampling rate can be specified by the user and it can be anywhere between a very low value and 100%. Let us do a small experiment to verify if the auto update on statistics is left on. Also, let's examine a very large table and its default statistics - whether the statistics are sampled or not.

        USE [AdventureWorks]
        GO
        -- Create Table
        CREATE TABLE [dbo].[StatsTest](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [varchar](100) NULL,
            [LastName] [varchar](100) NULL,
            [City] [varchar](100) NULL,
            CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
        ) ON [PRIMARY]
        GO
        -- Insert 1 Million Rows
        INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City)
        SELECT TOP 1000000 'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Update the statistics
        UPDATE STATISTICS [dbo].[StatsTest]
        GO
        -- Shows the statistics
        DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest)
        GO
        -- Clean up
        DROP TABLE [dbo].[StatsTest]
        GO

    Now let us observe the result of the DBCC SHOW_STATISTICS. The result shows that the resultset is for sure sampled for a large dataset. The percentage of sampling is based on the data distribution as well as the kind of data in the table. Before dropping the table, let us first check the size of the table. The size of the table is 35 MB.
    Now, let us run the above code with a smaller number of rows.

        USE [AdventureWorks]
        GO
        -- Create Table
        CREATE TABLE [dbo].[StatsTest](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [varchar](100) NULL,
            [LastName] [varchar](100) NULL,
            [City] [varchar](100) NULL,
            CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
        ) ON [PRIMARY]
        GO
        -- Insert 1 Hundred Thousand Rows
        INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City)
        SELECT TOP 100000 'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Update the statistics
        UPDATE STATISTICS [dbo].[StatsTest]
        GO
        -- Shows the statistics
        DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest)
        GO
        -- Clean up
        DROP TABLE [dbo].[StatsTest]
        GO

    You can see that Rows Sampled is just the same as the Rows of the table. In this case, the sample rate is 100%. Before dropping the table, let us also check the size of the table. The size of the table is less than 4 MB. Let us compare the result sets just for a valid reference.
    Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB
    Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB
    The reason behind the sampling in Test 1 is that the data space is larger than 8 MB, and therefore it uses more than 1024 data pages. If the data space is smaller than 8 MB and uses fewer than 1024 data pages, then the sampling does not happen.
    Sampling aids in reducing excessive data scans; however, sometimes it reduces the accuracy of the data as well. Please note that this is just a sample test and there is no way it can be claimed as a benchmark test. The result can be dissimilar on different machines. There is lots of other information that can be included when talking about this subject. I will write a detailed post covering the whole subject very soon.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
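    Since the post notes that the sampling rate can be set explicitly, a couple of hedged examples against the same test table (standard UPDATE STATISTICS options; adjust object names as needed):

        -- Force a full scan instead of letting SQL Server choose a sample
        UPDATE STATISTICS [dbo].[StatsTest] PK_StatsTest WITH FULLSCAN;

        -- Or sample an explicit percentage of the rows
        UPDATE STATISTICS [dbo].[StatsTest] PK_StatsTest WITH SAMPLE 25 PERCENT;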

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. By using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds up coding by reducing the amount of typing required when creating a view-specific model and its metadata. To use this template, download the template provided at the bottom and set two values in the template file. The first one should point to the EDM you wish to generate metadata for. The second is used to suffix the namespace and classes that get generated.

        string inputFile = @"Northwind.edmx";
        string suffix = "AutoMetadata";

    Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project based on the number of entities you have. One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the T4 template.

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated from a template.
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        using System;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        namespace NorthwindSales.ModelsAutoMetadata
        {
            public partial class CustomerAutoMetadata
            {
                [DisplayName("Customer ID")] [Required] [StringLength(5)]
                public string CustomerID { get; set; }

                [DisplayName("Company Name")] [Required] [StringLength(40)]
                public string CompanyName { get; set; }

                [DisplayName("Contact Name")] [StringLength(30)]
                public string ContactName { get; set; }

                [DisplayName("Contact Title")] [StringLength(30)]
                public string ContactTitle { get; set; }

                [DisplayName("Address")] [StringLength(60)]
                public string Address { get; set; }

                [DisplayName("City")] [StringLength(15)]
                public string City { get; set; }

                [DisplayName("Region")] [StringLength(15)]
                public string Region { get; set; }

                [DisplayName("Postal Code")] [StringLength(10)]
                public string PostalCode { get; set; }

                [DisplayName("Country")] [StringLength(15)]
                public string Country { get; set; }

                [DisplayName("Phone")] [StringLength(24)]
                public string Phone { get; set; }

                [DisplayName("Fax")] [StringLength(24)]
                public string Fax { get; set; }
            }
        }

    The generated class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute:

        namespace MyProject.Models
        {
            [MetadataType(typeof(CustomerAutoMetadata))]
            public partial class Customer
            {
            }
        }

    You can also copy the code in the generated metadata class and create your own ViewModel class. Note that the template is super basic and does not take into account complex properties. I have tested it with the Northwind database. This is a work in progress. Feel free to modify the template to suit your requirements.
    Standard disclaimer follows: Use At Your Own Risk. Works on my machine running VS 2010 RTM/ASP.NET MVC 2. AutoMetaData.zip
    Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?

    Read the article

  • SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28

    - by pinaldave
    This is a very interesting wait type and it is quite often seen as one of the top wait types. Let us discuss it today.
    From Book On-Line: Occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed.
    SOS_SCHEDULER_YIELD Explanation: SQL Server has multiple threads, and the basic working methodology for SQL Server is that SQL Server does not let any "runnable" thread starve. Now let us assume the SQL Server OS is very busy running threads on all the schedulers. There are always new threads coming up which are ready to run (in other words, runnable). Thread management of SQL Server is decided by SQL Server and not the operating system. SQL Server runs in non-preemptive mode most of the time, meaning the threads are co-operative and can let other threads run from time to time by yielding. When any thread yields itself for another thread, it creates this wait. If there are many such threads, it clearly indicates that the CPU is under pressure. You can run the following DMV query to see how many runnable tasks there are in your system.

        SELECT scheduler_id,
               current_tasks_count,
               runnable_tasks_count,
               work_queue_count,
               pending_disk_io_count
        FROM sys.dm_os_schedulers
        WHERE scheduler_id < 255
        GO

    If you notice a two-digit number in runnable_tasks_count continuously for a long time (not once in a while), you will know that there is CPU pressure. A two-digit number is usually considered a bad thing; you can read the description of the above DMV over here. Additionally, there are several other counters (%Processor Time and other processor-related counters) which you can refer to in order to validate CPU pressure, along with the method explained above.
    Reducing SOS_SCHEDULER_YIELD wait: This is the trickiest part of this procedure. As discussed, this particular wait type relates to CPU pressure. Adding more CPU is the solution in simple terms; however, it is not easy to implement. There are other things that you can consider when this wait type is very high. Here is a query with which you can find the most expensive queries, in terms of CPU, from the cache. Note: a query that used lots of resources but is not cached will not be caught here.

        SELECT SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
                   ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(qt.TEXT)
                         ELSE qs.statement_end_offset
                     END - qs.statement_start_offset)/2)+1),
               qs.execution_count,
               qs.total_logical_reads, qs.last_logical_reads,
               qs.total_logical_writes, qs.last_logical_writes,
               qs.total_worker_time, qs.last_worker_time,
               qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
               qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
               qs.last_execution_time,
               qp.query_plan
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        ORDER BY qs.total_worker_time DESC -- CPU time

    You can find the most expensive queries that are utilizing lots of CPU (from the cache) and you can tune them accordingly. Moreover, you can find the longest-running queries and attempt to tune them if there is any processor-offending code. Additionally, pay attention to total_worker_time, because if that is also consistently high, then the CPU is under too much pressure. You can also check the perfmon counters for compilations, as they tend to use a good amount of CPU. An index rebuild is also a CPU-intensive process, but we should not consider it the main cause here, because it is indeed needed on high-transaction OLTP systems to reduce fragmentation.
    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All of the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
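    As a quick addendum: to see how prominent this wait actually is on a given server, a hedged check against the waits DMV (values are cumulative since the last restart or since the wait stats were cleared):

        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'SOS_SCHEDULER_YIELD';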

    Read the article
