Search Results

Search found 2309 results on 93 pages for 'caching'.

Page 71/93 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • InvalidCastException: System.Web.UI.PartialCachingControl -> MyCustomControl when OutputCaching

    - by marcinn
    The problem: I am unable to use output caching with controls that derive from MyCustomControl. The controls are loaded dynamically, using definitions from the database, with the Page.LoadControl method. When I add <%@ OutputCache VaryByParam="*" Duration="3600" %> to the .ascx, an "InvalidCastException: System.Web.UI.PartialCachingControl -> MyCustomControl" exception is thrown. I am unable to modify the assembly which contains the dynamic control-loading logic. Is there any way to fix this in the derived controls? The second question is about IIS 7 and native output caching - does it resolve this problem? (I set up several performance counters and saw that the cache wasn't being hit...)
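
    A common workaround (a sketch, not from the original post): when the .ascx declares OutputCache, Page.LoadControl returns a PartialCachingControl wrapper, and the inner control is only reachable through its CachedControl property (which is null when the output was served from cache). The path and member names below are placeholders:

        // C# sketch; MyCustomControl and the .ascx path stand in for the real ones
        Control loaded = Page.LoadControl("~/Controls/MyControl.ascx");
        PartialCachingControl pcc = loaded as PartialCachingControl;
        MyCustomControl ctrl = (pcc != null)
            ? pcc.CachedControl as MyCustomControl   // null on a cache hit
            : loaded as MyCustomControl;
        if (ctrl != null)
        {
            // configure the control only on requests where it was actually created
        }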

    Read the article

  • How can I make a boring project (another WordPress site) interesting?

    - by Christopher Altman
    WordPress is my example, but the question can be generalized to any technology that is not particularly interesting. To me, WordPress takes away the intellectually gratifying pieces of coding. I would rather write a new version of WordPress than write a WordPress theme and glue together some plugins. I am using WP because my company dictates the platform for some of our clients (I do not disagree with the choice from a business perspective, WP is quick and cheap to implement). My question is, how can I make my next WordPress project interesting? I want to advance my understanding of the fundamentals of programming (aka data structures, algorithms, and caching) but do not see how I can achieve this when coding another WP site. I have a fairly tight understanding of front-end technologies and believe I have made WP do things it was never intended to do. Examples are here and here. Solving front-end related problems is not as interesting as coding a full stack application. Any advice will help.

    Read the article

  • Using CCSpriteFrameCache without CCSpriteBatchNode

    - by AwDogsGo2Heaven
    I want to know whether there is any benefit to caching a sprite sheet and accessing the sprites by frame without using a CCSpriteBatchNode. In some parts of my game the sprite batch node is useful because there is a lot on the screen; in other parts it's not, because there are just a few things and there are layering requirements, so CCSpriteBatchNode wouldn't be useful. However, for the sake of consistency I would like to use sprite sheets for all my sprites, so I was beginning to wonder whether I would still receive any benefit from them (or whether, worse, it could somehow be slower...).
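
    For what it's worth, a minimal sketch (cocos2d-iphone; the file and frame names are placeholders) of using the frame cache on its own - the sprite still reuses the cached texture and frame data, it just issues its own draw call instead of being batched:

        // Objective-C sketch: load the sheet once, then create sprites anywhere
        [[CCSpriteFrameCache sharedSpriteFrameCache]
            addSpriteFramesWithFile:@"sheet.plist"];

        // no CCSpriteBatchNode required; the frame/texture is served from the cache
        CCSprite *hero = [CCSprite spriteWithSpriteFrameName:@"hero.png"];
        [self addChild:hero z:1];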

    Read the article

  • strategy to allocate/free lots of small objects

    - by aaa
    Hello, I am toying with a certain caching algorithm, which is somewhat challenging. Basically, it needs to allocate lots of small objects (double arrays, < 256 elements), with objects accessible through a mapped value: map[key] = array. The time to initialize an array may be quite large, generally more than 10 thousand CPU cycles. By lots I mean around a gigabyte in total. Objects may need to be popped/pushed as needed, generally in random places, one object at a time. The lifetime of an object is generally long (minutes or more); however, an object may be subject to allocation/deallocation several times during the duration of the program. What would be a good strategy to avoid memory fragmentation, while still maintaining reasonable allocation/deallocation speed? I am using C++, so I can use new and malloc. Thanks. I know there are similar questions on the site, such as http://stackoverflow.com/questions/2156745/efficiently-allocating-many-short-lived-small-objects, but mine is somewhat different; thread safety is not an immediate issue for me.
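
    One common strategy, sketched below (an illustration, not a definitive answer): since every array has the same fixed upper bound, a slab/pool allocator with an intrusive free list keeps all arrays inside a few large slabs, so the general-purpose heap never fragments. All names here are illustrative:

        // C++ sketch: fixed-size pool for double[256] payloads
        #include <cstddef>
        #include <vector>

        class ArrayPool {
            union Node { Node* next; double storage[256]; };
            std::vector<Node*> slabs_;   // big slabs from the system allocator
            Node* free_ = nullptr;       // intrusive free list through free blocks
            static constexpr std::size_t kSlabNodes = 4096;  // ~8 MB per slab
        public:
            double* allocate() {
                if (!free_) {            // grow by one slab when the list is empty
                    Node* slab = new Node[kSlabNodes];
                    slabs_.push_back(slab);
                    for (std::size_t i = 0; i < kSlabNodes; ++i) {
                        slab[i].next = free_;
                        free_ = &slab[i];
                    }
                }
                Node* n = free_;
                free_ = n->next;
                return n->storage;       // hand out the 256-double payload
            }
            void deallocate(double* p) { // O(1), no heap traffic
                Node* n = reinterpret_cast<Node*>(p);
                n->next = free_;
                free_ = n;
            }
            ~ArrayPool() {
                for (std::size_t i = 0; i < slabs_.size(); ++i) delete[] slabs_[i];
            }
        };

    Because slabs are never returned to the OS, fragmentation stays bounded, and allocate/deallocate are a couple of pointer operations; map[key] can then store the double* handed out by the pool.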

    Read the article

  • What scenarios/settings will result in a query on SQL Server (2008) returning stale data

    - by s1mm0t
    Most applications rarely need to display 100% accurate data. For example, if this Stack Overflow question displays that there have been 0 views when there have really been 10, it doesn't really matter. This is one way that the (perceived) performance of applications can be improved: by caching results, and therefore sometimes not showing 100% accurate results. There are some cases where the data does need to be 100% accurate, though. So if I run the query select * from Foo, I want to be sure that the results are not stale. Now, depending on how my database is set up, other activity on the database, the use of transactions and isolation levels, etc., this query may or may not be a true reflection of the world. What scenarios and settings can people think of that will result in this query returning stale results? And, given that another connection is part way through a transaction that has updated this table, how can I guarantee that when the above query returns, the results will be accurate?
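
    One concrete scenario, sketched in T-SQL (table Foo is from the question; its columns here are hypothetical): a reader running under READ UNCOMMITTED, or using the NOLOCK hint, can see data another connection has not yet committed - results that may never become true:

        -- connection 1: update inside an open transaction
        BEGIN TRANSACTION;
        UPDATE Foo SET views = views + 10 WHERE id = 1;   -- not yet committed

        -- connection 2: a dirty read sees the uncommitted value
        SELECT * FROM Foo WITH (NOLOCK);

        -- under the default READ COMMITTED, the SELECT blocks until the commit;
        -- under READ_COMMITTED_SNAPSHOT it returns the last committed version,
        -- which is consistent but "stale" until the other transaction commits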

    Read the article

  • SQL: Optimize intensive SELECTs on DateTime fields

    - by Fedyashev Nikita
    I have an application for scheduling certain events, and all these events must be reviewed after each scheduled time. So basically we have 3 tables:
    · items(id, name)
    · scheduled_items(id, item_id, execute_at - datetime) - the item_id column is indexed
    · reviewed_items(id, item_id, created_at - datetime) - the item_id column is indexed
    The core function of the application is "give me any items (which are not yet reviewed) for the current moment". How can I optimize this solution for speed (because it is a core business feature, not a micro-optimization)? I suppose that adding an index to the datetime fields doesn't make any sense, because the cardinality or uniqueness of those fields is very high and an index won't give any(?) speed-up. Is that correct? What would you recommend? Should I try NoSQL? -- mysql -V 5.075. I use caching where it makes sense.
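
    For what it's worth, a sketch of one common approach (assuming "not yet reviewed" means no matching row in reviewed_items): a B-tree index on execute_at supports the range scan, and an anti-join finds the unreviewed items:

        -- MySQL sketch
        ALTER TABLE scheduled_items ADD INDEX idx_execute_at (execute_at);

        SELECT si.*
        FROM scheduled_items AS si
        LEFT JOIN reviewed_items AS ri ON ri.item_id = si.item_id
        WHERE si.execute_at <= NOW()
          AND ri.id IS NULL;          -- no review row exists yet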

    Read the article

  • Sharing data between two instances in a load-balanced Java web application

    - by rabbit
    I want to cache data in a Java web application deployed on multiple instances. We are using Spring 2.5.6. What is the easiest caching library to configure and use with Spring? I have heard of Ehcache, but the configuration is too cumbersome. The requirement is that a Spring scheduler will run and set some flags, and these flags must be accessible from all load-balanced instances. But since the scheduler runs on only one instance, the flag is set only on that JVM. So how do I make these updated flag values available to all load-balanced instances?
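
    One low-tech sketch (an option, not the only answer): keep the flags in storage that all instances already share - the database - instead of in a per-JVM field; the scheduler writes through this DAO and every instance reads through it. Table names, connection details, and the MySQL-specific REPLACE INTO are placeholders:

        // Java sketch of a shared flag store backed by the common database
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class SharedFlagDao {
            private static final String URL = "jdbc:mysql://dbhost/app"; // placeholder

            public void setFlag(String name, boolean value) throws SQLException {
                try (Connection c = DriverManager.getConnection(URL, "user", "pass");
                     PreparedStatement ps = c.prepareStatement(
                             "REPLACE INTO app_flags (name, value) VALUES (?, ?)")) {
                    ps.setString(1, name);
                    ps.setBoolean(2, value);
                    ps.executeUpdate();       // visible to every instance at once
                }
            }

            public boolean getFlag(String name) throws SQLException {
                try (Connection c = DriverManager.getConnection(URL, "user", "pass");
                     PreparedStatement ps = c.prepareStatement(
                             "SELECT value FROM app_flags WHERE name = ?")) {
                    ps.setString(1, name);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() && rs.getBoolean(1);
                    }
                }
            }
        }

    A distributed cache (Ehcache with replication, Memcached, etc.) achieves the same thing; the essential point is that the flag lives outside any single JVM.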

    Read the article

  • Same Salt, Different Encrypted Password is not working? Using Linq to update password.

    - by Xaisoft
    Hello, I am running into a wall when changing the password and was wondering if anyone had any ideas. Here are the database values prior to changing the password: Clear text password = abc1980; Encrypted password = Yn1N5l+4AUqkOM3WYO7ww/sCN+o=; Salt = 82qVIhUIoblBRIRvFSZ1fw==. After I change my password to abc1973, the salt remains the same, but the encrypted password changes, which is supposed to happen: Encrypted password = rHtjLq3qxAl/7T1GfkxrsHzPsNk=. However, when I try to log in with abc1973 as the password, it does not log me in. If I try abc1980, it logs me in. It is updating the database - is it caching the values somewhere? Any ideas?
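
    One possible culprit, sketched below (speculative - not confirmed by the post): Linq to SQL caches entities per DataContext, so a long-lived context used for login checks can keep returning the row as it looked before the update. Using a fresh context per operation avoids that; the context, entity, and helper names here are hypothetical:

        // C# sketch: a new DataContext per login check, so no stale cached entity
        using (var db = new UsersDataContext())
        {
            var user = db.Users.SingleOrDefault(u => u.UserName == userName);
            bool valid = user != null
                && user.EncryptedPassword == Encrypt(password, user.Salt); // your hash routine
        }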

    Read the article

  • Does jQuery require a timestamp on GET calls in IE7?

    - by Mithun P
    Please see the jQuery code below; it is used to paginate some search results:

        paginate: function() {
            $("#wishlistPage .results").html("<div id='snakeSpinner'><img src='" + BASE_URL + "images/snake.gif' title='Loading' alt='...'/></div>");
            var url = BASE_URL + "wishlist/wishlist_paginated/";
            $.ajax({
                type: "GET",
                url: url,
                data: {
                    sort_by: $('#componentSortOrder input:hidden').val(),
                    offset: My.WishList.offset,
                    per_page: 10,
                    timestamp: new Date().getTime()
                },
                success: function(transport) {
                    $("#wishlistPage .results").html(transport);
                }
            });
        },

    My issue is not with the pagination. The issue is that when I need to call this same function after something happens in another part of the page that removes some search results, it brings back the old results in IE7; other browsers work fine. So I added the timestamp: new Date().getTime() part, and that fixed the IE issue. I want to know why this happens in jQuery. Do I need to include a timestamp parameter in the URL to avoid caching in all jQuery Ajax calls?
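
    For reference, jQuery has this behaviour built in - a sketch of its documented cache option, which appends a "_=<timestamp>" parameter to GET requests for you, either per call or globally:

        // per call
        $.ajax({
            type: "GET",
            url: url,
            cache: false,      // jQuery adds the anti-cache timestamp itself
            data: { /* ... */ },
            success: function (transport) { /* ... */ }
        });

        // or once, for all Ajax calls
        $.ajaxSetup({ cache: false });

    The workaround is mainly needed for browsers like IE that aggressively cache Ajax GETs; POST requests are not cached this way.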

    Read the article

  • Replay in Django + Apache + mod_wsgi?

    - by Adam
    I have a simple Django page that has a counter on it. I use Apache2 and mod_wsgi to serve it. When I first visit this page, the counter shows 0, as it should. The second time I visit, the counter shows 1 - again, the right behavior. The problem begins now, because when I visit the page a third time, I get 0 again. When I refresh, it alternates between 0 and 1, clearly serving some cached copy. If I wait for some time and then try again, it will show 2 and 3, but it stays stuck on those values until this cache, or whatever it is, is flushed, and then the counter continues. Does somebody know how I can get it to work right? (The real scenario involves getting data from the DB, but the problems with this strange cache are the same.) BTW, I don't have any caching engine set in my Django settings.
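
    A likely explanation, sketched below (assuming the counter is a module-level variable): Apache/mod_wsgi typically runs several processes, each with its own copy of your module globals, so requests alternate between per-process counters. Keeping the state in shared storage - the session, cache, or DB - gives every process the same value. View names here are placeholders:

        # Python/Django sketch
        from django.http import HttpResponse

        def counter(request):
            # request.session is backed by the database by default, so it is
            # shared across all mod_wsgi processes, unlike a module global
            count = request.session.get('count', 0) + 1
            request.session['count'] = count
            return HttpResponse(str(count))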

    Read the article

  • No route matches - after login attempt - even though the route exists?

    - by datorum
    I am working on a Rails application and added a simple login system following a book. I created the admin controller: rails generate controller admin login logout index. This added the following routes to routes.rb: get "admin/login", get "admin/logout", get "admin/index". I can go to http://localhost:3000/admin/login with no problem at all. But when I try to log in I get: No route matches "/admin/login"! Now, the first confusing part is that the "login" method of my AdminController is not executed at all. The second confusing part is that this code works like a charm - it redirects everything to /admin/login:

        def authorize
          unless User.find_by_id(session[:user_id])
            flash[:notice] = "you need to login"
            redirect_to :controller => 'admin', :action => 'login'
          end
        end

    Side notes: I restarted the server several times, and I tried a different browser to be sure there is no caching problem.
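
    A likely cause, sketched below (not confirmed by the post): the generated routes only match GET, but a login form submits a POST, so POST /admin/login has no route - which would also explain why the login action never runs. Adding a POST route would look like:

        # config/routes.rb sketch
        get  "admin/login"    # renders the form
        post "admin/login"    # receives the form submission
        get  "admin/logout"
        get  "admin/index"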

    Read the article

  • Redis suggestion for selecting a data type

    - by PHP Connect
    We have a question-based site where the home page shows two lists: questions by date modified, and questions with the biggest view and answer counts. In both listings, if questions have the same views or answer count, the sorting falls back to date. Previously I was querying the MySQL database directly and fetching the values, so it was easy, but hitting MySQL on every page request is a bit expensive, so I started caching and began using Redis. The issue is with the second listing, where I have to display questions by votes and unanswered status combined. How can I store this type of data in Redis so it loads quickly, with sorting based on two conditions: votes with time, and answer count with time?
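
    A sketch of one common pattern (key and member names are placeholders): keep one sorted set per listing and pack both conditions into the score, e.g. score = primary_metric * 10^10 + unix_timestamp, so equal votes or answer counts fall back to date ordering:

        # redis-cli sketch; question:123 has 42 votes, timestamp 1334955010
        ZADD questions:by_votes 421334955010 question:123
        ZADD questions:by_date  1334955010   question:123

        # top 10 for the home page, highest composite score first
        ZREVRANGE questions:by_votes 0 9
        ZREVRANGE questions:by_date  0 9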

    Read the article

  • Optimising a query for Top 5% of users

    - by Nai
    On my website there exists a group of 'power users' who are fantastic at adding lots of content to my site. However, their prolific activity has led to their profile pages slowing down a lot. For 95% of the other users, the SPROC that returns the data is very quick. It's only for this group of power users that the very same SPROC is slow. How does one go about optimising the query for this group of users? You can assume that the right indexes have already been constructed. EDIT: OK, I think I have been a bit too vague. To rephrase the question: how can I optimise my site to enhance the performance for these 5% of users? Given that this SPROC is the same one that is in use for every user, and that it is already well optimised, I am guessing the next steps are to explore caching possibilities at the data and application layers?
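
    Caching the hot result at the application layer is one option, sketched below (names are placeholders; GetProfileData stands in for the existing SPROC call): cache each profile for a few minutes, so a power user's page is computed once per interval rather than once per view:

        // C# sketch using the ASP.NET cache
        using System;
        using System.Web;
        using System.Web.Caching;

        ProfileData GetCachedProfile(int userId)
        {
            string key = "profile:" + userId;
            var data = HttpRuntime.Cache[key] as ProfileData;
            if (data == null)
            {
                data = GetProfileData(userId);   // the existing, optimised SPROC
                HttpRuntime.Cache.Insert(key, data, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }
            return data;
        }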

    Read the article

  • HTTP Headers: Max-Age vs Expires – Which One To Choose?

    - by Gopinath
    Caching static content like images, scripts and styles in the client browser reduces load on the web servers and also improves end users' browsing experience by loading web pages quickly. We can use the HTTP headers Expires or Cache-Control: max-age to cache content in the client browser and set an expiry time for it. The Expires header is an HTTP/1.0 standard, and Cache-Control: max-age was introduced in the HTTP/1.1 specification to solve the issues and limitations of the Expires header. Consider the following headers:

        Cache-Control: max-age=24560
        Expires: Tue, 15 May 2012 06:17:00 GMT

    The first header instructs web browsers to cache the content for 24560 seconds relative to the time the content is downloaded, and to expire it after that period elapses. The second header instructs the web browser to expire the content after 15 May 2012 06:17. Out of these two options, which one should you use - max-age or Expires? I prefer the max-age header, for the following reasons:
    · max-age is a relative value, and in most cases it makes sense to set a relative expiry date rather than an absolute one.
    · Expires header values are complex to set: the time format must be correct and the time zones appropriate. Even a small mistake in setting these values results in unexpected behaviour.
    · As Expires header values are absolute, we need to keep changing them at regular intervals. Let's say we set 2011 June 1 as the expiry date for all the image files of this blog; on 2011 June 2 we would have to modify the expiry date to something like 2012 Jan 1. This adds the burden of managing the Expires headers.
    Related: Amazon S3 Tips: Quickly Add/Modify HTTP Headers To All Files Recursively
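
    In practice you rarely hand-write these dates; for example, with Apache the mod_expires module emits both headers from a single relative rule (a sketch - adjust types and lifetimes to taste):

        # Apache sketch: mod_expires computes Expires and Cache-Control: max-age
        ExpiresActive On
        ExpiresByType image/png "access plus 30 days"
        ExpiresByType text/css  "access plus 7 days"

        # or set max-age explicitly with mod_headers
        <FilesMatch "\.(png|jpg|css|js)$">
            Header set Cache-Control "max-age=2592000"
        </FilesMatch>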

    Read the article

  • What’s new in Silverlight 4 RC?

    - by pluginbaby
    I am here in Las Vegas for MIX10, where Scott Guthrie announced today the release of Silverlight 4 RC and the Visual Studio 2010 tools. You can now install VS2010 RC!!! As always, download links are here: www.silverlight.net. He also said that the final version of Silverlight 4 will come next month (so April)! Four months ago I wrote a blog post on the new features of Silverlight 4 beta, so... what's new in the RC?

    Rich Text
    · RichTextArea renamed to RichTextBox
    · Text position and selection APIs
    · "Xaml" property for serializing text content
    · XAML clipboard format
    · FlowDirection support on Runs tag
    · "Format then type" support when dragging controls to the designer
    · Thai/Vietnamese/Indic support
    · UI Automation Text pattern

    Networking
    · UploadProgress support (Client stack)
    · Caching support (Client stack)
    · Sockets security restrictions removal (Elevated Trust)
    · Sockets policy file retrieval via HTTP
    · Accept-Language header

    Out of Browser (Elevated Trust)
    · XAP signing
    · Silent install and emulation mode
    · Custom window chrome
    · Better support for COM Automation
    · Cancellable shutdown event
    · Updated security dialogs

    Media
    · Pinned full-screen mode on secondary display
    · Webcam/Mic configuration preview
    · More descriptive MediaSourceStream errors
    · Content & Output protection updates
    · Updates to H.264 content protection (ClearNAL)
    · Digital Constraint Token
    · CGMS-A
    · Multicast
    · Graphics card driver validation & revocation

    Graphics and Printing
    · HW accelerated Perspective Transforms
    · Ability to query page size and printable area
    · Memory usage and perf improvements

    Data
    · Entity-level validation support of INotifyDataErrorInfo for DataGrid
    · XPath support for XML

    Parser
    · New architecture enables future innovation
    · Performance and stability improvements
    · XmlnsPrefix & XmlnsDefinition attributes
    · Support setting order-dependent properties

    Globalization & Localization
    · Support for 31 new languages
    · Arabic, Hebrew and Thai input on Mac
    · Indic support

    More...
    · Update to DeepZoom code base with HW acceleration
    · Support for Private mode browsing
    · Google Chrome support (Windows)
    · FrameworkElement.Unloaded event
    · HTML Hosting accessibility
    · IsoStore perf improvements
    · Native hosting perf improvements (e.g., Bing Toolbar)
    · Consistency with Silverlight for Mobile APIs and Tooling
    · SDK: System.Numerics.dll, Dynamic XAP support (MEF), Frame/Navigation refresh support

    That's a lot! You will find more details at the following links:
    http://timheuer.com/blog/archive/2010/03/15/whats-new-in-silverlight-4-rc-mix10.aspx
    http://www.davidpoll.com/2010/03/15/new-in-the-silverlight-4-rc-xaml-features/

    Read the article

  • Comix is an Awesome Comics Archive Viewer for Linux

    - by Asian Angel
    Do you have a terrific collection of comics in electronic form but need a great app to view them with? If you have a Linux system, then we have the perfect app for you... Comix, the open source comic reading powerhouse. For our example we installed Comix on our Ubuntu 10.10 system; just go to the Ubuntu Software Center and conduct a quick search. When you install Comix from the Ubuntu Software Center, make sure to scroll all the way to the bottom and select Unarchiver for .rar files. The listing appears as a "non-free version" for some reason, but displays as free once selected. Odd, but nothing to worry about in the end... Once Comix is installed you can find it in the Graphics section of the Ubuntu menu. Comix also comes with a nice set of options to let you customize the app to best suit those important comic reading needs. Here is a comprehensive list of the features this little comic reading powerhouse packs into one easy to use package: fullscreen mode, double page mode, fit-to-screen mode, zooming and scrolling, rotation and mirroring, magnification lens, changeable image scaling quality, image enhancement, the ability to read right-to-left to fit manga, caching for faster page flipping, bookmarks support, customizable GUI, archive comments support, archive converter, thumbnail browser, standards compliance, and availability in multiple languages (English, Swedish, Simplified Chinese, Spanish, Brazilian Portuguese, & German). It reads the JPEG, PNG, TIFF, GIF, BMP, ICO, XPM, & XBM image formats, reads ZIP & tar archives natively and RAR archives through the unrar program, runs on Linux, FreeBSD, NetBSD, and virtually any other UNIX-like OS, and more! Have fun reading those comics on your favorite Linux system! Interested in learning more about Comix? Then be certain to drop by the homepage! Comix Homepage

    Read the article

  • System crashes/lockups + compiz/cairo/gnome-panel crashing due to cached ram, please help?

    - by Kristian Thomson
    Can someone help me troubleshoot system crashes and lockups which result in Compiz/Cairo-Dock and gnome-panel crashing? I also get no window borders after the crash, and a lot of kernel memory errors. The logs tell me that apps were killed due to not enough memory, but the system is caching something like 14 GB of my RAM, so I'm a bit stuck on what is happening and how to stop it. I'm running Ubuntu 12.10 on a 2011 Mac Mini with 16 GB RAM. Here are some of the log entries that look like they could be causing trouble. I woke up this morning to find chrome/skype/cairo dock and a few others had been killed, and here is what the log said:

        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.959890] Out of memory: Kill process 12247 (chromium-browse) score 101 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.959893] Killed process 12247 (chromium-browse) total-vm:238948kB, anon-rss:17064kB, file-rss:20008kB
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.972283] Out of memory: Kill process 10976 (dropbox) score 3 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.972288] Killed process 10976 (dropbox) total-vm:316392kB, anon-rss:115484kB, file-rss:16504kB
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.975890] Out of memory: Kill process 10887 (rhythmbox) score 3 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.975895] Killed process 11515 (tray_icon_worke) total-vm:63336kB, anon-rss:15960kB, file-rss:11436kB
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.281535] Out of memory: Kill process 10887 (rhythmbox) score 3 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.281539] Killed process 10887 (rhythmbox) total-vm:528980kB, anon-rss:92272kB, file-rss:36520kB
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.283110] Out of memory: Kill process 10889 (skype) score 3 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.283113] Killed process 10889 (skype) total-vm:415056kB, anon-rss:84880kB, file-rss:22160kB

    Looking deeper, I see that the whole time I'm getting these out-of-memory kernel errors, there is also something mentioning radeon. I have a Radeon HD 6600M graphics card using the open source driver, not the proprietary one, and I was wondering if perhaps using the proprietary one would solve the problem. Also, while writing this in Chrome, rhythmbox and chrome just got killed as I typed, due to out-of-memory errors or so it reports, even though I had 7 GB of free RAM at the time, with 7 GB cached as well. Here is a full copy of the kern.log entries captured while I was typing this question: http://pastebin.com/cdxxDktG Thanks in advance, Kris

    Read the article

  • Oracle Coherence 3.5 : Create Internet-scale applications using Oracle's high-performance data grid

    - by frederic.michiara
    Oracle Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. Coherence has no single points of failure; it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails services back to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and a transparent soft restart capability that enable servers to self-heal. For those looking for an easy read and a good first approach to Oracle Coherence, I would recommend the following book: Oracle Coherence 3.5. An overview:
    · Build scalable web sites and enterprise applications using a market-leading data grid product
    · Design and implement your domain objects to work most effectively with Coherence and apply Domain-Driven Design (DDD) to Coherence applications
    · Leverage Coherence events and continuous queries to provide real-time updates to client applications
    · Successfully integrate various persistence technologies, such as JDBC, Hibernate, or TopLink, with Coherence
    · Filled with numerous examples that provide best practice guidance, and a number of classes you can readily reuse within your own applications
    This book is targeted at architects and developers, and as our team is more about solutions architects than developers, I found this book interesting as it helps in understanding Oracle Coherence and its value. The only point where I may not agree with the authors is that Oracle Coherence is not an alternative to Oracle RAC for providing high availability; combining Oracle RAC and Oracle Coherence will help architects and customers reach a higher level of service and availability. The book is available at https://www.packtpub.com/oracle-coherence-3-5/book
    Table of contents: https://www.packtpub.com/toc/oracle-coherence-35-table-contents
    Sample chapter: https://www.packtpub.com/sites/default/files/6125_Oracle%20Coherence_SampleChapter.pdf
    Articles from the authors on http://www.packtpub.com/ : Working with Aggregators in Oracle Coherence 3.5; Working with Value Extractors and Simplifying Queries in Oracle Coherence 3.5; Querying the Data Grid in Coherence 3.5: Obtaining Query Results and Using Indexes; Installing Coherence 3.5 and Accessing the Data Grid: Part 1; Installing Coherence 3.5 and Accessing the Data Grid: Part 2
    For more information on Oracle Coherence:
    What Oracle Coherence Can Do for You: http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_solutions.html
    Oracle Coherence on OTN: http://www.oracle.com/technology/products/coherence/index.html
    Oracle Coherence Knowledge Base: http://coherence.oracle.com/display/COH/Oracle+Coherence+Knowledge+Base+Home
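
    To give a flavour of the API the book teaches, a minimal sketch (the cache name is arbitrary):

        // Java sketch of the core Coherence NamedCache API
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class HelloCoherence {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("example");
                cache.put("key", "hello grid");   // stored in the partitioned grid
                System.out.println(cache.get("key"));
                CacheFactory.shutdown();
            }
        }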

    Read the article

  • User Group Meeting Summary - April 2010

    - by Michael Stephenson
    Thanks to everyone who could make it to what turned out to be an excellent SBUG event. First, some thanks to the speakers, Anthony Ross and Elton Stoneman, and to our hosts, the various people at Hitachi who helped to organise and arrange the venue.

    Session 1 - Getting up and running with Windows Mobile and the Windows Azure Service Bus. In this session Anthony discussed some considerations for using Windows Mobile and the Windows Azure Service Bus, drawn from a real-world project Hitachi has been working on with EasyJet. Anthony also walked through a simplified demo of the concepts applied on the project. In addition to the slides and demo, it was very interesting to talk with the guys involved in this project and hear about their real experiences developing with the Azure Service Bus and some of the limitations they have had to work around in Windows Mobile's ability to interact with the service bus. On the back of this session we will look to do some further activities around this topic, and the guys offered to share their wish list of features for both Windows Mobile and Windows Azure, which we will share for user group discussion. Another interesting point was the cost of using the ISB, which turned out to be very low.

    Session 2 - The Enterprise Cache. In the second session Elton used a few slides based around a customer scenario in which the organisation is looking into the concept of an enterprise cache. Elton discussed this concept and also a CodePlex project he is putting together which allows you to take advantage of a cache with various providers such as Memcached, AppFabric Caching and NCache. Following the presentation it was interesting to hear people's thoughts on various aspects, such as the enterprise cache versus an out-of-process application cache, and how people would like to search the cache in the future. We will again look to put together some follow-up activity on this.

    Meeting summary: all slide decks are saved in the SkyDrive location where we keep content from all meetings: http://cid-40015ea59a1307c8.skydrive.live.com/browse.aspx/.Public/SBUG/SBUG%20Meetings/2010%20April. Remember that the details of all previous events are on the following page: http://uksoabpm.org/Events.aspx

    Competition: we had three copies of the Windows Identity Foundation Patterns and Practices book that were raffled on the night; it would be great to hear feedback on the book from those who won it.

    Recording: the user group meeting was recorded and we will look to make it available online sometime soon.

    UG business: the following things were discussed as general UG topics. We will change the name of the user group to the UK Connected Systems User Group, so we are more in line with other user groups who cover similar topics; we believe this will help us attract more members. The content and focus of the user group is not expected to change. The next meeting is 26th May and can be registered for at the following link: http://sbugmay2010.eventbrite.com/

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
    The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. In case you are not familiar with some of the internals of SQL Server and how the engine works: SQL Server only works with objects that are in memory (the buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache them. Caching takes place so that the pages can be reused, which prevents the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set:

        SELECT * FROM sys.dm_os_buffer_descriptors
        WHERE database_id = db_id('AdventureWorks2012')

    The first column returned from this result set is the database_id column, which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID of the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column, which identifies a unique allocation unit; an allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. From my screenshot above, you can see I have 3 distinct types of pages in my buffer pool: Index, Data, and IAM pages. Index pages are used to build the root and intermediate levels of a B-tree. A Data page represents the actual leaf pages of a clustered index, which contain the actual data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks GAM (Global Allocation Map) pages, which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8K in size. The is_modified column signifies whether or not a page has been changed since it was read into memory, i.e. a dirty page. The numa_node column represents the Non-Uniform Memory Access node for the buffer. Lastly is the read_microsec column, which tells you how many microseconds it took for a data page to be read (copied) into the buffer pool. This is a great DMV to use when you are tracking down a memory issue or if you just want to have a look at what types of pages are currently in your buffer pool. For more information about this DMV, please see the Books Online link: http://msdn.microsoft.com/en-us/library/ms173442.aspx Follow me on Twitter @PrimeTimeDBA
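
    Building on the query above, a useful follow-up (a sketch against the same DMV) is to summarise the buffer pool by page type; since pages are 8 KB each, the page count converts directly to megabytes:

        SELECT page_type,
               COUNT(*)            AS page_count,
               COUNT(*) * 8 / 1024 AS buffer_mb
        FROM sys.dm_os_buffer_descriptors
        WHERE database_id = DB_ID('AdventureWorks2012')
        GROUP BY page_type
        ORDER BY page_count DESC;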

    Read the article

  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it's like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you're told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data and sticking to the workflow that the development team has implemented and tested. The application is 'done' only in the sense that the anticipated paths through the software features, using known data, are fully supported. Reality often strikes only when testers reveal its strange and erratic behavior in response to behavior from the end user that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small but complete feature at a time, to find out how long it really takes for this feature to be "done done"; in other words, done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases and doesn't break... it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don't deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, unscalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted. If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.

    Read the article

  • Architecture strategies for a complex competition scoring system

    - by mikewassmer
    Competition description: There are about 10 teams competing against each other over a 6-week period. Each team's total score (out of 1000 total available points) is based on the total of its scores in about 25,000 different scoring elements. Most scoring elements are worth a small fraction of a point, and there will be about 10 x 25,000 = 250,000 total raw input data points. The points for some scoring elements are awarded at frequent, regular time intervals during the competition; the points for others are awarded at irregular time intervals or at just one moment in time. There are about 20 different types of scoring elements. Each of the 20 types has a different set of inputs, a different algorithm for calculating the earned score from the raw inputs, and a different number of total available points. The simplest algorithms require one input and one simple calculation; the most complex consist of hundreds or thousands of raw inputs and a more complicated calculation. Some types of raw inputs are automatically generated; others are manually entered. All raw inputs are subject to possible manual retroactive adjustments by competition officials.

    Primary requirements: The scoring system UI for competitors and other competition followers will show current and historical total team scores, team standings, team scores by scoring element, raw input data (at several levels of aggregation, e.g. daily, weekly, etc.), and other metrics. There will be charts, tables, and other widgets for displaying historical raw data inputs and scores. There will be a quasi-real-time dashboard that shows current scores and raw data inputs. Aggregate scores should be updated/refreshed whenever new raw data inputs arrive or existing raw data inputs are adjusted. There will be a "scorekeeper UI" for manually entering new inputs, manually adjusting existing inputs, and manually adjusting calculated scores.

    Decisions: Should the scoring calculations be performed on the database layer (T-SQL/SQL Server, in my case) or on the application layer (C#/ASP.NET MVC, in my case)? What are some recommended approaches for calculating updated total team scores whenever new raw inputs arrive? Calculating each team's total score from scratch every time a new input arrives will probably slow the system to a crawl. I've considered some kind of "diff" approach, but that approach may pose problems for ad-hoc queries and some aggregates. I'm trying to draw some sports analogies, but it's tough because most games consist of no more than 20 or 30 scoring elements per game (I'm thinking of a high-scoring baseball game; football and soccer have fewer scoring events per game). Perhaps a financial balance sheet analogy makes more sense, because financial "bottom line" calcs may be calculated from 250,000 or more transactions. Should I be making heavy use of caching for this application? Are there any obvious approaches or similar case studies that I may be overlooking?
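
    To make the "diff" approach concrete, a T-SQL sketch (all table, column and parameter names are hypothetical): each arriving or adjusted raw input applies only its delta to the stored aggregates, instead of recomputing 250,000 inputs:

        UPDATE team_element_scores
        SET earned_points = earned_points + @points_delta
        WHERE team_id = @team_id
          AND element_id = @element_id;

        UPDATE team_totals
        SET total_score = total_score + @points_delta
        WHERE team_id = @team_id;

    Retroactive adjustments fit the same model: compute the delta between the old and new input, then apply it; ad-hoc queries can still recompute from the raw inputs as an audit check.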

    Read the article

  • Oracle Service Bus + SOA in the same server

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in the same JVM as SOA. This tutorial covers how to create a combined SOA+OSB domain that runs in a single JVM. For this tutorial we will use a flavor of the WebLogic installer bundled with both OEPE and coherence components (e.g. oepe111150_wls1033_win32.exe); the WebLogic installer bundled with coherence and OEPE components can be seen in the screenshot. Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for business services using coherence; because of this, we have to install coherence before installing OSB. To get SOA and OSB running in the same domain, we have to install SOA and OSB into the above ORACLE_HOME. After installation we should see both the SOA and OSB homes, as highlighted in red. We can also see the coherence components, which are mandatory for OSB, and the optional OEPE, also installed. Now we execute RCU (ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schemas for SOA and OSB. The new RCU contains OSB tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) which get loaded as part of the SOAINFRA schema. After this step we create the SOA+OSB domain using the config wizard, located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform). While creating the domain we select the options for SOA Suite and Oracle Service Bus Extension - All Domain Topologies. We can also bundle Enterprise Manager in the same installation or in a different server; in this case we will use Enterprise Manager in the same domain, so we select the Enterprise Manager component as well. There is another option for OSB, Oracle Service Bus Extension - Single Server Domain Topology. This topology is for users who want to use OSB in a single server configuration. Currently SOA doesn't support the single server topology, so it cannot be used with a SOA domain, only for standalone OSB installations. We can continue with domain configuration until we reach the screen below. The following steps are mandatory if we want to have SOA and OSB run in the same JVM: we should select Managed Servers, Clusters and Machines as shown below. After this selection you should see a screen with two managed servers: one for OSB and one for SOA. Since we would like to have both servers in one managed server (one JVM), we will have to do one

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke, and we're hearing about "redefining mission critical in the cloud". With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching and so on, as well as passing responsibility to MS for managing hardware, upgrades and so on. However, for many databases and applications the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS - SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost - the biggest VM (8 CPU cores, 56 GB RAM, 16 1TB drives with 500 IOPS each) weighs in at over $4,500 per month. With PaaS, in the form of Windows Azure SQL Database, you get a "3-copies replica set", so HA comes out of the box and the majority of the administration burden is removed, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL DB Premium edition, the resources available for your workload will depend on how nicely others "play" in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with the increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic, to deal with (expected) node failure and the need to reconnect. In Tuesday's PASS Summit pre-con, the SQLCAT team spent a lot of time covering some of the telemetry techniques (collecting the necessary monitoring data into Azure storage) used to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to "shard" your data so Azure can move it between nodes at will. Finding the right path to the cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now at their company it's the conversation that won't go away. Tony.

    Read the article

  • What Problems Are Better Solved By SOAP Over REST?

    In the battle for web service supremacy, SOAP and REST have been fighting for years. In my personal opinion this debate should never have existed. Yes, both forms can be used to create an interactive web service, but each was developed independently of the other to solve two different yet similar problems. Based on my research and experience, I would have to say that REST should be the preferred web service methodology, and SOAP should only be used in specific situations. Note that I did not say I was against SOAP; in fact, I actually like to use SOAP when it is needed. Criteria for using SOAP: Does the service need a guaranteed level of reliability and security? Have the provider and consumer of the service agreed on a standardized data exchange format? Does the service need data context and state management? If you answer yes to any of these questions, then you may want to consider SOAP as the format for the web service. Another way to look at the relationship between REST and SOAP is to look at the medical field. For most things, a general doctor or your family health care provider can acceptably treat most conditions, from a common cold to a broken bone. A general doctor aligns more with REST, in my opinion, because for most service requirements REST fulfills a project's needs; but what happens if you need a more advanced examination? You would go to a specialist. A specialist already has experience dealing with the specific issues you are experiencing, giving them specific context for how best to treat you going forward. SOAP acts more like the specialist, in that it understands the context of an issue and can treat it based on the state of other patients it has already treated. An example of where I would use SOAP over REST in real life would be a single sign-on application. In these cases I need to validate a username and password for authentication and authorization of a web page request. This service would need to maintain state while it authenticated a user and while it validated access to a web page on a subsequent request. It must process every request for access and not allow caching, to ensure that every request is processed and only the appropriate users are allowed to view selected web pages. References: Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved 11 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each

    Read the article
