Search Results

Search found 392 results on 16 pages for 'counters'.

Page 10 of 16

  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from the Management Data Warehouse functionality built into SQL Server 2008, through commercial products such as Confio Ignite 8, to rolling my own solution using perfmon, performance counters, and information collected from the dynamic management views and functions. What I am finding is that whilst each of these approaches has its own strengths, each has associated weaknesses too. I feel that to get people within the organisation to take the monitoring of SQL Server performance seriously, whatever solution we roll out has to be very simple and quick to use, must provide some form of dashboard, and the act of monitoring must have minimal impact on the production databases (and, perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear what others are using for this task. Any recommendations?
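
    For anyone going down the roll-your-own DMV route, one low-impact starting point is a periodic snapshot of the top waits (a sketch only; these values are cumulative since the last service restart, so a real dashboard would diff two snapshots rather than read them raw):

        -- Top waits since the last restart; diff two snapshots for a live view.
        SELECT TOP (10)
               wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;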

    Read the article

  • WCF Service Memory Leaks

    - by Mubashar Ahmad
    Dear devs, I have a very small WCF service hosted in a console app: [ServiceContract] public interface IService1 { [OperationContract] void DoService(); } [ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall)] public class Service1 : IService1 { public void DoService() { } } and it is being called like this: using (ServiceReference1.Service1Client client = new ServiceReference1.Service1Client()) { client.DoService(new DoServiceRequest()); client.Close(); } Please remember that the service is published over basicHttpBinding. Problem: when I ran the above client code in a loop of 1000, I found a big difference between the "All Heap Bytes" and "Private Bytes" performance counters (I used .NET Memory Profiler). After investigation I found that some objects are not properly disposed; 1000 undisposed instances of each of the following were found, equal to the number of client calls (the namespace for all of them is System.ServiceModel.Channels): HttpOutput.ListenerResponseHttpOutput.ListenerResponseOutputStream, BodyWriterMessage, BufferedMessage, HttpRequestContext.ListenerHttpContext.ListenerContextHttpInput.ListenerContextInputStream, and HttpRequestContext.ListenerHttpContext. Questions: why do we have so many undisposed objects, and how do we control them? Please help.

    Read the article

  • MySQL: Count two things in one query?

    - by Nebs
    I have a "boolean" column in one of my tables (value is either 0 or 1). I need to get two counts: The number of rows that have the boolean set to 0 and the number of rows that have it set to 1. Currently I have two queries: One to count the 1's and the other to count the 0's. Is MySQL traversing the entire table when counting rows with a WHERE condition? I'm wondering if there's a single query that would allow two counters based on different conditions? Or is there a way to get the total count along side the WHERE conditioned count? This would be enough as I'd only have to subtract one count from the other (due to the boolean nature of the column). There are no NULL values. Thanks.

    Read the article

  • What to monitor on MSSQL Server

    - by user361434
    Hi all. I have been asked to monitor SQL Server (2005 & 2008) and am wondering what good metrics to look at are. I can access WMI counters, but am slightly lost as to how much depth is going to be useful. Currently I have on my list: user connections, logins per second, latch waits per second, total latch wait time, deadlocks per second, errors per second, and log and data file sizes. I am looking to monitor values that will indicate a degradation of performance on the machine or a potential serious issue. To this end, I am also wondering at what values some of these things would be considered normal vs. problematic. As I reckon it would probably be a really good question to have answered for the general community, I thought I'd court some of you DBA experts out there (I am certainly not one of them!). Apologies if this is a rather open-ended question. Ry
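
    As a starting point, most of the metrics on that list are exposed through a DMV as well as through WMI; a sketch (counter names can vary slightly between versions, and the '/sec' counters here are cumulative, so they need two samples and a diff to become rates):

        -- Raw values for several of the metrics listed above.
        SELECT [object_name], counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('User Connections',
                               'Logins/sec',
                               'Latch Waits/sec',
                               'Total Latch Wait Time (ms)',
                               'Number of Deadlocks/sec',
                               'Errors/sec');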

    Read the article

  • Maximizing the number of threads to fully utilize all available resources without hindering overall

    - by Matt
    Let's say I have to generate a bunch of result files, and I want to make it as fast as possible. Each result file is generated independently of any other result file; in fact, one could say that each result file is agnostic of every other result file. The resources used to generate each result file are also unique to each. How can I dynamically decide the optimal number of threads to run simultaneously in order to minimize the overall run time? Is my only option to write my own thread manager that watches performance counters and adjusts accordingly, or do there exist solid classes that already accomplish this?

    Read the article

  • COMET chat application - IIS7 slows down over time

    - by Yaron
    I have built a chat application which uses this code to push messages to clients (web pages) and to monitor online users and their information. Basically, the code creates and manages a custom thread pool for maintaining the list of connected users and their state. The application was hosted on a shared hosting account (IIS6) and worked fine. After moving the site (an ASP.NET app) to a dedicated virtual server, it seems I have a problem where IIS7 gets slower and slower as time passes, and my only "solution" is to restart IIS. I am trying to look at the performance counters but have no idea which ones to look at.

    Read the article

  • Mapping multiple keys to the same value in a JavaScript hash

    - by Bears will eat you
    I use a JavaScript hash object to store a set of numerical counters, set up like this [this is greatly simplified]: var myHash = { A: 0, B: 0, C: 0 }; Is there a way to make other keys, ones not explicitly in myHash, map to keys that are? For instance, I'd like [again, this is simplified]: myHash['A_prime']++; // or myHash.A_prime++; to be exactly equivalent to myHash['A']++; // or myHash.A++; i.e., incrementing the value found at the key A, not A_prime.

    Read the article

  • Oracle (PL/SQL): Is UPDATE RETURNING concurrent?

    - by Jaap
    I'm using a table with a counter to ensure unique ids for a child element. I know it is usually better to use a sequence, but I can't use one because I have a lot of counters (a customer can create a couple of buckets, and each of them needs its own counter; the counters have to start at 1, since it's a requirement that my customer gets "human readable" keys). I'm creating records (let's call them items) that have a primary key of (bucket_id, num = counter). I need to guarantee that the bucket_id/num combination is unique (so using a sequence as the primary key won't fix my problem). The creation of rows doesn't happen in PL/SQL, so I need to claim the number up front (by the way, it's not against the requirements to have gaps). My solution was: UPDATE bucket SET counter = counter + 1 WHERE id = param_id RETURNING counter INTO num_forprikey; PL/SQL returns num_forprikey so the item record can be created. Question: will I always get a unique num_forprikey, even if users concurrently ask for new items in a bucket?
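
    For context, a minimal sketch of how that statement might be wrapped (the function name is an assumption, and the table shape is inferred from the question): because the UPDATE takes a row lock on the bucket row until the transaction commits, concurrent callers serialize on that row and each one sees a distinct counter value.

        -- Sketch: claim the next per-bucket number; the row lock taken by
        -- the UPDATE serializes concurrent callers on the same bucket.
        CREATE OR REPLACE FUNCTION claim_next_num(param_id IN bucket.id%TYPE)
          RETURN bucket.counter%TYPE
        IS
          num_forprikey bucket.counter%TYPE;
        BEGIN
          UPDATE bucket
             SET counter = counter + 1
           WHERE id = param_id
          RETURNING counter INTO num_forprikey;
          RETURN num_forprikey;
        END claim_next_num;
        /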

    Read the article

  • How to automatically add a page number to all internal links on the HTML page?

    - by Tomasz
    In print mode, I would like to render a link such as <a href="#targetPage">link</a> like this: <a href="#targetPage">link (page 11)</a> (assuming the target is on page 11 in the print preview). It is possible to add a page number to the page footer with plain CSS counters, but can I use counters in this more "the way I want it" fashion? The solution doesn't need to be cross-browser (FF/Chrome is enough), and any CSS/JavaScript tricks are allowed...

    Read the article

  • Page load fires twice on Firefox - PLEASE HELP!

    - by The_AlienCoder
    OK, first some background: I have a page showing the number of hits (or views) of any selected item. The hit counter procedure is called on every page load, i.e.: if (Request.QueryString.HasKeys()) { // get item id from incoming url e.g. details.aspx?itemid=26 string itemid = Request.Params["itemid"]; if (!Page.IsPostBack) { countHit(itemid); } } The problem: my expectation was that the counter would be increased by 1 on every page load, but the counters on my DataList and FormView are always behind and stepped by 2, i.e. instead of 1, 2, 3, 4 it's 0, 2, 4, 6. It seems that the page load is firing twice. Later I discovered that this only happens when you are using Mozilla Firefox; the page behaves fine with other browsers like IE. This is becoming quite frustrating. Can anyone help? :-(

    Read the article

  • Can Django flush its database(s) between every unit test?

    - by mikem
    Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is that the auto_increment counters are reset. Consider a test which pulls data out of the database by primary key: class ChangeLogTest(django.test.TestCase): def test_one(self): do_something_which_creates_two_log_entries() log = LogEntry.objects.get(id=1) assert_log_entry_correct(log) log = LogEntry.objects.get(id=2) assert_log_entry_correct(log) This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2; they might be 2 and 3, and now test_one fails. This is actually a two-part question: (1) Is it possible to force ./manage.py test to flush the database between each test case? (2) Since Django doesn't flush the DB between each test by default, maybe there is a good reason; does anyone know it?

    Read the article

  • How to research unmanaged memory leaks in .NET?

    - by Brandon
    I have a WCF service running over MSMQ. Memory gradually increases over time, indicating that there is some sort of memory leak. I ran the service locally and monitored some counters using PerfMon. The total CLR managed heap bytes remain relatively constant, while the process's private bytes increase over time. This leads me to believe that there is some sort of unmanaged memory leak. Assuming that an unmanaged memory leak is the issue, how do I address it? Are there any tools I could use to give me hints as to what is causing it? Also, all my service does is read from the transactional queue and write to a database, all as part of a DTC transaction (handled under the hood by requiring a transaction on the service contract). I am not doing anything explicit with COM or DllImports. Thanks in advance!

    Read the article

  • Database design to hold multiple iteration measurements

    - by Valder
    Hi all. I am new to SQLite and SQL in general, and I am keen to switch from flat files to SQLite for holding some measurement information. I need a tip on how to better lay out the database, since I have zero experience with this. I have ~10,000 unique statistic counters that are collected before and after each test iteration. The max number of iterations is 10, though it could be less. I was thinking the following: CREATE TABLE stat_names(stat_id, stat_name); CREATE TABLE stats_per_iteration(stat_id, before_iter_1, after_iter_1, before_iter_2, after_iter_2, ...); The stat_names table would hold the mapping of each full counter name to a unique stat_id. The stats_per_iteration table would hold the measurement data in 1 + 10 * 2 columns, joined on stat_names.stat_id = stats_per_iteration.stat_id. Or maybe I should have a separate table for each iteration, which would result in 1 + 10 tables in the database? Thanks!
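
    For comparison, one common alternative to the wide layout is a "tall" table keyed by (stat_id, iteration, phase); a sketch with assumed names, which avoids the 21-column row and copes naturally with fewer than 10 iterations:

        CREATE TABLE stat_names (
            stat_id   INTEGER PRIMARY KEY,
            stat_name TEXT NOT NULL UNIQUE
        );

        -- One row per counter, per iteration, per before/after phase.
        CREATE TABLE measurements (
            stat_id   INTEGER NOT NULL REFERENCES stat_names(stat_id),
            iteration INTEGER NOT NULL,  -- 1 through 10, or fewer
            phase     TEXT    NOT NULL CHECK (phase IN ('before', 'after')),
            value     REAL,
            PRIMARY KEY (stat_id, iteration, phase)
        );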

    Read the article

  • InvalidCastException: System.Web.UI.PartialCachingControl -> MyCustomControl when OutputCaching

    - by marcinn
    The problem: I am unable to use output caching with my controls, which derive from MyCustomControl. Controls are loaded dynamically, using definitions from a database, with the Page.LoadControl method. When I add <%@ OutputCache VaryByParam="*" Duration="3600" %> to the ascx, an "InvalidCastException: System.Web.UI.PartialCachingControl -> MyCustomControl" exception is thrown. I am unable to modify the assembly which contains the dynamic control-loading logic. Is there any way to fix this in the derived controls? The second question is about IIS7 and native output caching: does it resolve this problem? (I tried to set up several performance counters and I saw that the cache wasn't hit...)

    Read the article

  • Highlighting current and previous stars on mouseover

    - by mpet
    I'm trying to make a simple five-star rating system using Twitter Bootstrap 3 and jQuery. For now, I'm trying to set up .hover() and .mouseout() handlers using a counter, with this code that doesn't work: var i; for (i = 1; i <= 5; i++) { $('#overall_rating_' + i).hover(function(){ $('#overall_rating_' + i).removeClass("glyphicon-star-empty").addClass("glyphicon-star"); }); $('#overall_rating_' + i).mouseout(function(){ $('#overall_rating_' + i).removeClass("glyphicon-star").addClass("glyphicon-star-empty"); }); } I am trying to highlight the current and previous stars on mouseover. The code is not complete (it would be accompanied by additional sub-counters), but this part doesn't work for now. Any better methods are welcome. What's broken here?

    Read the article

  • SQL SERVER – WRITELOG – Wait Type – Day 17 of 28

    - by pinaldave
    WRITELOG is one of the most interesting wait types. So far we have seen a lot of different wait types, but this one is associated with the log file, which makes it interesting to deal with. From Books Online: WRITELOG occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits. WRITELOG explanation: this wait type is usually seen in heavily transactional databases. When data is modified, it is written both to the log cache and the buffer cache. This wait type occurs while data in the log cache is being flushed to disk; during this time, the session has to wait due to WRITELOG. I recently saw this wait type persisting at a client's site, where one of the long-running transactions was stopped by the user, causing it to roll back. In the future, I will see if I can re-create this situation on my machine to validate the relation. Reducing the WRITELOG wait: there are several suggestions for reducing this wait stat. Move the transaction log to a separate disk from the mdf and other files. Avoid cursor-like coding methodologies and frequent committing of statements. Find the most active file based on IO stall time using the script written over here; you can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here. Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details; read about them over here. There are two excellent resources by Paul Randal; I suggest you learn the subject from those videos. The links to the videos are here and here. Note: the information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
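
    To check whether WRITELOG is actually significant on a given server, a quick look at the wait stats DMV works (a sketch; the values are cumulative since the last restart or stats clear, so trend them over time):

        -- Total and average time spent waiting on log flushes.
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'WRITELOG';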

    Read the article

  • Sixeyed.Caching available now on NuGet and GitHub!

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/22/sixeyed.caching-available-now-on-nuget-and-github.aspx The good guys at Pluralsight have okayed me to publish my caching framework (as seen in Caching in the .NET Stack: Inside-Out) as an open-source library, and it’s out now. You can get it here: Sixeyed.Caching source code on GitHub, and here: Sixeyed.Caching package v1.0.0 on NuGet. If you haven’t seen the course, there’s a preview here on YouTube: In-Process and Out-of-Process Caches, which gives a good flavour. The library is a wrapper around various cache providers, including the .NET MemoryCache, AppFabric cache, and memcached*. All the wrappers inherit from a base class which gives you a set of common functionality against all the cache implementations: •    inherits OutputCacheProvider, so you can use your chosen cache provider as an ASP.NET output cache; •    serialization and encryption, so you can configure whether you want your cache items serialized (XML, JSON or binary) and encrypted; •    instrumentation, so you can optionally use performance counters to monitor cache attempts and hits, at a low level. The framework wraps up different caches into an ICache interface, and it lets you use a provider directly like this: Cache.Memory.Get<RefData>(refDataKey); - or with configuration to use the default cache provider: Cache.Default.Get<RefData>(refDataKey); The library uses Unity’s interception framework to implement AOP caching, which you can use by flagging methods with the [Cache] attribute: [Cache] public RefData GetItem(string refDataKey) - and you can be more specific about the required cache behaviour: [Cache(CacheType=CacheType.Memory, Days=1)] public RefData GetItem(string refDataKey) - or really specific: [Cache(CacheType=CacheType.Disk, SerializationFormat=SerializationFormat.Json, Hours=2, Minutes=59)] public RefData GetItem(string refDataKey) Provided you get instances of classes with cacheable methods from the container, the attributed method results will be cached, and repeated calls will be fetched from the cache. You can also set a bunch of cache defaults in application config, like whether to use encryption and instrumentation, and whether the cache system is enabled at all: <sixeyed.caching enabled="true"> <performanceCounters instrumentCacheTotalCounts="true" instrumentCacheTargetCounts="true" categoryNamePrefix="Sixeyed.Caching.Tests"/> <encryption enabled="true" key="1234567890abcdef1234567890abcdef" iv="1234567890abcdef"/> <!-- key must be 32 characters, IV must be 16 characters--> </sixeyed.caching> For AOP and methods flagged with the cache attribute, you can override the compile-time cache settings at runtime with more config (keyed by the class and method name): <sixeyed.caching enabled="true"> <targets> <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheConfiguredInternal" enabled="false"/> <target keyPrefix="MethodLevelCachingStub.GetRandomIntCacheExpiresConfiguredInternal" seconds="1"/> </targets> </sixeyed.caching> It’s released under the MIT license, so you can use it freely in your own apps and modify it as required. I’ll be adding more content to the GitHub wiki, which will be the main source of documentation, but for now there’s an FAQ to get you started. * - in the course the framework library also wraps NCache Express, but there's no public redistributable library that I can find, so it's not in Sixeyed.Caching.

    Read the article

  • Large File Upload in SharePoint 2010

    - by Sahil Malik
    Okay, this is a big, BIG problem, and with SP2010 it's going to become more prominent because, at least on the server side, SharePoint can support large files much better than SharePoint 2007 ever did. The issues with very large files being uploaded through any browser-based API are: reliably transferring gigabyte-or-bigger files without breakage over a protocol like HTTP, which is better suited to tiny transfers like images and text; not killing your browser, because it has to load all of that in memory; and not killing your web server, because everything you upload through an HTTP POST first gets streamed into IIS memory (w3wp.exe memory) before the entire file finishes uploading and is stored. This means you cannot show an accurate, live progress bar for the upload (IIS gives you no accurate metric of an upload; all the counters it gives you are approximate); your w3wp.exe eats up all the server memory (4 GB of it, for a 4 GB upload); and a thread is kept busy for the entire duration of the upload, greatly limiting your web server's capability to serve new requests and killing effective load balancing. Finally, there is not killing your content database: as you upload a very large file, that file gets written sequentially into the DB, which for a very large file severely impacts database performance. I had put together another video showing RBS usage in SharePoint 2010, in which I talked about many practical ramifications of using RBS in SharePoint. Note that enabling large file support will never be a point-and-click job, simply because there are too many questions one needs to ask and too many things one needs to plan for. However, the one part that will remain common across all large file upload scenarios, in SharePoint or outside of it, is doing the upload efficiently while not killing the web server. In this video, I describe using the Telerik Silverlight upload control with SharePoint 2010 to enable efficient large file uploads in SharePoint. Presenting: the video.

    Read the article

  • SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer

    - by pinaldave
    Incredibly, SQL Server has so much information to share with us; every single day, I am amazed by this technology. Sometimes I find interesting information just by querying a few of the DMVs, and when I present it to my clients during performance tuning consultancy, they are surprised by my findings. Today, I am going to share one of the hidden gems among the DMVs with you, one which I frequently use to understand what’s going on under the hood of SQL Server. SQL Server keeps a record of most of the operations of the Query Optimizer. We can learn many interesting details about the optimizer which can be utilized to improve the performance of the server. SELECT * FROM sys.dm_exec_query_optimizer_info WHERE counter IN ('optimizations', 'elapsed time','final cost', 'insert stmt','delete stmt','update stmt', 'merge stmt','contains subquery','tables', 'hints','order hint','join hint', 'view reference','remote query','maximum DOP', 'maximum recursion level','indexed views loaded', 'indexed views matched','indexed views used', 'indexed views updated','dynamic cursor request', 'fast forward cursor request') All occurrence values are cumulative and are set to 0 at system restart. All values for the value fields are set to NULL at system restart. I have removed a few of the internal counters from the script above and kept only the documented details. Let us check the result of the above query. As you can see, so much vital information is revealed: I can easily say how many times the optimizer was triggered and what the average time taken to optimize my queries was. Additionally, I can determine how many times update, insert or delete statements were optimized. Using this dynamic management view, I was able to quickly figure out that my client was overusing query hints. If you have been reading my blog, I am sure you are aware of my series related to SQL Server views, SQL SERVER – The Limitations of the Views – Eleven and more…. With this view, I can take a quick look and figure out how many times views were used in various solutions within the query. Moreover, you can easily work out what fraction of all optimizations involved a particular feature. For example, the following query tells me what fraction of total optimizations included a view reference (as this also counts system views and DMVs, the number is a bit higher on my machine): SELECT (SELECT CAST (occurrence AS FLOAT) FROM sys.dm_exec_query_optimizer_info WHERE counter = 'view reference') / (SELECT CAST (occurrence AS FLOAT) FROM sys.dm_exec_query_optimizer_info WHERE counter = 'optimizations') AS ViewReferencedFraction Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28

    - by pinaldave
    At first, I was not planning to write about this wait type. The reason was simple: I have faced it only once in my lifetime so far, perhaps because it is not one of the top 5 wait types. I am not sure if it is a common wait type or not, but in the samples I have, it really looks rare to me. From Books Online: LOGBUFFER occurs when a task is waiting for space in the log buffer to store a log record. Consistently high values may indicate that the log devices cannot keep up with the amount of log being generated by the server. LOGBUFFER explanation: the Books Online definition of LOGBUFFER seems very accurate. On the system where I faced this wait type, the log file (LDF) was on a local disk, and the data files (MDF, NDF) were on SAN drives; my client was not familiar with how the files were supposed to be distributed. Once we moved the LDF to a faster drive, this wait type disappeared. Reducing the LOGBUFFER wait: there are several suggestions for reducing this wait stat. Move the transaction log to a separate disk from the mdf and other files (make sure the drive holding your LDF has no IO bottleneck issues). Avoid cursor-like coding methodologies and frequent commit statements. Find the most active file based on IO stall time, as shown in the script written over here; you can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here. Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details; read about them over here. If you have noticed, my suggestions for reducing LOGBUFFER are very similar to those for WRITELOG. Although the procedures for reducing them are alike, I am not suggesting that LOGBUFFER and WRITELOG are the same wait type; from the definitions of the two, you will find their difference. However, they are both related to the log, and both of them can severely degrade performance. Note: the information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
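
    As a companion to the fn_virtualfilestats suggestion, the same IO stall information is exposed on SQL Server 2005 and later through a DMV; a sketch that ranks log files by cumulative write stalls (a slow drive under an LDF shows up here quickly):

        -- Log files with the highest cumulative write stalls.
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.physical_name,
               vfs.num_of_writes,
               vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
          ON  mf.database_id = vfs.database_id
          AND mf.file_id     = vfs.file_id
        WHERE mf.type_desc = 'LOG'
        ORDER BY vfs.io_stall_write_ms DESC;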

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, “What does ‘OLEDB’ stand for?” A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was among the top 10 wait types in many of the systems I have come across in my performance tuning experience. From Books Online: OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB provider. This wait type is not used for synchronization; instead, it indicates the duration of calls to the OLE DB provider. OLEDB explanation: this wait type primarily happens when a linked server or remote query is executed. The most common case in which this wait type is visible is during the execution of a linked server query. When SQL Server is retrieving data from the remote server, it uses the OLEDB API to retrieve the data. It is possible that the remote system is not quick enough or the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server; this is when the OLEDB wait type occurs. Reducing the OLEDB wait: check the linked server configuration, and check the disk-related Perfmon counters: Average Disk sec/Read and Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good) and Average Disk Read/Write Queue Length (a consistently higher value than your benchmark is not good). At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here, and I will share your comment with the rest of the community, with due credit to you, of course. Please read all the posts in the Wait Types and Queue series. Note: the information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
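
    As a first pass on the “check the linked server configuration” suggestion, the defined linked servers and their providers can be inventoried from the catalog views (a sketch; timeouts and provider options on each server still need reviewing separately):

        -- Which linked servers exist, and what each one points at.
        SELECT name,
               provider,
               data_source,
               is_data_access_enabled,
               is_rpc_out_enabled
        FROM sys.servers
        WHERE is_linked = 1;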

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke, and we're hearing about “redefining mission critical in the cloud”. With a move to the Windows Azure cloud comes the promise of capacity on demand and automatic HA, backups, and patching, as well as passing responsibility to MS for managing hardware, upgrades, and so on. However, for many databases and applications the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS: SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost: the biggest VM (8 CPU cores, 56 GB RAM, 16 1 TB drives with 500 IOPS each) weighs in at over $4,500 per month. With PaaS, in the form of Windows Azure SQL Database, you get a “3-copies replica set”, so HA comes out of the box and the majority of the administration burden is removed, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster and potentially even sharing the same database (multi-tenancy). Unless you pay for the SQL DB Premium edition, the resources available for your workload will depend on how nicely others “play” in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with the increased latency between the two, and so on. You'll need aggressive application caching, and you'll need retry logic to deal with (expected) node failure and the need to reconnect. In Tuesday's PASS Summit pre-con from the SQLCAT team, they spent a lot of time covering the telemetric techniques (collecting the necessary monitoring data into Azure storage) needed to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others will be essential. Of course, to truly exploit the vast horizontal scaling available from thousands of compute nodes, you'll also need to consider how to “shard” your data so Azure can move it between nodes at will. Finding the right path to the cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now at their companies it's the conversation that won't go away. Tony.

    Read the article

  • Desktop Fun: Runic Style Fonts

    - by Asian Angel
    Most of the time regular fonts are just what you need for documents, invitations, or adding text to images. But what if you are in the mood for something unusual or unique to add that perfect touch? If you like older runic style writing, then enjoy finding some new favorites for your collection with our Runic Style Fonts collection. Temple photo by ShinyShiny. Note: To manage the fonts on your Windows 7, Vista, & XP systems see our article here. The Runic Style Fonts Sable Download Worn Manuscript Download JSL Ancient Download Antropos Download Cave Gyrl Download The Roman Runes Alliance Download Ancient Geek Download Troll Download Runish Quill MK *includes two font types Download DS RUNEnglish 2 Download Runes Written *includes two font types Download Wolves And Ravens Download Art Greco Download Dalek Download Glagolitic AOE Download Linear B Download Cartouche Download Greywolf Glyphs *includes 62 individual characters Note: This group represents A – Z in all capital letters. Note: This group represents A – Z in all lower case letters. Note: This group represents the numbers 0 – 9. Download Africain *includes 62 individual characters Note: This group represents A – Z in all capital letters. Note: This group represents A – Z in all lower case letters. Note: This group represents the numbers 0 – 9. Download Cave Writings *includes 52 individual characters Note: This group represents A – Z in all capital letters. Note: This group represents A – Z in all lower case letters. Download For more great ways to customize your computer be certain to look through our Desktop Fun section.

    Read the article

  • Does Your Customer Engagement Create an Ah Feeling?

    - by Richard Lefebvre
    An Oracle CX Blog article by Christina McKeon. Companies that successfully engage customers all have one thing in common: they make it seem easy for the customer to get what they need. No one would argue that brands don’t want to leave customers with this “ah” feeling, and since 94% of customers who have a low-effort service experience will buy from that company again, it makes financial sense for brands [1]. Some brands are thinking differently about how they engage their customers to create ah feelings. How do they do it? Toyota is a great example of using smart assistance technology to understand customer intent and answer questions before customers hit the submit button online. What is unique in this situation is that Toyota captures intent while customers are filling out email forms. Toyota analyzes the data in the form and suggests responses before the customer sends the email. The customer gets the right answer, and the email never makes it to your contact center, which makes you and the customer happy. Most brands are fully aware of chat as a service channel, but some brands take chat to a whole new level. Beauty.com, part of the drugstore.com and Walgreens family of brands, uses live chat to replicate the personal experience that one would find at high-end department store cosmetic counters. Trained beauty advisors, all with esthetician or beauty counter experience, engage in live chat sessions with online shoppers to share immediate advice on the best products for their personal needs. Agents can watch customer activity online and determine the right time to reach out and offer help, just as help would be offered in a brick-and-mortar store, and they can co-browse along with the customer, helping with online check-out. These personal chat discussions also give Beauty.com the opportunity to present products, advertise promotions, and resolve customer issues when they arise. Beauty.com converts approximately 25% of chat sessions into product orders. Photobox, the European market leader in online photo services, wanted to deliver personal and responsive service to its 24 million members, so it ensures customer inquiries on personalized photo products are routed based on agent knowledge, letting customers get what they need from the company’s experts. By routing each query to the agent with the most appropriate knowledge, agent productivity increased while response times to 1,500 customer queries per day decreased, and a real-time dashboard prevents agents from being overloaded with queries. This approach has produced financial results: a 15% increase in sales to existing customers and a 45% increase in orders from newly referred customers.

    Read the article

  • Unintentional run-in with C# thread concurrency

    - by geekrutherford
    For the first time today we began conducting load testing on an ASP.NET application already in production. Obviously you would normally want to load test prior to releasing to a production environment, but that isn't the point here. We ran a test which simulated 5 users hitting the application doing the same actions simultaneously. The first few pages visited seemed fine, and then things just hung for a while before the test failed. While the test was running I was viewing the performance counters on the server, noting that the CPU was consistently pegged at 100% until the testing tool gave up. Fortunately the application logs all exceptions, including those unhandled, to the database (thanks to log4net). I checked the log and, lo and behold, the error was: System.ArgumentException: An item with the same key has already been added. (The rest of the stack trace is intentionally omitted.) Since the code was running with debug on, the line number where the exception occurred was also provided. I began inspecting the code and almost immediately it hit me: the section of code responsible for the exception is trying to initialize a static class. My next question was how this code could be hit multiple times when I have a rudimentary check already in place to prevent this kind of thing (i.e. a check on a public variable of the static class before entering the initialization routine). The answer: the check fails because the value is not set before other threads have already made it through. Not being one who consistently works with threading, I wasn't quite sure how to handle this problem. Fortunately a co-worker recalled having to lock a section of code in the past but couldn't recall exactly how. After a quick search on Google, the solution is as follows: private static readonly object objLock = new object(); lock (objLock) { // logic requiring lock } The lock statement takes an object and tells the .NET runtime that the current thread has exclusive access while the code within the braces is executing; once the code completes, the lock is released for another thread to acquire. Note that the lock object must be shared by all threads (hence the static field), because a new lock object created on every call would provide no mutual exclusion at all. In my case, I only need to execute the inner code once to initialize my static class, so within the braces I have a check on a public variable to prevent it from being initialized again.

    Read the article
