Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 75/527 | < Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporary issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.

    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all four of its CPU cores running close to 100% and don’t panic anymore. Why? It’s 5 PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know they need to respond in earnest the first time they hear about an issue, even if it has been happening every day.

    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. However, without baselines it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months.

    It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type:

        SELECT Value FROM SomeSystemTableOrView;

    Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to compare metrics over different periods. However, to get yourself started, and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
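    As a concrete example of that pattern, here is a minimal hedged T-SQL sketch that baselines a few values from the sys.dm_os_performance_counters DMV; the BaseLine.PerfCounters table and the choice of counters are illustrative, not from the article:

        -- A minimal sketch: capture an hourly baseline of selected performance counters.
        CREATE SCHEMA BaseLine;
        GO
        CREATE TABLE BaseLine.PerfCounters
        (
            object_name   nvarchar(128) NOT NULL,
            counter_name  nvarchar(128) NOT NULL,
            instance_name nvarchar(128) NULL,
            cntr_value    bigint        NOT NULL,
            captureTime   datetime2     NOT NULL
        );
        GO
        -- Put this statement in an Agent job running every hour or so:
        INSERT INTO BaseLine.PerfCounters
            (object_name, counter_name, instance_name, cntr_value, captureTime)
        SELECT object_name, counter_name, instance_name, cntr_value, SYSDATETIME()
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN (N'Batch Requests/sec', N'Page life expectancy');

    After a few weeks of captures, the history in that table is your baseline to compare against.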

    Read the article

  • Why do my local .NET sites take time to load for the first time after each restart?

    - by Saeed Neamati
    I'm developing sites based on the .NET platform. I usually deploy these sites on my local IIS, so that I can test them and see their functionality before going live. However, each time I restart Windows, it seems that the sites take a long time to run for the first time. I know about JIT and I'm also aware of this question, but it doesn't answer mine. Does JIT compilation happen every time you restart Windows? Is it related to the creation of the w3wp.exe process? Why are sites so slow for the first request after each restart?

    Read the article

  • Webscale is all about sharding and it's coming to SQL Azure

    - by simonsabin
    There are many who joke about developers always talking about webscale and needing to shard in order to scale. In reality many systems, if not most, don’t need to scale out to numerous nodes, because today’s processors are so powerful. However, in the cloud world, where you don’t have one big box but many little ones (instances), you need some way of sharding/federating/distributing data and load. I’ve mentioned before a PDC presentation on what’s coming in SQL Azure; well, they’ve put some...(read more)
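    For context, the sharding support this refers to later surfaced in SQL Azure as Federations; a hedged sketch of that T-SQL syntax follows (the names are illustrative, and details may differ from what the PDC presentation previewed):

        -- Partition Orders data across federation members on a range of customer ids:
        CREATE FEDERATION Orders_Federation (cust_id BIGINT RANGE);
        GO
        -- Route this connection to the federation member that holds cust_id = 42:
        USE FEDERATION Orders_Federation (cust_id = 42) WITH RESET, FILTERING = ON;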

    Read the article

  • Improving Comparison Operators and Window Functions

    It is dangerous to assume that your data is sound. SQL already has intrinsic ways to cope with missing or unknown data in its comparison predicate operators, or theta operators. Can SQL be more effective in the way it deals with data quality? Joe Celko describes how the SQL Standard could soon evolve to deal with data in ways that allow aggregation and windowing in cases where the data quality is less than perfect.
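    To make today's behaviour concrete, here is a small hedged SQL sketch (the table and values are hypothetical, not from Celko's article) showing how comparison predicates and aggregates currently treat NULLs:

        -- Hypothetical table: sensor readings with one missing value.
        CREATE TABLE Readings (sensor_id INT, reading DECIMAL(10,2));
        INSERT INTO Readings VALUES (1, 10.0), (1, NULL), (2, 20.0);

        -- A theta comparison against NULL yields UNKNOWN, so the NULL row never qualifies:
        SELECT * FROM Readings WHERE reading > 5;

        -- Aggregates silently skip NULLs: COUNT(reading) counts only non-NULL values.
        SELECT sensor_id,
               AVG(reading)   AS avg_reading,
               COUNT(*)       AS rows_total,
               COUNT(reading) AS rows_with_data
        FROM Readings
        GROUP BY sensor_id;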

    Read the article

  • Object pools for efficient resource management

    - by GameDevEnthusiast
    How can I avoid using the default new() to create each object? My previous demo had very unpleasant framerate hiccups during dynamic memory allocations (usually, when arrays are resized), and creating lots of small objects which often contain one pointer to some DirectX resource seems like an awful lot of waste. I'm thinking about:
    - Creating a master look-up table to refer to objects by handles (for safety & ease of serialization), much like EntityList in the Source engine.
    - Creating a templated object pool, which will store items contiguously (more cache-friendly, fast iteration, etc.), with the stored elements accessed (by external systems) via the global lookup table. The object pool will use the swap-with-last trick for fast removal (it will invoke the object's destructor first) and will update the corresponding indices in the global table accordingly (when growing/shrinking/moving elements). The elements will be copied via plain memcpy().
    Is it a good idea? Will it be safe to store objects of non-POD types (e.g. with pointers or a vtable) in such containers? Related post: Dynamic Memory Allocation and Memory Management

    Read the article

  • What are the Crappy Code Games - Tips on how to win?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win.

    Tips on how to win: Each test has some different aspect that will define how you win. In this post we will give you some tips on how to try and win. As a starter, why not watch some of the sessions from previous SQLBits: Storage sessions, Sessions on IO...(read more)

    Read the article

  • Using JDBC to asynchronously read large Oracle table

    - by Ben George
    What strategies can be used to read every row in a large Oracle table, only once, but as fast as possible with JDBC & Java? Consider that each row has non-trivial amounts of data (30 columns, including large text in some columns). Some strategies I can think of are:
    - Single thread reading the whole table (too slow, but listed for clarity).
    - Read the ids into a ConcurrentLinkedQueue, and use threads to consume the queue and query by id in batches.
    - Read the ids into a JMS queue, and use workers to consume the queue and query by id in batches.
    What other strategies could be used? For the purpose of this question, assume processing of rows to be free.
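    For the "query by id in batches" strategies, each worker can page through its share with keyset-style batching. A hedged Oracle SQL sketch (big_table and id are placeholder names, not from the question):

        -- Each worker repeats this, passing in the last id it processed (:last_id, starting below the minimum).
        -- The ROWNUM form works on older Oracle versions as well as 11g+.
        SELECT *
        FROM  (SELECT t.*
               FROM   big_table t
               WHERE  t.id > :last_id
               ORDER  BY t.id)
        WHERE ROWNUM <= 1000;

    With setFetchSize tuned on the JDBC statement, each worker streams its batches without ever re-reading a row.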

    Read the article

  • How to avoid the exception “Substitution controls cannot be used in cached User Controls or cached Master Pages”

    - by DigiMortal
    Recently I wrote an example about using user controls with donut caching. Because cache substitutions are not allowed inside partially cached controls, you may get the error “Substitution controls cannot be used in cached User Controls or cached Master Pages” when breaking this rule. In this posting I will introduce some strategies that help to avoid this error. How does the Substitution control check its location? It uses the following check in its OnPreRender method:

        protected internal override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);
            for (Control control = this.Parent; control != null;
                 control = control.Parent)
            {
                if (control is BasePartialCachingControl)
                {
                    throw new HttpException(
                        SR.GetString("Substitution_CannotBeInCachedControl"));
                }
            }
        }

    It traverses the control tree upward from its parent to the top, looking for at least one control that is partially cached. If such a control is found, the exception is thrown. Reusing the functionality: if you want to handle it yourself when your control might cause the exception mentioned above, you can use the same code. I modified the previously shown code into a method that can easily be moved to a user control base class, if you have one; if you don't, you can use it in the controls where you need this check:

        protected bool IsInsidePartialCachingControl()
        {
            for (Control control = Parent; control != null;
                 control = control.Parent)
                if (control is BasePartialCachingControl)
                    return true;

            return false;
        }

    Now it is up to you how to handle the situation where your control with substitutions is a child of some partially cached control. You can also add some debug-level output here, so you can see exactly which controls in the control hierarchy are cached and cause problems.

    Read the article

  • Optimized algorithm for line-sphere intersection in GLSL

    - by fernacolo
    Well, hello then! I need to find the intersection between a line and a sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way:

        // The line passes through p1 and p2:
        vec3 p1 = (...);
        vec3 p2 = (...);

        // Sphere center is p3, radius is r:
        vec3 p3 = (...);
        float r = ...;

        float x1 = p1.x; float y1 = p1.y; float z1 = p1.z;
        float x2 = p2.x; float y2 = p2.y; float z2 = p2.z;
        float x3 = p3.x; float y3 = p3.y; float z3 = p3.z;

        float dx = x2 - x1;
        float dy = y2 - y1;
        float dz = z2 - z1;

        float a = dx*dx + dy*dy + dz*dz;
        float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3));
        float c = x3*x3 + y3*y3 + z3*z3 + x1*x1 + y1*y1 + z1*z1
                - 2.0 * (x3*x1 + y3*y1 + z3*z1) - r*r;

        float test = b*b - 4.0*a*c;

        if (test >= 0.0) {
            // Hit (according to Treebeard, "a fine hit").
            float u = (-b - sqrt(test)) / (2.0 * a);
            vec3 hitp = p1 + u * (p2 - p1);
            // Now use hitp.
        }

    It works perfectly! But it seems slow... I'm new to GLSL. You can answer this question in two ways: tell me there is no faster solution, showing some proof or strong evidence; or tell me about GLSL features (vector APIs, primitive operations) that make the above algorithm faster, showing some example. Thanks a lot!

    Read the article

  • Openbox overhead is similar to that with gnome-panel

    - by drN
    I just installed Openbox via sudo apt-get install openbox. It already has obconf, btw. When I logged into my Openbox session instead of the one I usually use (gnome-panel, and NOT Ubuntu 2D or one of the high-overhead environments) and checked via htop, I found that a similar amount of RAM (~600 MB or so) was occupied with Openbox as with gnome-panel. What gives? Openbox looks lighter, but it certainly isn't any different. Obviously the same daemons etc. would run in both environments, as they share the same folders. Is gnome-panel as good as Openbox, then?

    Read the article

  • SCOM, 90 Days In, I

    - by merrillaldrich
    At my office we’re about 90 days into our implementation of System Center Operations Manager for Windows Server and SQL Server monitoring. All in all it’s been a good experience, and I’m really excited to have access to this tool. I’ve logged a fair number of years as a DBA on products like Idera’s SQL Diagnostic Manager and Quest Spotlight on SQL Server Enterprise (and “roll-your-own” solutions) in smaller environments, and liked them, but they always, in my experience, struggled with really large...(read more)

    Read the article

  • My laptop hangs a lot

    - by Salahuddin
    My laptop has 1 GB of RAM, a 120 GB hard disk and a 1.68 GHz processor. I have both Ubuntu and Windows 7 on my machine. For the last few days, my laptop, running Ubuntu 11.10, has been hanging a lot. A lot of the time, the touchpad won't work for unknown reasons, and I have to connect a USB mouse to be able to move the pointer on the screen. When browsing the web, the browser hangs a lot, and I have to restart the laptop to refresh it. I know that there is a hotkey to force Ubuntu to refresh when it hangs, but it doesn't work here. These problems happen only on Ubuntu, not on Windows 7. How do I make Ubuntu light and fast? Help me, please.

    Read the article

  • System Slow After Upgrading Ubuntu

    - by Aragon N
    I have an Ubuntu network machine running release 10.04.1 LTS (Lucid). On this system I have Apache, PostgreSQL and Django. For some app development I had to install php and php-curl... Being on a network, I exported the VMware machine to the internet; first I upgraded the system, and then installed the php5 packages on it. After putting everything back in its old place, I noticed that queries on the new system are slow compared to the old one.

    Old system query time: 140 ms
    New system query time: 9.11 s

    I have checked /etc/network/interfaces and it seems there is no problem. I have checked /etc/resolv.conf and it seems OK. I have checked /etc/nsswitch.conf, and only the hosts section is different from the old one, which had:

    hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

    Then I checked time host -t A services.myapp.com and got:

    real 0m0.355s
    user 0m0.010s
    sys 0m0.020s

    Now what should I check to get my system performing as before?

    Read the article

  • Make OpenGL game perform better

    - by Csabi
    I have programmed an OpenGL game which contains just one F1 car and a track. It is very simple and only uses around 10,000-20,000 triangles. It should run on any PC, but it won't; it needs a really good graphics card to run at a decent framerate. Can you suggest some methods, or links to sites, which would help me make my scene/game more effective? My game can be downloaded from here or directly from here.

    Read the article

  • Demantra Implementation Tip Windows and Unix or Linux

    - by user702295
    Hello! Are you implementing using a third party or consulting resources? Recently we have seen some cases where customers no longer have a Windows installation. After the initial install and configuration, once the instance has gone live, the Windows install is either deleted or, most likely, no longer with the customer, as it was installed on the implementer's laptop to start with. As a result, when Support comes back requesting the customer to apply a patch and/or upgrade, they do not have a Windows installation. This started happening after Oracle Demantra gave customers the option to configure the engine on Unix. Workaround: it is advisable that the customer keep their Windows installation intact for further patching and/or upgrades. It is also possible that the implementer installed Demantra on his Windows box and you no longer have access to it. It is possible that, with the web server and engine on Unix, and the silent installer having downloaded all the executables for Business Modeler to work on the user's client machine, you may no longer need the Windows install. I have not tested the above.

    Read the article

  • EPM 11.1.2.1 - Smartview client and HFM office provider

    - by user809526
    If your connection to the Smart View provider is very slow because the login part takes a long time (user directory slowness, ...), consider adding the following Windows registry value on the desktop side, to avoid being prompted over and over again for username/password:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
        ReceiveTimeout = 300000

    This is an addition to the support doc: "Smart View 11.1.2.1 Keeps Prompting For Username And Password For Financial Management Provider [ID 1353294.1]"

    Read the article

  • Javascript Canvas Drawing Efficiency

    - by jujumbura
    I have just recently started experimenting with game development in Javascript/HTML5, and so far it has been going pretty well. I have a simple test scene running with some basic input handling, and a hundred-ish drawImage() calls with a few transforms. This all runs great on Chrome, but unfortunately it already chugs on Firefox. I am using a very large canvas (1920 x 1080), but it doesn't seem like I should be hitting my limit already. So on that note, I was hoping to ask a few questions:

    1) What exactly is done on the CPU vs. the GPU in terms of canvas and drawImage()? I'm afraid the answer is probably "it depends on the browser", but can anybody give me some rules of thumb? I naively imagined that each drawImage call results in a textured quad on the GPU, with the canvas effectively being a render target, but I'm wondering if I'm pretty far off base there...

    2) I have seen posts here and there with people saying not to use the translate(), rotate() and scale() functions when drawing on the canvas. Am I adding a lot of overhead just by adding a translate() call, as opposed to passing the x,y to drawImage()? Some people suggest using "translate3d", etc., which are CSS properties, but I'm not sure how to use them within a scene. Can they be used for animated sprites within a single canvas?

    3) I have also seen a lot of posts with people mentioning that pre-building canvases and then re-using them is a lot faster than issuing all the individual draw calls again. I am guessing that my background should definitely be pre-built into a canvas, but how far should I take this? Should I maintain an individual canvas for each sprite, to cache all static image data when not animating?

    Thank you much for your advice!

    Read the article

  • What is Rainbow (not the CMS)

    - by Jeremy Thompson
    I was reading this excellent blog article regarding speeding up the badge page, and in the last comment the author @waffles (a.k.a. Sam Saffron) mentions these tools: "dapper and a bunch of custom helpers like rainbow, sql builder etc". Dapper and the SQL builder were easy to look up, but Rainbow keeps pointing me to a CMS; can someone please point me to the real source? Thanks. Obviously the architecture of these [SE] sites is uber cool and ultra fast, so no comments on that, thanks.

    Read the article

  • Java JRE 7 Automatic Upgrade and Demantra Requirements - Action Required

    - by user702295
    The following applies to ALL Demantra, EBS and Demantra Oracle integrations: all EBS desktop administrators must disable JRE Auto-Update for their end-users immediately. See this externally-published article:

    URGENT BULLETIN: Disable JRE Auto-Update for All E-Business Suite End-Users
    https://blogs.oracle.com/stevenChan/entry/bulletin_disable_jre_auto_update

    Why is this required? If you have Auto-Update enabled, your JRE 1.6 version will be updated to JRE 7. This may happen as early as July 3, 2012, and will definitely happen after Sept. 7, 2012, with the release of 1.6.0_35 (6u35). Oracle Forms is not compatible with JRE 7 yet, and JRE 7 has not been certified with Oracle E-Business Suite yet. Oracle E-Business Suite functionality based on Forms -- e.g. Financials -- will stop working if you upgrade to JRE 7.

    Related news: Java 1.6.0_33 is certified with Oracle E-Business Suite. See this externally-published article:

    Java JRE 1.6.0_33 Certified with Oracle E-Business Suite
    https://blogs.oracle.com/stevenChan/entry/jre_1_6_0_33

    Read the article

  • Unused Indexes on MDP_MATRIX Consuming Resources

    - by user702295
    Disable unused indexes: As much as it is recommended to create relevant indexes, it is advised not to have too many indexes on the MDP_MATRIX table. Too many indexes will cause long waits on the table, as indexes need to be updated every time the table is updated. There are many seeded indexes on MDP_MATRIX; every out-of-the-box data model level has an index on the matrix table. If a level is unused in the specific data model of the implementation, it is advisable to disable that index. If the customer is not sure if and how indexes are utilized, the DBA can monitor all indexes. After a few cycles of operation, the DBA should go over that list, see which indexes have not been used, and consider disabling them. There are scripts on the net to monitor indexes, or use the MONITORING USAGE clause of the ALTER INDEX statement, as sketched below.
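    If you want to try the ALTER INDEX approach mentioned above, here is a hedged Oracle SQL sketch; the index name is illustrative, and note that v$object_usage reports only indexes owned by the connected schema:

        -- Start collecting usage information for a suspect index:
        ALTER INDEX mdp_matrix_lvl1_idx MONITORING USAGE;

        -- After a few cycles of operation, list indexes that were never used:
        SELECT index_name, table_name, monitoring, used, start_monitoring
        FROM   v$object_usage
        WHERE  used = 'NO';

        -- Stop monitoring once you have your answer:
        ALTER INDEX mdp_matrix_lvl1_idx NOMONITORING USAGE;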

    Read the article

  • Different ways to pass Textures into HLSL shaders

    - by codymanix
    The GraphicsDevice class of XNA 4 has the properties Textures and VertexTextures. What is the exact difference? I don't really understand what MSDN tells me about this. I usually use Effect parameters to pass textures to my HLSL shaders. What are the differences between these methods, and which is faster? My scenario: I am working on a Minecraft-like game, which means lots of separate DrawPrimitives calls, and the current texture changes often, since I have lots of different block types. Since I use an octree to organize the world, I cannot easily sort by texture.

    Read the article

  • Drawing different per-pixel data on the screen

    - by Amir Eldor
    I want to draw different per-pixel data on the screen, where each pixel has a specific value according to my needs. An example may be a random noise pattern where each pixel is randomly generated. I'm not sure what the correct and fastest way to do this is. Locking a texture/surface and manipulating the raw pixel data? How is this done in modern graphics programming? I'm currently trying to do this in Pygame, but I realize I will face the same problem if I go for C/SDL or OpenGL/DirectX.

    Read the article

  • What are the Crappy Code Games - Why should I attend?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win.

    Why should I attend? The Crappy Code Games aren't all about winning; they're also about having a good time and learning about SQL Server. Even if you don't want to enter the competition, I think it's valuable for you to come along to the heats and the final in Brighton as a chance to meet SQL Server bods and also to learn about SQL Server and...(read more)

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I develop high-volume processing systems: mathematical models that calculate various parameters based on millions of records, derived fields computed over millions of records, processes over huge files of transactions, etc. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes, etc.), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), then automatically run them overnight and check the results (a sketch of one such test follows below). But this is only for T-SQL, and continuous integration is hard. The real problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach derived over the years, but maybe I am just not reading enough articles. So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct (as you develop it)? How do you achieve continuous integration in such an environment (personally I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but this is quite similar)? I hope this is not too open-ended a question. luke
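    As an illustration of the overnight stored-procedure tests described above, here is a minimal hedged T-SQL sketch; the TEST schema matches the author's convention, but the table, values and the check itself are hypothetical:

        CREATE SCHEMA TEST;
        GO
        -- One self-checking test: run the calculation over known input and assert the result.
        CREATE PROCEDURE TEST.Check_MonthEndTotals
        AS
        BEGIN
            DECLARE @expected MONEY = 1250.00, @actual MONEY;

            SELECT @actual = SUM(amount)
            FROM   dbo.Transactions
            WHERE  period = '2012-01';

            -- RAISERROR makes the overnight Agent job step fail visibly.
            IF @actual IS NULL OR @actual <> @expected
                RAISERROR('Check_MonthEndTotals failed: totals do not match baseline.', 16, 1);
        END;
        GO

    A scheduling wrapper can then execute every procedure in the TEST schema each night and collect the failures.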

    Read the article

  • Updates to the Demantra Partial Schema Exporter Tool, Patch 13930627, are Available.

    - by user702295
    Hello! Updates to the Demantra Partial Schema Exporter Tool, Patch 13930627, are available. This is an updated re-release of the generic Partial Schema Exporter Tool. The generic patch is for 7.3.1.x and 12.2.x. TABLE_REORG was introduced in 7.3.1.3 and 12.2.0; therefore, for 7.3.1.x the schema must be at 7.3.1.3 or above. This is build 3 of the patch. It contains fixes for the following bugs:

    - BUG 17495971 - DEMANTRA 12.2 - CUMULATIVE HISTORY NOT CORRECT. It now uses DATA_PUMP COMPRESSION only on Enterprise Edition, for 11g and up.
    - BUG 17452153 - 1OFF:16086475:TRYING TO FILTER DROP DOWN IN A METHOD CALL USING MORE THAN 1 ATTR. It now builds GL level filters with and without the GL id column, where applicable.

    These bugs are also fixed in 7.3.1.6 and 12.2.3.

    Read the article
