Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 247/594

  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a Database Administrator, or even a developer, is required to wear a spy’s hat. This necessity is oftentimes dictated by a need to take a glimpse into a black-box application, for reasons varying from a performance issue to unauthorized access to data or resources, or, as in my most recent case, a closed-source custom application abandoned by a contractor who left without handing over the source code. It may not be news to most IT people that SQL Server has always provided a means of back-door access to everything connecting to its database. This indispensable tool is SQL Server Profiler. This “gem” is always quietly sitting in the Start – Programs – SQL Server <product version> – Performance Tools folder (yes, it is mostly for performance analysis, but it is not limited to that), ready to help you! So, to the action: let’s start it up. Once it is ready, click File – New Trace, or press Ctrl-N. The standard connection dialog you have seen in SSMS comes up, where you connect the standard way. One side note here: you will be able to connect only if your account belongs to the sysadmin fixed server role or has the ALTER TRACE permission. Upon a successful connection you should see the initial trace properties dialog. At this stage I will give a hint: you will have a wide variety of predefined templates, but to shorten your time to results, opt for the TSQL_Grouped template. Now you need to set it up. In some cases you will know in advance the principal’s login name (account) that needs to be monitored, and in some (like mine) you will not, but it is VERY helpful to monitor just a particular account to minimize the amount of results returned. If you know it, go to the Events Selection tab, then click the Column Filters button, which brings up a dialog where you key in the account being monitored without any mask (or wildcard). If you do not know the principal name, you will need to poke around for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation: the application had an app.config (XML) file with the connection string in it, unencrypted. This made my endeavor very easy. So after I entered the account to monitor, I clicked the Run button and also started my black-box application. Voilà, in under a minute I had the SQL statement captured.

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with their ranges. In P6 Reporting Database 3.0 any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source. Considerations:
    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.
    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:
    a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You’ll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges.)
    b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.
    c) Run starETL for each of the 3 data sources (with the data source number, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).
    The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with the possibility of growth, it is better to set this to 4 or 5, allowing room for the future and preventing a ‘start over’ scenario.
    3. The Number of Partitions and the Number of Months per Partition are not specific to multi-data source. These settings work in accordance with a sub-partition of larger tables with regard to time-related data, and they are dataset specific for optimization. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number-of-months setting.
    For example, if you kept the default of 3 partitions and selected 2 months for each partition, you would end up with: 1st partition, 2 months; 2nd partition, 2 months; 3rd partition, all the remaining records. Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.
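    To make the "buckets" concrete, here is a small sketch in Python (the start date, settings, and function name are made up for illustration and are not part of the product) that maps a record's month onto a partition the way the example above describes, with the last partition absorbing everything past the final range:

    ```python
    from datetime import date

    def partition_for(record_date, range_start, months_per_partition, num_partitions):
        """Return the 1-based partition number a date falls into.

        Mirrors the example in the text: with 3 partitions of 2 months each,
        months 0-1 land in partition 1, months 2-3 in partition 2, and
        everything later spills into partition 3 (the final range).
        """
        months_elapsed = (record_date.year - range_start.year) * 12 \
                         + (record_date.month - range_start.month)
        bucket = months_elapsed // months_per_partition + 1
        return min(bucket, num_partitions)   # the last partition catches the remainder

    # Usage sketch: 3 partitions x 2 months, range starting January 2013.
    start = date(2013, 1, 1)
    for d in (date(2013, 2, 15), date(2013, 4, 1), date(2013, 12, 31)):
        print(d, "->", partition_for(d, start, months_per_partition=2, num_partitions=3))
    # 2013-02-15 -> 1, 2013-04-01 -> 2, 2013-12-31 -> 3
    ```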

    Read the article

  • Samsung Series 5 A6 ultrabook

    - by Én Cube
    I'm a fan of Ubuntu, but for now I want my installation to be perfect for my new laptop; it's a Samsung Series 5 with an A6 processor. Can somebody suggest what to do, or mention any problems related to this? I'm mostly concerned about the drivers needed to get the best performance out of it, because I am going to develop an Android app. I need, I don't know, something to make this unit run as fast as possible without any dangerous tradeoffs. Also, I read some forum posts saying that these units are overheating. -_- Somebody help me please. I also want to switch to GNOME 3. And one last thing: are there any differences in performance if I use 'alongside', 'replace', or 'custom' when formatting?

    Read the article

  • Faking Display tree (Sprite) parent child relationships with rasters (BitmapData) in ActionScript 3

    - by Arthur Wulf White
    I am working with rasters (BitmapData) and blitting (copyPixels) in a 2D game in ActionScript 3. I like how you can move a Sprite and it moves all its children, and you can simultaneously move the children, creating an interesting visual effect. I do not want to use rotation or scaling, however, because I do not know how that can be done without hampering performance. So I am not fully simulating Sprite parent-child behavior; I am sticking to movement on the (x, y) axes. What I am planning to do is create a class called RasterContainer, which extends BitmapData and has a vector of children of type Raster (also extending BitmapData). I am then planning to implement recursive rendering in RasterContainer that basically copyPixels every child, only changing their (x, y) offset to reflect their parent's offset. My question is: has this been implemented in an existing framework? Is this a good plan? Should I expect a serious performance hit from using recursive methods this way?
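    As a language-agnostic sketch of the recursive pass described above (in Python rather than ActionScript 3, with a made-up blit callback standing in for copyPixels onto the back buffer), each container renders its children at its own offset plus theirs:

    ```python
    class Raster:
        """A leaf image with an (x, y) offset relative to its parent."""
        def __init__(self, x, y, pixels=None):
            self.x, self.y = x, y
            self.pixels = pixels

    class RasterContainer(Raster):
        """A raster that also holds children rendered relative to itself."""
        def __init__(self, x, y):
            super().__init__(x, y)
            self.children = []   # Raster or RasterContainer instances

        def render(self, blit, offset_x=0, offset_y=0):
            """blit(pixels, x, y) stands in for copyPixels at absolute coordinates."""
            abs_x, abs_y = offset_x + self.x, offset_y + self.y
            if self.pixels is not None:
                blit(self.pixels, abs_x, abs_y)
            for child in self.children:
                if isinstance(child, RasterContainer):
                    child.render(blit, abs_x, abs_y)      # recurse with the accumulated offset
                else:
                    blit(child.pixels, abs_x + child.x, abs_y + child.y)

    # Usage sketch: moving the parent moves every child by the same amount.
    root = RasterContainer(10, 20)
    root.children.append(Raster(5, 5, pixels="leaf-bitmap"))
    root.render(lambda pixels, x, y: print(pixels, "at", x, y))   # leaf-bitmap at 15 25
    ```

    The recursion itself is cheap next to the pixel copies: display trees are rarely more than a few levels deep, so the per-node call overhead is unlikely to be the bottleneck compared to the copyPixels work.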

    Read the article

  • My Oracle Support Accreditation for Database and Enterprise Manager

    - by A. G.
    Have you actively used My Oracle Support for 6-9 months? Take your expertise to the next level—become accredited! By completing the accreditation learning series, you can increase your proficiency with My Oracle Support’s core functions and build skills to help you leverage Oracle solutions, tools, and knowledge that enable productivity. Accreditation learning paths are available for Oracle Database and Enterprise Manager, which focus on product-specific best practices, recommendations, and tool enablement—up-leveling your capabilities with these Oracle products. Course topics include: Oracle Database (Staying Informed, Install, Patching, Upgrade, Performance, Security, Scalability) and Enterprise Manager (Staying Informed, Supportability, Certification, Patching, Upgrade, Performance, Diagnostic Tools, Troubleshooting). Visit the My Oracle Support Accreditation Index and get started with the Level 1 My Oracle Support Accreditation path and the product-specific Level 2 learning paths for Oracle Database and Enterprise Manager.

    Read the article

  • What are the differences between Bigloo and ECL from an embedding standpoint? [migrated]

    - by Pubby
    I've been looking to embed Lisp in some C++ code. Two options I'm interested in are Bigloo Scheme and ECL (Embeddable Common Lisp). Reading through the docs, they seem to support a very similar feature set. Obviously Bigloo is Scheme and ECL is Common Lisp, but what other differences do they have? In particular I'm interested in the following criteria: ease of embedding (for C++, not just C; I don't want to write a bunch of boilerplate); performance (Bigloo is performance-oriented and has many compiler optimization options, although I can't find anything comparable for ECL); and style of coding (this one is more for Bigloo: is it more functional than ECL?). I'm targeting this question towards someone who has used both.

    Read the article

  • How to avoid or minimise the use of check/conditional statements?

    - by Muneeb Nasir
    I have a scenario where I receive a stream and need to check for certain values; whenever I get a new value, I have to store it in some data structure. Well, it seems very easy: I can place an if-else conditional statement, or use the contains method of a set/map to check whether the received value is new or not. But the problem is that the checking will affect my application's performance: the stream delivers hundreds of values per second, and if I start checking each and every value I receive, it will surely hurt performance. Can anybody suggest a mechanism or algorithm that solves my issue, either by bypassing the checks or at least minimizing them?
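    A hash-based set already makes each membership check a single amortized O(1) hash lookup rather than a scan, so hundreds of values per second is not a problem by itself. A minimal sketch in Python (the stream source and the store() callback are hypothetical placeholders):

    ```python
    seen = set()   # hash-based membership: amortized O(1) per check

    def handle(value, store):
        """Store a value only the first time it appears in the stream."""
        if value not in seen:      # one hash lookup, not a scan
            seen.add(value)
            store(value)           # hypothetical downstream storage call

    # Usage sketch: feed stream items through the handler.
    for value in ("a", "b", "a", "c"):
        handle(value, store=print)   # prints a, b, c
    ```

    If holding every seen value in memory becomes the real constraint, a Bloom filter can answer "definitely not seen before" within a fixed memory budget, at the cost of occasional false positives.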

    Read the article

  • NoSQL For The Rest Of Us

    No one would blame you for strictly associating NoSQL with performance. Most of the back and forth about NoSQL - an umbrella term given for non-relational storage mechanisms - has squarely put the focus on performance, sites with massive traffic, and server farms. It’s an interesting conversation, but one that risks alienating NoSQL from the majority of developers. The problem: does NoSQL provide us simple developers with any tangible benefit? As a matter of fact, it can - one as significant...

    Read the article

  • On Comparing Tables in SQL Server

    How do you compare two SQL tables? Every SQL Developer or DBA knows the answer, which is 'it depends'. It is not just the size of the table or the type of data in it but what you want to achieve. Phil Factor sets about to cover the basics and point out some snags and advantages to the various techniques.

    Read the article

  • Yes to NoSQL

    There seems to be some backlash building up against NoSQL, with posts like Ted Dziuba's I Can't Wait for NoSQL to Die or Dennis Forbes's The Impact of SSDs on Database Performance and the Performance Paradox of Data Explodification (aka Fighting the NoSQL mindset). These are interesting articles to read, and yes, RDBMSs are not going the way of the dodo yet (I even said that in The RDBMS is dead, which, by the way, was written before the term NoSQL was coined, but I digress). Nevertheless,...

    Read the article

  • Setting Up and Running Summary Advisor on an Exalytics Machine (Oracle-by-Example)

    - by Saresh
    If you are running Oracle BI on an Exalytics machine, you can use Summary Advisor to identify the aggregates that will increase query performance. Summary Advisor intelligently recommends an optimal list of aggregate tables based on query patterns that will achieve maximum query performance gain while meeting specific resource constraints. Summary Advisor then generates an aggregate creation script that can be run to create the recommended aggregate tables. Aggregate tables reduce query times by storing precomputed results for queries that include rolled-up data. This tutorial covers steps to set up, configure, and run Summary Advisor on an Exalytics machine using TimesTen database as a target for storing aggregates. You can find the Oracle By Example (OBE) in the Oracle Learning Library (OLL). The content in OLL is available to all customers, partners, and employees.

    Read the article

  • Oracle Exadata X3 Launch Webcast

    - by Cinzia Mascanzoni
    Available on-demand, this webcast covers everything your partners need to know about Oracle’s next-generation database machine. They will learn how to improve performance by storing multiple databases in memory, lower power and cooling costs by 30%, and easily deploy a cloud-based database service. Exadata X3 combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Partners won’t want to miss this webcast. Invite them to watch today! View and share the replay.

    Read the article

  • Automatic Statistics Update Slows Down SQL Server 2005

    I have a database which has several tables with very heavy write operations. These tables are very large, and some are over a hundred gigabytes. I noticed that the performance of this database is getting slower, and after some investigation we suspect that the Auto Update Statistics function is causing a performance degradation.

    Read the article

  • DOMDocument programming: a lot of little dilemmas, how to solve them?

    - by Peter Krauss
    I need elegance and performance: how do I decide on the "best implementation" for each DOM algorithm that I face? This simple "DOMNodeList grouper" illustrates many little dilemmas: use iterator_to_array or "populate an array" when not all items need to be copied; use the clone operator, the cloneNode method, or the import method; use parentNode::method() or documentElement::method() (see here); removeChild first or replaceChild first, which avoids "side effects"; ... My position today is simply "make an arbitrary choice and follow it in all implementations" (like a "convention over configuration" principle)... But are there other considerations? About performance, are there any articles showing benchmarks? PS: this is a generic DOM question; any language (PHP, JavaScript, Python, etc.) has the problem.
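    As one concrete illustration of these trade-offs, here is a minimal "NodeList grouper" sketch in Python's xml.dom.minidom (the element and attribute names are invented for the example). It copies the node list into a plain Python list before re-parenting, which mirrors the iterator_to_array-versus-populate-an-array dilemma, and the comment marks where cloneNode would be the alternative to moving the node:

    ```python
    from xml.dom import minidom

    doc = minidom.parseString(
        "<items><item type='a'/><item type='b'/><item type='a'/></items>"
    )
    root = doc.documentElement

    # Materialize the node list before restructuring the tree; with live
    # NodeLists (as in PHP's DOMDocument) iterating while re-parenting
    # skips elements, so a stable copy sidesteps the problem entirely.
    items = list(root.getElementsByTagName("item"))

    # Group <item> nodes under one <group> element per "type" attribute.
    groups = {}
    for item in items:
        key = item.getAttribute("type")
        if key not in groups:
            group = doc.createElement("group")
            group.setAttribute("type", key)
            root.appendChild(group)
            groups[key] = group
        # appendChild re-parents (moves) the node; use item.cloneNode(True)
        # here instead if the original must stay where it was.
        groups[key].appendChild(item)

    print(doc.toprettyxml(indent="  "))
    ```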

    Read the article

  • Counting product releases if you work on the backend/online services?

    - by stackoverflowuser2010
    I am trying to update my resume, and I would like to count the number of "product releases" that I was directly involved in at a company. It would seem to serve as a performance metric. The problem is that I was working on the backend of a very large distributed system, along the lines of Hadoop or another huge database. We had regular 6-month major releases and other minor releases. My manager kept saying that we "shipped" these releases, but "shipping" a product to me sounds like releasing a single piece of software, like Microsoft would ship Office 11 or something. Any ideas on "product releases" for backend service engineers, or any other type of performance metric?

    Read the article

  • Organization & Architecture UNISA Studies – Chap 13

    - by MarkPearl
    Learning Outcomes: explain the advantages of using a large number of registers; discuss the way in which compilers optimize register usage; discuss the evolution of CISC machines; describe the characteristics of RISC architecture; discuss the RISC vs. CISC controversy; describe the way in which RISC and CISC design principles can be combined.
    Instruction Execution Characteristics – To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include operations, operands, and procedure calls; these three areas can be studied in depth in the textbook at pages 503 - 505. A number of groups have come to the conclusion that the attempt to make the instruction set architecture closer to HLLs (High Level Languages) is not the most effective design strategy. Rather, HLLs can best be supported by optimizing performance of the most time-consuming features of typical HLL programs. Generally, 3 main characteristics came up to improve performance: use a large number of registers or use a compiler to optimize register usage; pay careful attention to the design of instruction pipelines; and use a simplified (reduced) instruction set.
    The Use of a Large Register File – One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory. The design task is to organize the registers in such a fashion that this goal is realized. Read pages 507 - 510 for a detailed explanation.
    Compiler-Based Register Optimization
    Reduced Instruction Set Architecture – There are two advantages to smaller programs: because the program takes up less memory, there is a saving in that resource (this was more compelling when memory was more expensive); and smaller programs should improve performance, which happens in two ways – fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults. Certain characteristics are common to RISC processors: one instruction per cycle; register-to-register operations; simple addressing modes; simple instruction formats.
    RISC vs. CISC – After initial enthusiasm for RISC machines, there has been a growing realization that RISC designs may benefit from the inclusion of some CISC features, and CISC designs may benefit from the inclusion of some RISC features.
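    The compiler-based register optimization the chapter refers to is commonly framed as graph coloring: each variable's live range is a node, ranges that are live at the same time are connected, and the compiler tries to color the graph with as many colors as there are registers. A toy greedy sketch in Python, assuming a pre-computed interference graph (the variable names and register count are made up for illustration):

    ```python
    def allocate_registers(interference, num_registers):
        """Greedy graph-coloring register allocation sketch.

        interference maps each variable to the set of variables whose live
        ranges overlap with it; variables that cannot get a register are
        returned as spilled (kept in memory).
        """
        assignment = {}   # variable -> register index
        spilled = []      # variables pushed out to memory

        # Color the most constrained variables first (a simple heuristic).
        for var in sorted(interference, key=lambda v: len(interference[v]), reverse=True):
            taken = {assignment[n] for n in interference[var] if n in assignment}
            free = [r for r in range(num_registers) if r not in taken]
            if free:
                assignment[var] = free[0]
            else:
                spilled.append(var)
        return assignment, spilled

    # Toy interference graph: a overlaps b and c; c also overlaps d.
    graph = {"a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"}}
    regs, spills = allocate_registers(graph, num_registers=2)
    print(regs, spills)   # e.g. {'a': 0, 'c': 1, 'b': 1, 'd': 0} and no spills
    ```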

    Read the article

  • What techniques can I use to render very large numbers of objects more efficiently in OpenGL?

    - by Luke
    You can think of my application as drawing a very large ball-and-stick diagram (or graph). At times, this graph can get very large, where the number of elements even outnumbers the pixels on the screen. Currently I am simply passing all of my textures (as GL_POINTS) and lines to the graphics card using VBOs. When the number of elements outnumbers the number of pixels, is this the most efficient way to do this? Or should I do some calculations on the CPU side before handing everything over to the GPU? If it matters, I do use GL_DEPTH_TEST and GL_ALPHA_TEST. I do some alpha blending, but probably not enough to make a huge performance difference. My scene can be static at times, but the user has control over a typical arc-ball camera and can pan, rotate, or zoom. It is during these operations that performance degradation is noticeable.
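    One CPU-side option worth trying before touching the rendering path is screen-space decimation: when the graph is zoomed out so far that many points land on the same pixel, collapse them before uploading, so the VBO never holds more points than there are pixels. A language-agnostic sketch in Python (the projection function and the point list are hypothetical placeholders for whatever the application already has):

    ```python
    def decimate_points(points, project, width, height):
        """Keep at most one point per screen pixel.

        points  : iterable of world-space positions
        project : maps a world-space position to (x, y) pixel coordinates,
                  a stand-in for the model-view-projection transform
        """
        cells = {}
        for p in points:
            x, y = project(p)
            if 0 <= x < width and 0 <= y < height:         # cheap CPU-side cull
                cells.setdefault((int(x), int(y)), p)      # first point in a cell wins
        return list(cells.values())                        # upload only these to the VBO

    # Usage sketch: rebuild the point VBO from the decimated list only when the
    # camera stops moving, and fall back to the full VBO while interacting.
    # visible = decimate_points(all_points, project_to_screen, 1920, 1080)
    ```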

    Read the article
