Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • How to increase query speed without using full-text search?

    - by andre matos
This is my simple query; by searching for "selectnothing" I'm sure I'll have no hits.

        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE output:

        Seq Scan on myTable (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
          Output: nome_t
          Filter: (nome_t ~~* '%selectnothing%'::text)
        Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

        CREATE TABLE myTable (nome_t text NOT NULL);

    I have an index on nome_t as stated below:

        CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application, and currently it takes around 2 seconds, which is obviously too much. Is there anything I can do, like using other index methods, to improve the speed of this query?
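
    One avenue worth sketching here (not from the original question, and assuming PostgreSQL 9.1 or later with the pg_trgm contrib module available): a plain btree index cannot serve a leading-wildcard ILIKE, but a trigram GIN index can.

        -- A minimal sketch, assuming PostgreSQL 9.1+ and the pg_trgm extension
        CREATE EXTENSION pg_trgm;
        CREATE INDEX idx_m_nome_t_trgm ON myTable USING gin (nome_t gin_trgm_ops);
        -- The planner can now satisfy '%selectnothing%' from the trigram index
        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';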

    Read the article

  • getting userbase vote average and individual user's vote in the same query?

    - by Andrew Heath
Here goes:

        T1
        [id] [desc]
         1   lovely
         2   ugly
         3   slender

        T2
        [id] [userid] [vote]
         1    1        3
         1    2        5
         1    3        2
         2    1        1
         2    2        4
         2    3        4

    In one query (if possible) I'd like to return: T1.id, T1.desc, AVG(T2.vote), T2.vote (for the user viewing the page). I can get the first 3 items with:

        SELECT T1.id, T1.desc, AVG(T2.vote) FROM T1 LEFT JOIN T2 ON T1.id=T2.id GROUP BY T1.id

    and I can get the first, second, and fourth items with:

        SELECT T1.id, T1.desc, T2.vote FROM T1 LEFT JOIN T2 ON T1.id=T2.id WHERE T2.userid='1' GROUP BY T1.id

    but I'm at a loss as to how to get all four items in one query. I tried inserting a select as the fourth term:

        SELECT T1.id, T1.desc, AVG(T2.vote), (SELECT T2.vote FROM T2 WHERE T2.userid='1') AS userVote ...

    but I get an error that the select returns more than one row. Help? My reason for wanting to do this in one query instead of two is that I want to be able to sort the data within MySQL rather than once it's been split into a number of arrays.
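
    One common shape an answer can take (a sketch following the table names in the question, not from the original thread): join T2 twice, once for the aggregate and once filtered to the viewing user. Note that desc is a reserved word in MySQL and generally needs backticks.

        SELECT T1.id, T1.`desc`, AVG(a.vote) AS avgVote, u.vote AS userVote
        FROM T1
        LEFT JOIN T2 a ON a.id = T1.id
        LEFT JOIN T2 u ON u.id = T1.id AND u.userid = 1
        GROUP BY T1.id, T1.`desc`, u.vote;

    Because the second join is restricted to one user, it contributes at most one row per item and so does not distort the AVG over the first join.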

    Read the article

  • Access is refusing to run a query with a linked table?

    - by Mahmoud
Hey all, I have 3 tables, each as follows:

        cash_credit
        Bank_Name      in_date      Com_Id   Amount
        America Bank   15/05/2010   1        200
        HSBC           17/05/2010   3        500

        cheque_credit
        Bank_Name      Cheque_Number    in_date      Com_Id   Amount
        America Bank   74835435-5435    15/05/2010   2        600
        HSBC           41415454-2851    17/05/2010   5        100

        Companies
        com_id   Com_Name
        1        Ebay
        2        Google
        3        Facebook
        4        Amazon

    Companies is a linked table. When I tried to create a query as follows:

        SELECT cash_credit.Amount, Companies.Com_Name, cheque_credit.Amount
        FROM cheque_credit
        INNER JOIN (cash_credit INNER JOIN Companies ON cash_credit.com_id = Companies.com_id)
        ON cheque_credit.com_id = Companies.com_id;

    I get an error saying that my INNER JOIN is incorrect. This query was created using the Access 2007 query designer; the error is "Type mismatch in expression". Then I thought it might be the inner join, so I tried a LEFT JOIN, and I get an error that this method is not supported: "JOIN expression is not supported". I am confused about where the problem is that is causing all this.
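
    A note on the usual cause (a sketch, not from the original thread): "Type mismatch in expression" in Access most often means the joined columns have different data types, e.g., com_id comes through the linked Companies table as text while the local tables store it as a number. Since the Jet join syntax rejects function calls in ON clauses, one hedged workaround is a WHERE-style join with an explicit conversion (swap CLng/CStr depending on which side is actually text):

        SELECT cc.Amount, c.Com_Name, ch.Amount
        FROM cash_credit AS cc, cheque_credit AS ch, Companies AS c
        WHERE CLng(c.com_id) = cc.Com_Id
          AND CLng(c.com_id) = ch.Com_Id;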

    Read the article

  • MVC: should more specific models be populated by more precise queries too?

    - by KevinUK
If you have a Car model with 20 or so properties (and several table joins) for a carDetail page, then your LINQ to SQL query will be quite large. If you have a carListing page which uses under 5 properties (all from 1 table), then you use a CarSummary model. Should the CarSummary model be populated using the same query as the Car model? Or should you use a separate LINQ to SQL query, which would be more precise? I am thinking only of performance, but LINQ uses lazy loading anyway, so I am wondering whether this is an issue or not.
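
    For intuition, here is roughly the SQL difference at stake (an illustration with hypothetical table and column names, not from the original question): the summary query touches one table and five columns, while the detail query drags in every join.

        -- CarSummary: narrow, single-table query for the listing page
        SELECT c.Id, c.Make, c.Model, c.Year, c.Price
        FROM Cars AS c;

        -- Car detail: wide query with several joins for the detail page
        SELECT c.*, m.ManufacturerName, d.DealerName, e.EngineSpec  -- ...20+ columns
        FROM Cars AS c
        INNER JOIN Manufacturers AS m ON m.Id = c.ManufacturerId
        INNER JOIN Dealers AS d ON d.Id = c.DealerId
        INNER JOIN Engines AS e ON e.Id = c.EngineId;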

    Read the article

  • How to run a progress-bar through an insert query?

    - by Gold
I have this insert query:

        try
        {
            Cmd.CommandText = @"INSERT INTO BarcodTbl SELECT * FROM [Text;DATABASE=" + PathI + @"\].[Tmp.txt];";
            Cmd.ExecuteNonQuery();
            Cmd.Dispose();
        }
        catch (Exception ex) { MessageBox.Show(ex.Message); }

    I have two questions: How can I run a progress-bar from the beginning to the end of the insert? If there is an error, I get the error exception and the action stops - the query stops and BarcodTbl is empty. How can I see the error and allow the query to continue filling the table?
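
    A note worth adding (a sketch, not from the original thread): a single INSERT ... SELECT is one atomic statement, so the engine reports no mid-statement progress and skips no bad rows. One hedged workaround is to count the source rows up front to size the progress bar, then import in application-controlled chunks, trapping errors per chunk. Assuming the same Jet text-driver connection used in the INSERT (the path below is a placeholder):

        SELECT COUNT(*) AS TotalRows
        FROM [Text;DATABASE=C:\SomeImportFolder\].[Tmp.txt];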

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual database

    - by pinaldave
I recently downloaded Idera's SQL virtual database and tested it. There are a few things about this tool which caught my attention.

    My Scenario
    It is quite common in real life that observing or retrieving older data is sometimes necessary, even though it has changed as time passed. Our full database backup was 40 GB in size, and restoring it on our production server usually took around 16 to 22 minutes, depending on the load on the server; this range varies from one server to another as per the configuration of the machine. Some other issues we used to have: to restore a large 40-GB database, we needed at least that much free space on the production server, and once in a while we even had to make changes in the restored database and use that changed copy for our purpose, making it even more time-consuming.

    My Solution
    I had heard a lot about Idera's SQL virtual database tool. Right after we started to test it, we found that it really delivers what it promises. The software was very easy to use, and we were able to restore our database from backup in less than 2 minutes, sparing us the usual 16-22 minutes; the whole task finished in a total of 10 minutes. Another interesting observation is that no additional space is needed for restoring the database - not even a single additional MB on the drive is required. We can use the database the same way as a regular database, with no additional configuration or setup. The most relevant points of this product, based on my initial experience:

    Quick restoration of the database backup.
    No additional space required for database restoration.
    The virtual database has no physical .MDF or .LDF; the "restored" database is, in fact, the backup file surfaced as a virtual database.
    DDL and DML queries can be executed against this virtually restored database.
    Regular backup operations can be run against the virtual database, creating a physical .bak file for future use.
    No observed performance degradation on either the original database or the restored virtual database.
    Additional T-SQL queries can be run against the virtual database.

    That summarizes my quick review, and as I said, I am very impressed with the product and plan to explore it more. There are many features in this tool which I think can be very useful if properly understood. I am surprised by its performance, so I want to know exactly how this works - specifically, why it creates no additional files yet still allows updates to the virtually restored database. I guess I will have to send an e-mail to Idera's developers and try to figure this out from them. I think this tool is very useful, and it delivers a higher level of performance than I expected. Soon I will write a review of additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn about your experience with it.

    The 'Virtual' Part of virtual database
    When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization. In fact, a virtual database is a database which shows up in your SQL Server Management Studio without actually being restored or even created. The tool creates a database in SSMS from the backup of that database, and the result behaves virtually the same way as the original database.

    Potential Usage of virtual database
    As soon as I described this tool to my teammate, his very first reaction was, "hey, if we have this then there is no need for log shipping." I find his comment interesting, as log shipping moves logs to another server; here there are no updates applied to the database from logs, so I would rather compare it with snapshot replication - whatever a snapshot-replicated database can be used and configured for, a virtual database can be used similarly. I believe we can use it for reporting purposes; in fact, once this database is configured, the uses of this tool seem unlimited. I will spend some more time studying it and will get back to you.

    (Screenshots omitted: virtual database console; hard drive space before and after setup; attach-full-backup screens; setup completing in less than 60 seconds; point-in-time recovery timeline view; virtual database summary; regular DB vs. virtual DB performance comparison.)

    Please note that all SQL Server MVPs get a free license of this software. Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database) Filed under: Database, Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLAuthority News, T SQL, Technology Tagged: Idera
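
    A related stock-T-SQL aside (not part of the original review): even without third-party tools, you can at least inspect a backup's metadata without restoring it. A minimal sketch, with a placeholder disk path:

        RESTORE HEADERONLY FROM DISK = N'D:\Backup\MyDatabase.bak';   -- backup set metadata
        RESTORE FILELISTONLY FROM DISK = N'D:\Backup\MyDatabase.bak'; -- data/log files inside the backup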

    Read the article

  • ADNOC talks about 50x increase in performance

    - by KLaker
If you are still wondering how Exadata can revolutionise your business, then I would recommend watching this great video, which was recorded at this year's OpenWorld. First a little background... The Abu Dhabi National Oil Company for Distribution (ADNOC) is an integrated energy company that was founded in 1973. ADNOC Distribution markets and distributes petroleum products and services within the United Arab Emirates and internationally. As one of the largest and most innovative government-owned petroleum companies in the Arab Gulf, ADNOC Distribution is renowned and respected for the exceptional quality and reliability of its products and services. Its five corporate divisions include more than 200 filling stations (a number that is growing at 8% annually), more than 150 convenience stores, 10 vehicle inspection stations, as well as wholesale and retail sales of bulk fuel, gas, oil, diesel, and lubricants. ADNOC selected Oracle Exadata Database Machine after extensive research because it provided them with a single platform that can run mixed workloads in a single unified machine: "We chose Oracle Exadata Database Machine because it offered a fully integrated and highly engineered system that was ready to deploy. With our infrastructure running all the same technology, we can operate any type of Oracle Database without restrictions and be prepared for business growth," said Ali Abdul Aziz Al-Ali, IT division manager, ADNOC Distribution. ".....we could consolidate our transaction processing and business intelligence onto one platform. Competing solutions are just not capable of doing that." - Awad Ahmed Ali El-Sidiq, Senior Database Administrator, ADNOC Distribution. In this new video, Awad Ahmed Ali El-Sidiq, Senior DBA at ADNOC, talks about the impact that Exadata has had on his team and the whole business. ADNOC is using our engineered systems to drive and manage all their workloads: from transaction systems to payment systems to data warehouse to BI environment - a true disk-to-dashboard revolution using engineered systems. This engineered approach is delivering a 50x improvement in performance, with some queries running 100x faster! The IT team has even revolutionised some of their data warehouse-related processes with the help of Exadata, and jobs that were taking over 4 hours now run in a few minutes. To watch the video, visit our Oracle YouTube page: http://www.youtube.com/watch?v=zcRpxc6u5Ic. Now that queries are running 100x faster and jobs are completing in minutes, not hours, what is next for the IT team at ADNOC? Like many of our customers, ADNOC is now looking to take advantage of big data to help them better align their business operations with customer behaviour and customer insights. To help deliver this next level of insight, the IT team is looking at the new features in Oracle Database 12c, such as the new in-memory feature, to deliver even more performance gains. The great news is that Awad Ahmed Ali El-Sidiq was awarded DBA of the Year - EMEA within our Data Warehouse Global Leaders programme, and you can see the badge for this award pop up at the start of the video. Well done to everyone at ADNOC, and thanks for spending the time with us at OOW to create this great video.

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using FIRST_VALUE and LAST_VALUE

    - by pinaldave
Last week we asked a puzzle: SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY. This puzzle got very interesting participation; the details of the winner are listed here. In this puzzle we received two very important pieces of feedback: the puzzle cleared up the concepts of FIRST_VALUE and LAST_VALUE for the participants, but as it was based on SQL Server 2012, many could not participate because they had not yet installed SQL Server 2012. I really appreciate the users' feedback and decided to come up with something that is fun and helps people learn a new feature of SQL Server 2012. Please read yesterday's blog post, SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012, before continuing with this puzzle, as it is based on yesterday's post. Yesterday I ran the following query, which uses the functions FIRST_VALUE and LAST_VALUE:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query will give us the following result. Puzzle: Now use a T-SQL self join, where the same table is joined to itself, and get the same result without using the FIRST_VALUE or LAST_VALUE functions. Hint: Introduction to JOINs – Basic of JOINs; Self Join; A new analytic functions in SQL Server Denali CTP3 – LEAD() and LAG(). Rules: Leave a comment with your detailed answer by Nov 21. Open world-wide (wherever Amazon ships books). If you blog about the puzzle's solution and you win, you win an additional surprise gift as well. Prizes: a print copy of my new book SQL Server Interview Questions (Amazon|Flipkart). If you already have this book, you can opt for any of my other books: SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle]. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
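
    One shape such an answer can take (a sketch, not the official solution; note that with ORDER BY and the default window frame, LAST_VALUE returns the current row's value, which the running MAX below reproduces - verify against the original result set):

        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               MIN(s2.SalesOrderDetailID) AS FstValue,
               MAX(s2.SalesOrderDetailID) AS LstValue
        FROM Sales.SalesOrderDetail s
        INNER JOIN Sales.SalesOrderDetail s2
            ON s2.SalesOrderID IN (43670, 43669, 43667, 43663)
           AND s2.SalesOrderDetailID <= s.SalesOrderDetailID
        WHERE s.SalesOrderID IN (43670, 43669, 43667, 43663)
        GROUP BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty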

    Read the article

  • JIT compiler for C, C++, and the like

    - by Ebrahim
Is there any just-in-time compiler out there for compiled languages such as C and C++? (The first names that come to mind are Clang and LLVM, but I don't think they currently support it.) Explanation: I think software could benefit from runtime profiling feedback and aggressively optimized recompilation of hotspots at runtime, even for languages compiled to machine code like C and C++. Profile-guided optimization does a similar job, with the difference that a JIT would be more flexible in different environments. In PGO you run your binary prior to releasing it; after release, it uses no environment/input feedback collected at runtime, so if the input pattern changes, it is prone to a performance penalty. A JIT works well even in those conditions. However, I think it is controversial whether the performance benefit of JIT compiling outweighs its own overhead. Edit: Grammar

    Read the article

  • Enterprise Performance Management: Driving Management Excellence

    Extending operational excellence to management excellence is the new strategic imperative for organizations large and small, all around the world. Management Excellence is a strategy for organizations to differentiate from their competition, by being smarter, more agile and more aligned. Tune into this conversation with John Kopcke, Senior Vice President of Oracle’s Enterprise Performance Management Global Business Unit to learn how leading companies are integrating their management processes and using Oracle’s EPM System to achieve management excellence.

    Read the article

  • Advanced MySQL Replication - Improving Performance

MySQL Replication can be made quite reliable and robust if the right tools are used to keep it running smoothly - but what if enormous loads on the primary server are overwhelming the slave server? Are there ways to speed up performance so the slave can keep up?

    Read the article

  • SQL SERVER – How to Set Variable and Use Variable in SQLCMD Mode

    - by Pinal Dave
Here is the question which I received the other day on the SQLAuthority Facebook page. Social media is a wonderful thing, and I love the active conversation between blog readers and myself - actually, I think social media adds a lot of human factor to any conversation. Here is the question: "I am using sqlcmd in SSMS - I am not sure how to declare a variable and pass it. For example, I have a database and it has a table; how can I make the table name dynamic and pass a different value every time?" Fantastic question, and here is its very simple answer. First of all, enable SQLCMD mode in SQL Server Management Studio (Query menu >> SQLCMD Mode). Now in the query editor type the following SQL:

        :SETVAR DatabaseName "AdventureWorks2012"
        :SETVAR SchemaName "Person"
        :SETVAR TableName "EmailAddress"
        USE $(DatabaseName);
        SELECT * FROM $(SchemaName).$(TableName);

    Note that I have set the values of the database, schema, and table as sqlcmd variables, and I am executing the query using those parameters. Well, that was it; sqlcmd is a very simple language to master and it also aids in doing various tasks easily. If you have any other sqlcmd tips, please leave a comment and I will publish it with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd
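
    A companion tip (not from the original post): the same script can also be parameterized from the command line, since the sqlcmd utility's -v switch assigns scripting variables, letting one saved .sql file serve many tables. The server and file names below are placeholders:

        sqlcmd -S MyServer -i query.sql -v DatabaseName="AdventureWorks2012" SchemaName="Person" TableName="EmailAddress"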

    Read the article

  • Getting Query Parameters in Javascript

    - by PhubarBaz
I find myself needing to get query parameters that are passed into a web app on the URL quite often. At first I wrote a function that creates an associative array (aka object) with all of the parameters as keys and returns it. But then I was looking at the revealing module pattern, a nice JavaScript design pattern designed to hide private functions, and came up with a way to do this without even calling a function. What I came up with was this nice little object that automatically initializes itself into the same associative array that the function call did previously.

        // Creates associative array (object) of query params
        var QueryParameters = (function() {
            var result = {};
            if (window.location.search)
            {
                // split up the query string and store in an associative array
                var params = window.location.search.slice(1).split("&");
                for (var i = 0; i < params.length; i++)
                {
                    var tmp = params[i].split("=");
                    result[tmp[0]] = unescape(tmp[1]);
                }
            }
            return result;
        }());

    Now all you have to do to get the query parameters is just reference them from the QueryParameters object. There is no need to create a new object or call any function to initialize it.

        var debug = (QueryParameters.debug === "true");

    or

        if (QueryParameters["debug"])
            doSomeDebugging();

    or loop through all of the parameters:

        for (var param in QueryParameters)
            var value = QueryParameters[param];

    Hope you find this object useful.

    Read the article

  • Performance Tuning Tips for Apache

    Apache is one of the most successful open source projects of our times. A big advantage of this popularity is that over the years people have spent a great deal of time fine tuning the software for better performance. Read on to learn more.

    Read the article

  • SQL SERVER – Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video

    - by pinaldave
SQL Server Management Studio returns results in Grid View, Text View, and to a file. When we copy results from Grid View to Excel, there is a common complaint that the column header displayed in the resultset is not copied to Excel. I often spend time performance tuning databases, and I run many DMVs in SSMS to get a quick view of the server. In my case it is almost certain that I always need the column headers when I copy my data to Excel or any other place. SQL Server Management Studio has two different ways to do this. Method 1: Ad hoc. When the result is rendered, you can right-click on the resultset and click on Copy with Headers. This will copy the headers along with the resultset. Additionally, you can use the shortcut key CTRL+SHIFT+C for copying column headers along with the resultset. Method 2: Option setting at the SSMS level. This is an SSMS-level setting, and I keep this option always selected as I often need the column headers when I select the resultset. Go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> check the box "Include column headers when copying or saving the results." Both of the methods are discussed in the following SQL in Sixty Seconds video. Here is the code used in the video. Related Tips in SQL in Sixty Seconds: Copy Column Headers in Query Analyzers in Result Set; Getting Columns Headers without Result Data – SET FMTONLY ON. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • SQL SERVER – Get File Statistics Using fn_virtualfilestats

    - by pinaldave
Quite often when I am staring at my SSMS, I wonder what is going on under the hood of my SQL Server. I often want to know which database is very busy and which database is a bit slow because of IO issues. Sometimes I think at the file level as well: I want to know which MDF or NDF is busiest and doing most of the work. The following query gets these results very quickly:

        SELECT DB_NAME(vfs.DbId) DatabaseName, mf.name, mf.physical_name,
            vfs.BytesRead, vfs.BytesWritten,
            vfs.IoStallMS, vfs.IoStallReadMS, vfs.IoStallWriteMS,
            vfs.NumberReads, vfs.NumberWrites,
            (Size*8)/1024 Size_MB
        FROM ::fn_virtualfilestats(NULL,NULL) vfs
        INNER JOIN sys.master_files mf
            ON mf.database_id = vfs.DbId
            AND mf.FILE_ID = vfs.FileId
        GO

    When you run the above query, you will get a lot of valuable information, like the size of each file as well as how many reads and writes are done per file. It also displays the read/write volumes in bytes. If there has been any IO stall (delay) in reads or writes, you can see that as well. I keep this handy but had not shared it on the blog earlier. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, T SQL, Technology Tagged: Statistics
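
    A hedged modern alternative (not from the original post): on SQL Server 2005 and later, the DMV sys.dm_io_virtual_file_stats exposes the same counters with clearer column names:

        SELECT DB_NAME(vfs.database_id) AS DatabaseName, mf.name, mf.physical_name,
            vfs.num_of_bytes_read, vfs.num_of_bytes_written,
            vfs.io_stall_read_ms, vfs.io_stall_write_ms,
            vfs.num_of_reads, vfs.num_of_writes
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
        INNER JOIN sys.master_files mf
            ON mf.database_id = vfs.database_id
            AND mf.file_id = vfs.file_id;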

    Read the article

  • SQLAuthority News – Online Webcast How to Identify Resource Bottlenecks – Wait Types and Queues

    - by pinaldave
As all of you know, I have been working recently on the subject of SQL Server wait statistics. Since I published my book on this subject, SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues [Amazon] | [Flipkart] | [Kindle], I have been encountering lots of questions and answers. When I was writing the book, I kept version 1 of the book in front of me. I wanted to write something one can use right away - a primer for everybody who has not explored the wait stats method of performance tuning. Well, the book has been very well received; in fact, we ran out of a large stock twice in India so far and once in the USA during SQLPASS. I have received so many questions on this subject that I feel I could write one more book of the same size. I have also been asked if I can create videos to go along with this book. Personally, I am working with SQL Server 2012 CTP3, and there are so many new wait types that I feel the subject of wait stats is going to be very crucial in the next version of SQL Server. If you have not started learning about this subject, I suggest you start exploring it right now - at least learn how to begin on this subject, so that when the next version comes in, you know how to read the DMVs. I will be presenting on the same subject of performance tuning by wait stats in an Embarcadero SQL Server Community Webinar. Here are a few topics which we will be covering during the webinar: beginning with SQL wait stats; understanding various aspects of SQL wait stats; understanding the query life cycle; identifying the three top wait stats; resolution of the common 3 wait types and queues. Details of the webcast: How to Identify Resource Bottlenecks – Wait Types and Queues. Date and Time: Wednesday, November 2, 11:00 AM PDT. Registration Link. I thank Embarcadero for organizing this opportunity for me to share my experience on the subject of wait stats and for connecting me with the community to take this subject to the next level. One more interesting thing: I will ask one question at the end of the webinar, and I will be giving away 5 copies of my SQL Wait Stats print book for the first five correct answers. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
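
    For readers who want to start exploring before the webcast, a minimal sketch (not from the original post) of how top waits are usually identified with the sys.dm_os_wait_stats DMV; the exclusion list here is illustrative, not exhaustive:

        SELECT TOP (3) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                                N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP')
        ORDER BY wait_time_ms DESC;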

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx. Description: High-Performance Computing (also called Technical Computing), at its most simplistic, is a layout of computer workloads where a "head node" accepts work requests and parses them out to "worker nodes". This is useful in cases such as scientific simulations, drug research, MatLab work, and other places where large compute loads are required. It's not the immediate-result type of computing many are used to; instead, a "job" or group of work requests is sent to a cluster of computers, the worker nodes work on individual parts of the calculations, and they return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC cluster, such as with the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here. The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a "Broker Node". You can then purchase time for additional machines, and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC but need to "burst" that node count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx. Links for further research on HPC, including Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx

    Read the article

  • World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

    - by Brian
Oracle's SPARC T4-2 server achieved world record performance on Oracle's PeopleSoft Enterprise Financials 9.1, executing 20 million journal lines in 8.92 minutes on Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first result published on this version of the benchmark. The SPARC T4-2 server was able to process 20 million general ledger journal edit and post batch jobs in 8.92 minutes on this benchmark, which reflects a large customer environment that utilizes a back-end database of nearly 500 GB. This demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than 1 hour. The SPARC T4-2 server delivered more than 146 MB/sec of IO throughput with Oracle Database 11g running on Oracle Solaris 11.

    Performance Landscape
    Results are presented for PeopleSoft Financials Benchmark 9.1. Results obtained with version 9.1 are not comparable to the previous version of the benchmark, PeopleSoft Financials Benchmark 9.0, due to a significant change in the data model; version 9.1 supports batch only.

        PeopleSoft Financials Benchmark, Version 9.1
        Solution Under Test                    Batch (min)
        SPARC T4-2 (2 x SPARC T4, 2.85 GHz)    8.92

    Results from PeopleSoft Financials Benchmark 9.0:

        PeopleSoft Financials Benchmark, Version 9.0
        Solution Under Test                                              Batch (min)   Batch with Online (min)
        SPARC Enterprise M4000 (Web/App), SPARC Enterprise M5000 (DB)    33.09         34.72
        SPARC T3-1 (Web/App), SPARC Enterprise M5000 (DB)                35.82         37.01

    Configuration Summary
    Hardware Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors (2.85 GHz) and 128 GB memory.
    Storage Configuration: 1 x Sun Storage F5100 Flash Array (for database and redo logs); 2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup).
    Software Configuration: Oracle Solaris 11 11/11 SRU 7.5; Oracle Database 11g Release 2 (11.2.0.3); PeopleSoft Financials 9.1 Feature Pack 2; PeopleSoft Supply Chain Management 9.1 Feature Pack 2; PeopleSoft PeopleTools 8.52 latest patch (8.52.03); Oracle WebLogic Server 10.3.5; Java Platform, Standard Edition Development Kit 6 Update 32.

    Benchmark Description
    The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and then the journal status is changed to posted. In this benchmark, the Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 20 million journal lines using AppEngine for edits and Cobol for post processes.

    See Also: Oracle PeopleSoft Benchmark White Papers (oracle.com); SPARC T4-2 Server (oracle.com OTN); PeopleSoft Financial Management (oracle.com OTN); Oracle Solaris (oracle.com OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • SQL SERVER – Retrieve SQL Server Installation Date Time

    - by pinaldave
I have been asked this question a number of times, and my answer has always been: search online and you will find the answer. Every single time someone has followed my answer, they have found the accurate answer within the first few clicks. However, this question is getting increasingly popular, so I have decided to answer it here. I usually prefer to create my own T-SQL script, but in today's case I have taken the script from the web. I have seen this script in so many places that I do not know who the original creator is, so I am not sure who should get credit for it. Question: How do I retrieve the SQL Server installation date? Answer: Run the following query and it will give you the date of the SQL Server installation:

        SELECT create_date
        FROM sys.server_principals
        WHERE sid = 0x010100000000000512000000

    Question: I have installed the SQL Server Evaluation version; how do I know its expiry date? Answer: The SQL Server evaluation period is 180 days, and the expiration date is always 180 days from the initial installation. The following query will give the expiration date of the evaluation version:

        -- Evaluation Version Expire Date
        SELECT create_date AS InstallationDate,
            DATEADD(DD, 180, create_date) AS 'Expiry Date'
        FROM sys.server_principals
        WHERE sid = 0x010100000000000512000000
        GO

    I believe there is a way to do the same using the registry, but I have not explored it personally. Now, as I said earlier, there are many different blog posts on this subject. Let me list a few which I really enjoyed reading, as they share a few more insights on this subject: Retrieving SQL Server 2012 Evaluation Period Expiry Date; How to find the Installation Date for an Evaluation Edition of SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Right Aligning Numerics in SQL Server Management Studio (SSMS)

    - by pinaldave
SQL Server Management Studio is my most favorite tool, and the comfort it provides to the user is sometimes amazing. Recently I was retrieving numeric data in SSMS, and I found it very difficult to read because the values were not right aligned. Pay attention to the following image: you will notice that the digits are not as easy to read as numbers which are right aligned, the way we are used to reading them. Before going for any other tricks, I thought I should check the query properties. I right-clicked on query properties and found the following option. I checked the option "Right align numeric values" and it just worked fine. Do you have any other similar tricks that you practice often? I prefer to also include column headers in the result set, as that gives me the proper perspective on which columns I have selected. Sometimes little tips like this help a lot in productivity; I encourage you to share your tips, and I will publish them with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Of C# Iterators and Performance

    - by James Michael Hare
Some of you reading this will be wondering, "what is an iterator?" and think I'm locked in the world of C++. Nope, I'm talking C# iterators. No, not enumerators - iterators. So, for those of you who do not know what iterators are in C#, I will explain them in summary, and for those of you who know what iterators are but are curious about the performance impacts, I will explore that as well. Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do. I don't know how many times at work I've had a code review on my code and had someone ask me, "what's that yield word do?" Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> - I'll post some of the fun ones in a later post. Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration? So, to begin, let's look at a couple of methods. This is a new (albeit contrived) method called Every(...). The goal of this method is to access an enumeration and return every nth item in the enumeration (including the first). So Every(2) would return items 0, 2, 4, 6, etc. Now, if you wanted to write this in the traditional way, you may come up with something like this:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            List<T> newList = new List<T>();
            int count = 0;

            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    newList.Add(i);
                }
            }

            return newList;
        }

    So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item. Pretty straightforward. The problem? Well, Every<T>(...) will construct a list containing every nth item whether or not you care. What happens if you were searching this result for a certain item and found that item after five tries? You would have generated the rest of the list for nothing. Enter iterators. This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for. This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list. We see this all the time in Linq, where many expressions are chained together to do complex processing on a list. This would be very expensive if each of these expressions evaluated their entire possible result set on call. Let's look at the same example function, this time using an iterator:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            int count = 0;
            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    yield return i;
                }
            }
        }

    Notice it does not create a new return value explicitly; the only evidence of a return is the "yield return" statement. What this means is that when an item is requested from the enumeration, execution enters this method and continues until it either hits a yield return (in which case that item is returned) or until it exits the method or hits a yield break (in which case the iteration ends).

    Behind the scenes, this is all done with a class that the CLR creates to keep track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start next time. It doesn't seem like a big deal, does it? But keep in mind the key point here: it only returns items as they are requested. Thus, if there's a good chance you will only process a portion of the return list and/or the evaluation of each item is expensive, an iterator may be of benefit. This is especially true if you intend your methods to be chainable, similar to the way Linq methods can be chained. For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10. We could write that as:

        List<int> someList = new List<int>();
        // fill list here
        someList.Every(10).TakeWhile(i => i <= 10);

    Now is the difference more apparent? The first form of Every makes a copy of the list - it's going to copy the entire list whether we will need those items or not, and that can be costly! With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated. So, sounds neat, eh? But what's the cost, is what you're probably wondering. So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things. Now, iteration isn't free. If you are more likely than not to iterate the entire collection every time, the iterator has some very slight overhead:

        Copy vs Iterator on 100% of Collection (10,000 iterations)
        Collection Size   Num Iterated   Type       Total ms
        5                 5              Copy       5
        5                 5              Iterator   5
        50                50             Copy       28
        50                50             Iterator   27
        500               500            Copy       227
        500               500            Iterator   247
        5000              5000           Copy       2266
        5000              5000           Iterator   2444
        50,000            50,000         Copy       24,443
        50,000            50,000         Iterator   24,719
        500,000           500,000        Copy       250,024
        500,000           500,000        Iterator   251,521

    Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then get just a slight bit worse for larger lists. In reality, given the number of items and iterations, the result is near negligible, but it shows that iterators come at a price. It should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect. However, if we only partially evaluate the list, the savings start to show and make it well worth the overhead. Let's look at what happens if you stop looking after 80% of the list:

        Copy vs Iterator on 80% of Collection (10,000 iterations)
        Collection Size   Num Iterated   Type       Total ms
        5                 4              Copy       5
        5                 4              Iterator   5
        50                40             Copy       27
        50                40             Iterator   23
        500               400            Copy       215
        500               400            Iterator   200
        5000              4000           Copy       2099
        5000              4000           Iterator   1962
        50,000            40,000         Copy       22,385
        50,000            40,000         Iterator   19,599
        500,000           400,000        Copy       236,427
        500,000           400,000        Iterator   196,010

    Notice that the iterator form now operates quite a bit faster. But the savings really add up if you stop on average at 50% (which most searches would typically do):

        Copy vs Iterator on 50% of Collection (10,000 iterations)
        Collection Size   Num Iterated   Type       Total ms
        5                 2              Copy       5
        5                 2              Iterator   4
        50                25             Copy       25
        50                25             Iterator   16
        500               250            Copy       188
        500               250            Iterator   126
        5000              2500           Copy       1854
        5000              2500           Iterator   1226
        50,000            25,000         Copy       19,839
        50,000            25,000         Iterator   12,233
        500,000           250,000        Copy       208,667
        500,000           250,000        Iterator   122,336

    Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time. And this is only for one level deep. If we are using this in a chain of query expressions, it only adds to the savings. So my recommendation? If you have a reasonable expectation that someone may only partially consume your enumerable result, I would always tend to favor an iterator. The cost if they iterate the whole thing does not add much at all - and if they consume only partially, you reap some really good performance gains. Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.

    Read the article

  • SQL SERVER – TechEd India 2012 – Content, Speakers and a Lots of Fun

    - by pinaldave
TechEd is one event which every developer and IT professional looks forward to attending. It is an opportunity of a lifetime, and no matter how many times one gets the chance to engage with it, it is never enough. I still remember every single moment of every TechEd I have attended so far. We are less than 100 hours away from the TechEd India 2012 event - the one must-attend event for every technology enthusiast. For the fourth time in a row I am going to attend this event, and I am as excited as the first time. There are going to be two very solid SQL Server tracks this time, and I will be attending both tracks end to end. Here is my view on each of the 10 sessions. Each session is carefully crafted, and leading experts from the industry will present them.

    Day 1, March 21, 2012

    T-SQL Rediscovered with SQL Server 2012 – This session is going to bring out some of the lesser-known enhancements that arrived with SQL Server 2012. When I learned that Jacob Sebastian is going to do this session, my reaction was DEMO, DEMO and DEMO! Jacob spends hours and hours of his time preparing his sessions, and this will be one of those sessions that I am confident will be delivered over and over throughout the next many events.

    Catapult your data with SQL Server 2012 Integration Services – Praveen is an expert storyteller and one of the wizards when it comes to SQL Server and business intelligence. He is surely going to mesmerize you with some interesting insights on SSIS performance too.

    Processing Big Data with SQL Server 2012 and Hadoop – There are three sessions on Big Data at TechEd India 2012, and Stephen is going to deliver one of them. Watching Stephen present is always a joy and quite entertaining; he shares knowledge with his typical humor, which captures one's attention. I wrote about what BIG DATA is in a blog post.

    SQL Server Misconceptions and Resolutions – I will be presenting this session along with Vinod Kumar. READ MORE HERE.

    Securing with ContainedDB in SQL Server 2012 – Pranab is an expert when it comes to SQL Server and security. I have seen him present, and he is indeed very pleasant to watch; he makes a dry subject like security quite lively. A contained database is a database which contains all the necessary settings and metadata, making it easily portable to another server. It will contain all the necessary details and will not depend on the server where it is installed for anything; you can take this database and move it to another server without having any worries.

    Day 3, March 23, 2012

    Peeling SQL Server like an Onion: Internals Demystified – Vinod Kumar has been writing about this extensively on his blog. In a recent conversation he suggested that he will be creating very exclusive content for this presentation. I have known Vinod for a long time and have worked with him on many community activities, and I am going to pay special attention to the details. I know Vinod has a few give-aways planned for attendees - now, if only he shares them with us.

    Speed Up – Parallel Processes and Unparalleled Performance – Performance tuning is my favorite subject, and I will be discussing the effect of parallelism on performance in this session. Hear me out: there will be lots of quiz questions during this session, and if you get the answers correct, you can win some really cool goodies - I promise! READ MORE HERE.

    Keep your database available – AlwaysOn – Balmukund is like an army man: he is always ready to show and prove that he has the coolest toys in terms of SQL Server, and he knows how to keep them running AlwaysOn. Availability groups, listeners, clustering, failover, read-only replicas, etc. will all be demo'ed in this session. This is really heavy but very interesting content, not to be missed.

    Lesser known facts about SQL Server Backup and Restore – Amit Banerjee - this name is known internationally for solving SQL Server problems in 140 characters. He has already blogged about this, and this topic is going to be interesting: a successful restore strategy for applications is only as good as their last known good backup. I have a few difficult questions to ask Amit, and I am very sure that his unique style will entertain people. By the way, one of his slides may give a few in the audience a funny heart attack.

    Top 5 reasons why you want SQL Server 2012 BI – Praveen plans to take a tour of some of the BI enhancements introduced in the new version. Business insight with SQL Server is a critical building block, and this version of SQL Server is no exception. For that matter, when I saw the demos he is going to show during this session, I wished I could set all of it up on my machine. If you miss this session, you will miss one of the most informative sessions of the day.

    Also, TechEd India 2012 has live streaming of some content, which can be watched here. The TechEd team is planning to have some really good exclusive content on this channel as well. If you spot me, do not hesitate to come by and introduce yourself - I want to remember you! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article
