Search Results

Search found 17968 results on 719 pages for 'query tuning'.


  • SQL SERVER – Replace a Column Name in Multiple Stored Procedures All Together

    - by pinaldave
    I receive a lot of emails every day. I try to answer each and every email and comment on Facebook and Twitter. I prefer communication on social media as this gives others the opportunity to read the questions and participate along with me. There is always some question which everyone likes to read and remember. Here is one of the questions which I received in email. I believe many developers who are beginning with SQL Server will have the same question. I decided to blog about it so everyone can read it and participate.

    "I am a beginner in SQL Server. I have a very interesting situation and need your help. Because I am a beginner I do not have access to the production server, and I work entirely on the development server. The project I am working on is in its infant stage as well. In the project I had to create multiple tables, and every table had a few columns. Later on I wrote Stored Procedures using those tables. During a code review my manager requested a change to one of the columns which I have used in the table. As per him, the naming convention was not accurate. Now, changing the column name in the table is not a big issue. I figured out that I can do it very quickly either using a T-SQL script or SQL Server Management Studio. The real problem is that I have used this column in nearly 50+ stored procedures. This looks like a very mechanical task. I believe I can go and change it in nearly 50+ stored procedures, but is there a better solution I can use? Someone suggested that I should just go ahead, find the text in the system tables, and update it there. Is that a safe solution? If not, what is your solution? In simple words, how do I replace a column name in multiple stored procedures efficiently and quickly? Please help me here, keeping my experience and non-production server in mind."

    Well, I found this question very interesting. Honestly, I would have preferred if this question had been asked on my social media handles (Facebook and Twitter), as I am very active there and quite often, before I get there, other experts have already answered. Anyway, I am now answering the same question on the blog so all of us can participate here and come up with an appropriate answer. Here is my answer:

    "My friend, I do not advise touching the system tables. Please do not go that route. It can be dangerous and is not appropriate. The issue which you face today is what I used to face early in my career as well, and I still face it often. There are two camps I have observed – there are people who see no value in the name of an object and name objects like obj1, obj2 etc., and there are people who carefully choose the name of an object so that the object name is self-explanatory and almost tells a story. I am not here to take any side in this blog post – so let me go straight to a quick solution for your problem.

    Note: The following should not be practiced directly on a Production Server. It should be properly tested on a development server, and once validated it should be pushed to your production server with your existing deployment practice. The answer here assumes you have regular stored procedures and you are working on the Development, NON-Production Server.

    Go to Server Node >> Databases >> DatabaseName >> Programmability >> Stored Procedures. Now make sure that Object Explorer Details is open (if not, open it by pressing F7). You will see a list of all the stored procedures on the right side.

    Select either all of them or the ones which you believe are relevant to your query. Now right-click on the stored procedures >> select DROP and CREATE To >> and choose New Query Editor Window or Clipboard. Paste the complete script into a new window if you selected the Clipboard option. Now press Ctrl+H, which will bring up the Find and Replace screen. In this screen, insert the column name to be replaced in the "Find What" box and the new column name into the "Replace With" box. Now execute the whole script. As we selected DROP and CREATE To, it will drop the old procedures and create the new ones.

    Another method follows the same procedure but, instead of DROP and CREATE, you manually replace the word CREATE with the word ALTER. The small advantage of doing this is that if, for any reason, an error comes up which prevents a new stored procedure from being created, you will still have your old stored procedure in the system as it is."

    Well, this was my answer to the question which I received. Do you see any other workaround or solution? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology
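    Before scripting anything out, it can help to confirm exactly which stored procedures reference the old column. A minimal T-SQL sketch for that check – 'OldColumnName' is a placeholder, not a name from the question:

        -- List stored procedures whose definition mentions the old column name
        SELECT OBJECT_NAME(object_id) AS ProcedureName
        FROM sys.sql_modules
        WHERE definition LIKE '%OldColumnName%'
          AND OBJECTPROPERTY(object_id, 'IsProcedure') = 1;

    This narrows the Find and Replace exercise down to only the procedures that actually need it.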

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database The last place I like to visit is always a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting the way he quizzes me. The routine questions of – "How many days have you had this?", "Is there any pattern?", "Did you get drenched in rain?", "Do you have any other symptom?" and so on. The idea here is that the doctor wants to find any anomaly or pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way to find out whether the server data / files are in a consistent state, using the DBCC commands.

    Back to SQL Server In real life, a database consistency check is one of the critical operations a DBA generally doesn't give much priority to. Many readers of my blog have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out if everything is right or not? My common answer to all of them is – look at the bottom of the CHECKDB (or CHECKTABLE) output and look for the line below. CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'. The above is a "good sign" because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below: CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName). If we see a non-zero error then most of the time (not always) we get repair options, depending on the level of corruption. There is risk involved with the above option (repair_allow_data_loss), namely that we would lose data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports, there is a report which can show the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose the "Database Consistency History" report. The information in this report is picked from the default trace. If the default trace is disabled, or there has been no CHECKDB run, or the information is no longer in the default trace (because it has rolled over), we would get a report like the one below. As we can see, the report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled.

    To demonstrate, I caused corruption in one of the databases and did the steps below: run CHECKDB so that errors are reported; fix the corruption by losing the data using the repair option; run CHECKDB again to check whether the corruption is cleared. After that I launched the report, and below is what we would see. If you are lazy like me and don't want to run the report manually for each database, then the query below is handy for producing the same report for all databases. This query is run behind the scenes by the report. All I have done is remove the filter for the database name (at the end – highlighted).
    DECLARE @curr_tracefilename VARCHAR(500);
    DECLARE @base_tracefilename VARCHAR(500);
    DECLARE @indx INT;

    SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

    SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
           LoginName,
           StartTime,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6,
                   PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9,
                   PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6,
                   PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6,
                   PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8,
                   PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
    FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
    WHERE EventClass = 22
      AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
    -- AND DatabaseName = @DatabaseName;

    Don't get worried about the logic above. All it is doing is reading the trace files and parsing the entry below, pulling out the underlined words: DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001. Hopefully from now on you will run CHECKDB and understand the importance of it. As responsible DBAs I am sure you are already doing it – let me know how often you actually run it on your production environment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
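    For reference, the entries this report parses are produced simply by running the consistency check itself. A minimal example ('DatabaseName' is a placeholder):

        -- Report only the errors, skipping the per-table informational messages
        DBCC CHECKDB('DatabaseName') WITH NO_INFOMSGS, ALL_ERRORMSGS;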

    Read the article

  • SQL SERVER – Beginning New Weekly Series – Memory Lane – #001

    - by pinaldave
    I am introducing a new series today.  This series is called "Memory Lane."  From the last six years and 2,300 articles, there are fantastic articles I keep revisiting.  Sometimes when I read old blog posts I think I should have included something or added a bit more to the topic.  But for many articles, I still feel they are fantastic (even after six years) and could be read again and again. I have also found that after six years of blogging, readers will write to me and say "Pinal, why don't you write about X, Y or Z."  The answer is: I already did!  It is here on the blog, or in the comments, or possibly in one of my books.  The solution has always been there; it is simply a matter of finding it and presenting it again.  That is why I have created Memory Lane.  I will be listing the best articles from the same week of the past six years.  You will find plenty of reading material every Saturday from articles of SQLAuthority past.

    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2006 Query to Display Foreign Key Relationships and Name of the Constraint for Each Table in Database My blogging journey began with this blog post. As many of you know, my journey began with creating a repository of my scripts. This was the very first script I had written, to find foreign key relationships and constraints. The same query was updated later using the new SYS schema in SQL Server. Version 1: Using sys.schema Version 2: Using sys.schema and additional columns

    2007 Milestone Posts – 1 Year (365 blogs) and 1 Million Views By the first week of Nov 2007, the SQLAuthority.com blog had around 365 blog posts and 1 million views. I was not obsessed with statistics before, but this was indeed an interesting moment for me, as I was blogging for myself and did not realize that so many people were reading my blog. In 2006 there were not many bloggers, so blogging was new to me as well. I was learning as I went.

    2008 Stored Procedure WITH ENCRYPTION and Execution Plan If you have a stored procedure and its code is encrypted, what will be displayed in the execution plan when you execute it? There are two kinds of execution plans: 1) Estimated and 2) Actual. It is indeed interesting to know what is displayed in both cases when the stored procedure is encrypted. What is your guess? Now go ahead, click here, and figure out your answer. If a user is not able to log in to SQL Server due to any error or issue, two different blog posts address the same issue here and here.

    2009 It seems like Nov is the SQLPASS month. In the same week of 2009 I was in the USA attending the SQLPASS event. I had a fantastic experience attending the event. Here are the blog posts covering the subject: Day 1, Day 2, Day 3, Day 4

    2010 Finding the last backup time for all the databases This little script is very powerful and instantly gives details of when your database backups were last performed. If you are reading this blog post – I say just go ahead and check that everything is alright on your server and that you have all the necessary latest backups. It is better to be safe than sorry (a minimal sketch of such a query appears below). Version 1: The above script was improved to get more details about the database Version 2: This version of the script includes pretty much all the backup-related information in a single script. Do not forget to save it for future use.

    Are you a Database Administrator or a Database Developer? Three years ago I created a very small survey, and the results which I received are very interesting. The question asked about the profile of the visitors of that blog post, and I noticed that DBAs and Developers are balanced, with a slight inclination towards Developers. Have you voted so far? If not, go ahead!

    2011 New Book Released – SQL Server Interview Questions And Answers One year ago, on November 3, 2011, I published my book SQL Server Interview Questions and Answers.  The book has a lot of great reviews, and we have even received emails telling us this book was a life changer because it helped get them a great new job.  I don't think anyone can get a job just from my book.  It was the individual who studied hard and took it seriously, and was determined to learn something new.  The book might have helped guide them and show them the topics to study, but they spent their own energy on it.  It was their own skills that helped them pass the exam. So, in this very first installment, I would like to thank the readers for accepting our book, for giving it great reviews and for using it and sharing it.  Our goal in writing this book was to help others, and it seems like we succeeded. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
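    Riffing on the "last backup time" item above – a minimal sketch of such a query using msdb's backup history (this is not the original script from the post):

        -- Last completed backup per database; NULL means no backup recorded
        SELECT d.name AS database_name,
               MAX(b.backup_finish_date) AS last_backup_finish
        FROM sys.databases d
        LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name
        GROUP BY d.name
        ORDER BY last_backup_finish;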

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever. This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about them!

    Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year, and the journey is always long. Here are the travel stats on how long it takes to get to Seattle: 24 hours official air time; 36 hours total travel time (connection waits and airport commute). Every time I travel to the USA I gain a day, and when I travel back home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you're attending an event like SQLPASS. Here are a few things I carry when I travel on a long journey: Dry snack packs – I like to have some good Indian dry snacks in my backpack so I can have my own snack when I want; Amazon Kindle – loaded with 80+ books; A physical book – usually a very easy-to-read book. I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer not to spend time in conversation with the guy sitting next to me, because usually I end up listening to his biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love going to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Rail is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only a few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 on each round trip thanks to the Light Rail.

    Sessions I attended! Well, I really wanted to attend most of the sessions, but there was a great dilemma of which ones to choose. There were many, many sessions to attend, and at any given time there was more than one good session being presented. I had decided to attend sessions in the area of performance tuning, and I attended quite a few sessions this year compared to what I was able to do last year. Here are a few names of the speakers whose sessions I attended (please note, the following great speakers are not listed in any order; I loved them and I enjoyed their sessions): Conor Cunningham, Rushabh Mehta, Buck Woody, Brent Ozar, Jonathan Kehayias, Chris Leonard, Bob Ward, Grant Fritchey. I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session, but I found that the insights learned in Conor Cunningham's sessions were the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Buck Woody and Brent Ozar I always like sessions where the speaker is much closer to the audience and has real-world experience. I think speakers who have worked in the real world deliver the best content and most useful information.

    Sessions I did not like! Indeed, there were a few sessions I did not like, and I am not going to name them here. However, there were strong reasons I did not like those sessions, and here is why: sessions were all theory and had no real-world connections; all technical questions ended with confusing answers (lots of "I will get back to you on it," "it depends," "let us take this offline" and many more…); an "I am God" kind of attitude in the speakers. For example, I attended a session by one very well-known speaker who is a specialist in one particular area. I was a bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people had left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker.

    One on One Talks! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends, spending time with them one on one, and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with, talking about various subjects: Pinal Dave and Brad Schulz; Pinal Dave and Rushabh Mehta; Michael Coles and Pinal Dave. Rushabh Mehta – It is always a pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wants to turn PASS into a learning resource for every SQL Server Developer and Administrator in the world. I had a great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting every single time. Glenn Berry – A real friend to everybody. He is a very simple person and very true to his heart. I think there is not a single person in the whole community who does not like him. He is a friend to all, and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMVs. Brad Schulz – I had always wanted to meet him but never got the chance until today. I had a great time meeting him in person, and we spent a considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas, but I enjoy reading his blog and his sharing his wisdom with me. Jonathan Kehayias – He is a drill sergeant in the US Army. If you get the impression that he is a giant with a very strong personality – you are wrong. He is a very kind and soft-spoken DBA with strong performance tuning skills. I asked him how he keeps his two jobs separate, and I got a very good answer – just work hard and have passion for what you do. I attended his sessions, and his presentation style is very unique.  I feel like he speaks in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I talked with him for a long time, and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and fourth normal form. Honestly, I did not know them earlier, but now I do. Erland Sommarskog – This man needs no introduction; he is very well known and very clear in conveying his ideas. I have learned a lot from him over the course of the year. Every time I meet him, I learn something new, and this time was no exception.

    Joe Webb – Joey is all about community and people; we had an interesting conversation about community, MVPs, and how one can be helpful to the community without losing passion over a long time. It is always pleasant to talk to him, and of course, I had a fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight into how one can write a book and how he keeps his books simple to appeal to all readers. A wonderful person and great friend. Ola Hallgren - I did not know he was coming to the summit. I had a great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be an integral part of the SQL Server Community and PASS HQ. It was wonderful to meet her again and re-connect. She is a wonderful person, and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide whom to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish-speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me.

    Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had a different theme, but the goal of all the parties was the same – networking. Here are the few parties where I had lots of fun: Dell Reception Party, Exhibitor Party, Solid Quality Fun Party, Red Gate Friends Party, MVP Dinner, Microsoft Party, Quest Party, Gameworks PASS Party, Volunteer Party at Garage. Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had a great time meeting people at the various parties. There were a few people everywhere – well, I will say I am among them – who hopped parties.

    NDA – Not Decided Agenda During the event there were a few meetings marked "NDA." Someone asked me "why are these things NDA?"  My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda.

    Toys, Giveaways and Luggage I admit, I was like a child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft toys for my daughter, and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something, and I got so much stuff that my luggage got quite a bit heavier when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear.  I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again.

    The Common Questions I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name; how do you pronounce your last name again? – Da-way. How old are you? – I am as old as I can be. Are you an Indian, because you look like one? – I did not answer this one. Where are you from? – This question was usually asked after looking at my badge, which says India. So did you really fly from India? – Yes, because I have seasickness, so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference). Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter – she looks like you. – Is this even a question? Of course, she is daddy's little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • BIP BIServer Query Debug

    - by Tim Dexter
    With some help from Bryan, I have uncovered a way of being able to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those using the BIServer and its datamodel to fetch data for a BIP report. If you have written or used the query builder against BIServer, and when you run the report it chokes with a cryptic message that you have no clue about, read on.

    When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through. So what is BIServer doing on its end? As you may know, you are not writing regular physical SQL - it's actually logical SQL, e.g.:

    select Jobs."Job Title" as "Job Title",
        Employees."Last Name" as "Last Name",
        Employees.Salary as Salary,
        Locations."Department Name" as "Department Name",
        Locations."Country Name" as "Country Name",
        Locations."Region Name" as "Region Name"
    from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

    The tables might not even be physical tables; we don't care - that's what the BIServer and its model are for. You have put all the effort into building the model; it should just go get the data from wherever it might be. The BIServer takes the logical SQL and uses its vast brain to work out what the physical SQL is, executes it, and passes the result back to BIP:

    select distinct T32556.JOB_TITLE as c1,
        T32543.LAST_NAME as c2,
        T32543.SALARY as c3,
        T32537.DEPARTMENT_NAME as c4,
        T32532.COUNTRY_NAME as c5,
        T32577.REGION_NAME as c6
    from JOBS T32556, REGIONS T32577, COUNTRIES T32532, LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543
    where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID
        and T32532.REGION_ID = T32577.REGION_ID
        and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID
        and T32537.LOCATION_ID = T32569.LOCATION_ID
        and T32543.JOB_ID = T32556.JOB_ID )

    Not a very tough example, I know, but you get the idea. So how do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps:

    1. In the Administrator tool you need to set the logging level for the Administrator user to something greater than the default '0'. '7' is going to give you the max. Just remember to take it back down after you have finished the debug. I needed to bounce my BIServer service.
    2. Now here's the secret sauce. Prefix the following to your BIP query: set variable LOGLEVEL = 7; Set the log level to match the one you have in the admin tool.

    Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file. This is located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try and get the data back for you.

    A quick note: if the BIServer can, it's going to hit that great BIEE cache to get your data, and you may not see the full log. If this is the case, get into the Administration page (via the browser login) and clear out your BIP report cursor. Then re-run.

    This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is getting some strange data. Don't forget to turn that logging level back down once you are done. This will avoid the DBA screaming at you for sucking up all the disk space on the system.
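    Putting the pieces together, the prefixed request sent to BIServer would look something like the sketch below, reusing the sample logical SQL from above:

        set variable LOGLEVEL = 7;
        select Jobs."Job Title" as "Job Title",
            Employees."Last Name" as "Last Name",
            Employees.Salary as Salary
        from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

    Once the report runs, everything BIServer did for this request - including the generated physical SQL - lands in NQQuery.log.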

    Read the article

  • Why is my mssql query failing?

    - by Eric Reynolds
    connect();
    $arr = mssql_fetch_assoc(mssql_query("SELECT Applications.ProductName, Applications.ProductVersion, Applications.ProductSize, Applications.Description, Applications.ProductKey, Applications.ProductKeyID, Applications.AutomatedInstaller, Applications.AutomatedInstallerName, Applications.ISO, Applications.ISOName, Applications.Internet, Applications.InternetURL, Applications.DatePublished, Applications.LicenseID, Applications.InstallationGuide, Vendors.VendorName FROM Applications INNER JOIN Vendors ON Applications.VendorID = Vendors.VendorID WHERE ApplicationID = ".$ApplicationID));

    $query1 = mssql_query("SELECT Issues.AppID, Issues.KnownIssues FROM Issues WHERE Issues.AppID=".$ApplicationID);
    $issues = mssql_fetch_assoc($query1);

    $query2 = mssql_query("SELECT ApplicationInfo.AppID, ApplicationInfo.Support_Status, ApplicationInfo.UD_Training, ApplicationInfo.AtomicTraining, ApplicationInfo.VendorURL FROM software.software_dbo.ApplicationInfo WHERE ApplicationInfo.AppID = ".$ApplicationID);
    $row = mssql_fetch_assoc($query2);

    function connect(){
        $connect = mssql_connect(DBSERVER, DBO, DBPW) or die("Unable to connect to server");
        $selected = mssql_select_db(DBNAME, $connect) or die("Unable to connect to database");
        return $connect;
    }

    Above is the code. The first query/fetch_assoc works perfectly fine; however, the next 2 queries fail and I cannot figure out why. Here is the error output from PHP:

    Warning: mssql_query() [function.mssql-query]: message: Invalid object name 'Issues'. (severity 16) in /srv/www/htdocs/agreement.php on line 47
    Warning: mssql_query() [function.mssql-query]: General SQL Server error: Check messages from the SQL Server (severity 16) in /srv/www/htdocs/agreement.php on line 47
    Warning: mssql_query() [function.mssql-query]: Query failed in /srv/www/htdocs/agreement.php on line 47
    Warning: mssql_fetch_assoc(): supplied argument is not a valid MS SQL-result resource in /srv/www/htdocs/agreement.php on line 48
    Warning: mssql_query() [function.mssql-query]: message: Invalid object name 'software.software_dbo.ApplicationInfo'. (severity 16) in /srv/www/htdocs/agreement.php on line 51
    Warning: mssql_query() [function.mssql-query]: General SQL Server error: Check messages from the SQL Server (severity 16) in /srv/www/htdocs/agreement.php on line 51
    Warning: mssql_query() [function.mssql-query]: Query failed in /srv/www/htdocs/agreement.php on line 51
    Warning: mssql_fetch_assoc(): supplied argument is not a valid MS SQL-result resource in /srv/www/htdocs/agreement.php on line 52

    The error clearly centers around the fact that the query is not executing. In my database I have a table called Issues and a table called ApplicationInfo, so I am unsure why it is telling me that they are invalid objects. Any help would be appreciated. Thanks, Eric R.
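    The "Invalid object name" messages suggest the connection's default database (DBNAME) may not be the one holding those tables, or the three-part name is malformed - SQL Server expects database.schema.object, so 'software.software_dbo.ApplicationInfo' would only resolve if the schema is literally named 'software_dbo'. A hedged diagnostic to run directly against the server (the database name 'software' is taken from the question; everything else is assumption):

        -- Where do Issues and ApplicationInfo actually live?
        SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
        FROM software.INFORMATION_SCHEMA.TABLES
        WHERE TABLE_NAME IN ('Issues', 'ApplicationInfo');

    Whatever catalog and schema come back are what the PHP queries need to reference.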

    Read the article

  • Tuning recommendations from mysqltuner.pl: query_cache_limit

    - by knorv
    The mysqltuner.pl script gives me the following recommendation:

    query_cache_limit (> 1M, or use smaller result sets)

    And MySQL status output shows:

    mysql> SHOW STATUS LIKE 'Qcache%';
    +-------------------------+------------+
    | Variable_name           | Value      |
    +-------------------------+------------+
    | Qcache_free_blocks      | 12264      |
    | Qcache_free_memory      | 1001213144 |
    | Qcache_hits             | 3763384    |
    | Qcache_inserts          | 54632419   |
    | Qcache_lowmem_prunes    | 0          |
    | Qcache_not_cached       | 6656246    |
    | Qcache_queries_in_cache | 55280      |
    | Qcache_total_blocks     | 122848     |
    +-------------------------+------------+
    8 rows in set (0.00 sec)

    From the status output above, how can I judge whether or not the suggested increase in query_cache_limit is needed?
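    For anyone probing the same thing: Qcache_not_cached counts queries that bypassed the cache, and result sets larger than query_cache_limit are among the things that never get cached, so watching how that counter moves after a change is one rough way to judge. A hedged sketch of the knobs involved - the 2M value is an arbitrary example, not a recommendation:

        -- Check the current limit
        SHOW VARIABLES LIKE 'query_cache_limit';

        -- Raise it at runtime (persist in my.cnf to survive a restart)
        SET GLOBAL query_cache_limit = 2 * 1024 * 1024;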

    Read the article

  • Tuning OS X Virtual Memory

    - by dcolish
    I've noticed some really odd results from vm_stat on OS X 10.6. According to this, it's barely hitting the cache. Searches of pretty much everywhere I could think of turned up little to explain why the rate is so low. I asked a few friends and they're seeing the same thing. What gives, and how can I make it better?

    Mach Virtual Memory Statistics: (page size of 4096 bytes)
    Pages free:                 78609.
    Pages active:              553411.
    Pages inactive:            191116.
    Pages speculative:           6198.
    Pages wired down:          153998.
    "Translation faults":   116031508.
    Pages copy-on-write:      2274338.
    Pages zero filled:       33360804.
    Pages reactivated:         264378.
    Pageins:                  1197683.
    Pageouts:                   43756.
    Object cache: 20 hits of 1550639 lookups (0% hit rate)

    Read the article

  • Query by datetime in JDOQL / Java / GAE

    - by Jan Kuboschek
    I'm working on a GAE app. I want to query the datastore and retrieve all records between startDate and endDate. Each record has a datetime field. I'm using a query similar to this (the below code is something I quickly grabbed - I'm not near my developer machine):

    Query query = pm.newQuery(Employee.class);
    query.setFilter("lastName == lastNameParam");
    query.setOrdering("hireDate desc");
    query.declareParameters("String lastNameParam");

    try {
        List results = (List) query.execute("Smith");
        if (results.iterator().hasNext()) {
            for (Employee e : results) {
                // ...
            }
        } else {
            // ... no results ...
        }
    } finally {
        query.closeAll();
    }

    How do I have to format the date to form a correctly working query? How is the datetime stamp stored in the datastore? As a timestamp? Fully formatted? I can't find ANY information on this. Please help.

    Read the article

  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables:

    User Basic Data (unique): [userid] [name] [etc]
    User Collection (one to one): [userid] [game]
    User Recorded Plays (many to many): [userid] [game] [scenario] [etc]
    Game Basic Data (unique): [game] [total_scenarios]

    I would like to output a table that shows the collection play completion percentage for the Top 10 users in descending order of %:

    Output Table
    [userid] [collection_completion]
    3        95%
    1        81%
    24       68%
    etc      etc

    In my mind, the calculation sequence for ONE USER is:

    1. grab the user's total owned scenarios from User Collection joined with Game Basic Data and COUNT(gbd.total_scenarios)
    2. grab all recorded plays by COUNT(DISTINCT scenario) for that user
    3. divide all recorded plays by total owned scenarios

    So that's 2 queries and a little PHP massage at the end. For a list of users sorted by completion percentage, things get a little more complicated. I figure I could grab all users' collection totals in one query, and all users' recorded plays in another, and then do the calcs and sort the final array in PHP, but it seems like overkill to potentially be doing all that for 1000+ users when I only ever want the Top 10. Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP handling the bulk of the work the way to go in this case?
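    One possible shape for that "monster" query - the table and column names below are assumptions based on the description, so treat this as a sketch rather than a drop-in solution:

        -- Completion % per user: distinct played scenarios among owned games
        -- divided by total scenarios across owned games
        SELECT o.userid,
               ROUND(100 * COALESCE(p.played, 0) / o.total_owned, 1) AS collection_completion
        FROM (
            SELECT c.userid, SUM(g.total_scenarios) AS total_owned
            FROM user_collection c
            JOIN game_basic_data g ON g.game = c.game
            GROUP BY c.userid
        ) o
        LEFT JOIN (
            SELECT r.userid, COUNT(DISTINCT r.game, r.scenario) AS played
            FROM user_recorded_plays r
            JOIN user_collection c ON c.userid = r.userid AND c.game = r.game
            GROUP BY r.userid
        ) p ON p.userid = o.userid
        ORDER BY collection_completion DESC
        LIMIT 10;

    The two derived tables mirror the two queries described above; MySQL then only has to sort the per-user ratios and keep the top 10.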

    Read the article

  • Sphinx search distributed index tuning

    - by Andriy Bohdan
    I'm deciding how to split 3 large Sphinx indexes between 3 servers. Each of the 3 indexes is searched separately. What's more effective: to host each index on a separate machine, for example

    machine1 - index1
    machine2 - index2
    machine3 - index3

    or to split each index into 3 parts and host each part of the same index on a separate machine, for example

    machine1 - index1_chunk1, index2_chunk1, index3_chunk1
    machine2 - index1_chunk2, index2_chunk2, index3_chunk2
    machine3 - index1_chunk3, index2_chunk3, index3_chunk3

    ?

    Read the article

  • NHibernate Query object collection issue

    - by Mahesh
    Hi, I am new to NHibernate and need some information regarding the internal working of the engine. I have a table called Student and the design is as follows: RollNo, Name, City, Postcode, and there are 5 more columns like this. I have a Student class and mappings associated with it. I am querying RollNo and Name using the session as given below:

    IQuery query = session.CreateQuery("SELECT RollNo, Name FROM Student");

    Executing query.List results in an error because the query returns object[][]. Now, I changed the query as given below:

    IQuery query = session.CreateQuery("FROM Student");

    Executing query.List on this query yields the desired results. But the results contain more data than I want. Could you please let me know a query with which I can get just RollNo and Name from Student, castable as a Student collection. Thanks, Mahesh
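    One standard approach (a sketch, assuming the Student class exposes a constructor taking exactly those two fields) is an HQL constructor expression, which makes each row come back as a partially populated Student instance rather than an object[]:

        select new Student(s.RollNo, s.Name) from Student s

    The resulting list can then be cast as a Student collection; the remaining properties simply stay unset.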

    Read the article

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, I believe that things will mess up when the project goes live. Here's my domain class that carries the image bytes:

    @Entity
    public class Picture implements java.io.Serializable {
        long id;
        byte[] data;
        ... // getters and setters
    }

    And here's my controller that saves the file on submit:

    public class PictureUploadFormController extends AbstractBaseFormController {
        ...
        protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response,
                Object command, BindException errors) throws Exception {
            MultipartFile file;
            // getting MultipartFile from the command object
            ...
            // beginning hibernate transaction
            ...
            Picture p = new Picture();
            p.setData(file.getBytes());
            pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p)
            // committing hibernate transaction
            ...
        }
        ...
    }

    Obviously a bad piece of code. Is there any way I could use InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book:

    java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you'll be restricted in how instances of the class may be used. In particular, you won't be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don't feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn't a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms.

    Now, apparently Blob cannot be used, as it is not a good idea anyway; what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on the Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks

    Read the article

  • Tuning SQL Server 2008 for web applications

    - by Kibbee
    In one of the Stackoverflow podcasts, I remember Jeff Atwood saying that there was a configuration option in SQL Server 2008 which cuts down on locking, and was kind of an alternative to using "with (nolock)" in all your queries. Does anybody know how to enable the feature he was talking about, possibly even Jeff himself. I'm looking at deploying SQL Server 2008, and want to see if using a feature like this would help out my web application.
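    For what it's worth, the option usually cited in that context is read committed snapshot isolation, which has actually been available since SQL Server 2005 - this is a guess at what was meant on the podcast, not a confirmed quote:

        -- Readers stop blocking on writers; row versions are served from tempdb instead
        ALTER DATABASE MyWebAppDb SET READ_COMMITTED_SNAPSHOT ON;

    Note the statement needs the database free of other active connections before it completes.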

    Read the article

  • Don't display dynamic query in result

    - by Tom Andrews
    Hi all, Is it possible to hide a dynamic query from the result sets returned by a Stored Procedure? I am using the @@ROWCOUNT of the dynamic query to set a variable that is used to determine whether another query runs or not. The other query is used by code that I cannot change - hence why I am changing the Stored Procedure. The first result set returned from the Stored Procedure is now the result of the dynamic query, which is currently "breaking" the calling code. Thanks in advance
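    One way to capture that row count without the dynamic query ever emitting a result set (a sketch - the dynamic SQL here is a placeholder, not the poster's actual query) is to have the dynamic statement count into an OUTPUT parameter of sp_executesql:

        DECLARE @rows INT;

        EXEC sp_executesql
            N'SELECT @cnt = COUNT(*) FROM dbo.SomeTable WHERE SomeColumn = @val',
            N'@cnt INT OUTPUT, @val INT',
            @cnt = @rows OUTPUT,
            @val = 42;

        IF @rows > 0
            EXEC dbo.TheOtherQuery;  -- hypothetical; run whatever the calling code expects

    Only the second query's result set ever reaches the client.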

    Read the article

  • Tuning JVM (GC) for high responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge number of short-lived objects, and has only about 200~400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options:

    -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: the minor GC takes 0.01 ~ 0.02 sec, the major GC takes 1 ~ 3 sec, and the minor GC happens constantly. How can I further improve or tune the JVM? A larger heap size - but will it take more time for GC? Larger NewSize and MaxNewSize (for the young generation)? A different collector - parallel GC? Is it a good idea to let major GC take place more often, and how?

    Read the article

  • codeigniter multiple LIKE db query using associative array - but all from the same column name...?

    - by Inigo
    Hi, I'm trying to query my database using CodeIgniter's active record class. I have a number of blog posts stored in a table. The query is for a search function, which will pull out all the posts that have certain categories assigned to them. So the 'categories' column of the table will have a list of all the categories for that post in no particular order, separated by commas, like so: Politics,History,Sociology.. etc. If a user selects, say, Politics and History, the titles of all the posts that have BOTH these categories should be returned. Right? So, the list of categories queried will be the array $cats. I thought this would work:

    foreach ($cats as $cat){
        $this->db->like('categories', $cat);
    }

    By producing this:

    $this->db->like('categories','Politics');
    $this->db->like('categories','History');

    (Which would produce: WHERE categories LIKE '%Politics%' AND categories LIKE '%History%')

    But it doesn't work; it seems to only produce the first statement. The problem, I guess, is that the column name is the same for each of the chained queries. There doesn't seem to be anything in the CI user guide about this (http://codeigniter.com/user_guide/database/active_record.html), as they seem to assume that each chained statement is going to be for a different column name. Does anyone know how I could do this? Thanks!

    edit- Of course it is not possible to use an associative array in one statement, as it would have to contain duplicate keys - in this case every key would have to be 'categories'...

    Read the article

  • Cannot use a Like query in a JSP prepared statement?

    - by SeerUK
    OK, first the code and the query:

    ps = conn.prepareStatement("select instance_id, ? from eam_measurement where resource_id in (select RESOURCE_ID from eam_res_grp_res_map where resource_group_id = ?) and DSN like '?' order by 2");
    ps.setString(1,"SUBSTR(DSN,27,16)");
    ps.setInt(2,defaultWasGroup);
    ps.setString(3,"%Module=jvmRuntimeModule:freeMemory%");
    rs = ps.executeQuery();
    while (rs.next()) {
        bla blah blah blah ...

    Returns an empty resultSet. Through basic debugging I have found it's the 3rd bind that is the problem, i.e. DSN like '?'. I have tried all kinds of variations, the most sensible of which seemed to be using:

    DSN like concat('%',?,'%')

    but that doesn't work, as I am missing the ' ' on either side of the concatenated string, so I try:

    DSN like ' concat('%',Module=P_STAG_JDBC01:poolSize,'%') ' order by 2

    but I just can't seem to find a way to get them in that works. What am I missing? :) Thanks!

    Read the article

  • Tuning garbage collections for low latency

    - by elec
    I'm looking for arguments as to how best to size the young generation (with respect to the old generation) in an environment where low latency is critical. My own testing tends to show that latency is lowest when the young generation is fairly large (e.g. -XX:NewRatio < 3); however, I cannot reconcile this with the intuition that the larger the young generation, the more time it should take to garbage collect. The application runs on Linux, JDK 6 before update 14, i.e. G1 is not available.

    Read the article

  • Recommendations on apache tuning for chat application

    - by sofia
    Hi, I have a chat application (xmpp / muc) that is going to be served by Apache (we might change to nginx later, but right now it's not easily done). If a user is in 2 rooms, he'll have between 2 and 4 active connections to the server (long-polling connections), so if we have 200 users per room and we have 5 rooms, what should ServerLimit and MaxClients be set to? For example, these are the default values:

    ServerLimit 256
    MaxClients 256
    MaxRequestsPerChild 4000

    Thanks,

    Read the article

  • What are the Limitations for Connecting to an Access Query in Excel

    - by thornomad
    I have an Access 2007 database that has a number of tables, some fairly large (100,000+ records); I have created a union query to pull some of the same types of data from multiple tables into one large query for pivot table manipulation and reporting. For example:

    SELECT Language FROM Table1
    UNION ALL
    SELECT Language FROM Table2
    UNION ALL
    SELECT Language FROM Table3;

    This works. I found quickly, however, that a union query will not show up when connecting to the data source from Excel 2007. So, I created a second query to reference the union query, like so:

    SELECT * FROM [The Above Union Query];

    This query works, and it initially was accessible from Excel. Time passed, and I've added more data. Suddenly, when I connect to my Access database from Excel, my query referencing the union has disappeared. MS Access shows no signs of an issue (data displays in Access) and my other non-union queries are showing up in Excel 2007... but not the one that references the union. What could be going on? Why did it disappear? I noticed that if I switch some of the referenced tables in the union query to a smaller table (with fewer rows), all of a sudden the query appears in Excel again. At least, I think that's what the difference is. I really can't put my finger on why some of the union queries won't show up and some will. I am stumped and need some guidance. Thanks.

    Read the article

  • Table Adapter query Builder ASP.NET

    - by kmsony32
    I want to connect my Table Adapter to Oracle, and for that I am preparing a SQL query with the query builder. The query runs fine in Oracle but produces a syntax error in the query builder - it shows "unable to parse the query text." I am using a CASE statement and joins within the query.

    Read the article

  • Are the usual database performance-tuning tips invalid for a third-party app like Drupal

    - by Paul Strugger
    When you have a slow database app, the first suggestions that people make are to:

    Track the slow queries
    Add appropriate indexes

    In the case where you are building your own application this is very logical, but when you use a CMS like Drupal, which other people have developed and tuned, is this approach valid? I mean, aren't Drupal tables already fine-tuned for performance? Even if I find which queries are the slow ones, what could I do about it? Re-write Drupal core?!?

    Read the article

  • SQL SERVER – Mirroring Configured Without Domain – The server network address TCP://SQLServerName:50

    - by pinaldave
    Regular readers of my blog will be aware of my friend who called me a few days ago with a very funny SQL problem: SQL SERVER – SSMS Query Command(s) completed successfully without ANY Results. This time, it did not take long before he called me up with another interesting problem; although the issue he was facing this time was not that interesting and also very specific to him, he insisted that I share it with all of you. Let us understand his situation first.

    My friend is preparing for the DBA exam, Exam 70-450: PRO: Designing, Optimizing and Maintaining a Database Server Infrastructure using Microsoft SQL Server 2008, and for the same he was trying to set up mirroring on his local laptop. He had installed two different instances of SQL Server on his computer, and every time he started the mirroring, it failed with a common error message.

    The server network address "TCP://SQLServer:5023" cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational. (Microsoft SQL Server, Error: 1418)

    Well, before he contacted me, he searched online and checked my article written on this error in mirroring. However, he tried all four suggestions, but they did not solve his problem. He called me at a reasonable time of late evening (unlike last time, which was midnight!). I even tried all seven different suggestions myself, as previously proposed in my article; however, none of them worked.

    While looking closely at the services, I noticed something very simple. He was running all the instances under 'Network Service'. In fact, his computer was a stand-alone computer; there was no network at all. Also, there was no domain or any other advanced network concept implemented. I just changed the service account from 'Network Service' to 'Local System', as his SQL Server was running on his local system and there were no network services. This prompted a restart of the services. As this was his development machine and not a production server, we restarted the services on the laptop (do not restart services on a production server without proper planning). After changing the service's 'log on' account to Local System, when he attempted to reconfigure the mirroring, it worked right away.

    As production servers usually have proper domains configured and advanced network concepts implemented, I had never faced this type of problem earlier. My friend insisted that I post the solution to his situation, wherein there was no domain configured and setting up mirroring was throwing an error. According to him, this is bound to help people, like him, who are preparing for certification using a single system. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Certifications, SQL Mirroring

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    - by pinaldave
    During my recent training, I was asked by a student if I knew a place where one can download spatial files for all the countries around the world, as well as whether there is a way to upload shapefiles to a database. Here is a quick tutorial for it. VDS Technologies has the spatial files for every location, for free. You can download the spatial file from here. If you cannot find the spatial file you are looking for, please leave a comment here and I will send you the necessary details. Unzip the file to a folder and it will have the following content.

    Then, download the Shape2SQL tool from SharpGIS. This is one of the best tools available to convert shapefiles to SQL tables. Afterwards, run the .exe file. When the file is run for the first time, it will ask for the database properties. Provide your database details. Select the appropriate shapefiles and the tool will fill in the essential details automatically. If you do not want to create the index on a column, uncheck the box beside it. The screenshot below simply explains the procedure. You also have to be careful regarding your data, whether it is GEOMETRY or GEOGRAPHY. In this example, it is GEOMETRY data. Click "Upload to Database". It will show you the uploading process. Once the shapefile is uploaded, close the application and open SQL Server Management Studio (SSMS). Run the following code in the SSMS Query Editor:

    USE Spatial
    GO
    SELECT * FROM dbo.world
    GO

    This will show the complete map of the world after you click on Spatial Results in the Spatial tab. In the Spatial Results Set, the Zoom feature is available. From the Select label column, choose the country name in order to show the country name overlaying the country borders. Let me know if this tutorial is helpful enough. I am planning to write a few more posts about this later.

    Note: Please note that the images displayed here do not reflect the original political boundaries. These data are pretty old and can probably draw incorrect maps as well. I have personally spotted several parts of the map where some countries are located a little bit inaccurately. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Spatial, SQL Tips and Tricks, SQL Utility, T SQL, Technology
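    To pull out a single country instead of the whole map, a query along these lines works - note that the geometry column ('geom') and the name column ('NAME') are assumptions about the schema Shape2SQL generated, so check your table's actual column names first:

        -- One country's shape; view it under the Spatial results tab
        SELECT NAME, geom.STAsText() AS wkt
        FROM dbo.world
        WHERE NAME = 'India';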

    Read the article
