Search Results

Search found 62606 results on 2505 pages for 'sql files'.


  • In SQL Server 2008, when would I use a full text index that covered several tables?

    - by Suddy
    I wanted to do a full text search across several related tables in SQL Server 2008. From browsing this site I've realised the best option is via a view, but initially I thought I was meant to add several tables to the same full text index via Management Studio. I started to do this and realised the index would have no idea how they were related, so my question is: when would I want to have a full text index covering several tables in this way? Apologies for the vagueness, I am just trying to satisfy my curiosity after Google let me down.

    Read the article

  • I've got to update a column in one SQL table with a counter stored in another table, and update that

    - by Bucket
    I'm using SQL Server 2005 (for testing) & 2007 (for production). I have to add a unique record ID to all the records in my table, in an existing column, using a "last record ID" column from another table. So, I'm going to do some sort of UPDATE of my table, but I have to get the "last record ID" from the other table, increment it, update THAT table and then update my record. Can anyone give me an example of how to do this? Other users may be incrementing the counter also.
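
    A minimal T-SQL sketch of the increment-and-use pattern the question describes, for a single record, assuming a counter table named CounterTable with a LastRecordID column and a target table named MyTable (all names hypothetical); the OUTPUT clause needs SQL Server 2005 or later, and the UPDLOCK/HOLDLOCK hints keep two users from grabbing the same value:

        BEGIN TRAN;

        DECLARE @NewID TABLE (NewID int);

        -- Increment the shared counter and capture the new value in one statement
        UPDATE CounterTable WITH (UPDLOCK, HOLDLOCK)
        SET LastRecordID = LastRecordID + 1
        OUTPUT inserted.LastRecordID INTO @NewID;

        -- Stamp the new value onto the record that needs it
        UPDATE MyTable
        SET RecordID = (SELECT NewID FROM @NewID)
        WHERE <your row filter here>;   -- placeholder, as in the question

        COMMIT TRAN;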

    Read the article

  • why does LINQ 2 SQL sometimes add a field like select 1 as test before the other valid fields?

    - by Fredou
    I have to concat 2 linq2sql queries and I have an issue since the 2 queries don't return the same number of columns; what is weird is that after a .ToList() on the queries, they can be concatenated without a problem. The reason is that, for some linq2sql reason, I have 2 extra columns named test and test2 which come from 2 left outer joins that linq2sql automatically creates, something like "select 1 as test, tablefields". Is there any good reason for that? How do I remove this extra "1 as test" field? Here are a few examples of what it looks like: google result for linq 2 sql "select 1 as test"

    Read the article

  • How do I search nodes of XML document for text? Or convert to SQL tables?

    - by netefficacy
    Hi, I have an XML file and would like to run a search on the nodes for text that matches user input. My options are: 1) convert the XML file to a SQL table and run the search against the table records, or 2) search the XML nodes themselves. The problem is that I cannot find an open source conversion utility, nor can I figure out how to search the XML nodes. I can use PHP, Ruby, or Python for the search code. Any pointers on how I can do 1 or 2? Thanks

    Read the article

  • building an ASP NET MVC site, should i go with linq to sql?

    - by aspm
    so i'm about to start a new website from scratch and i've spent about a week trying to figure out what technology to go with. i'm sold on ASP NET MVC. i'm 100% sure i'm going to love using that. but what i am not so sure about yet is using LINQ 2 SQL. so far i've gathered some data... 1) stack overflow uses it - can't be that bad 2) can be REALLY slow if you don't take advantage of compiled queries 3) will always be slower than ADO net, but can be almost just as fast if using #2 in the proper places 4) is NOT the preferred MS solution (there was a thread here on SO about dropping support) i'm itching to use it, but just want to make sure it's the best for me. i come from a heavy ADO/stored procedure and traditional asp net background (this will be my first experience with ASP MVC).

    Read the article

  • Attention all SQL gods! Query help needed.

    - by gurun8
    I need a little help putting together a SQL query that will give me the following resultsets: and The data model looks like this: The tricky part for me is that the columns to the right of the "Product" in the resultset aren't really columns in the database but rather key/value pairs spanned across the data model. Table data is as follows: My apologies in advance for the image-heavy question and the image quality. This just seemed like the easiest way to convey the information. It'll probably take someone less time to write the query statement to achieve the results. By the way, the "product_option" table image is truncated but it illustrates the general idea of the data structure. The MySQL server version is 5.1.45.
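
    Since the schema is only visible in the (missing) images, the following is just a guessed sketch of the usual key/value pivot for this kind of model -- every table and column name below is an assumption, but CASE-based pivoting like this does run on MySQL 5.1:

        SELECT p.product_name,
               MAX(CASE WHEN po.option_name = 'Size'  THEN po.option_value END) AS size,
               MAX(CASE WHEN po.option_name = 'Color' THEN po.option_value END) AS color
        FROM product p
        LEFT JOIN product_option po ON po.product_id = p.product_id   -- names assumed
        GROUP BY p.product_name;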

    Read the article

  • How to implement nested SQL transactions with ADO.NET?

    - by manza_jurjur
    I need to implement nested transactions in .NET using ADO.NET. The situation is as follows: --> Start Process (Begin Transaction) --> Begin Transaction for step 1 --> Step 1 --> Commit transaction for step 1 --> Begin transaction for step 2 --> Step 2 --> Rollback transaction for step 2 --> etc ... --> End Process (Commit or Rollback ALL committed steps) Can that be done with transaction scopes? Could anyone post an example? In addition I'd need the process to work for SQL Server 2005 AND Oracle 10g databases... will transaction scopes work with both database engines?
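
    At the database level, the per-step commit/rollback described here maps to savepoints rather than true nested transactions. A T-SQL sketch of the idea (Oracle has the equivalent SAVEPOINT / ROLLBACK TO syntax); in ADO.NET, SqlTransaction exposes this via Save(), while TransactionScope does not expose savepoints as far as I know:

        BEGIN TRAN;                -- start of process

        SAVE TRAN Step1;
        -- ... step 1 work ...
        -- "committing" step 1 just means not rolling back to its savepoint

        SAVE TRAN Step2;
        -- ... step 2 work ...
        ROLLBACK TRAN Step2;       -- undo only step 2; the outer transaction stays open

        COMMIT TRAN;               -- commits everything that was not rolled back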

    Read the article

  • How to prompt for username and password entry in C# / SQL ASP.NET web app?

    - by salvationishere
    How do I prompt for username and password in my C#/SQL web application? This was developed in VS 2008 on a 32-bit XP. The current connection string I'm using in my web.config file is: <add name="AdventureWorksConnectionString2" connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Persist Security Info=false; " providerName="System.Data.SqlClient" /> When I select Basic Authentication it pops up the warning: "The authentication option you have chosen results in passwords being sent over the network without data encryption..." How do I choose this authentication method and still send passwords over securely? So essentially I am looking for the most secure authentication method that still requires users to input a password.

    Read the article

  • In SQL how do I get the maximum value for an integer?

    - by CoffeeMonster
    Hi, I am trying to find out the maximum value for an integer (signed or unsigned) from a MySQL database. Is there a way to pull back this information from the database itself? Are there any built-in constants or functions I can use (either standard SQL or MySQL-specific)? At http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html it lists the values - but is there a way for the database to tell me? The following gives me the MAX_BIGINT - what I'd like is the MAX_INT. SELECT CAST( 99999999999999999999999 AS SIGNED ) as max_int; # max_int | 9223372036854775807 Thanks in advance,
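
    One way to have MySQL itself produce the limits is bit-shifting ~0 (a BIGINT UNSIGNED with all 64 bits set); as far as I know there is no built-in MAX_INT constant, so this is a workaround rather than an official API:

        SELECT ~0       AS max_bigint_unsigned,  -- 18446744073709551615
               ~0 >> 1  AS max_bigint_signed,    --  9223372036854775807
               ~0 >> 32 AS max_int_unsigned,     --            4294967295
               ~0 >> 33 AS max_int_signed;       --            2147483647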

    Read the article

  • Linq to SQL - How to compare against a collection in the where clause?

    - by Sgraffite
    I'd like to compare against an IEnumerable collection in my where clause. Do I need to manually loop through the collection to pull out the column I want to compare against, or is there a generic way to handle this? I want something like this: public IEnumerable<Cookie> GetCookiesForUsers(IEnumerable<User> Users) { var cookies = from c in db.Cookies join uc in db.UserCookies on c.CookieID equals uc.CookieID join u in db.Users on uc.UserID equals u.UserID where u.UserID.Equals(Users.UserID) select c; return cookies.ToList(); } I'm used to using the lambda Linq to SQL syntax, but I decided to try the SQLesque syntax since I was using joins this time. What is a good way to do this?
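
    For reference, the SQL this needs to end up as is an IN list over the users' IDs (placeholder values shown); on the LINQ side that is usually written with Contains on the ID collection rather than Equals:

        SELECT c.*
        FROM Cookies c
        JOIN UserCookies uc ON uc.CookieID = c.CookieID
        JOIN Users u        ON u.UserID    = uc.UserID
        WHERE u.UserID IN (1, 2, 3);   -- the IDs from the passed-in collection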

    Read the article

  • SQL Server. Stored procedure to get the biweekly periods

    - by Yada
    I'm currently trying to write a stored procedure that can compute the biweekly periods when a date is passed in as a parameter. The business logic: the first Monday of the year is the first day of the first biweekly period. For example in 2010: period period_start period_end 1 2010-01-04 2010-01-17 2 2010-01-18 2010-01-31 3 2010-02-01 2010-02-14 .... 26 2010-12-20 2011-01-02 Passing today's date of 2010-12-31 will return 26, 2010-12-20 and 2011-01-02. I'm not too strong in T-SQL. Any help is appreciated. Thanks
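
    A rough T-SQL sketch of the calculation (not a complete stored procedure): day 0 in SQL Server date arithmetic is 1900-01-01, a Monday, which makes it easy to round January 1st up to the year's first Monday and then count 14-day blocks; dates falling before that first Monday (they belong to the previous year's last period) are not handled here:

        DECLARE @d datetime, @jan1 datetime, @firstMonday datetime, @period int;
        SET @d = '20101231';

        SET @jan1 = DATEADD(year, DATEDIFF(year, 0, @d), 0);
        -- round Jan 1 up to the next Monday (day 0 = 1900-01-01 was a Monday)
        SET @firstMonday = DATEADD(day, (DATEDIFF(day, 0, @jan1) + 6) / 7 * 7, 0);

        SET @period = DATEDIFF(day, @firstMonday, @d) / 14 + 1;

        SELECT @period                                             AS period,        -- 26
               DATEADD(day, (@period - 1) * 14, @firstMonday)      AS period_start,  -- 2010-12-20
               DATEADD(day, (@period - 1) * 14 + 13, @firstMonday) AS period_end;    -- 2011-01-02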

    Read the article

  • In SQL Server 2000, how to delete the specified rows in a table that does not have a primary key?

    - by Yousui
    Hi, Let's say we have a table with some data in it. IF OBJECT_ID('dbo.table1') IS NOT NULL BEGIN DROP TABLE dbo.table1; END CREATE TABLE table1 ( DATA INT ); --------------------------------------------------------------------- -- Generating testing data --------------------------------------------------------------------- INSERT INTO dbo.table1(data) SELECT 100 UNION ALL SELECT 200 UNION ALL SELECT NULL UNION ALL SELECT 400 UNION ALL SELECT 400 UNION ALL SELECT 500 UNION ALL SELECT NULL; How to delete the 2nd, 5th, 6th records in the table? The order is defined by the following query. SELECT data FROM dbo.table1 ORDER BY data DESC; Note, this is in a SQL Server 2000 environment. Thanks.
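
    SQL Server 2000 has no ROW_NUMBER(), so one common workaround is to materialize a row number with an IDENTITY column in a temp table and rebuild the table without the unwanted rows -- a sketch only, practical mainly for tables small enough to rebuild:

        CREATE TABLE #ranked (rn int IDENTITY(1,1), data int);

        -- With INSERT ... SELECT ... ORDER BY, the IDENTITY values follow the ORDER BY
        INSERT INTO #ranked (data)
        SELECT data FROM dbo.table1 ORDER BY data DESC;

        TRUNCATE TABLE dbo.table1;

        INSERT INTO dbo.table1 (data)
        SELECT data FROM #ranked WHERE rn NOT IN (2, 5, 6);

        DROP TABLE #ranked;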

    Read the article

  • SQL Server 15MM rows, simple COUNT query. 15+ seconds?

    - by john
    We took over a website from another company after a client decided to switch. We have a table that grows by about 25k records a day, and is currently at 15MM records. The table looks something like: id (PK, int, not null) member_id (int, not null) another_id (int, not null) date (datetime, not null) SELECT COUNT(id) FROM tbl can take up to 15 seconds. A simple inner join on 'another_id' takes over 30 seconds. I can't imagine why this is taking so long. Any advice? SQL Server 2005 Express
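
    COUNT(id) has to scan the whole table (or its narrowest index), so several seconds over 15MM rows on Express is not surprising; if an approximate figure is acceptable, the metadata views return one instantly. A sketch for SQL Server 2005 (table name taken from the question); for the slow join, an index on another_id is the first thing to check:

        SELECT SUM(p.row_count) AS approx_rows
        FROM sys.dm_db_partition_stats AS p
        WHERE p.object_id = OBJECT_ID('dbo.tbl')
          AND p.index_id IN (0, 1);   -- 0 = heap, 1 = clustered index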

    Read the article

  • SQL Server 2005 Reporting Services: How to count rows that are not null? Any hints for calculating t

    - by user329266
    Is there a way to count only records that are not null; similar to "COUNTA" in Excel? I would think this would be a very simple process, but nothing I have tried has worked. If necessary, I can try to work this into my SQL query, but the query is already incredibly complicated. Also, I've found very little documentation for how to calculate report totals, and how to total from groups. Would anyone have any recommendations on what to use as a reference?
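
    On the SQL side, the COUNTA-style behaviour is simply COUNT(column): COUNT(*) counts every row, while COUNT(expression) skips NULLs. A small sketch with assumed table/column names; the same expressions give per-group totals under a GROUP BY:

        SELECT COUNT(*)           AS all_rows,
               COUNT(some_column) AS non_null_rows,
               SUM(CASE WHEN some_column IS NOT NULL THEN 1 ELSE 0 END) AS non_null_rows_too
        FROM dbo.some_table;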

    Read the article

  • LINQ to SQL, how to write a method which checks if a row exists when we have multiple tables

    - by Beles
    Hi, I'm trying to write a method in C# which can take as parameters a table type, a column and a column value, and check if the given table has a row where that column holds that value. The method looks like: public object GetRecordFromDatabase(Type tabletype, string columnname, string columnvalue) I'm using LINQ to SQL and need to do this in a generic way so I don't need to write one for each table I've got in the DB. I have been doing this so far for each table, but with more than 70 of these it becomes cumbersome and boring to do. Is there a way to generate the following code dynamically, and swap out the hardcoded table names with the values from the parameter list? In this example I have a table in the DB named tbl_nation, which the DataContext pluralizes to tbl_nations, and I'm checking its code column for the value: if (DB.tbl_nations.Count(c => c.code.Equals(columnvalue)) == 1) { return DB.tbl_nations.Single(c => c.code.Equals(columnvalue)); }

    Read the article

  • Error in SQL. Can not find it.

    - by kmsboy
    Error in SQL. Can not find it. DECLARE @year VARCHAR (4), @month VARCHAR (2), @day VARCHAR (2), @weekday VARCHAR (2), @hour VARCHAR (2), @archivePath VARCHAR (128), @archiveName VARCHAR (128), @archiveFullName VARCHAR (128) SET @year = CAST(DATEPART(yyyy, GETDATE()) AS VARCHAR) SET @month = CAST(DATEPART(mm, GETDATE()) AS VARCHAR) SET @day = CAST(DATEPART(dd, GETDATE()) AS VARCHAR) SET @weekday = CAST(DATEPART (dw, GETDATE()) AS VARCHAR) SET @hour = CAST(DATEPART (hh, GETDATE()) AS VARCHAR) SET @archivePath = 'd:\1c_new\backupdb\' SET @archiveName = 'TransactionLog_' + @year + '_' + @month + '_' + @day + '_' + @hour + '.bak' SET @archiveFullName = @archivePath + @archiveName BACKUP LOG [xxx] TO DISK = @archiveFullName WITH INIT , NOUNLOAD , NAME = N'?????????? ??? ??????????', SKIP , STATS = 10, DESCRIPTION = N'?????????? ??? ??????????', NOFORMAT

    Read the article

  • Stringly typed values table in sql, is there a better way to do this? (we're using MSSQL)

    - by Jason Hernandez
    We have a table layout with property names in one table, values in a second table, and items in a third. (Yes, we're re-implementing tables in SQL.) We join all three to get the value of a property for a specific item. Unfortunately the values can have multiple data types: double, varchar, bit, etc. Currently the consensus is to stringly type all the values and store the type name in a column next to the value (tblValues has a DataTypeName nvarchar column). Is there a better, cleaner way to do this?
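
    One commonly suggested alternative to stringly typing everything is sql_variant, which stores the value together with its original base type -- a sketch with hypothetical names, not a recommendation that it fits this schema:

        CREATE TABLE tblValues (
            ItemID     int         NOT NULL,
            PropertyID int         NOT NULL,
            Value      sql_variant NULL
        );

        -- The base type travels with the value and can be read back:
        SELECT Value,
               SQL_VARIANT_PROPERTY(Value, 'BaseType') AS base_type
        FROM tblValues;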

    Read the article

  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
    In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, then a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object. The XML Data An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has 4 attributes; namely ProductId, ProductName, IntroductionDate and Price. <Products>  <Product ProductId="1"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="07/01/2010"  Price="799" />  <Product ProductId="2"           ProductName="ASP.Net Jumpstart Samples"           IntroductionDate="05/24/2005"  Price="0" />  ...  ...</Products> The Product Class Just as you create an Entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with properties for each of the attributes in each element of product data. The following code listing shows the Product class. public class Product : CommonBase{  public const string XmlFile = @"Xml/Product.xml";   private string _ProductName;  private int _ProductId;  private DateTime _IntroductionDate;  private decimal _Price;   public string ProductName  {    get { return _ProductName; }    set {      if (_ProductName != value) {        _ProductName = value;        RaisePropertyChanged("ProductName");      }    }  }   public int ProductId  {    get { return _ProductId; }    set {      if (_ProductId != value) {        _ProductId = value;        RaisePropertyChanged("ProductId");      }    }  }   public DateTime IntroductionDate  {    get { return _IntroductionDate; }    set {      if (_IntroductionDate != value) {        _IntroductionDate = value;        RaisePropertyChanged("IntroductionDate");      }    }  }   public decimal Price  {    get { return _Price; }    set {      if (_Price != value) {        _Price = value;        RaisePropertyChanged("Price");      }    }  }} NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged event in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post. Reading Data When using LINQ to XML you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the “Product” Descendants in the XML file. The “select” portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing each attribute name to the Attribute() method and retrieving the data from the “Value” property. The Value property will return a null if there is no data, or will return the string value of the attribute. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class. 
private void LoadProducts(){  XElement xElem = null;   try  {    xElem = XElement.Load(Product.XmlFile);     // The following will NOT work if you have missing attributes    var products =         from elem in xElem.Descendants("Product")        orderby elem.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(            elem.Attribute("ProductId").Value),          ProductName = Convert.ToString(            elem.Attribute("ProductName").Value),          IntroductionDate = Convert.ToDateTime(            elem.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(elem.Attribute("Price").Value)        };     lstData.DataContext = products;  }  catch (Exception ex)  {    MessageBox.Show(ex.Message);  }} This is where the problem comes in. If you have any missing attributes in any of the rows in the XML file, or if the data in the ProductId or IntroductionDate is not of the appropriate type, then this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in real handy. Using Extension Methods Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null-checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file. private void LoadProducts(){  var xElem = XElement.Load(Product.XmlFile);   var products =       from elem in xElem.Descendants("Product")      orderby elem.Attribute("ProductName").Value      select new Product      {        ProductId = elem.Attribute("ProductId").GetAsInteger(),        ProductName = elem.Attribute("ProductName").GetAsString(),        IntroductionDate =            elem.Attribute("IntroductionDate").GetAsDateTime(),        Price = elem.Attribute("Price").GetAsDecimal()      };   lstData.DataContext = products;} Writing Extension Methods To create an extension method you will create a class with any name you like. In the code listing below is a class named XmlExtensionMethods. This listing just shows a couple of the available methods such as GetAsString and GetAsInteger. These methods are just like any other method you would write except when you pass in the parameter you prefix the type with the keyword “this”. This lets the compiler know that it should add this method to the class specified in the parameter. public static class XmlExtensionMethods{  public static string GetAsString(this XAttribute attr)  {    string ret = string.Empty;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      ret = attr.Value;    }     return ret;  }   public static int GetAsInteger(this XAttribute attr)  {    int ret = 0;    int value = 0;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      if(int.TryParse(attr.Value, out value))        ret = value;    }     return ret;  }   ...  ...} Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, then a default value will be returned such as an empty string or a 0 for a numeric value. Summary Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data. 
You will probably want to create more extension methods to handle XElement objects as well for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value if a null value is detected, or any other parameters you wish. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose “Tips & Tricks”, then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down. Good Luck with your Coding,Paul D. Sheriff  

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates Memory requirement at compile time, when stored procedure or other plan caching mechanisms like sp_executesql or prepared statement are used, the memory requirement is estimated based on first set of execution parameters. This is a common reason for spill over tempdb and hence poor performance. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article which covers an introduction and Query memory for Sort. In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a query does not change significantly based on predicates.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts  Let’s create a Customer’s State table that has 99% of customers in NY and the rest 1% in WA.Customers table used in Part I of this article is also used here.To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'. --Example provided by www.sqlworkshops.com drop table CustomersState go create table CustomersState (CustomerID int primary key, Address char(200), State char(2)) go insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers update CustomersState set State = 'NY' where CustomerID % 100 != 1 update CustomersState set State = 'WA' where CustomerID % 100 = 1 go update statistics CustomersState with fullscan go   Let’s create a stored procedure that joins customers with CustomersState table with a predicate on State. --Example provided by www.sqlworkshops.com create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1) end go  Let’s execute the stored procedure first with parameter value ‘WA’ – which will select 1% of data. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' goThe stored procedure took 294 ms to complete.  The stored procedure was granted 6704 KB based on 8000 rows being estimated.  The estimated number of rows, 8000 is similar to actual number of rows 8000 and hence the memory estimation should be ok.  There was no Hash Warning in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Now let’s execute the stored procedure with parameter value ‘NY’ – which will select 99% of data. 
-Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  The stored procedure took 2922 ms to complete.   The stored procedure was granted 6704 KB based on 8000 rows being estimated.    The estimated number of rows, 8000 is way different from the actual number of rows 792000 because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘WA’ in our case. This underestimation will lead to spill over tempdb, resulting in poor performance.   There was Hash Warning (Recursion) in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Let’s recompile the stored procedure and then let’s first execute the stored procedure with parameter value ‘NY’.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts, www.sqlworkshops.com/webcasts for further details.   exec sp_recompile CustomersByState go --Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  Now the stored procedure took only 1046 ms instead of 2922 ms.   The stored procedure was granted 146752 KB of memory. The estimated number of rows, 792000 is similar to actual number of rows of 792000. Better performance of this stored procedure execution is due to better estimation of memory and avoiding spill over tempdb.   There was no Hash Warning in SQL Profiler.   Now let’s execute the stored procedure with parameter value ‘WA’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go  The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms.    This stored procedure was granted more memory (146752 KB) than necessary (6704 KB) based on parameter value ‘NY’ for estimation (792000 rows) instead of parameter value ‘WA’ for estimation (8000 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘NY’ in this case. This overestimation leads to poor performance of this Hash Match operation, it might also affect the performance of other concurrently executing queries requiring memory and hence overestimation is not recommended.     The estimated number of rows, 792000 is much more than the actual number of rows of 8000.  Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined data range.Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, recompile) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 297 ms and 1102 ms in line with previous optimal execution times.   The stored procedure with parameter value ‘WA’ has good estimation like before.   Estimated number of rows of 8000 is similar to actual number of rows of 8000.   
The stored procedure with parameter value ‘NY’ also has good estimation and memory grant like before because the stored procedure was recompiled with current set of parameter values.  Estimated number of rows of 792000 is similar to actual number of rows of 792000.    The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.   There was no Hash Warning in SQL Profiler.   Let’s recreate the stored procedure with optimize for hint of ‘NY’. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, optimize for (@State = 'NY')) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 353 ms with parameter value ‘WA’, this is much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory. The stored procedure with parameter value ‘NY’ has optimal execution time like before.   The stored procedure with parameter value ‘WA’ has overestimation of rows because of optimize for hint value of ‘NY’.   Unlike before, more memory was estimated to this stored procedure based on optimize for hint value ‘NY’.    The stored procedure with parameter value ‘NY’ has good estimation because of optimize for hint value of ‘NY’. Estimated number of rows of 792000 is similar to actual number of rows of 792000.   Optimal amount memory was estimated to this stored procedure based on optimize for hint value ‘NY’.   There was no Hash Warning in SQL Profiler.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. 
When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case.   I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.  Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.   Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Missing line numbers in stack trace even though the PDB files are included

    - by Farzad
    This is driving me nuts. I have this web service implemented with C# using VS 2008. I publish it on IIS. I have modified the release build so the pdb files are copied along with the dlls into the target directory on inetpub. Also, the web.config file has debug=true. Then I call a web service that throws an exception. The stack trace does not contain the line numbers. I have no idea what I am missing here, any ideas? Additional Info: If I run the web app using the VS built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (pdb and dll) that the VS built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that there is something related to IIS that ignores the pdb files! Update: When I publish to IIS, all the pdb files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files" under the specific directory related to my project, I can see that the assembly (.dll) files are all there, but there are no pdb files. This does not happen if I run the project using the VS built-in web server. So if I copy the pdb files manually to the temp folder, I can see the line numbers. Any idea why the pdb files are not copied to the temp folder? BTW, when I attach to the worker process I can see that it says Symbols loaded!

    Read the article

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers: 1 central linux server, a VPS 8 satellite linux servers, "crappy shared hostings" I have a bunch of files that I need to have in all servers. Right now i'm copying them everywhere manually, but I want to be able to copy them to the central server, and then have a scheduled process that runs every now and then and synchronizes them (only outwardly, no need to try to find "new" files in the satellite servers). There are a couple of catches though: I can't have any custom software in the satellite servers, or do strange command line things that'll auto connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything. I need to send the files over FTP I also need to have, in my central server, a list of the files that are available in each of the satellite servers, to make sure they are ready before I send traffic to them. If I were to do this manually, the steps would be: get the list of files in a satellite server compare to my own, and send the files that are missing get the list of files again, and store it in my central database. I'd like to know what tools are out there that can alleviate as much of this as possible, first the syncing, and then the "getting the list of files available in the other server". I'm going to be doing everything from PHP, not sure if there are good tools to "use FTP from PHP", which i'm pretty sure i'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel

    Read the article

  • SWT file dialog: too many files selected?

    - by InsertNickHere
    Hi there, the SWT file dialog will give me an empty result array if I select too many files (approx. 2500 files). The listing shows you how I use this dialog. If I select too many sound files, the syso will show 0. Debugging tells me that the files array is empty in this case. Is there any way to get this to work? FileDialog fileDialog = new FileDialog(mainView.getShell(), SWT.MULTI); fileDialog.setText("Choose sound files"); fileDialog.setFilterExtensions(new String[] { new String("*.wav") }); Vector<String> result = new Vector<String>(); fileDialog.open(); String[] files = fileDialog.getFileNames(); for (int i = 0, n = files.length; i < n; i++) { if( !files[i].contains(".wav")) { System.out.println(files[i]); } StringBuffer stringBuffer = new StringBuffer(); stringBuffer.append(fileDialog.getFilterPath()); if (stringBuffer.charAt(stringBuffer.length() - 1) != File.separatorChar) { stringBuffer.append(File.separatorChar); } stringBuffer.append(files[i]); stringBuffer.append(""); String finalName = stringBuffer.toString(); if( !finalName.contains(".wav")) { System.out.println(finalName); } result.add(finalName); } System.out.println(result.size()) ;

    Read the article

  • LuaEdit can't find module when Lua files all in the same folder

    - by joverboard
    I downloaded LuaEdit to use as an IDE and debug tool; however, I'm having trouble using it for even the simplest things. I've created a solution with 2 files in it, all of which are stored in the same folder. My files are as follows: --startup.lua require("foo") test("Testing", "testing", "one, two, three") --foo.lua foo = {} print("In foo.lua") function test(a,b,c) print(a,b,c) end This works fine in my C++ program when accessed through some embedding code, but when I attempt to use the same code in LuaEdit, it crashes on line 3, require("foo"), with an error stating: module 'foo' not found: no field package.preload['foo'] no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo.lua' no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo\init.lua' no file 'C:\Program Files (x86)\LuaEdit 2010\foo.lua' no file 'C:\Program Files (x86)\LuaEdit 2010\foo\init.lua' no file '.\foo.lua' no file 'C:\Program Files (x86)\LuaEdit 2010\foo.dll' no file 'C:\Program Files (x86)\LuaEdit 2010\loadall.dll' no file '.\battle.dll' I have also tried creating these files prior to adding them to a solution and still get the same error. Is there some setting I'm missing? It would be great to have an IDE/debugger but it's useless to me if it can't run linked functions.

    Read the article

  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd with a few files on it that I would like very much to recover: some Word documents, some LaTeX documents (text files) and pictures (png, jpg, eps files), and some other text documents and Visual Studio project files. I had backed them (not the LaTeX ones though) up using svn, but have not committed lately, and would lose a lot of work if I can't recover them. The hdd seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data Partition Find and Mount did not see all the partitions using intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed here, as TestDisk's wiki does not contain this error message afaik. I don't know if the hdd is going to fail, or if some program has caused the filesystem to become corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (e.g. is it ok to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed ok, but not 100%)

    Read the article

  • Subversion and Quickbooks Files

    - by Jorge Fernandez
    I currently have a large problem on one of the file servers I manage for an accounting firm. QuickBooks has a tendency to create multiple files of the same thing over and over to prevent data loss. This is a good thing when you handle just a few files, but at an accounting firm it becomes a problem. Some of the older clients have 5-10 files in their respective folders, each with a different cutoff date. Because of user error some of these files aren't labeled properly with their correct cutoff dates. This is where Subversion came to mind. Using the revision system would allow one file to be the master and hold all of its revisions. Has anyone ever tried this with QuickBooks files? I've only used SVN with application code, where each file is much smaller. How does SVN stand up with larger files like 10-25MB? I'm not exactly sure how SVN handles revisions - does it keep a duplicate of the files and duplicate the disk space needed?

    Read the article
