Search Results

Search found 24674 results on 987 pages for 'visual studio 2005 tips'.

  • Errors with large data sources

    - by The Sheek Geek
    I'm benchmarking large data sources and binding/exporting data for reporting. I started by filling a DataSet with 100,000 rows and then attempting to open a Crystal Report with the retrieved data. The DataSet filled just fine (in about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application failed with an OutOfMemoryException. Has anyone experienced this before, or does anyone have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, so 100,000 rows is not inconceivable. The application and the benchmark code are written in C# against Oracle and SQL Server databases. I still have some data sources to test, but I would like to know how to get around this in case I don't find a better solution.
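
    One way around the exception is to avoid materializing every row at once and page the data instead. A minimal sketch, assuming a SQL Server source; the query, table, and page size are hypothetical:

        // Fill the DataSet one page at a time instead of loading every row.
        using System.Data;
        using System.Data.SqlClient;

        class ReportPager
        {
            public static DataSet FillPage(string connStr, int startRecord, int pageSize)
            {
                DataSet ds = new DataSet();
                using (SqlConnection conn = new SqlConnection(connStr))
                using (SqlDataAdapter adapter = new SqlDataAdapter(
                    "SELECT OrderID, Amount, CreatedOn FROM dbo.Orders ORDER BY OrderID", conn))
                {
                    // Fetches pageSize rows starting at startRecord. The adapter still
                    // reads and discards the earlier rows, so for very deep pages a
                    // WHERE-based (keyset) page is cheaper.
                    adapter.Fill(ds, startRecord, pageSize, "Orders");
                }
                return ds;
            }
        }

    Binding the grid (or feeding the report) one page at a time keeps the working set bounded no matter how many years of data the query covers.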

  • SQL server partitioning

    - by durilai
    I have a table with millions of records, and we are looking at implementing table partitioning. The table has a foreign key, "GroupID", that we would like to partition on. Is this possible? The Group table will keep growing, so as new GroupIDs are added, can the partitions be created dynamically?
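
    Partitioning on an int foreign key column is possible; a partition function only needs a partitioning column in the table. A minimal T-SQL sketch (names, boundary values, and filegroups are hypothetical), including the SPLIT step you would run as new GroupID ranges appear:

        -- Partition rows by ranges of GroupID.
        CREATE PARTITION FUNCTION pfGroup (int)
        AS RANGE LEFT FOR VALUES (1000, 2000, 3000);

        CREATE PARTITION SCHEME psGroup
        AS PARTITION pfGroup ALL TO ([PRIMARY]);

        -- Create (or rebuild) the table on the scheme, partitioned by GroupID.
        CREATE TABLE dbo.BigTable
        (
            ID      bigint      NOT NULL,
            GroupID int         NOT NULL,
            Payload varchar(50) NULL
        ) ON psGroup (GroupID);

        -- As new GroupID ranges appear, add a boundary "dynamically":
        ALTER PARTITION SCHEME psGroup NEXT USED [PRIMARY];
        ALTER PARTITION FUNCTION pfGroup() SPLIT RANGE (4000);

    Splitting an empty partition at the end of the range is cheap; splitting one that already holds rows moves data, so it is worth scheduling.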

  • how to optimize a SQL Server table for faster response?

    - by Thomas
    I found a table with 50,000 records where it takes one minute to fetch the data just by issuing a SQL statement. There is a primary key, which means a clustered index already exists. I just don't understand why it takes one minute. Besides indexes, what other ways are there to optimize a table so the data comes back faster? What do I need to do in this situation to get a faster response? Also, how can I always write optimized SQL? Please describe the steps for optimization in detail. Thanks.
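
    As a sketch of the usual first steps (table and column names here are hypothetical): measure what the slow statement actually reads, then index the columns it searches on.

        -- Measure logical reads and elapsed time for the slow statement.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT CustomerID, OrderDate, Total
        FROM dbo.Orders
        WHERE CustomerID = 42;  -- scans the clustered index unless CustomerID is indexed

        -- A nonclustered index on the searched column, covering the selected
        -- columns, turns that scan into a seek.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, Total);

    A 50,000-row table that takes a minute usually also points at something besides the table itself (blocking, network transfer of wide rows, or SELECT *), so the timing output is worth reading before adding indexes.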

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio buttons and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users.

    For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page, something like this:

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) in using the NOLOCK directive here.
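
    One alternative worth benchmarking (a sketch, not a drop-in guarantee): SQL Server 2005's ROW_NUMBER can pick the latest row per variable in a single pass, which pairs naturally with an index on (userid, variable, id):

        -- Latest response per variable for one user, via ROW_NUMBER (SQL Server 2005+).
        SELECT id, variable, value
        FROM (
            SELECT id, variable, value,
                   ROW_NUMBER() OVER (PARTITION BY variable ORDER BY id DESC) AS rn
            FROM dbo.results WITH (NOLOCK)
            WHERE userid = 2111846
              AND variable IN ('internat', 'veteran', 'athlete')
        ) AS latest
        WHERE rn = 1;

    Whether it beats the nested MAX(id) form depends on the data and the indexes, so compare the two plans side by side.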

  • Overloading methods in C#

    - by Craig Johnston
    Is there a way to simplify the process of adding an overloaded method in C# using VS2005? In VB6, I would have just added an Optional parameter to the function, but in C# do I have to type out a whole new method with the new parameter?
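
    For what it's worth, the C# 2.0 compiler in VS2005 has no optional parameters (they arrived in C# 4.0), so the usual pattern is a thin overload that forwards to the full method, leaving a single implementation to maintain. A sketch with hypothetical names:

        // The short overload only supplies the default and forwards, so there
        // is still just one real method body.
        public string FormatName(string name)
        {
            return FormatName(name, false);  // default: no upper-casing
        }

        public string FormatName(string name, bool upperCase)
        {
            return upperCase ? name.ToUpper() : name;
        }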

  • created persisted computed columns when the user-defined scalar function appears to be non-deterministic

    - by Ralph Shillington
    I have a scalar UDF that I know to be deterministic, but SQL Server doesn't. Is there a way to declare it deterministic so that I can then use it in a persisted computed column definition? Further clarification: the purpose of this exercise is that I need to harvest specific values out of an XML column on the row. I can't use the value() method of the xml column directly in my computed column definition, but I can use it in a UDF. I know the XPath query in the value() method will produce the same output given the same input, so while I certainly understand that not all calls to value() are deterministic, I want to assert that mine is.
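
    There is no DETERMINISTIC keyword; SQL Server decides for itself, but it will treat a scalar UDF as deterministic if it is created WITH SCHEMABINDING and passes the engine's checks, and wrapping the value() call that way is the usual route to persisting XML values. A sketch, with hypothetical names and XPath:

        -- Schema-bound wrapper around the xml value() call.
        CREATE FUNCTION dbo.GetOrderNumber (@doc xml)
        RETURNS int
        WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @doc.value('(/Order/Number)[1]', 'int');
        END;
        GO

        -- Check what the engine decided:
        SELECT OBJECTPROPERTY(OBJECT_ID('dbo.GetOrderNumber'), 'IsDeterministic');

        -- If it reports 1, the persisted computed column is allowed:
        ALTER TABLE dbo.Orders
        ADD OrderNumber AS dbo.GetOrderNumber(Payload) PERSISTED;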

  • Why use the INCLUDE clause when creating an index?

    - by Cory
    While studying for the 70-433 exam I noticed you can create a covering index in one of the following two ways:

        CREATE INDEX idx1 ON MyTable (Col1, Col2, Col3)

        -- OR --

        CREATE INDEX idx1 ON MyTable (Col1) INCLUDE (Col2, Col3)

    The INCLUDE clause is new to me. Why would you use it and what guidelines would you suggest in determining whether to create a covering index with or without the INCLUDE clause?
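
    For context (standard SQL Server 2005+ behavior, sketched against the hypothetical MyTable above): included columns are stored only at the leaf level of the index, so they can cover a query without widening the index key or counting against the 16-column/900-byte key limits.

        -- Covered by either index: seek on Col1, then read Col2/Col3 straight
        -- from the index leaf with no bookmark lookup.
        SELECT Col2, Col3
        FROM MyTable
        WHERE Col1 = 42;

        -- Only key columns support seeks and ordering, though: with the INCLUDE
        -- form, a predicate or ORDER BY on Col2 cannot use this index efficiently.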

  • Would this rollback/stop all records from inserting?

    - by chobo2
    Hi, I've been going through this tutorial http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx and made an SP like this:

        CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText)
        AS
        DECLARE @hDoc int

        EXEC sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

        INSERT INTO TBL_TEST_TEST(NAME)
        SELECT XMLProdTable.NAME
        FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
             WITH (
                 ID   int,
                 NAME varchar(100)
             ) XMLProdTable

        EXEC sp_xml_removedocument @hDoc

    Now my requirements call for a mass insert and a mass update, one after the other. So first I am wondering: can I merge those into one SP? I am not sure how it works with OPENXML, but I would think it is just a matter of making sure the XPath is right.

    Next, what happens if something goes wrong while this combined SP is running? Would it roll back all the records, or would it just stop, leaving the records inserted before the crash in place?
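
    On the rollback question: by default each statement runs in its own autocommit transaction, so a failure partway through leaves the earlier statements committed. To make the insert-plus-update pair all-or-nothing, wrap both in an explicit transaction; a sketch using SQL Server 2005's TRY/CATCH (the statement bodies are elided):

        BEGIN TRY
            BEGIN TRANSACTION;

            -- mass insert (the OPENXML INSERT ... SELECT above)
            -- mass update (a second statement against the same document)

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;  -- neither statement's work persists

            DECLARE @msg nvarchar(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);    -- surface the failure to the caller
        END CATCH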

  • SSAS OLAP MDX and relationships

    - by Sonic Soul
    I'm new to OLAP, and still not sure how to create a relationship between two or more entities. I am basing my cube on views. For simplicity's sake let's call them this:

        viewParent (ParentID PK)
        viewChild  (ChildID PK, ParentID FK)

    These views have more fields, but they're not important for this question. In my data source, I defined a relationship between viewParent and viewChild using ParentID for the link. As for measures, I was forced to create separate measures for Parent and Child. In my MDX query, however, the relationship does not seem to be enforced. If I select the record count for parent and child, and add some filters for the parent, the child count does not reflect them:

        SELECT { [Measures].[ParentCount], [Measures].[ChildCount] } ON COLUMNS
        FROM [Cube]
        WHERE {
            (
                {[Time].[Month].&[2011-06-01T00:00:00]},
                {[SomeDimension].&[Foo]}
            )
        }

    The selected ParentCount is correct, but ChildCount is not affected by any of the filters (because they are parent filters). Since I defined a relationship, how can I take advantage of it to filter children by the parent's filters?

  • SQL Analysis Services - Dimension attributes with a "many" cardinality

    - by MonkeyBrother
    I am creating a cube with the following tables:

        Customer      (CustomerID, Name)
        Customer Rep  (CustomerID, RepID)
        Rep           (RepID, Name)

    The important thing here is that there is a many-to-many relationship between Reps and Customers. I want to be able to ask the question "How much in sales for customers working with rep 'A'?" In the data source view I set up the relationships between both CustomerID columns and both RepID columns. I set up the Rep attribute in the dimension builder, and when I try to build the cube I get this error:

        Errors in the high-level relationship engine. The 'Rep' table that is
        required for a join cannot be reached based on the relationships in the
        data source view.

  • LINQ to SQL auto-generated type for stored procedure

    - by StuffHappens
    Hello. I have the following stored procedure:

        ALTER PROCEDURE [dbo].Test
        AS
        BEGIN
            CREATE TABLE ##table
            (
                ID1 int,
                ID2 int
            )

            DECLARE @query varchar(MAX);

            INSERT INTO ##table VALUES(1, 1);

            SELECT * FROM ##table;
        END

    I try to use it from C# code, with LINQ to SQL as the O/RM. When I add the procedure to the DataContext, it says that it can't figure out the return value of this procedure. How do I modify the stored procedure so that I can use it with LINQ to SQL? Note: I need to keep the global temp table!
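
    The designer works out the result shape by calling the procedure with SET FMTONLY ON, and temp-table work tends to defeat that pass. One workaround that is often suggested (a hack, sketched here against the procedure above, so treat it with care) is to force FMTONLY back off inside the procedure; note that the body then genuinely executes during the designer's metadata call:

        ALTER PROCEDURE [dbo].Test
        AS
        BEGIN
            SET FMTONLY OFF;  -- metadata discovery now actually runs the body

            CREATE TABLE ##table (ID1 int, ID2 int)

            INSERT INTO ##table VALUES(1, 1);

            SELECT ID1, ID2 FROM ##table;
        END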

  • Different versions in manifest on different machines

    - by Terry777
    Hi guys, I have two machines, both with VS2005 SP1 installed and with WinSxS showing the same things installed. When one machine builds a particular C++ .dll .vcproj, it ends up with

        <assemblyIdentity type='win32' name='Microsoft.VC80.MFC' version='8.0.50727.762'
                          processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />

    in its manifest file. But on the other machine it ends up with

        <assemblyIdentity type='win32' name='Microsoft.VC80.MFC' version='8.0.50608.0'
                          processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />

    even though this machine does not have 8.0.50608.0 libraries listed in its WinSxS. The .dll built on this machine with the older version referenced has some problems. I have ensured both machines have the same latest source code, references, etc. What could be causing it to build with the different reference? Thanks! Terry

  • Null Value Statement

    - by Sam
    Hi All, I have created a table called table1, and it has 4 columns named Name, ID, Description and Date. I created them like this:

        Name varchar(50) NULL,
        ID int NULL,
        Description varchar(50) NULL,
        Date datetime NULL

    I have inserted a record into table1 with only the ID and Description values, so now table1 looks like this:

        Name    ID    Description    Date
        NULL    1     First          NULL

    One of them asked me to modify the table in such a way that the Name and Date columns hold real NULL values instead of the text 'Null'. I don't know the difference between those. Can anyone explain the difference between these SELECT statements?

        SELECT * FROM TABLE1 WHERE NAME IS NULL
        SELECT * FROM TABLE1 WHERE NAME = 'NULL'
        SELECT * FROM TABLE1 WHERE NAME = ' '
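
    A quick demonstration of what each predicate matches (the inserted rows are hypothetical):

        INSERT INTO table1 (Name, ID) VALUES (NULL,   1);  -- a real NULL
        INSERT INTO table1 (Name, ID) VALUES ('NULL', 2);  -- the 4-character text 'NULL'
        INSERT INTO table1 (Name, ID) VALUES (' ',    3);  -- a blank string

        SELECT * FROM table1 WHERE Name IS NULL;   -- row 1 only: tests for a missing value
        SELECT * FROM table1 WHERE Name = 'NULL';  -- row 2 only: matches the literal text
        SELECT * FROM table1 WHERE Name = ' ';     -- row 3 only: matches the blank string
        -- NULL compared with = yields UNKNOWN, so row 1 never matches an = predicate;
        -- only IS NULL (or IS NOT NULL) finds it.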

  • use of views for validation of an incorrect login-id or an unidentified user

    - by sqlchild
    I read this on MSDN: "Views let different users see data in different ways, even when they are using the same data at the same time. This is especially useful when users who have many different interests and skill levels share the same database. For example, a view can be created that retrieves only the data for the customers with whom an account manager deals. The view can determine which data to retrieve based on the login ID of the account manager who uses the view."

    My question: for the above example, would I have to have a column named UserID/LoginID on the table the view is created on, so that I can apply a check option in the view for this column, and then if a user whose name is not in that column tries to enter data, he/she is blocked?
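
    Roughly, yes: the pattern needs a column tying each row to a login. A sketch (table and column names are hypothetical) that uses SUSER_SNAME() for the caller's login, with WITH CHECK OPTION so the view also rejects rows entered for someone else:

        CREATE VIEW dbo.MyCustomers
        AS
        SELECT CustomerID, Name, AccountManagerLogin
        FROM dbo.Customers
        WHERE AccountManagerLogin = SUSER_SNAME()  -- only the caller's rows are visible
        WITH CHECK OPTION;  -- inserts/updates through the view must satisfy the WHERE

    A login with no rows in the table simply sees nothing through the view and cannot insert rows for other managers; blocking such a user outright is a permissions matter rather than something the view itself does.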

  • Is count(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a SELECT COUNT(*) FROM <table> query and display the number of rows available in each table on the tabs. As a result, each page postback causes 5 COUNT(*) queries to be executed (4 to get the counts and 1 for pagination) plus 1 query to get the report content. Now my question is: are COUNT(*) queries really expensive? Should I keep the row counts (at least those displayed on the tabs) in the page's view state instead of querying multiple times? How expensive are COUNT(*) queries?
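
    COUNT(*) has to scan the narrowest index on the table, so its cost grows with the row count. If an approximate number is acceptable for a tab label, metadata is nearly free; a sketch (the table name is hypothetical, SQL Server 2005+):

        -- Row count from partition metadata; no table scan. The figure can lag
        -- slightly under concurrent inserts/deletes, which a tab label can tolerate.
        SELECT SUM(row_count) AS ApproxRows
        FROM sys.dm_db_partition_stats
        WHERE object_id = OBJECT_ID('dbo.MyTable')
          AND index_id IN (0, 1);  -- 0 = heap, 1 = clustered index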

  • SSRS - Checking whether the data is null

    - by NLV
    Hello, I have the following expression in my report:

        =FormatNumber(MAX(Fields!Reading.Value, "CellReading_Reading"), 3)

    When the dataset is empty, Fields!Reading.Value is empty, and finding its maximum is invalid. How can I check whether the entire column is empty? I tried the following with no luck:

        =IIF(IsNothing(Fields!.Reading.Value), "", FormatNumber(MAX(Fields!Reading.Value, "CellReading_Reading"), 3))

    But I still get #Error in the report. I also checked out link and was not able to get a clue from it. I want to handle it at the report level. Thank you. NLV
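
    A pattern that often works (a sketch in report expression syntax, keeping this report's "CellReading_Reading" dataset name): guard on the dataset's row count rather than on the field value. One caution: IIF evaluates both branches, so if the aggregate branch still errors on an empty dataset, the guard has to move into report custom code.

        =IIF(CountRows("CellReading_Reading") = 0,
             "",
             FormatNumber(MAX(Fields!Reading.Value, "CellReading_Reading"), 3))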

  • How can I pivot these key+values rows into a table of complete entries?

    - by CodexArcanum
    Maybe I demand too much from SQL, but I feel like this should be possible. I start with a list of key-value pairs, like this:

        '0:First, 1:Second, 2:Third, 3:Fourth'

    etc. I can split this up pretty easily with a two-step parse that gets me a table like:

        EntryNumber  PairNumber  Item
        0            0           0
        1            0           First
        2            1           1
        3            1           Second

    etc. Now, in the simple case of splitting the pairs into a pair of columns, it's fairly easy. I'm interested in the more advanced case where I might have multiple values per entry, like:

        '0:First:Fishing, 1:Second:Camping, 2:Third:Hiking'

    and such. In that generic case, I'd like to find a way to take my 3-column result table and somehow pivot it to have one row per entry and one column per value-part. So I want to turn this:

        EntryNumber  PairNumber  Item
        0            0           0
        1            0           First
        2            0           Fishing
        3            1           1
        4            1           Second
        5            1           Camping

    Into this:

        Entry  [1]  [2]      [3]
        0      0    First    Fishing
        1      1    Second   Camping

    Is that just too much for SQL to handle, or is there a way? Pivots (even tricky dynamic pivots) seem like an answer, but I can't figure out how to get that to work.
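
    One way to express this in T-SQL (a sketch: the parsed rows are assumed to sit in a table or CTE named ParsedPairs with the three columns shown above, and three value slots are assumed): derive each item's position within its pair with ROW_NUMBER, then fold the positions into columns with MAX(CASE):

        SELECT
            PairNumber AS Entry,
            MAX(CASE WHEN pos = 1 THEN Item END) AS [1],
            MAX(CASE WHEN pos = 2 THEN Item END) AS [2],
            MAX(CASE WHEN pos = 3 THEN Item END) AS [3]
        FROM (
            SELECT PairNumber, Item,
                   ROW_NUMBER() OVER (PARTITION BY PairNumber ORDER BY EntryNumber) AS pos
            FROM ParsedPairs
        ) AS p
        GROUP BY PairNumber;

    For a variable number of value slots, the same shape can be generated as dynamic SQL, which is what "tricky dynamic pivots" usually come down to.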
