Search Results

Search found 111084 results on 4444 pages for 'linq to sql code generato'.


  • Show me your Linq to SQL architectures!

    - by Brad Heller
    I've been using Linq to SQL for a new implementation that I've been working on. I have about 5000 lines of code and am a little ways from a solid demo. I've been pretty satisfied with Linq to SQL so far -- the tools are excellent and pretty painless, and it allows you to get a DAL up and running quickly. That said, there are some major drawbacks that I just keep hitting over and over again, namely how to handle separation of concerns between my DAL and my business layer, and juggling that with different data contexts.

    Here is the architecture I've been using: my repositories do all my data access and they return Linq to SQL objects. Each of my Linq to SQL objects implements an IDetachable interface. A typical implementation looks like this:

        partial class PaymentDetail : IDetachable
        {
            #region IDetachable Members
            public bool IsAttached
            {
                get { return PropertyChanging != null; }
            }

            public void Detach()
            {
                if (IsAttached)
                {
                    PropertyChanged = null;
                    PropertyChanging = null;
                    Transaction.Detach();
                }
            }
            #endregion
        }

    Every time I do a DAL operation in my repository I "detach" when I'm done with the object (and it should theoretically detach from any child objects) to remove the DataContext's context. Like I said, this works pretty well, but there are some edge cases that seem to be a big pain. For instance, my Transaction object has many PaymentDetails. Even when there are no PaymentDetails in that collection, it's still attached to the DataContext's context! Thus, if I try to update (I update by Attach()ing the object and then calling SubmitChanges()) I get that dreaded "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." message.

    Anyway, I'm starting to doubt that this technology was a good gamble. Has anyone got a decent architecture that they're willing to share? I'd really love to use this technology, but I feel like I spend a third of my time just debugging its quirks!
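
    One pattern that often sidesteps the detach/attach dance entirely is to keep each unit of work inside a single short-lived DataContext, so no entity ever outlives the context that loaded it. A minimal sketch (MyDataContext and the member names here are hypothetical, borrowed from the post):

        // Sketch only: each operation opens its own short-lived context and
        // re-queries inside it instead of attaching a detached entity.
        public void UpdateTransactionAmount(int transactionId, decimal amount)
        {
            using (var db = new MyDataContext())
            {
                // Change tracking works because tx was loaded by this context.
                var tx = db.Transactions.Single(t => t.Id == transactionId);
                tx.Amount = amount;
                db.SubmitChanges(); // no "loaded from another DataContext" error
            }
        }

    The trade-off is an extra round trip to re-read the row, in exchange for never having to reason about which context an entity belongs to.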


  • VB.NET switching from ADO.NET to LINQ

    - by Cj Anderson
    I'm VERY new to Linq. I have an application I wrote in VB.NET 2.0. It works great, but I'd like to switch this application to Linq. I use ADO.NET to load XML into a DataTable. The XML file has about 90,000 records in it. I then use DataTable.Select to perform searches against that DataTable. The search control is a free-form textbox, so if the user types in terms it searches instantly, and any further terms that are typed in continue to restrict the results. You can type in "Bob", or "Bob Barker", or "Bob Barker Price is Right" -- the more criteria typed in, the narrower the result. I bind the results to a GridView.

    Moving forward, what all do I need to do? From a high level, I assume I need to:

    1) Go to Project Properties -> Advanced Compiler Settings and change the target framework from 2.0 to 3.5.
    2) Add a reference to System.Xml.Linq and add the Imports statement to the classes.

    I'm not sure what the best approach is after that. I assume I use XDocument.Load, and then my search subroutine runs against the XDocument. Do I just do the standard Linq query for this sort of repeated search? Like so:

        var people = from phonebook in doc.Root.Elements("phonebook")
                     where phonebook.Element("userid") = "whatever"
                     select phonebook;

    Any tips on how to best implement this?
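
    For the incremental-search part, one workable shape is to load the document once and apply one Where per typed term, mirroring the old DataTable.Select behaviour. A sketch in C# -- the element names "entry" and "name" are assumptions, not the real schema:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        // Load once (e.g. at startup); 90,000 records are fine to keep in memory.
        XDocument doc = XDocument.Load("phonebook.xml");

        // Each typed term further restricts the results.
        IEnumerable<XElement> Search(string searchText)
        {
            IEnumerable<XElement> results = doc.Root.Elements("entry");
            foreach (string term in searchText.Split(
                         new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
            {
                string t = term; // copy for the closure (foreach capture in C# 3)
                results = results.Where(e =>
                    ((string)e.Element("name") ?? string.Empty)
                        .IndexOf(t, StringComparison.OrdinalIgnoreCase) >= 0);
            }
            return results.ToList(); // materialize once, then bind to the GridView
        }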


  • Alternatives to LINQ To SQL on high loaded pages

    - by Alex
    To begin with, I LOVE LINQ TO SQL. It's so much easier to use than direct querying. But there's one great problem: it doesn't work well on highly loaded requests. I have some actions in my ASP.NET MVC project that are called hundreds of times every minute. I used to have LINQ to SQL there, but since the amount of requests is gigantic, LINQ TO SQL almost always returned "Row not found or changed" or "X of X updates failed". And it's understandable. For instance, I have to increase some value by one with every request:

        var stat = DB.Stats.First();
        stat.Visits++;
        // ....
        DB.SubmitChanges();

    But while ASP.NET was working on those //... instructions, the stat.Visits value stored in the table got changed. I found a solution: I created a stored procedure:

        UPDATE Stats SET Visits = Visits + 1

    It works well. Unfortunately now I'm getting more and more moments like that, and it sucks to create stored procedures for all cases. So my question is: how do I solve this problem? Are there any alternatives that can work here? I hear that Stack Overflow works with LINQ to SQL, and it's more loaded than my site.
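
    One middle ground, if the goal is an atomic increment without a stored procedure per case, is DataContext.ExecuteCommand, which issues a parameterized statement through the same data context. A minimal sketch (MyDataContext and StatId are placeholder names):

        // The read-modify-write happens server-side as one statement, so
        // concurrent requests cannot race between the read and the write.
        using (var db = new MyDataContext())
        {
            db.ExecuteCommand(
                "UPDATE Stats SET Visits = Visits + 1 WHERE StatId = {0}",
                statId);
        }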


  • Problem with LINQ query

    - by Niels Bosma
    The following works fine:

        (from e in db.EnquiryAreas
         from w in db.WorkTypes
         where w.HumanId != null && w.SeoPriority > 0 &&
               e.HumanId != null && e.SeoPriority > 0 &&
               db.Enquiries.Where(f => f.WhereId == e.Id &&
                                       f.WhatId == w.Id &&
                                       f.EnquiryPublished != null &&
                                       f.StatusId != EnquiryMethods.STATUS_INACTIVE &&
                                       f.StatusId != EnquiryMethods.STATUS_REMOVED &&
                                       f.StatusId != EnquiryMethods.STATUS_REJECTED &&
                                       f.StatusId != EnquiryMethods.STATUS_ATTEND).Any()
         select new { EnquiryArea = e, WorkType = w });

    But this:

        (from e in db.EnquiryAreas
         from w in db.WorkTypes
         where w.HumanId != null && w.SeoPriority > 0 &&
               e.HumanId != null && e.SeoPriority > 0 &&
               EnquiryMethods.BlockOnSite(
                   db.Enquiries.Where(f => f.WhereId == e.Id && f.WhatId == w.Id)).Any()
         select new { EnquiryArea = e, WorkType = w });

    together with:

        public static IQueryable<Enquiry> BlockOnSite(IQueryable<Enquiry> linq)
        {
            return linq.Where(e => e.EnquiryPublished != null &&
                                   e.StatusId != STATUS_INACTIVE &&
                                   e.StatusId != STATUS_REMOVED &&
                                   e.StatusId != STATUS_REJECTED &&
                                   e.StatusId != STATUS_ATTEND);
        }

    gives me the following error:

        base {System.SystemException}: {"Method 'System.Linq.IQueryable`1[X.Enquiry]
        BlockOnSite(System.Linq.IQueryable`1[X.Enquiry])' has no supported
        translation to SQL."}
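
    The translator cannot look inside an opaque method call that appears within the expression tree. A common workaround is to expose the filter as an Expression<Func<Enquiry, bool>> so it is data the provider can inline rather than a call it must translate. A sketch (whether the provider inlines a static field reference can vary, so hoisting it into a local variable before building the query is the safer form):

        using System;
        using System.Linq.Expressions;

        public static class EnquiryMethods
        {
            // An expression tree, not a compiled method, so LINQ to SQL can
            // splice it into the outer query.
            public static readonly Expression<Func<Enquiry, bool>> BlockOnSiteFilter =
                e => e.EnquiryPublished != null &&
                     e.StatusId != STATUS_INACTIVE &&
                     e.StatusId != STATUS_REMOVED &&
                     e.StatusId != STATUS_REJECTED &&
                     e.StatusId != STATUS_ATTEND;
        }

        // Usage inside the query:
        // db.Enquiries.Where(f => f.WhereId == e.Id && f.WhatId == w.Id)
        //             .Where(EnquiryMethods.BlockOnSiteFilter)
        //             .Any()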


  • Advice on Linq to SQL mapping object design

    - by fearofawhackplanet
    I hope the title and following text are clear; I'm not very familiar with the correct terms, so please correct me if I get anything wrong. I'm using a Linq ORM for the first time and am wondering how to address the following.

    Say I have two DB tables:

        User          Phone
        ----          -----
        Id            Id
        Name          UserId
                      Model

    The Linq code generator produces a bunch of entity classes. I then write my own classes and interfaces which wrap these Linq classes:

        class DatabaseUser : IUser
        {
            public DatabaseUser(User user)
            {
                _user = user;
            }

            public Guid Id
            {
                get { return _user.Id; }
            }
            ... etc
        }

    So far so good. Now, it's easy enough to find a user's phones with Phones.Where(p => p.User == user), but surely consumers of the API shouldn't need to be writing their own Linq queries to get at data, so I should wrap this query in a function or property somewhere.

    So the question is: in this example, would you add a Phones property to IUser or not? In other words, should my interface specifically be modelling my database objects (in which case Phones doesn't belong in IUser), or is it actually simply providing a set of functions and properties which are conceptually associated with a User (in which case it does)? There seem to be drawbacks to both views, but I'm wondering if there is a standard approach to the problem. Or just any general words of wisdom you could share. My first thought was to use extension methods, but in fact that doesn't work in this case.
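
    One middle-ground sketch, for comparison: keep IUser persistence-ignorant and hang the association query off a repository, so consumers get a method without the interface modelling the schema. MyDataContext, IPhone and DatabasePhone (a wrapper analogous to DatabaseUser) are hypothetical names:

        public interface IPhone
        {
            string Model { get; }
        }

        public interface IUserRepository
        {
            IUser GetById(Guid id);
            IEnumerable<IPhone> GetPhones(IUser user); // the wrapped query
        }

        public class UserRepository : IUserRepository
        {
            private readonly MyDataContext _db;

            public UserRepository(MyDataContext db) { _db = db; }

            public IUser GetById(Guid id)
            {
                return new DatabaseUser(_db.Users.Single(u => u.Id == id));
            }

            public IEnumerable<IPhone> GetPhones(IUser user)
            {
                // Query by foreign key, materialize, then wrap each entity.
                return _db.Phones
                          .Where(p => p.UserId == user.Id)
                          .ToList()
                          .Select(p => (IPhone)new DatabasePhone(p))
                          .ToList();
            }
        }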


  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don't store data.

    I've written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it's the topic for T-SQL Tuesday this month, hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him -- it's a great topic.

    In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn't simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes -- never tables -- and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us. But anyway -- that's not the point of this post.

    So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers -- these are all things that define the table, but the data isn't stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It's important. Stop thinking about tables, and start thinking about indexes.

    So let's think about tables as indexes. This applies even in a world created by someone else, who doesn't have the best indexes in mind for you. I'm sure you don't need me to explain the Covering Index bit -- the fact that if you don't have sufficient columns "included" in your index, your query plan will either have to do a Lookup, or else it'll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven't seen that before, drop me a line and I'll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision.

    So -- what I'm going to tell you is that a Lookup is a join. When I run

        SELECT CustomerID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 285;

    against the AdventureWorks2012 database, I get the following plan:

    I'm sure you can see the join. Don't look in the query -- it's not there. But you should be able to see the join in the plan. It's an Inner Join, implemented by a Nested Loop. It's pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is -- the QO wouldn't call it that if it wasn't really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn't have a problem accepting it as a join if the query were slightly different, such as

        SELECT sod.OrderQty
        FROM Sales.SalesOrderHeader AS soh
        JOIN Sales.SalesOrderDetail AS sod
            ON sod.SalesOrderID = soh.SalesOrderID
        WHERE soh.SalesPersonID = 285;

    Amazingly similar, of course. This one is an explicit join, but the first example was just as much a join, even though you didn't actually ask for one. You need to consider this when you're thinking about your queries.

    But it gets more interesting. Consider this query:

        SELECT SalesOrderID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 276
        AND CustomerID = 29522;

    It doesn't look like there's a join here either, but look at the plan.

    That's not some Lookup in action -- that's a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that's indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just like when you seek in the phone book to Farley, the Farleys you have are ordered by FirstName; these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn't explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable.

    Another example is the simple query

        SELECT CustomerID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 276;

    This one prefers a Hash Match to a standard lookup even! This isn't just ordinary index intersection; this is something else again. Just like before, we could imagine it better with two whole tables, but we shouldn't try to distinguish between joining two tables and joining two indexes.

    The Query Optimizer can see (using basic maths) that it's worth doing these particular operations using these two less-than-ideal indexes (because of course, the best index would be one on both columns -- a composite such as (SalesPersonID, CustomerID) -- and it would have the SalesOrderID column as part of it as the CIX key still).

    You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture about how you'd like your queries to run. If you start to think about what data you need, where it's coming from, and how it's going to be used, then you will almost certainly write better queries.

    ...and yes, this would include when you're dealing with regular joins across multiple tables, not just joins within single-table queries.
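
    For reference, the "ideal" index the post alludes to would look something like the sketch below (against AdventureWorks2012; the index name is made up). The clustered index key rides along automatically, which is why SalesOrderID doesn't need to be declared:

        -- Hypothetical composite index covering both predicates; SalesOrderID is
        -- carried implicitly as the clustered index key.
        CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPerson_Customer
        ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);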


  • .Net Windows Service Throws EventType clr20r3 system.data.sqlclient.sql error

    - by William Edmondson
    I have a .NET/C# 2.0 Windows service. The entry point is wrapped in a try/catch block, yet when I look at the server's application event log I see a number of "EventType clr20r3" errors that are causing the service to die unexpectedly. The catch block has a "catch (Exception ex)". Each SQL command is of the type CommandType.StoredProcedure and is executed with SqlDataReaders. These sproc calls function correctly 99% of the time and have all been thoroughly unit tested, profiled, and QA'd. I additionally wrapped these calls in try/catch blocks just to be sure, and am still experiencing these unhandled exceptions. This happens only in our production environment and cannot be duplicated in our dev or staging environments (even under heavy load).

    Why would my error handling not catch this particular error? Is there any way to capture more detail as to the root cause of the problem?

    Here is an example of the event log:

        EventType clr20r3, P1 RDC.OrderProcessorService, P2 1.0.0.0, P3 4ae6a0d0,
        P4 system.data, P5 2.0.0.0, P6 4889deaf, P7 2490, P8 2c,
        P9 system.data.sqlclient.sql, P10 NIL.

    Additionally:

        The Order Processor service terminated unexpectedly. It has done this 1 time(s).
        The following corrective action will be taken in 60000 milliseconds:
        Restart the service.
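
    A try/catch around the entry point cannot see exceptions thrown on other threads (worker threads, timers, thread-pool callbacks), which is the usual way a service dies like this in .NET 2.0. One way to at least log the root cause before the process exits, sketched with C# 2.0 delegate syntax (the event log source name is hypothetical):

        using System;
        using System.Diagnostics;

        // Register early in Main/OnStart: fires for unhandled exceptions on ANY
        // thread, just before the CLR tears the process down.
        AppDomain.CurrentDomain.UnhandledException +=
            delegate(object sender, UnhandledExceptionEventArgs e)
            {
                Exception ex = e.ExceptionObject as Exception;
                EventLog.WriteEntry("OrderProcessorService",
                    ex != null ? ex.ToString() : "Unknown unhandled exception",
                    EventLogEntryType.Error);
            };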


  • Error Handling in T-SQL Scalar Function

    - by hydroparadise
    Ok.. this question could easily take multiple paths, so I will hit the more specific path first. While working with SQL Server 2005, I'm trying to create a scalar function that acts as a 'TryCast' from varchar to int. Where I encounter a problem is when I add a TRY block in the function:

        CREATE FUNCTION u_TryCastInt
        (
            @Value AS VARCHAR(MAX)
        )
        RETURNS INT
        AS
        BEGIN
            DECLARE @Output AS INT

            BEGIN TRY
                SET @Output = CONVERT(INT, @Value)
            END TRY
            BEGIN CATCH
                SET @Output = 0
            END CATCH

            RETURN @Output
        END

    Turns out there's all sorts of things wrong with this statement, including "Invalid use of side-effecting or time-dependent operator in 'BEGIN TRY' within a function" and "Invalid use of side-effecting or time-dependent operator in 'END TRY' within a function". I can't seem to find any examples of using TRY statements within a scalar function, which got me thinking: is error handling in a function possible at all?

    The goal here is to make a robust version of the CONVERT or CAST functions that allows a SELECT statement to carry through despite conversion errors. For example, take the following:

        CREATE TABLE tblTest
        (
            f1 VARCHAR(50)
        )
        GO
        INSERT INTO tblTest(f1) VALUES('1')
        INSERT INTO tblTest(f1) VALUES('2')
        INSERT INTO tblTest(f1) VALUES('3')
        INSERT INTO tblTest(f1) VALUES('f')
        INSERT INTO tblTest(f1) VALUES('5')
        INSERT INTO tblTest(f1) VALUES('1.1')

        SELECT CONVERT(INT, f1) AS f1_num FROM tblTest

        DROP TABLE tblTest

    It never reaches the point of dropping the table, because execution gets hung on trying to convert 'f' to an integer. I want to be able to do something like this:

        SELECT u_TryCastInt(f1) AS f1_num FROM tblTest

        f1_num
        ______
        1
        2
        3
        0
        5
        0

    Any thoughts on this? Is there anything that exists that handles this? Also, I would like to try and expand the conversation to support SQL Server 2000, since TRY blocks are not an option in that scenario. Thanks in advance.
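
    TRY/CATCH is indeed disallowed inside functions, but the cast can be guarded by validating the string first instead of catching the failure. A sketch that works on both 2000 and 2005 -- deliberately strict (digits only, no sign or decimals, capped at 9 digits to dodge overflow), so anything else falls back to 0:

        CREATE FUNCTION u_TryCastInt
        (
            @Value VARCHAR(8000)  -- VARCHAR(MAX) is unavailable on SQL Server 2000
        )
        RETURNS INT
        AS
        BEGIN
            -- Convert only when the input is purely digits; 'f', '1.1', '' and
            -- overlong strings all fall through to 0.
            IF @Value NOT LIKE '%[^0-9]%' AND LEN(@Value) BETWEEN 1 AND 9
                RETURN CONVERT(INT, @Value)
            RETURN 0
        END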


  • Complex SQL Query similar to a z order problem

    - by AaronLS
    I have a complex SQL problem in MS SQL Server, and in drawing on a piece of paper I realized that I could think of it as a single bar filled with rectangles, each rectangle having segments with different Z orders. In reality it has nothing to do with Z order or graphics at all, but more to do with some complex business rules that would be difficult to explain. However, if anyone has ideas on how to solve the below, that will give me my solution.

    I have the following data:

        ObjectID, PercentOfBar, ZOrder (where smaller is closer)
        A, 100, 6
        B, 50, 5
        B, 50, 4
        C, 30, 3
        C, 70, 6

    The result of my query that I want is this, in any order:

        PercentOfBar, ZOrder
        50, 5
        20, 4
        30, 3

    Think of it like this: if I drew rectangle A, it would fill 100% of the bar and have a Z order of 6.

        66666666666
        AAAAAAAAAAA

    If I then laid out rectangle B, consisting of two segments, both segments would cover up rectangle A, resulting in the following rendering:

        4444455555
        BBBBBBBBBB

    As a rule of thumb, for a given rectangle, its segments should be laid out such that the highest Z order is to the right of the lower Z orders. Finally, rectangle C would cover up only portions of rectangle B with its 30% segment that is Z order 3, which would be on the left. You can hopefully see how this is represented in the output dataset I listed above:

        3334455555
        CCCBBBBBBB

    Now to make things more complicated, I actually have a fourth column, such that this grouping occurs for each key.

    Input:

        SomeKey, ObjectID, PercentOfBar, ZOrder (where smaller is closer)
        X, A, 100, 6
        X, B, 50, 5
        X, B, 50, 4
        X, C, 30, 3
        X, C, 70, 6
        Y, A, 100, 6
        Z, B, 50, 2
        Z, B, 50, 6
        Z, C, 100, 5

    Output:

        SomeKey, PercentOfBar, ZOrder
        X, 50, 5
        X, 20, 4
        X, 30, 3
        Y, 100, 6
        Z, 50, 2
        Z, 50, 5

    Notice in the output that the PercentOfBar for each SomeKey adds up to 100%. This is one I know I'm going to be thinking about when I go to bed tonight. Just to be explicit and have a question: what would be a query that would produce the results described above?
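
    One way to attack it: compute each segment's left edge, split each bar at every edge into elementary intervals, let the smallest ZOrder win each interval, then total the widths per winner. A sketch, assuming a table named dbo.BarSegments with the four columns above and SQL Server 2012+ for the windowed SUM and LEAD (on 2005 the running sum would need a self-join instead):

        WITH Placed AS (
            -- Left edge of each segment: within a rectangle, segments run
            -- left-to-right in ascending ZOrder (highest Z order on the right).
            SELECT SomeKey, ObjectID, ZOrder, PercentOfBar,
                   SUM(PercentOfBar) OVER (PARTITION BY SomeKey, ObjectID
                                           ORDER BY ZOrder
                                           ROWS UNBOUNDED PRECEDING)
                   - PercentOfBar AS LeftEdge
            FROM dbo.BarSegments
        ),
        Edges AS (
            -- Every segment boundary, per key (UNION dedupes)
            SELECT SomeKey, LeftEdge AS Pos FROM Placed
            UNION
            SELECT SomeKey, LeftEdge + PercentOfBar FROM Placed
        ),
        Intervals AS (
            -- Elementary intervals between consecutive boundaries
            SELECT SomeKey, Pos AS Lo,
                   LEAD(Pos) OVER (PARTITION BY SomeKey ORDER BY Pos) AS Hi
            FROM Edges
        ),
        Winners AS (
            -- The closest (smallest) ZOrder among segments covering each interval
            SELECT i.SomeKey, i.Hi - i.Lo AS Width, MIN(p.ZOrder) AS ZOrder
            FROM Intervals i
            JOIN Placed p
              ON p.SomeKey = i.SomeKey
             AND p.LeftEdge < i.Hi
             AND p.LeftEdge + p.PercentOfBar > i.Lo
            WHERE i.Hi IS NOT NULL
            GROUP BY i.SomeKey, i.Lo, i.Hi
        )
        SELECT SomeKey, SUM(Width) AS PercentOfBar, ZOrder
        FROM Winners
        GROUP BY SomeKey, ZOrder;

    Walking the X key through by hand: the edges are {0, 30, 50, 100}; the intervals win Z orders 3, 4 and 5 with widths 30, 20 and 50, matching the expected output.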


  • Atomic UPSERT in SQL Server 2005

    - by rabidpebble
    What is the correct pattern for doing an atomic "UPSERT" (UPDATE where exists, INSERT otherwise) in SQL Server 2005?

    I see a lot of code on SO (e.g. see http://stackoverflow.com/questions/639854/tsql-check-if-a-row-exists-otherwise-insert) with the following two-part pattern:

        UPDATE ...
        FROM ...
        WHERE <condition>

        -- race condition risk here
        IF @@ROWCOUNT = 0
            INSERT ...

    or

        IF (SELECT COUNT(*) FROM ... WHERE <condition>) = 0 -- race condition risk here
            INSERT ...
        ELSE
            UPDATE ...

    where <condition> will be an evaluation of natural keys. None of the above approaches seem to deal well with concurrency. If I cannot have two rows with the same natural key, it seems like all of the above risk inserting rows with the same natural keys in race condition scenarios.

    I have been using the following approach, but I'm surprised not to see it anywhere in people's responses, so I'm wondering what is wrong with it:

        INSERT INTO <table>
        SELECT <natural keys>, <other stuff...>
        FROM <table>
        WHERE NOT EXISTS -- race condition risk here?
        (
            SELECT 1 FROM <table> WHERE <natural keys>
        )

        UPDATE ...
        WHERE <natural keys>

    (Note: I'm assuming that rows will not be deleted from this table. Although it would be nice to discuss how to handle the case where they can be deleted -- are transactions the only option? Which level of isolation?)

    Is this atomic? I can't locate where this would be documented in SQL Server documentation.
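
    For reference, the pattern most often cited for 2005 (which has no MERGE yet) takes key-range locks up front, so the existence check and the write happen under the same lock. A sketch with placeholder table and column names:

        -- UPDLOCK takes the lock on the read; HOLDLOCK (serializable semantics)
        -- keeps a range lock on the key even when no row exists yet, so a
        -- concurrent upsert of the same key must wait.
        BEGIN TRAN;

        UPDATE dbo.MyTable WITH (UPDLOCK, HOLDLOCK)
        SET OtherStuff = @OtherStuff
        WHERE NaturalKey = @NaturalKey;

        IF @@ROWCOUNT = 0
            INSERT INTO dbo.MyTable (NaturalKey, OtherStuff)
            VALUES (@NaturalKey, @OtherStuff);

        COMMIT;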


  • Fastest way to remove non-numeric characters from a VARCHAR in SQL Server

    - by Dan Herbert
    I'm writing an import utility that is using phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes and parentheses and possibly other things. I wrote a function to remove these things; the problem is that it is slow, and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index.

    I tried using the script from this post: http://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters

    But that didn't speed it up any. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast.

    Update: Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data, and I'm still taking a performance hit with a very small set of data (about 2000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unnecessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
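
    The usual T-SQL shape for the cleanup is a PATINDEX/STUFF loop; pairing it with a persisted, indexed computed column means the stripping cost is paid once per write instead of once per comparison. A sketch (table and column names are placeholders):

        -- Strip every non-digit character.
        CREATE FUNCTION dbo.StripNonNumeric (@Input VARCHAR(50))
        RETURNS VARCHAR(50)
        WITH SCHEMABINDING   -- required for use in a persisted computed column
        AS
        BEGIN
            WHILE PATINDEX('%[^0-9]%', @Input) > 0
                SET @Input = STUFF(@Input, PATINDEX('%[^0-9]%', @Input), 1, '');
            RETURN @Input;
        END

        -- Pay the cleanup cost once per row, then compare via the index:
        ALTER TABLE dbo.Customers
            ADD PhoneDigits AS dbo.StripNonNumeric(Phone) PERSISTED;
        CREATE INDEX IX_Customers_PhoneDigits ON dbo.Customers (PhoneDigits);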


  • Can I select 0 columns in SQL Server?

    - by Woody Zenfell III
    I am hoping this question fares a little better than the similar "Create a table without columns". Yes, I am asking about something that will strike most as pointlessly academic.

    It is easy to produce a SELECT result with 0 rows (but with columns), e.g.:

        SELECT a = 1 WHERE 1 = 0

    Is it possible to produce a SELECT result with 0 columns (but with rows)? e.g. something like:

        SELECT NO COLUMNS FROM Foo  -- (This is not valid T-SQL.)

    I came across this because I wanted to insert several rows without specifying any column data for any of them, e.g. (SQL Server 2005):

        CREATE TABLE Bar (id INT NOT NULL IDENTITY PRIMARY KEY)

        INSERT INTO Bar
        SELECT NO COLUMNS FROM Foo
        -- Invalid column name 'NO'.
        -- An explicit value for the identity column in table 'Bar' can only be
        -- specified when a column list is used and IDENTITY_INSERT is ON.

    One can insert a single row without specifying any column data, e.g. INSERT INTO Foo DEFAULT VALUES. One can query for a count of rows (without retrieving actual column data from the table), e.g. SELECT COUNT(*) FROM Foo. (But that result set, of course, has a column.)

    I tried things like

        INSERT INTO Bar ()
        SELECT * FROM Foo
        -- Parameters supplied for object 'Bar' which is not a function.
        -- If the parameters are intended as a table hint, a WITH keyword is required.

    and

        INSERT INTO Bar DEFAULT VALUES
        SELECT * FROM Foo
        -- which is a standalone INSERT statement followed by a standalone SELECT statement.

    I can do what I need to do a different way, but the apparent lack of consistency in support for degenerate cases surprises me. I read through the relevant sections of BOL and didn't see anything. I was surprised to come up with nothing via Google either.


  • How to convert SQLite usage to simple SQL Server commands in C#

    - by Nasser Hajloo
    I want to get started with the DayPilot control. I do not use SQLite, and this control is documented based on SQLite. I want to use SQL Server instead of SQLite, so if you can, please do this for me. The main site with samples: http://www.daypilot.org/calendar-tutorial.html

    The database contains a single table with the following structure:

        CREATE TABLE event (
            id VARCHAR(50),
            name VARCHAR(50),
            eventstart DATETIME,
            eventend DATETIME);

    Loading events:

        private DataTable dbGetEvents(DateTime start, int days)
        {
            SQLiteDataAdapter da = new SQLiteDataAdapter(
                "SELECT [id], [name], [eventstart], [eventend] FROM [event] " +
                "WHERE NOT (([eventend] <= @start) OR ([eventstart] >= @end))",
                ConfigurationManager.ConnectionStrings["db"].ConnectionString);
            da.SelectCommand.Parameters.AddWithValue("start", start);
            da.SelectCommand.Parameters.AddWithValue("end", start.AddDays(days));
            DataTable dt = new DataTable();
            da.Fill(dt);
            return dt;
        }

    Update:

        private void dbUpdateEvent(string id, DateTime start, DateTime end)
        {
            using (SQLiteConnection con = new SQLiteConnection(
                ConfigurationManager.ConnectionStrings["db"].ConnectionString))
            {
                con.Open();
                SQLiteCommand cmd = new SQLiteCommand(
                    "UPDATE [event] SET [eventstart] = @start, [eventend] = @end " +
                    "WHERE [id] = @id", con);
                cmd.Parameters.AddWithValue("id", id);
                cmd.Parameters.AddWithValue("start", start);
                cmd.Parameters.AddWithValue("end", end);
                cmd.ExecuteNonQuery();
            }
        }
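
    The translation to SQL Server is mostly mechanical: swap each SQLite* class for its System.Data.SqlClient counterpart and prefix parameter names with @. A sketch of the same two methods, keeping the connection string name "db" (the connection string itself would need to point at a SQL Server database containing the event table):

        using System;
        using System.Configuration;
        using System.Data;
        using System.Data.SqlClient;

        private DataTable dbGetEvents(DateTime start, int days)
        {
            SqlDataAdapter da = new SqlDataAdapter(
                "SELECT [id], [name], [eventstart], [eventend] FROM [event] " +
                "WHERE NOT (([eventend] <= @start) OR ([eventstart] >= @end))",
                ConfigurationManager.ConnectionStrings["db"].ConnectionString);
            da.SelectCommand.Parameters.AddWithValue("@start", start);
            da.SelectCommand.Parameters.AddWithValue("@end", start.AddDays(days));
            DataTable dt = new DataTable();
            da.Fill(dt);
            return dt;
        }

        private void dbUpdateEvent(string id, DateTime start, DateTime end)
        {
            using (SqlConnection con = new SqlConnection(
                ConfigurationManager.ConnectionStrings["db"].ConnectionString))
            {
                con.Open();
                SqlCommand cmd = new SqlCommand(
                    "UPDATE [event] SET [eventstart] = @start, [eventend] = @end " +
                    "WHERE [id] = @id", con);
                cmd.Parameters.AddWithValue("@id", id);
                cmd.Parameters.AddWithValue("@start", start);
                cmd.Parameters.AddWithValue("@end", end);
                cmd.ExecuteNonQuery();
            }
        }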


  • Web Apps for Source Code Discussion

    - by Wilco
    Are there any web apps that allow for source code collaboration? I'm thinking of something that could look at an SVN repo/local folder/etc. and publish the code with support for threaded discussions under each file or class. Ideally I want to find something that I could deploy/host myself, so being based in PHP would be a huge plus.


  • format ugly c# source code

    - by Fred F.
    I found a C# game at http://www.codeproject.com/KB/game/BattleField.aspx that does what I need to learn. The source code is not formatted well and is hard to follow. I used Visual Studio's Format Document command, but the formatting is still bad. How do I reformat the source code to make it easier to read?


  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type BACKUPIO. Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7 MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit Ethernet and nowhere near saturated.

    The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive (which I'm not), but it doesn't say -- or I don't understand -- exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005

    And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 ("Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance.")

    Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
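
    For raw numbers while a backup runs, the wait-stats DMV can at least show whether BACKUPIO dominates its sibling backup waits; a quick diagnostic sketch:

        -- Compare BACKUPIO against the other backup-related waits. A large
        -- BACKUPBUFFER share often points at buffer throughput (BUFFERCOUNT /
        -- MAXTRANSFERSIZE tuning) rather than raw disk speed.
        SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD')
        ORDER BY wait_time_ms DESC;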


  • integer Max value constants in SQL Server T-SQL?

    - by AaronLS
    Are there any constants in T-SQL, like there are in some other languages, that provide the max and min value ranges of data types such as int?

    I have a code table where each row has an upper and lower range column, and I need an entry that represents a range where the upper range is the maximum value an int can hold (sort of like a hackish infinity). I would prefer not to hard-code it, and instead use something like:

        SET UpperRange = int.Max
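
    There is no such named constant in T-SQL, so the usual fallbacks are a hard-coded literal or declaring it once in a central place. A sketch:

        -- Max int is 2^31 - 1; T-SQL has no built-in name for it.
        DECLARE @IntMax INT = 2147483647;   -- inline initializer needs SQL Server 2008+

        -- On SQL Server 2005 and earlier:
        DECLARE @IntMax2 INT;
        SET @IntMax2 = 2147483647;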


  • SQL Server 2005 Reporting Services and the Report Viewer

    - by Kendra
    I am having an issue embedding my report into an aspx page. Here's my setup:

        1 server running SQL Server 2005 and SQL Server 2005 Reporting Services
        1 workstation running XP and VS 2005

    The server is not on a domain. Reporting Services is a default installation. I have one report called TestMe in a folder called TestReports, using a shared data source.

    If I view the report in Report Manager, it renders fine. If I view the report using the http://myserver/reportserver URL, it renders fine. If I view the report using http://myserver/reportserver?/TestReports/TestMe, it renders fine. If I try to view the report using http://myserver/reportserver/TestReports/TestMe, it just goes to the folder navigation page of the home directory.

    My web application is impersonating somebody specific to get around the server not being on a domain. When I call the report from the ReportViewer using http://myserver/reportserver as the server and /TestReports/TestMe as the path, I get this error:

        For security reasons DTD is prohibited in this XML document. To enable DTD
        processing set the ProhibitDtd property on XmlReaderSettings to false and
        pass the settings into XmlReader.Create method.

    When I change the server to http://myserver/reportserver? I get this error when I run the report:

        Client found response content type of '', but expected 'text/xml'.
        The request failed with an empty response.

    I have been searching for a while and haven't found anything that fixes my issue. Please let me know if there is more information needed. Thanks in advance, Kendra
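
    For reference, the server-mode ReportViewer settings usually look like the sketch below: ReportServerUrl takes the bare /reportserver endpoint with no trailing "?". The DTD error is typically the viewer receiving an HTML error page where it expected the SOAP XML, so the impersonated account's access to the report server is worth checking first.

        // Sketch: server-mode ReportViewer configuration (Microsoft.Reporting.WebForms).
        ReportViewer1.ProcessingMode = ProcessingMode.Remote;
        ReportViewer1.ServerReport.ReportServerUrl =
            new Uri("http://myserver/reportserver");   // bare endpoint, no trailing '?'
        ReportViewer1.ServerReport.ReportPath = "/TestReports/TestMe";

        // For a non-domain server, the web ReportViewer needs a custom
        // IReportServerCredentials implementation assigned to
        // ReportViewer1.ServerReport.ReportServerCredentials.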


  • SQL Query slow in .NET application but instantaneous in SQL Server Management Studio

    - by user203882
    Here is the SQL:

        SELECT tal.TrustAccountValue
        FROM TrustAccountLog AS tal
        INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
        INNER JOIN Users usr ON usr.UserID = ta.UserID
        WHERE usr.UserID = 70402
          AND ta.TrustAccountID = 117249
          AND tal.trustaccountlogid =
          (
              SELECT MAX(tal.trustaccountlogid)
              FROM TrustAccountLog AS tal
              INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
              INNER JOIN Users usr ON usr.UserID = ta.UserID
              WHERE usr.UserID = 70402
                AND ta.TrustAccountID = 117249
                AND tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
          )

    Basically there is a Users table, a TrustAccount table and a TrustAccountLog table. Users contains users and their details; a User can have multiple TrustAccounts; TrustAccountLog contains an audit of all TrustAccount "movements", and a TrustAccount is associated with multiple TrustAccountLog entries.

    Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and even times out (120 s) sometimes. Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared.

        cmd.CommandTimeout = Configuration.DBTimeout;
        cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 AND tal.trustaccountlogid = (SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 AND tal.TrustAccountLogDate < @TrustAccountLogDate2))";
        cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
        cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
        cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
        cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
        cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value = TrustAccountLogDate;

        // And then...
        reader = cmd.ExecuteReader();
        if (reader.Read())
        {
            double value = (double)reader.GetValue(0);
            if (System.Double.IsNaN(value))
                return 0;
            else
                return value;
        }
        else
        {
            return 0;
        }
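
    The classic explanation for "fast in SSMS, slow from SqlClient" is that the two sessions get different cached plans: SSMS runs with SET ARITHABORT ON by default while SqlClient does not, so each arm compiles its own plan, and the app's plan can be poisoned by parameter sniffing. A quick way to test that theory from the app (a sketch):

        // If matching SSMS's session setting makes the app fast, the problem is a
        // bad cached plan (parameter sniffing), not the query itself.
        using (SqlCommand setCmd = new SqlCommand("SET ARITHABORT ON", connection))
        {
            setCmd.ExecuteNonQuery();  // applies to this connection's session
        }
        // ...then run the original query on the same connection as before.
        // A longer-term fix is usually OPTION (RECOMPILE) or an OPTIMIZE FOR
        // hint on the query, rather than relying on ARITHABORT.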


  • Code generation (based on templates) for Cocoa

    - by Vikas
    Hi, I have written a library for iPhone which is based upon some object models (whose definitions I get via XML). Now I have one implementation for a sample model ready, but to make the code library generic I want to write an application where I can templatize the code and provide placeholders for the data-model-specific points. Is there any tool available for Xcode that enables me to do this? In Java, "Velocity" does this job for me. Regards, Vikas


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows.

    I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.

    So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you.
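
    For reference, spreading a hot filegroup across several files is plain DDL; a sketch with made-up names, paths and sizes:

        -- Add a filegroup with multiple data files (names/paths are placeholders).
        ALTER DATABASE MyDb ADD FILEGROUP HotTables;

        ALTER DATABASE MyDb ADD FILE
            (NAME = HotTables1, FILENAME = 'E:\Data\MyDb_Hot1.ndf', SIZE = 10GB),
            (NAME = HotTables2, FILENAME = 'F:\Data\MyDb_Hot2.ndf', SIZE = 10GB)
        TO FILEGROUP HotTables;

        -- Move a hot table by rebuilding its clustered index onto the filegroup:
        CREATE UNIQUE CLUSTERED INDEX PK_MyHotTable ON dbo.MyHotTable (Id)
            WITH (DROP_EXISTING = ON)
            ON HotTables;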


  • SQL Where Clause Against View

    - by Adam Carr
    I have a view (actually, it's a table-valued function, but the observed behavior is the same in both) that inner joins and left outer joins several other tables. When I query this view with a WHERE clause similar to

        SELECT * FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'

    the query is very slow, but if I do the following:

        SELECT * FROM [v_MyView] WHERE [ID] IN
        (
            SELECT [ID] FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'
        )

    it is MUCH faster. The first query takes at least 2 minutes to return; the second query returns in less than 5 seconds.

    Any suggestions on how I can improve this? If I run the whole command as one SQL statement (without the use of a view) it is very fast as well. I believe this result has to do with how a view should behave as a table: if a view has OUTER JOINs, GROUP BYs or TOP ##, the results could differ depending on whether the WHERE clause is applied before or after the view executes. My question is: why wouldn't SQL Server optimize my first query into something as efficient as my second query?
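
    One detail worth checking, since the poster says it's really a table-valued function: a multi-statement TVF is an opaque box that the optimizer materializes in full before the outer WHERE applies, whereas an inline TVF is expanded like a view and lets predicates push down into the joins. A sketch of the inline form (table and column names hypothetical):

        -- Inline TVF: a single RETURN (SELECT ...). The optimizer merges it into
        -- the outer query, so WHERE [Name] LIKE ... can reach the base tables.
        CREATE FUNCTION dbo.fn_MyView ()
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT t1.ID, t1.Name, t2.OtherColumn
            FROM dbo.Table1 AS t1
            LEFT OUTER JOIN dbo.Table2 AS t2 ON t2.Table1ID = t1.ID
        );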


  • Filestream in Sql Server 2008 Express

    - by Xaitec
    I tried to get it to work, but I never seem to have any luck. I got a code snippet from a blog and still no dice. This is the code:

        EXEC sp_configure filestream_access_level, 1
        GO
        RECONFIGURE
        GO

        CREATE DATABASE NorthPole
        ON PRIMARY
            ( NAME = NorthPoleDB, FILENAME = 'C:\Temp\NP\NorthPoleDB.mdf' ),
        FILEGROUP NorthPoleFS CONTAINS FILESTREAM
            ( NAME = NorthPoleFS, FILENAME = 'C:\Temp\NP\NorthPoleFS' )
        LOG ON
            ( NAME = NorthPoleLOG, FILENAME = 'C:\Temp\NP\NorthPoleLOG.ldf' )
        GO
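
    One common gotcha worth ruling out here: sp_configure only sets the instance's access level. FILESTREAM must first be enabled at the Windows service level (SQL Server Configuration Manager -> the SQL Server service -> FILESTREAM tab) before the CREATE DATABASE will succeed. Afterwards, something like this can verify the setting:

        -- 0 = disabled, 1 = T-SQL access, 2 = T-SQL + Win32 streaming access
        EXEC sp_configure 'filestream access level';

        -- The effective level as SQL Server sees it (reflects the service-level
        -- setting as well as sp_configure):
        SELECT SERVERPROPERTY('FilestreamEffectiveLevel') AS EffectiveLevel;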


  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article "Clustered Index Structures" (among other things), but I am still unsure if I understand it correctly.

    The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index?

    The above-mentioned MSDN article states:

        The pages in the data chain and the rows in them are ordered on the value
        of the clustered index key.

    And "Using Clustered Indexes", for example, states:

        For example, if a record is added to the table that is close to the
        beginning of the sequentially ordered list, any records in the table after
        that record will need to shift to allow the record to be inserted.

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no?

    Or is it rather (as I suspect) that there are two scenarios depending on how "full" the first data page is. A) If the page has enough free space to accommodate the record, it is placed into the existing data page and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record, a new data page would be created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-tree?

    This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly").

    How close am I? :)
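
    That page-split behavior is exactly what logical fragmentation measures: pages linked in key order but out of physical order. A way to watch it happen (a sketch; the table name is a placeholder):

        -- avg_fragmentation_in_percent rises as inserts split pages and the
        -- logical (linked-list) order diverges from the physical page order.
        SELECT index_id, index_level, avg_fragmentation_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED');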

