Search Results

Search found 111084 results on 4444 pages for 'linq to sql code generato'.

Page 63 of 4444

  • SQL Server: Is it possible to prevent SQL Agent from failing a step on error?

    - by Kenneth
    I have a stored procedure that runs custom backups for around 60 SQL Servers (a mix of 2000 through 2008 R2). Occasionally, due to issues outside of my control (backup device inaccessible, network error, etc.) an individual backup on one or two databases will fail. This causes the entire step to fail, which means any subsequent backup commands are not executed and half of the databases on a given server may not be backed up. On the 2005+ boxes I am using TRY/CATCH blocks to manage these problems and continue backing up the remaining databases. On a 2000 server, however, I have no way to prevent an error like this from failing the entire step:

        Msg 3201, Level 16, State 1, Line 1
        Cannot open backup device 'db-diff(\PATH\DB-DIFF-03-16-2010.DIF)'. Operating system error 5 (Access is denied.).
        Msg 3013, Level 16, State 1, Line 1
        BACKUP DATABASE is terminating abnormally.

    I am simply asking if anything like TRY/CATCH is possible in SQL 2000? I realize there are no built-in methods for this, so I guess I am looking for some creativity. Even when wrapping each backup (or any failing statement) via sp_executesql, the job fails instantly. Example:

        DECLARE @x INT, @iReturn INT
        PRINT 'Executing statement that will fail with 208.'
        EXEC @iReturn = sp_executesql N'SELECT * from TABLETHATDOESNTEXIST;'
        PRINT Cast(@iReturn AS NVARCHAR) -- In SSMS this return code prints. Executed as a job it fails and aborts before this statement.

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED ([ContainerId] ASC, [BinId] ASC, [Sequence] ASC))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The % Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I: drop the primary key while I am doing the inserting and recreate it later; do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small; anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
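
    Since SqlBulkCopy is already in play, a couple of its options are worth experimenting with. Below is a minimal sketch, assuming a DataTable already shaped like BulkData; the connection string, table name, and batch size are placeholders rather than values from the question:

        using System.Data;
        using System.Data.SqlClient;

        class BulkLoader
        {
            static void Load(DataTable rows)
            {
                using (var conn = new SqlConnection("Data Source=(local);Initial Catalog=MyDb;Integrated Security=SSPI"))
                {
                    conn.Open();
                    // TableLock takes a bulk update lock instead of row locks,
                    // which usually helps when nothing else queries the table.
                    using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock, null))
                    {
                        bulk.DestinationTableName = "BulkData";
                        bulk.BatchSize = 5000;      // send far more than 300 rows per round trip
                        bulk.BulkCopyTimeout = 0;   // no timeout for large loads
                        bulk.WriteToServer(rows);   // rows pre-sorted by the clustered key
                    }
                }
            }
        }

    Larger chunks per WriteToServer call, combined with the table lock, tend to reduce per-batch overhead; whether it beats dropping and rebuilding the clustered index is something only a test on the real data can answer.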

    Read the article

  • How should I use BIT in MS SQL 2005

    - by adopilot
    Regarding SQL performance: I have a scalar-valued function for checking a specific condition in the database. It returns a BIT value, true or false. I do not know how I should fill the @bit parameter. If I write:

        set @bit = convert(bit, 1)
        set @bit = 1
        set @bit = 'true'

    the function will work either way, but I do not know which method is recommended for daily use. Another question: I have a table in my database with around 4 million records, and the daily insert is about 4K records into that table. Now I want to add a CONSTRAINT on that table with the scalar-valued function that I mentioned already, something like this:

        ALTER TABLE fin_stavke
        ADD CONSTRAINT fin_stavke_knjizenje CHECK (dbo.fn_ado_chk_fin(id) = convert(bit, 1))

    where the field "id" is the primary key of table fin_stavke and dbo.fn_ado_chk_fin looks like:

        create FUNCTION fn_ado_chk_fin (@stavka_id int)
        RETURNS bit
        AS
        BEGIN
            declare @bit bit
            if exists (select * from fin_stavke where id = @stavka_id and doc_id is null and protocol_id is null)
            begin
                set @bit = 0
            end
            else
            begin
                set @bit = 1
            end
            return @bit;
        END
        GO

    Will this type and method of checking constraint affect performance badly on my table and SQL in general? If there is also a better way to add this control on this table, please let me know.

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table, containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit null,
            BitField2 bit null,
            ...
            BitFieldN bit null
        )

    to:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit not null,
            BitField2 bit not null,
            ...
            BitFieldN bit not null
        )

        alter table MyTable add constraint DF_BitField1 default 0 for BitField1
        alter table MyTable add constraint DF_BitField2 default 0 for BitField2
        alter table MyTable add constraint DF_BitField3 default 0 for BitField3

    So I've just gone in through SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what - when I try to update it, SQL Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form:

        update MyTable set BitField1 = 0 where BitField1 is null
        update MyTable set BitField2 = 0 where BitField2 is null

    but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
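
    One way around the manual work, sketched below under the assumption that the same change is wanted in every database and that the column list can be written out (or read from sys.columns): a small C# script that backfills the nulls, flips each column to NOT NULL, and then adds the default. The connection strings and column names here are placeholders, not taken from the question:

        using System.Data.SqlClient;

        class MakeBitFieldsNotNull
        {
            static void Main()
            {
                // Placeholder connection strings and column names - substitute the
                // real databases and the full BitField1..BitFieldN list.
                string[] databases =
                {
                    "Server=.;Database=Db1;Integrated Security=SSPI",
                    "Server=.;Database=Db2;Integrated Security=SSPI"
                };
                string[] columns = { "BitField1", "BitField2", "BitField3" };

                foreach (string cs in databases)
                {
                    using (SqlConnection conn = new SqlConnection(cs))
                    {
                        conn.Open();
                        foreach (string col in columns)
                        {
                            // Backfill nulls first, then tighten the column, then add the default.
                            string sql =
                                "UPDATE MyTable SET [" + col + "] = 0 WHERE [" + col + "] IS NULL; " +
                                "ALTER TABLE MyTable ALTER COLUMN [" + col + "] bit NOT NULL; " +
                                "ALTER TABLE MyTable ADD CONSTRAINT DF_" + col + " DEFAULT 0 FOR [" + col + "];";
                            using (SqlCommand cmd = new SqlCommand(sql, conn))
                            {
                                cmd.ExecuteNonQuery();
                            }
                        }
                    }
                }
            }
        }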

    Read the article

  • SQL Server Blocking Issue

    - by Robin Weston
    We currently have an issue that occurs roughly once a day on a SQL 2005 database server, although the time it happens is not consistent. Basically, the database grinds to a halt and starts refusing connections with the following error message (this includes logging into SSMS):

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)

    Our CPU usage for SQL is usually around 15%, but when the DB is in its broken state it's around 70%, so it's clearly doing something, even if no one can connect. Even if I disable the web app that uses the database the CPU still doesn't go down. I am unable to restart the SQLSERVER process as it is unresponsive, so I end up killing the process manually, which then puts the DB into Suspect/Recovery mode (which I can fix, but it's a pain). Below are some PerfMon stats I gathered when the DB was in its broken state which might help. I have a bunch more if people want to request them:

        Active Transactions: 2 (Never Changes)
        Logical Connections: 34 (NC)
        Processes Blocked: 16 (NC)
        User Connections: 30 (NC)
        Batch Requests: 0 (NC)
        Active Jobs: 2 (NC)
        Log Truncations: 596 (NC)
        Log Shrinks: 24 (NC)
        Longest Running Transaction Time: 99 (NC)

    I guess the key is finding out what the DB is spending its CPU on, but as I can't even log into SSMS this isn't possible with the standard methods. Disturbingly, I can't even use the dedicated admin connection to get into SSMS; I get the same timeout as with all other requests. Any advice, recommendations, or even sympathy, is much appreciated!

    Read the article

  • After Delete Trigger Fires Only After Delete?

    - by Brandi
    I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation... I made 3, nearly identical SQL CLR after delete triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it. By stopped working, I mean, records could not be deleted from the table via client software. Disabling the trigger caused deletes to be allowed, but re-enabling it interfered with the ability to delete. So my question is 'how can this be the case?' Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is AFTER delete, shouldn't the records be gone? All the trigger looks like is this: ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger] GO The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.

    Read the article

  • Getting a linq table to be dynamically sent to a method

    - by Damian Spaulding
    I have a query:

        var Edit = (from R in Linq.Products where R.ID == RecordID select R).Single();

    in which I would like to make "Linq.Products" dynamic. Something like:

        protected void Page_Load(object sender, EventArgs e)
        {
            something(Linq.Products);
        }

        public void something(Object MyObject)
        {
            System.Data.Linq.Table<Product> Dynamic = (System.Data.Linq.Table<Product>)MyObject;
            var Edit = (from R in Dynamic where R.ID == RecordID select R).Single();
        }

    My problem is that my "something" method will not be able to know what table has been sent to it, so the static line:

        System.Data.Linq.Table<Product> Dynamic = (System.Data.Linq.Table<Product>)MyObject;

    would have to reflect something like:

        System.Data.Linq.Table<T> Dynamic = (System.Data.Linq.Table<T>)MyObject;

    with T being a dynamic catch-all variable, so that LINQ can just execute the code exactly as if I had hand-coded it statically. I have been pulling my hair out with this one. Please help.
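
    A sketch of one common way to keep the method generic without losing LINQ to SQL translation: let the caller pass the key comparison as a typed lambda, so the method never needs to know which table it received. The names below are mine, not from the question:

        using System;
        using System.Data.Linq;
        using System.Linq;
        using System.Linq.Expressions;

        public static class TableHelper
        {
            // Works for any LINQ to SQL Table<TEntity>; the predicate is translated
            // to SQL exactly as if it had been written against the table directly.
            public static TEntity GetSingle<TEntity>(
                Table<TEntity> table,
                Expression<Func<TEntity, bool>> predicate) where TEntity : class
            {
                return table.Single(predicate);
            }
        }

        // Usage from Page_Load, for example:
        // var edit = TableHelper.GetSingle(Linq.Products, r => r.ID == RecordID);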

    Read the article

  • LINQ/LAMBDA filter query by date [on hold]

    - by inquisitive_one
    I'm trying to use LINQ to SQL to retrieve earnings data for a particular date range. Currently the table is set up as follows:

        Comp  Eps   Year  Quarter
        IBM   .5    2012  2
        IBM   .65   2012  3
        IBM   .60   2012  4
        IBM   .5    2011  2
        IBM   .7    2013  1
        IBM   .8    2013  2

    Except for Eps, all fields have a data type of string or char. Eps has a data type of double. Here's my code:

        var myData = myTable
            .Where(t => t.Comp.Equals("IBM")
                && Convert.ToInt32(string.Format("{0}{1}", t.Year, t.Quarter)) <= 20131);

    I get the following error when I try that code:

        Method 'System.String Format(System.String, System.Object, System.Object)' has no supported translation to SQL

    How can I select all Eps that have a year and quarter less than "20132" using a lambda expression?
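
    One workaround that usually translates, sketched under the assumption that Year and Quarter always hold numeric text: build the comparable value arithmetically instead of with string.Format, since Convert.ToInt32 on a string column does have a SQL translation. EarningsRow below is a stand-in name for the generated entity behind myTable:

        using System;
        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;

        static class EarningsQueries
        {
            public static List<EarningsRow> EarningsUpTo(Table<EarningsRow> myTable, int cutoff)
            {
                // e.g. cutoff = 20131 means "2013 Q1 or earlier"
                return myTable
                    .Where(t => t.Comp == "IBM"
                        && Convert.ToInt32(t.Year) * 10 + Convert.ToInt32(t.Quarter) <= cutoff)
                    .ToList();
            }
        }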

    Read the article

  • SQL Group by Minute - Expanded

    - by Barnie
    I am working on something similar to this post here: TS SQL - group by minute. However, mine is pulling from a message queue, and I need to see an accurate count of the amount of traffic the message queue is creating/sending, and at what time:

        Select * From MessageQueue mq

    My expanded version of this is the following:

    A) The user defines a start time and an end time (easy enough using DECLARE @StartTime and @EndTime).
    B) Give the user the option of choosing the "grouping": will it be broken out by 1 minute, 5 minutes, 15 minutes, or 30 minutes (max)? (I had thought to do this with a CASE statement, but my test queries fall apart on me.)
    C) Display the data to accurately show a count of what happened during the interval (grouping) selected.

    This is where I am at so far:

        DECLARE @StartTime datetime
        DECLARE @EndTime datetime

        SELECT DATEPART(n, mq.cre_date)/5 as Time  -- Trying to just sort by 5 minute intervals
            ,CONVERT(VARCHAR(10), mq.Cre_Date, 101)
            ,COUNT(*) as results
        FROM dbo.MessageQueue mq
        WHERE mq.cre_date BETWEEN @StartTime AND @EndTime
        GROUP BY DATEPART(n, mq.cre_date)/5  -- Trying to just sort by 5 minute intervals
            , mq.Cre_Date

    This is the output I would like to achieve:

        [Time]  [Date]       [Message Count]
        1300    06/26/2012   5
        1305    06/26/2012   1
        1310    06/26/2012   100
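
    Since most of the entries on this page are LINQ to SQL, here is the same bucketing idea expressed as a LINQ query - a sketch only, assuming a MessageQueue table mapped with a Cre_Date column on a DataContext named MyDataContext (both names are assumptions); Date, Hour, and Minute on DateTime columns are translatable by LINQ to SQL:

        using System;
        using System.Linq;

        static class QueueTraffic
        {
            public static void CountByFiveMinutes(MyDataContext dc, DateTime startTime, DateTime endTime)
            {
                var counts = dc.MessageQueues
                    .Where(m => m.Cre_Date >= startTime && m.Cre_Date < endTime)
                    .GroupBy(m => new
                    {
                        Day = m.Cre_Date.Date,
                        Time = m.Cre_Date.Hour * 100 + (m.Cre_Date.Minute / 5) * 5  // 1300, 1305, 1310, ...
                    })
                    .Select(g => new { g.Key.Day, g.Key.Time, MessageCount = g.Count() })
                    .OrderBy(x => x.Day).ThenBy(x => x.Time)
                    .ToList();

                foreach (var row in counts)
                    Console.WriteLine("{0:0000}  {1:MM/dd/yyyy}  {2}", row.Time, row.Day, row.MessageCount);
            }
        }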

    Read the article

  • SQL Server 2005: Top 1 * for a unique column?

    - by Echilon
    I have data in a table (below), and I need to select the most recent update from each user. Here the data has been sorted by date, so what I want is the 'SomeData' value from the most recent row for each user. TOP 1 SomeData isn't going to work because it will only return one row for one user. Is this even possible using only SQL?

        Date       SomeData  User
        ...
        8/5/2010   2.2       UserC
        4/5/2010   1.1       UserA
        3/5/2010   9.4       UserB
        1/5/2010   3.7       UserA
        1/5/2010   6.1       UserB
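
    In T-SQL this is usually done with ROW_NUMBER() OVER (PARTITION BY [User] ORDER BY [Date] DESC) and filtering on row number 1. If LINQ (to SQL or to objects) is an option, the equivalent grouping pattern is sketched below; Row is a stand-in for whatever type holds Date, SomeData, and User:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class LatestPerUser
        {
            // Group by user, then take the newest row in each group.
            public static List<Row> Latest(IEnumerable<Row> rows)
            {
                return rows
                    .GroupBy(r => r.User)
                    .Select(g => g.OrderByDescending(r => r.Date).First())
                    .ToList();
            }
        }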

    Read the article

  • How to move the files of a replicated database (SQL Server 2008 R2) to a different drive

    - by ileon
    I would appreciate it if someone could help me with the following problem: we use two SQL Server 2008 R2 databases under transactional replication (transactional publication with updatable subscriptions). Because we ran out of disk space, we need to move the database files onto a new drive, but I don't want to break the replication. What I'm looking for are the required steps that will help me move the files to the new drive. Thanks

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

    - Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services
    - ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format
    - SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types

    Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

    - US Census Bureau Website, http://www.census.gov/geo/www/tiger/
    - Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/
    - Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/

    In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations to embed a map in a report.

    Preparing the spatial data

    First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.

    Step 1: Using the Map Wizard to Create a Map of France

    You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores.
    Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France:

    - On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next.
    - On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data.
    - On the Choose a connection to a SQL Server spatial data source page, select New.
    - In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial
    - Click OK and then click Next.
    - On the Design a query page, add a query for the country shape, like this: select * from fra_adm1
    - Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (from Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature.
    - Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later.
    - Select Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next.
    - On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish.
    - Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a Map Layer to an Existing Map

    The map data region allows you to add multiple layers. Each layer is associated with a different data set. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add in another layer for the store locations by following these steps:

    - If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it.
    - Click on the New Layer Wizard button in the Map Layers window. And then we start over again with the process by choosing a spatial data source.
    - Select SQL Server spatial query, and click Next.
    - Select Add a new dataset with SQL Server spatial data, and click Next.
    - Click New, add a connection string to the AdventureWorks2008R2 database, and click Next.
    - Add a query with spatial data (like the one I included in the downloadable project), and click Next.
    - The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly.
    - Select Embed map data in this report, and click Next.
    - On the Choose map visualization page, select Basic Marker Map, and click Next.
    - On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day.
    - Click Finish and then click Preview to see the rendered report.

    Et voilà... c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.

    Read the article

  • XDocument + IEnumerable is causing out of memory exception in System.Xml.Linq.dll

    - by Manatherin
    Basically I have a program which, when it starts, loads a list of files (as FileInfo) and for each file in the list it loads an XML document (as XDocument). The program then reads data out of it into a container class (storing as IEnumerables), at which point the XDocument goes out of scope. The program then exports the data from the container class to a database. After the export the container class goes out of scope; however, the garbage collector isn't clearing up the container class which, because it's storing as IEnumerable, seems to lead to the XDocument staying in memory (not sure if this is the reason, but the task manager is showing the memory from the XDocument isn't being freed). As the program is looping through multiple files, eventually the program throws an out of memory exception. To mitigate this I've ended up using System.GC.Collect(); to force the garbage collector to run after the container goes out of scope. This is working, but my questions are: Is this the right thing to do? (Forcing the garbage collector to run seems a bit odd.) Is there a better way to make sure the XDocument memory is being disposed? Could there be a different reason, other than the IEnumerable, that the document memory isn't being freed? Thanks.

    Edit: Code samples.

    Container class:

        public IEnumerable<CustomClassOne> CustomClassOne { get; set; }
        public IEnumerable<CustomClassTwo> CustomClassTwo { get; set; }
        public IEnumerable<CustomClassThree> CustomClassThree { get; set; }
        ...
        public IEnumerable<CustomClassNine> CustomClassNine { get; set; }

    Custom class:

        public long VariableOne { get; set; }
        public int VariableTwo { get; set; }
        public DateTime VariableThree { get; set; }
        ...

    Anyway, that's the basic structure really. The custom classes are populated through the container class from the XML document. The filled structures themselves use very little memory. A container class is filled from one XML document, goes out of scope, and the next document is then loaded, e.g.:

        public static void ExportAll(IEnumerable<FileInfo> files)
        {
            foreach (FileInfo file in files)
            {
                ExportFile(file);
                //Temporary to clear memory
                System.GC.Collect();
            }
        }

        private static void ExportFile(FileInfo file)
        {
            ContainerClass containerClass = Reader.ReadXMLDocument(file);
            ExportContainerClass(containerClass);
            //Export simply dumps the data from the container class into a database
            //Container class (and any passed container classes) goes out of scope at end of export
        }

        public static ContainerClass ReadXMLDocument(FileInfo fileToRead)
        {
            XDocument document = GetXDocument(fileToRead);
            var containerClass = new ContainerClass();
            //ForEach customClass in containerClass
            //Read all data for customClass from XDocument
            return containerClass;
        }

    Forgot to mention this bit (not sure if it's relevant): the files can be compressed as .gz, so I have the GetXDocument() method to load them:

        private static XDocument GetXDocument(FileInfo fileToRead)
        {
            XDocument document;
            using (FileStream fileStream = new FileStream(fileToRead.FullName, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                if (String.Compare(fileToRead.Extension, ".gz", true) == 0)
                {
                    using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress))
                    {
                        document = XDocument.Load(zipStream);
                    }
                }
                else
                {
                    document = XDocument.Load(fileStream);
                }
                return document;
            }
        }

    Hope this is enough information. Thanks.

    Edit: The System.GC.Collect() is not working 100% of the time; sometimes the program seems to retain the XDocument. Does anyone have any idea why this might be?
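
    One likely culprit, offered as a guess since the reading code is elided: if the IEnumerables assigned to the container are lazy LINQ-to-XML queries, each one keeps a reference to the XDocument alive until it is enumerated, so the document can never be collected while the container lives. Materializing the queries inside ReadXMLDocument breaks that link. A sketch of the idea, to slot into ReadXMLDocument from the question (CustomClassOne and the element names are placeholders):

        // Materialize each query so the container holds plain lists rather than
        // deferred queries that capture the XDocument.
        containerClass.CustomClassOne =
            (from r in document.Descendants("RECORD")
             select new CustomClassOne
             {
                 VariableOne = (long)r.Element("VariableOne"),
                 VariableTwo = (int)r.Element("VariableTwo")
             }).ToList();   // ToList() severs the reference back to the XDocument

    With every property filled this way, the XDocument becomes unreachable as soon as ReadXMLDocument returns, and the explicit GC.Collect() call should no longer be needed.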

    Read the article

  • C#: Fill DataGridView From Anonymous Linq Query

    - by mdvaldosta
    // From my form

        BindingSource bs = new BindingSource();

        private void fillStudentGrid()
        {
            bs.DataSource = Admin.GetStudents();
            dgViewStudents.DataSource = bs;
        }

    // From the Admin class

        public static List<Student> GetStudents()
        {
            DojoDBDataContext conn = new DojoDBDataContext();
            var query = (from s in conn.Students
                         select new Student
                         {
                             ID = s.ID,
                             FirstName = s.FirstName,
                             LastName = s.LastName,
                             Belt = s.Belt
                         }).ToList();
            return query;
        }

    I'm trying to fill a DataGridView control in WinForms, and I only want a few of the values. The code compiles, but throws a runtime error:

        Explicit construction of entity type 'DojoManagement.Student' in query is not allowed.

    Is there a way to get it working in this manner?
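
    The usual way around that error is to project into a type that is not a mapped entity - either an anonymous type or a small display class. A sketch is below, where StudentView is an assumed helper class rather than part of the poster's project:

        public class StudentView
        {
            public int ID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Belt { get; set; }
        }

        public static List<StudentView> GetStudents()
        {
            using (var conn = new DojoDBDataContext())
            {
                // Projecting into a non-entity type avoids the
                // "explicit construction of entity type" restriction.
                return (from s in conn.Students
                        select new StudentView
                        {
                            ID = s.ID,
                            FirstName = s.FirstName,
                            LastName = s.LastName,
                            Belt = s.Belt
                        }).ToList();
            }
        }

    The List<StudentView> binds to the BindingSource and DataGridView exactly as before.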

    Read the article

  • Using LINQ Group By to return new XElements

    - by Jon
    I have the following code and got myself confused: I have a query that returns a set of records that have been identified as duplicates, and I then want to create an XElement for each one. This should be done in one query I think, but I'm now lost.

        var f = (from x in MyDocument.Descendants("RECORD")
                 where itemsThatWasDuplicated.Contains((int)x.Element("DOCUMENTID"))
                 group x by x.Element("DOCUMENTID").Value into g
                 let item = g.Skip(1) //Ignore first as that is the valid one
                 select item);

        var errorQuery = (from x in f
                          let sequenceNumber = x.Element("DOCUMENTID").Value
                          let detail = "Sequence number " + sequenceNumber + " was read more than once"
                          select new XElement("ERROR",
                              new XElement("DATETIME", time),
                              new XElement("DETAIL", detail),
                              new XAttribute("TYPE", "DUP"),
                              new XElement("ID", x.Element("ID").Value)));
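
    One way the two steps might be folded into a single query, offered as a sketch that keeps the names from the question (time, itemsThatWasDuplicated, MyDocument): group first, then flatten everything after the first record in each group into ERROR elements.

        var errors =
            from g in MyDocument.Descendants("RECORD")
                .Where(x => itemsThatWasDuplicated.Contains((int)x.Element("DOCUMENTID")))
                .GroupBy(x => x.Element("DOCUMENTID").Value)
            from dup in g.Skip(1)   // every record after the first is a duplicate
            select new XElement("ERROR",
                new XAttribute("TYPE", "DUP"),
                new XElement("DATETIME", time),
                new XElement("DETAIL", "Sequence number " + g.Key + " was read more than once"),
                new XElement("ID", dup.Element("ID").Value));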

    Read the article

  • Build XML document using Linq To XML

    - by JasonDR
    Given the following code:

        string xml = ""; //alternatively: string xml = "<people />";
        XDocument xDoc = null;
        if (!string.IsNullOrEmpty(xml))
        {
            xDoc = XDocument.Parse(xml);
            xDoc.Element("people").Add(
                new XElement("person", "p 1")
            );
        }
        else
        {
            xDoc = new XDocument();
            xDoc.Add(new XElement("people",
                new XElement("person", "p 1")
            ));
        }

    As you can see, if the xml variable is blank, I need to create the root node manually and append the person to the root node, whereas if it is not, I simply add to the people element. My question is, is there any way to generically create the document, where it will add all referenced nodes automatically if they do not already exist?
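
    There is no built-in "create missing ancestors" switch in LINQ to XML, but a small get-or-create helper removes the branching. A sketch (the method name and shape are mine, not from the question):

        using System.Xml.Linq;

        static class XmlBuilder
        {
            // Parse the input if it has content, otherwise start an empty document,
            // then make sure the named root element exists before adding children.
            public static XDocument AddPerson(string xml, string personName)
            {
                XDocument xDoc = string.IsNullOrEmpty(xml) ? new XDocument() : XDocument.Parse(xml);

                XElement root = xDoc.Element("people");
                if (root == null)
                {
                    root = new XElement("people");
                    xDoc.Add(root);
                }

                root.Add(new XElement("person", personName));
                return xDoc;
            }
        }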

    Read the article

  • LLBLGen: LINQ to LLBLGen doesn't work

    - by StreamT
    I want to run a custom select against a database table using LINQ. We use LLBLGen as our ORM solution. I can't run a LINQ query against an entity collection class unless I call its GetMulti(null) method first. Is it possible to run a LINQ query against LLBLGen without extracting the whole table first?

        BatchCollection batches = new BatchCollection();
        BatchEntity batch = batches.AsQueryable()
            .Where(i => i.RegisterID == 3)
            .FirstOrDefault();
        // Exception: sequence doesn't contain any elements

        batches = new BatchCollection();
        batches.GetMulti(null); // I don't want to extract the whole table.
        batch = batches.AsQueryable()
            .Where(i => i.RegisterID == 3)
            .FirstOrDefault();
        // Works fine

    Read the article

  • Linq to sql Error "Data at the root level is invalid " for nullable xml column

    - by Mukesh
    I have created a table like this:

        CREATE TABLE [dbo].[tab1](
            [Id] [int] NOT NULL,
            [Name] [varchar](100) NOT NULL,
            [Meta] [xml] NULL,
            CONSTRAINT [PK_tab1] PRIMARY KEY CLUSTERED ([Id] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

    When I run a LINQ to SQL query to fetch the data, it throws the error "Data at the root level is invalid". On further investigation I came to know that the Meta column is null in that case (it really is nullable). Do I have to remove the nullability and set some blank root node as a default, or is there another way to get rid of the error? My LINQ query which throws the error:

        var obj1 = (from obj in dbContext.tab1s
                    where obj.id == 123
                    select obj).FirstOrDefault<Tab1>();

    Read the article

  • Chain LINQ IQueryable, and end with Stored Procedure

    - by Alex
    I'm chaining search criteria in my application through IQueryable extension methods, e.g.:

        public static IQueryable<Fish> AtAge(this IQueryable<Fish> fish, Int32 age)
        {
            return fish.Where(f => f.Age == age);
        }

    However, I also have a full text search stored procedure:

        CREATE PROCEDURE [dbo].[Fishes_FullTextSearch]
            @searchtext nvarchar(4000),
            @limitcount int
        AS
        SELECT Fishes.*
        FROM Fishes
        INNER JOIN CONTAINSTABLE(Fishes, *, @searchtext, @limitcount) AS KEY_TBL
            ON Fishes.Id = KEY_TBL.[KEY]
        ORDER BY KEY_TBL.[Rank]

    The stored procedure obviously doesn't return IQueryable; however, is it possible to somehow limit the result set for the stored procedure using IQueryables? I'm envisioning something like .AtAge(5).AboveWeight(100).Fishes_FulltextSearch("abc"). In this case, the full text search should execute on a smaller subset of my Fishes table (narrowed by Age and Weight). Is something like this possible? Sample code?
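
    A stored procedure can't be composed into an IQueryable chain server-side, but an inline table-valued function can. A sketch of that route, assuming the procedure is rewritten as a TVF (the dbo.Fishes_FullTextSearchFn name and the context class name are assumptions) and mapped on the DataContext with IsComposable, which lets LINQ to SQL fold later Where/OrderBy calls into the same SQL statement:

        using System.Data.Linq;
        using System.Data.Linq.Mapping;
        using System.Linq;
        using System.Reflection;

        public partial class FishDataContext   // the generated LINQ to SQL context
        {
            [Function(Name = "dbo.Fishes_FullTextSearchFn", IsComposable = true)]
            public IQueryable<Fish> FullTextSearch(
                [Parameter(DbType = "NVarChar(4000)")] string searchText,
                [Parameter(DbType = "Int")] int limitCount)
            {
                return CreateMethodCallQuery<Fish>(
                    this, (MethodInfo)MethodInfo.GetCurrentMethod(), searchText, limitCount);
            }
        }

        // Usage: composes with the existing extension methods.
        // var fish = db.FullTextSearch("abc", 100).AtAge(5).AboveWeight(100).ToList();

    Note the filters end up applied to the TVF's result set in a single query; whether the full-text join itself runs against the narrowed subset depends on how the optimizer handles the CONTAINSTABLE join.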

    Read the article

  • LINQ to SQL: DataContext.SubmitChanges not updating immediately

    - by aximili
    I have a funny problem. Doing DataContext.SubmitChanges() updates Count() in one way but not in the other; see my comment in the code below (DC is the DataContext).

        var compliances = c.DataCompliances.Where(x => x.ComplianceCriteria.FKElement == e.Id);

        if (compliances.Count() == 0) // Insert if not exists
        {
            DC.DataCompliances.InsertOnSubmit(new DataCompliance
            {
                FKCompany = c.Id,
                FKComplianceCriteria = criteria.Id
            });
            DC.SubmitChanges();

            compliances = c.DataCompliances.Where(x => x.ComplianceCriteria.FKElement == e.Id);
            // At this point DC.DataCompliances.Count() has increased,
            // but compliances.Count() is still 0
            // When I refresh the page however, it will be 1
        }

    Why does that happen? I need to update compliances after inserting one. Does anyone have a solution?
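
    A plausible explanation, guessing from the shapes of the names: c.DataCompliances is an already-loaded EntitySet on the company object, and InsertOnSubmit on the table does not add the new row to that in-memory collection. Adding the child through the parent's association keeps both sides in sync; a sketch using the names from the question:

        // Add through the association instead of (or as well as) the table;
        // the EntitySet reflects it immediately, and SubmitChanges still
        // sends the INSERT.
        var compliance = new DataCompliance
        {
            FKCompany = c.Id,
            FKComplianceCriteria = criteria.Id
        };

        c.DataCompliances.Add(compliance);   // updates the in-memory EntitySet right away
        DC.SubmitChanges();

        var compliances = c.DataCompliances
            .Where(x => x.ComplianceCriteria.FKElement == e.Id);
        // compliances.Count() now includes the new row without a page refresh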

    Read the article

  • Difference in LINQ to SQL query performance using GenericRepository

    - by Neil
    Given I have a class like this in my data layer:

        public class GenericRepository<TEntity> where TEntity : class
        {
            [System.ComponentModel.DataObjectMethod(System.ComponentModel.DataObjectMethodType.Select)]
            public IQueryable<TEntity> SelectAll()
            {
                return DataContext.GetTable<TEntity>();
            }
        }

    I would be able to query a table in my database like this from a higher layer:

        using (GenericRepository<MyTable> mytable = new GenericRepository<MyTable>())
        {
            var myresult = from m in mytable.SelectAll()
                           where m.IsActive
                           select m;
        }

    Is this considerably slower than using the usual code in my data layer?

        using (MyDataContext ctx = new MyDataContext())
        {
            var myresult = from m in ctx.MyTable
                           where m.IsActive
                           select m;
        }

    Eliminating the need to write simple single-table selects in the data layer saves a lot of time, but will I regret it?
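
    A quick way to check this for yourself, sketched under the assumption that the repository wraps the same LINQ to SQL context: compare the generated SQL for both query shapes with DataContext.GetCommand. Because both return IQueryable and nothing runs until enumeration, the Where clause is composed into the translated SQL either way.

        using System;
        using System.Linq;

        using (var ctx = new MyDataContext())
        {
            // What the repository does under the covers: GetTable<T>() plus a deferred Where.
            IQueryable<MyTable> viaGetTable = ctx.GetTable<MyTable>().Where(m => m.IsActive);
            IQueryable<MyTable> direct = ctx.MyTable.Where(m => m.IsActive);

            // Both should print the same SELECT ... WHERE [t0].[IsActive] = 1 statement.
            Console.WriteLine(ctx.GetCommand(viaGetTable).CommandText);
            Console.WriteLine(ctx.GetCommand(direct).CommandText);
        }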

    Read the article

  • Conversion of Linq expressions

    - by Arnis L.
    I'm not sure exactly how to articulate what I'm trying to achieve, so I wrote some code instead:

        public class Foo
        {
            public Bar Bar { get; set; }
        }

        public class Bar
        {
            public string Fizz { get; set; }
        }

        public class Facts
        {
            [Fact]
            public void fact()
            {
                Assert.Equal(expectedExp(), barToFoo(barExp()));
            }

            private Expression<Func<Foo, bool>> expectedExp()
            {
                return f => f.Bar.Fizz == "fizz";
            }

            private Expression<Func<Bar, bool>> barExp()
            {
                return b => b.Fizz == "fizz";
            }

            private Expression<Func<Foo, bool>> barToFoo(Expression<Func<Bar, bool>> barExp)
            {
                return Voodoo(barExp); //<-------------------------------------------???
            }
        }

    Is this even possible?
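
    It is possible by rebuilding the lambda around a new parameter: substitute the Bar parameter with f.Bar and re-wrap the body. A sketch of the "Voodoo" step using ExpressionVisitor (available since .NET 4); the helper classes and names below are mine, not from the question:

        using System;
        using System.Linq.Expressions;

        static class ExpressionRebinder
        {
            public static Expression<Func<Foo, bool>> BarToFoo(Expression<Func<Bar, bool>> barExp)
            {
                ParameterExpression fooParam = Expression.Parameter(typeof(Foo), "f");
                Expression barAccess = Expression.Property(fooParam, "Bar");

                // Replace every use of the original Bar parameter with f.Bar.
                Expression body = new ReplaceVisitor(barExp.Parameters[0], barAccess).Visit(barExp.Body);
                return Expression.Lambda<Func<Foo, bool>>(body, fooParam);
            }

            private class ReplaceVisitor : ExpressionVisitor
            {
                private readonly Expression _from;
                private readonly Expression _to;

                public ReplaceVisitor(Expression from, Expression to)
                {
                    _from = from;
                    _to = to;
                }

                public override Expression Visit(Expression node)
                {
                    return node == _from ? _to : base.Visit(node);
                }
            }
        }

    Note that comparing the two expression trees with Assert.Equal will compare references, not structure, so the test as written would still need a structural comparison even once the conversion works.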

    Read the article

  • Comparing 2 linq applications: Unexpected result

    - by lukesky
    I drafted 2 ASP.NET applications using LINQ. One connects to MS SQL Server, the other to a proprietary in-memory structure. Both applications work with tables of 3 int fields, having 500,000 records (the memory structure is identical to the SQL Server table). The controls used are regular: GridView and ObjectDataSource. In the applications I calculate the average time needed to process each paging click. The LINQ + MS SQL application needs 0.1 sec per page change. The LINQ + memory structure application needs 0.8 sec per page change. This is a shocking result. Why does the application handling data in memory work 8 times slower than the application using the hard drive? Can anybody tell me why that happens?

    Read the article

  • Linq query with Array in where clause?

    - by Matt Dell
    I have searched for this, but still can't seem to get this to work for me. I have an array of IDs associated with a user (their organization IDs). These are placed in an int[] as follows:

        int[] OrgIds = (from oh in this.Database.OrganizationsHierarchies
                        join o in this.Database.Organizations on oh.OrganizationsId equals o.Id
                        where (oh.Hierarchy.Contains(@OrgId))
                            || (oh.OrganizationsId == Id)
                        select o.Id).ToArray();

    The code there isn't very important, but it shows that I am getting an integer array from a LINQ query. From this, though, I want to run another LINQ query that gets a list of personnel. That code is as follows:

        List<Personnel> query = (from p in this.Database.Personnels
                                 where (search the array)
                                 select p).ToList();

    I want to add in the where clause a way to select only the users with the organization IDs in the array. So, in SQL I would do something like "where OrganizationId = '12' or OrganizationId = '13' or OrganizationId = '17'". Can I do this fairly easily in LINQ / .NET?
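
    The usual approach is Contains on the array, which LINQ to SQL translates into an IN clause. A sketch using the names from the question (the OrganizationId property name is assumed from the SQL shown, not from the posted code):

        // OrgIds.Contains(...) in the where clause becomes
        // WHERE OrganizationId IN (12, 13, 17, ...) in the generated SQL.
        List<Personnel> query = (from p in this.Database.Personnels
                                 where OrgIds.Contains(p.OrganizationId)
                                 select p).ToList();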

    Read the article
