Search Results

Search found 28052 results on 1123 pages for 't sql tuesday'.

Page 599/1123 | < Previous Page | 595 596 597 598 599 600 601 602 603 604 605 606  | Next Page >

  • Order by ascending or descending based on a condition

    - by Vinni
    Hello all, I want to write a LINQ to Entities query that orders by ascending or descending based on an input parameter. Is there any way to do that? Following is my code; please suggest.

        public List<Hosters_HostingProviderDetail> GetPendingApproval(SortOrder sortOrder)
        {
            List<Hosters_HostingProviderDetail> returnList = new List<Hosters_HostingProviderDetail>();
            int pendingStateId = Convert.ToInt32(State.Pending);

            // If the sort order is ascending
            if (sortOrder == SortOrder.ASC)
            {
                var hosters = from e in context.Hosters_HostingProviderDetail
                              where e.ActiveStatusID == pendingStateId
                              orderby e.HostingProviderName ascending
                              select e;
                returnList = hosters.ToList<Hosters_HostingProviderDetail>();
                return returnList;
            }
            else
            {
                var hosters = from e in context.Hosters_HostingProviderDetail
                              where e.StateID == pendingStateId
                              orderby e.HostingProviderName descending
                              select e;
                returnList = hosters.ToList<Hosters_HostingProviderDetail>();
                return returnList;
            }
        }
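    One way to avoid duplicating the query is to build the filtered query once and apply the ordering conditionally. A minimal sketch against the poster's entity names (it assumes both branches are meant to filter on the same status column; the original filters on ActiveStatusID in one branch and StateID in the other):

        public List<Hosters_HostingProviderDetail> GetPendingApproval(SortOrder sortOrder)
        {
            int pendingStateId = Convert.ToInt32(State.Pending);

            // Build the common part of the query once.
            IQueryable<Hosters_HostingProviderDetail> query =
                context.Hosters_HostingProviderDetail
                       .Where(e => e.ActiveStatusID == pendingStateId);

            // Apply whichever ordering matches the input parameter.
            query = sortOrder == SortOrder.ASC
                ? query.OrderBy(e => e.HostingProviderName)
                : query.OrderByDescending(e => e.HostingProviderName);

            return query.ToList();
        }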

    Read the article

  • What is the proper way to create a recursive entity in the Entity Framework?

    - by Orion Adrian
    I'm currently using VS 2010 RC, and I'm trying to create a model that contains a recursive, self-referencing entity. Currently, when I import the entity from the model, I get an error indicating that the parent property cannot be part of the association because it's set to 'Computed' or 'Identity', though I'm not sure why it does it that way. I've been hand-editing the file to get around that error, but then the model simply doesn't work. What is the proper way to get recursive entities to work in the Entity Framework?

        CREATE TABLE [dbo].[Appointments](
            [AppointmentId] [int] IDENTITY(1,1) NOT NULL,
            [Description] [nvarchar](1024) NULL,
            [Start] [datetime] NOT NULL,
            [End] [datetime] NOT NULL,
            [Username] [varchar](50) NOT NULL,
            [RecurrenceRule] [nvarchar](1024) NULL,
            [RecurrenceState] [varchar](20) NULL,
            [RecurrenceParentId] [int] NULL,
            [Annotations] [nvarchar](50) NULL,
            [Application] [nvarchar](100) NOT NULL,
            CONSTRAINT [PK_Appointments] PRIMARY KEY CLUSTERED ( [AppointmentId] ASC )
        )
        GO

        ALTER TABLE [dbo].[Appointments] WITH CHECK
            ADD CONSTRAINT [FK_Appointments_ParentAppointments]
            FOREIGN KEY([RecurrenceParentId])
            REFERENCES [dbo].[Appointments] ([AppointmentId])
        GO

        ALTER TABLE [dbo].[Appointments] CHECK CONSTRAINT [FK_Appointments_ParentAppointments]
        GO
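    For comparison, the same self-reference expressed directly in code rather than in the EDMX designer (a hedged sketch only, in a later code-first style; the collection name Occurrences is invented for illustration): the parent navigation maps to the nullable RecurrenceParentId column, while AppointmentId stays the store-generated identity key.

        // One optional parent, many children, both ends pointing back at Appointment itself.
        public class Appointment
        {
            public int AppointmentId { get; set; }          // identity key
            public int? RecurrenceParentId { get; set; }    // nullable FK to the parent appointment

            public virtual Appointment RecurrenceParent { get; set; }
            public virtual ICollection<Appointment> Occurrences { get; set; }
        }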

    Read the article

  • Sending changes from multiple tables in disconnected dataset to SQLServer...

    - by Stecy
    We have a third-party application that accepts calls using an XML-RPC mechanism for calling stored procs. We send a ZIP-compressed dataset containing multiple tables with a bunch of updates/deletes/inserts using this mechanism. On the other end, a CLR sproc decompresses the data and gets the dataset. Then, the following code gets executed:

        using (var conn = new SqlConnection("context connection=true"))
        {
            if (conn.State == ConnectionState.Closed)
                conn.Open();
            try
            {
                foreach (DataTable table in ds.Tables)
                {
                    string columnList = "";
                    for (int i = 0; i < table.Columns.Count; i++)
                    {
                        if (i == 0)
                            columnList = table.Columns[0].ColumnName;
                        else
                            columnList += "," + table.Columns[i].ColumnName;
                    }

                    var da = new SqlDataAdapter("SELECT " + columnList + " FROM " + table.TableName, conn);
                    var builder = new SqlCommandBuilder(da);
                    builder.ConflictOption = ConflictOption.OverwriteChanges;
                    da.RowUpdating += onUpdatingRow;
                    da.Update(ds, table.TableName);
                }
            }
            catch (....)
            {
                .....
            }
        }

    Here's the event handler for the RowUpdating event:

        public static void onUpdatingRow(object sender, SqlRowUpdatingEventArgs e)
        {
            if ((e.StatementType == StatementType.Update) && (e.Command == null))
            {
                e.Command = CreateUpdateCommand(e.Row, sender as SqlDataAdapter);
                e.Status = UpdateStatus.Continue;
            }
        }

    and the CreateUpdateCommand method:

        private static SqlCommand CreateUpdateCommand(DataRow row, SqlDataAdapter da)
        {
            string whereClause = "";
            string setClause = "";
            SqlConnection conn = da.SelectCommand.Connection;

            for (int i = 0; i < row.Table.Columns.Count; i++)
            {
                char quoted;
                if ((row.Table.Columns[i].DataType == Type.GetType("System.String")) ||
                    (row.Table.Columns[i].DataType == Type.GetType("System.DateTime")))
                    quoted = '\'';
                else
                    quoted = ' ';

                string val = row[i].ToString();
                if (row.Table.Columns[i].DataType == Type.GetType("System.Boolean"))
                    val = (bool)row[i] ? "1" : "0";

                bool isPrimaryKey = false;
                for (int j = 0; j < row.Table.PrimaryKey.Length; j++)
                {
                    if (row.Table.PrimaryKey[j].ColumnName == row.Table.Columns[i].ColumnName)
                    {
                        if (whereClause != "")
                            whereClause += " AND ";
                        if (row[i] == DBNull.Value)
                            whereClause += row.Table.Columns[i].ColumnName + "=NULL";
                        else
                            whereClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted;
                        isPrimaryKey = true;
                        break;
                    }
                }

                /* Only values for columns that are not part of the primary key can be modified */
                if (!isPrimaryKey)
                {
                    if (setClause != "")
                        setClause += ", ";
                    if (row[i] == DBNull.Value)
                        setClause += row.Table.Columns[i].ColumnName + "=NULL";
                    else
                        setClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted;
                }
            }

            return new SqlCommand("UPDATE " + row.Table.TableName + " SET " + setClause + " WHERE " + whereClause, conn);
        }

    However, this is really slow when we have a lot of records. Is there a way to optimize this, or an entirely different way to send lots of updates/deletes on several tables? I would really like to use T-SQL for this but can't figure out a way to send a dataset to a regular sproc. Additional notes: we cannot directly access the SQL Server database, and we tried without compression and it was slower.
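    If the target server is SQL Server 2008 or later, one alternative worth sketching is to ship each DataTable to a regular stored procedure as a table-valued parameter and let the procedure apply the changes in a single set-based UPDATE/MERGE rather than one command per row. This is only an outline: the table type dbo.MyRowType and the procedure dbo.UpsertMyRows are illustrative names that would have to be created on the server first.

        // Pass a whole DataTable in one call; the proc can then MERGE it
        // into the real table in one set-based statement.
        using (var cmd = new SqlCommand("dbo.UpsertMyRows", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            SqlParameter p = cmd.Parameters.AddWithValue("@Rows", table);  // table is a DataTable
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName  = "dbo.MyRowType";                                 // user-defined table type

            cmd.ExecuteNonQuery();
        }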

    Read the article

  • How is the return type determined in a method that uses Linq2Sql?

    - by Richard77
    Hello, I usually have difficulties with return types when it comes to LINQ. I'll explain with the following examples. Let's say I have a Products table with ProductID, Name, Category, and Price as columns.

    1) IQueryable<Product>

        public IQueryable<Product> GetChildrenProducts()
        {
            return (from pd in db.Products
                    where pd.Category == "Children"
                    select pd);
        }

    2) Product

        public Product GetProduct(int id)
        {
            return (from pd in db.Products
                    where pd.ProductID == id
                    select pd).FirstOrDefault();
        }

    Now, if I decide to select, for instance, only one column (Price or Name) or even two or three columns (Name and Price), but in any case fewer than the four columns, what's going to be the return type? I mean this:

        public returnType GetSomeInformation()
        {
            return (from pd in db.Products
                    select new { pd.Name, pd.Price });
        }

    What SHOULD BE the returnType for GetSomeInformation()? Thanks for helping.
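    The projection above creates an anonymous type, and an anonymous type cannot be named in a method signature, so it can't leave the method directly. The usual options are returning the full Product, or projecting into a small named class. A minimal sketch of the second option (ProductInfo is an illustrative name, and Price is assumed to be a decimal):

        // A named projection type, so the method can declare what it returns.
        public class ProductInfo
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public IQueryable<ProductInfo> GetSomeInformation()
        {
            return from pd in db.Products
                   select new ProductInfo { Name = pd.Name, Price = pd.Price };
        }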

    Read the article

  • Select count() max() Date

    - by DAVID
    I have a table of shift history along with employee ids. I'm using this query to retrieve a list of employees and their total shifts within a specified date range:

        SELECT ope_id, count(ope_id)
        FROM operator_shift
        WHERE ope_shift_date >= to_date('01-MAR-10','dd-mon-yy')
          AND ope_shift_date <= to_date('31-MAR-10','dd-mon-yy')
        GROUP BY ope_id

    which gives

        OPE_ID   COUNT(OPE_ID)
        1        14
        2        7
        3        6
        4        6
        5        2
        6        5
        7        2
        8        1
        9        2
        10       4

        10 rows selected.

    How do I choose the employee with the highest number of shifts within the specified date range?
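    One common approach in Oracle (a sketch, which ignores ties): sort the grouped counts in an inline view and keep only the first row. If ties should all be returned, RANK() OVER (ORDER BY COUNT(*) DESC) is the usual alternative.

        SELECT ope_id, shift_count
        FROM (
            SELECT ope_id, COUNT(*) AS shift_count
            FROM operator_shift
            WHERE ope_shift_date >= to_date('01-MAR-10','dd-mon-yy')
              AND ope_shift_date <= to_date('31-MAR-10','dd-mon-yy')
            GROUP BY ope_id
            ORDER BY shift_count DESC
        )
        WHERE ROWNUM = 1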

    Read the article

  • Using Linq2Sql to insert data into multiple tables using an auto incremented primary key

    - by Thomas
    I have a Customer table with a primary key (int, auto-increment) and an Address table with a foreign key to the Customer table. I am trying to insert both rows into the database in one nice transaction.

        using (DatabaseDataContext db = new DatabaseDataContext())
        {
            Customer newCustomer = new Customer()
            {
                Email = customer.Email
            };

            Address b = new Address()
            {
                CustomerID = newCustomer.CustomerID,
                Address1 = billingAddress.Address1
            };

            db.Customers.InsertOnSubmit(newCustomer);
            db.Addresses.InsertOnSubmit(b);
            db.SubmitChanges();
        }

    When I run this, I was hoping that the Customer and Address tables would automatically get the correct keys, since the context knows this is an auto-incremented key and would do two inserts with the right key in both tables. The only way I can get this to work is to call SubmitChanges() on the Customer object first, then create the Address and call SubmitChanges() on that as well. This creates two round trips to the database, and I would like to see if I can do this in one transaction. Is it possible? Thanks
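    If the Customer-Address association is mapped in the .dbml, one SubmitChanges() can insert both rows and propagate the identity value; the trick is to assign the Customer entity to the Address through the generated association property (assumed here to be called Customer) instead of copying the not-yet-known CustomerID. A sketch under that assumption:

        using (DatabaseDataContext db = new DatabaseDataContext())
        {
            Customer newCustomer = new Customer { Email = customer.Email };

            Address b = new Address
            {
                Customer = newCustomer,              // association property, not the raw key
                Address1 = billingAddress.Address1
            };

            db.Customers.InsertOnSubmit(newCustomer); // the Address is picked up through the association
            db.SubmitChanges();                       // one round trip, one implicit transaction
        }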

    Read the article

  • Violation of PRIMARY KEY constraint 'PK_

    - by abhi
    Hi, I am trying to write code that performs an update involving a primary key column. How do I achieve this? I have written the following code; kindly take a look and let me know where I'm going wrong.

        Protected Sub rgKMSLoc_UpdateCommand(ByVal source As Object, ByVal e As Telerik.Web.UI.GridCommandEventArgs) Handles rgKMSLoc.UpdateCommand
            Try
                KAYAReqConn.Open()
                If TypeOf e.Item Is GridEditableItem Then
                    Dim strItemID As String = CType(e.Item.FindControl("hdnID"), HiddenField).Value
                    Dim strrcmbLocation As String = CType(e.Item.FindControl("rcmbLocation"), RadComboBox).SelectedValue
                    Dim strKubeLocation As String = CType(e.Item.FindControl("txtKubeLocation"), TextBox).Text
                    Dim strCSVCode As String = CType(e.Item.FindControl("txtCSVCode"), TextBox).Text

                    SQLCmd = New SqlCommand("SELECT * FROM MstKMSLocKubeLocMapping WHERE LocationID= '" & rcmbLocation.SelectedValue & "'", KAYAReqConn)
                    Dim dr As SqlDataReader
                    dr = SQLCmd.ExecuteReader
                    If dr.HasRows Then
                        lblMsgWarning.Text = "<font color=red>""User ID Already Exists"
                        Exit Sub
                    End If
                    dr.Close()

                    SQLCmd = New SqlCommand("UPDATE MstKMSLocKubeLocMapping SET LocationID=@Location,KubeLocation=@KubeLocation,CSVCode=@CSVCode WHERE LocationID = '" & strItemID & "'", KAYAReqConn)
                    SQLCmd.Parameters.AddWithValue("@Location", Replace(strrcmbLocation, "'", "''"))
                    SQLCmd.Parameters.AddWithValue("@KubeLocation", Replace(strKubeLocation, "'", "''"))
                    SQLCmd.Parameters.AddWithValue("@CSVCode", Replace(strCSVCode, "'", "''"))
                    SQLCmd.Parameters.AddWithValue("@Status", "A")
                    SQLCmd.ExecuteNonQuery()

                    lblMessageUpdate.Text = "<font color=blue>""Record Updated SuccessFully"
                    SQLCmd.Dispose()
                    rgKMSLoc.Rebind()
                End If
            Catch ex As Exception
                Response.Write(ex.ToString)
            Finally
                KAYAReqConn.Close()
            End Try
        End Sub

    This is my designer page; the edit form template contains a Location combo box (rcmbLocation, bound to dsrcmbLocation with Location as the text field and LocationID as the value field), Kube Location and CSV Code text boxes (txtKubeLocation and txtCSVCode, Class="forTextBox", MaxLength="4"), and the following Update/Cancel buttons:

        <tr class="tableRow">
            <td colspan="2" align="center" class="tableCell">
                <asp:ImageButton ID="btnUpdate" runat="server" CommandName="Update" CausesValidation="true"
                    ValidationGroup="Update" ImageUrl="~/Images/update.gif"></asp:ImageButton>
                <asp:ImageButton ID="btnCancel" runat="server" CausesValidation="false" CommandName="Cancel"
                    ImageUrl="~/Images/No.gif"></asp:ImageButton>
            </td>
        </tr>
        </table>
        </FormTemplate>
        </EditFormSettings>

    LocationID is my primary key.

    Read the article

  • Tables with no Primary Key

    - by Matt Hamilton
    I have several tables whose only unique data is a uniqueidentifier (a Guid) column. Because guids are non-sequential (and they're client-side generated so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications are for this approach. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means that the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it ok to not have any clustered indexes if there are no sensible columns to index that way?
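    For reference, the pattern those suggestions describe looks like this (a sketch only, with an illustrative table name; the merge-replication concern still applies, since identity ranges then have to be managed per subscriber): a narrow identity column as the clustered primary key purely for physical ordering, with the client-generated GUID kept unique by a nonclustered index.

        -- The GUID stays the business identifier; the int key exists only
        -- to give the clustered index a narrow, ever-increasing key.
        CREATE TABLE dbo.Widget
        (
            WidgetKey  int IDENTITY(1,1) NOT NULL,
            WidgetGuid uniqueidentifier  NOT NULL,
            CONSTRAINT PK_Widget PRIMARY KEY CLUSTERED (WidgetKey)
        );

        CREATE UNIQUE NONCLUSTERED INDEX IX_Widget_WidgetGuid
            ON dbo.Widget (WidgetGuid);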

    Read the article

  • How to add an if statement in SQL?

    - by shin
    I have the following MySQL query with two parameters, $catname and $limit = 1, and it is working fine.

        SELECT P.*, C.Name AS CatName
        FROM omc_product AS P
        LEFT JOIN omc_category AS C ON C.id = P.category_id
        WHERE C.Name = '$catname'
        AND p.status = 'active'
        ORDER BY RAND()
        LIMIT 0, $limit

    Now I want to add another parameter, $order. $order can be either ORDER BY RAND() or ORDER BY product_order from the omc_product table. Could anyone tell me how to write this query, please? Thanks in advance.
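    Since $order only switches between two known orderings, one sketch is to decide it inside the query with CASE expressions in the ORDER BY (this assumes $order holds a flag such as 'rand'; building the ORDER BY clause in the application code before running the query is the simpler alternative):

        SELECT P.*, C.Name AS CatName
        FROM omc_product AS P
        LEFT JOIN omc_category AS C ON C.id = P.category_id
        WHERE C.Name = '$catname'
          AND P.status = 'active'
        ORDER BY
          CASE WHEN '$order' = 'rand' THEN RAND() END,           -- used only for the random ordering
          CASE WHEN '$order' <> 'rand' THEN P.product_order END  -- used otherwise
        LIMIT 0, $limit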

    Read the article

  • Why do banks or financial companies prefer Oracle over other RDBMSs for their "core" systems?

    - by edwin.nathaniel
    I'd like to know why most banks or financial companies prefer Oracle over other RDBMSs for their core systems (the absolute minimum features that a bank must support). I found a few answers that didn't satisfy me. For example: Oracle has more features. But features for what? Couldn't you implement those at the application level if you were not using Oracle? Could someone please give a slightly more technical but still high-level overview of what a bank needs, how Oracle solves it, and why the others can't or don't have the features yet? I come from the web-app (Web 2.0) crowd, which normally hears news about MySQL, PostgreSQL, or even key-value/column-oriented storage solutions. I have almost zero knowledge of how banks or financial companies operate from a technical perspective. Thank you, Ed

    Read the article

  • Unique identifiers for users

    - by Christopher McCann
    If I have a table of a hundred users, normally I would just set up an auto-increment userID column as the primary key. But if suddenly we have a million or five million users, that becomes really difficult, because I would want to become more distributed, in which case an auto-increment primary key would be useless: each node would be creating the same primary keys. Is the solution to use natural primary keys? I am having a really hard time thinking of a natural primary key for this bunch of users. The problem is that they are all young people, so they do not have national insurance numbers or any other unique identifier I can think of. I could create a multi-column primary key, but there is still a chance, however minuscule, of duplicates occurring. Does anyone know of a solution? Thanks
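    Natural keys aside, one sketch of a surrogate key that stays unique across independently inserting nodes is a composite of a per-node identifier plus a locally generated sequence (table and column names here are illustrative; client-generated GUIDs are the other common answer, at the cost of wider keys and index fragmentation):

        CREATE TABLE dbo.AppUser
        (
            NodeID      smallint          NOT NULL,  -- assigned once per server/shard
            LocalUserID int IDENTITY(1,1) NOT NULL,  -- generated locally on that node
            UserName    nvarchar(100)     NOT NULL,
            CONSTRAINT PK_AppUser PRIMARY KEY (NodeID, LocalUserID)
        );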

    Read the article

  • Oracle - Parameterized Query has EXECUTIONS = PARSE_CALLS

    - by Cory Grimster
    We have a .NET application talking to Oracle 10g. Our DBA recently pulled a list of queries where executions is equal to parse_calls. We assumed that this would help us find all of the unparameterized queries in our code. Unexpectedly, the following query showed up near the top of this list, with 1,436,169 executions and 1,436,151 parses: SELECT bar.foocolumn FROM bartable bar, baztable baz WHERE bar.some_id = :someId AND baz.another_id = :anotherId AND baz.some_date BETWEEN bar.start_date AND (nvl(bar.end_date, baz.some_date + (1/84600)) - (1/84600)) Why is executions equal to parse_calls for this query?

    Read the article

  • SQL Server 2005 replication issue

    - by Francesco
    Hi, I have peer-to-peer merge replication between two SQL Server 2005 instances. The first SQL Server is both the publisher and the distributor. All works fine, but if the VPN goes down for a couple of hours the replication goes down too, and I need to manually restart the SQL Server Agent. In the SQL Server Agent properties I have set the two options related to agent failover, but nothing happens. How can I set up an automatic restart of the SQL Server Agent when the VPN goes down? Thanks

    Read the article

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to learn more detailed logic about how Informix IDS and SE's optimizers decide the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks 1st. What are the rest of them?

    If I'm not mistaken, Informix's ROWID is a SERIAL(INT), which allows for a max of 2GB nrows, or maybe it uses INT9 for TBs of nrows? However, I think Oracle uses hex values for ROWID. Too bad ROWID can't often be used, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used for implementing the query-progress idea I mentioned in my "Begin viewing query results before query completes" question? For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, display its progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at any time, and start displaying the qualifying rows as they are being put into the current list while it continues searching?

    This is just one example; perhaps we could think of other neat/useful features, since the ingredients are more or less there. Perhaps we could fine-tune each query with more granularity than currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence to them. So, being able to precisely control a query, not just "hint-influence" it, is what's needed, and for that it would be necessary to know how the optimizer's logic is programmed. We could then have dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of index nodes at a time instead of one-by-one, etc.

    Read the article

  • Saving data to server with user accounts.

    - by AKRamkumar
    OK, so for an app I am making, I want the user to be able to save data online. On my website, I will provide a web server with tables of UserName/Password/SaveData. How can I do this without overloading the server? How can I guarantee security? Is there a design pattern for this? Is there a better way of doing this? This is going to be a free application, available to the public, and I would like their settings to be available no matter which computer they are using. I am using MEF for plugins, so is there a way I can save plugin data as well?

    Read the article

  • Linked Measure Groups and Local Dimensions

    - by ekoner
    Mulling over something I've been reading up on. According to Chris Webb, "a linked measure group can only be used with dimensions from the same database as the source measure group." So I took this to mean that as long as two cubes share a database, a linked measure group can be used with a dimension. So I created a new cube and added a local measure group, a local dimension, and a linked measure group. However, I can't create a relationship between the linked measure group and the local dimension even though they are within the same database. I get the message below:

        Regular relationships in the current database between non-linked (local) dimensions and linked
        measure groups cannot be edited. These relationships can only be created through the wizard.
        This dialog can be used to delete these relationships.

    I see that I can go to the original cube and add the dimension there, but does the message above mean I have an alternative? I just know it's going to be something simple and trivial! Thanks for reading.

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks, I have a SQL Server 2008 database in which I have a table for tags. A tag is just an id and a name. The definition of the Tag table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

    Name is also a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
            DECLARE @ID int
            SET @ID = (select ID from Tag where Name=@Name)
            if @ID is null
            begin
                INSERT Tag(Name) VALUES (@Name)
                RETURN SCOPE_IDENTITY()
            end
            else
            begin
                return @ID
            end

    My issue is that, other than being a novice at concurrent database design, there seems to be a race condition that is causing me to occasionally get an error saying I'm trying to enter duplicate keys (Name) into the DB. The error is:

        Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'.

    This makes sense; I'm just not sure how to fix it. If it were application code I would know how to lock the right areas. SQL Server is quite a different beast. First question: what is the proper way to code this 'check, then update' pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me the best way to do that. Any help in the right direction will be greatly appreciated. Thanks in advance.
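    One common way to close that race (a sketch of the technique, not a drop-in fix): hold an update/range lock on the key being checked for the duration of a transaction, so two concurrent callers cannot both decide the name is missing. On SQL Server 2008, MERGE with a HOLDLOCK hint is the usual alternative.

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @ID int;

            BEGIN TRAN;

            -- UPDLOCK + HOLDLOCK keeps the checked key (or the gap where it would go)
            -- locked until commit, serializing concurrent inserts of the same name.
            SELECT @ID = ID
            FROM   dbo.Tag WITH (UPDLOCK, HOLDLOCK)
            WHERE  Name = @Name;

            IF @ID IS NULL
            BEGIN
                INSERT dbo.Tag (Name) VALUES (@Name);
                SET @ID = SCOPE_IDENTITY();
            END

            COMMIT TRAN;
            RETURN @ID;
        END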

    Read the article

  • Database Modeling - Either/Or in Many-to-Many

    - by EkoostikMartin
    I have an either/or type of situation in a many-to-many relationship I'm trying to model. So I have these tables:

        Message
        ----
        *MessageID
        MessageText

        Employee
        ----
        *EmployeeID
        EmployeeName

        Team
        ----
        *TeamID
        TeamName

        MessageTarget
        ----
        MessageID
        EmployeeID (nullable)
        TeamID (nullable)

    So, a Message can have either a list of Employees or a list of Teams as a MessageTarget. Is the MessageTarget table I have above the best way to implement this relationship? What constraints can I place on MessageTarget effectively? How should I create a primary key on the MessageTarget table?
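    Keeping the single MessageTarget table, a sketch of the constraints that usually accompany this design: a CHECK that exactly one of the two nullable foreign keys is populated, plus a surrogate key for the primary key (constraint and column names below are illustrative).

        ALTER TABLE dbo.MessageTarget
            ADD CONSTRAINT CK_MessageTarget_ExactlyOneTarget
            CHECK (
                (EmployeeID IS NOT NULL AND TeamID IS NULL) OR
                (EmployeeID IS NULL AND TeamID IS NOT NULL)
            );

        -- A composite primary key over (MessageID, EmployeeID, TeamID) is not an option
        -- because primary key columns cannot be nullable, hence the surrogate key.
        ALTER TABLE dbo.MessageTarget
            ADD MessageTargetID int IDENTITY(1,1) NOT NULL
                CONSTRAINT PK_MessageTarget PRIMARY KEY;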

    Read the article

  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time, and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level.

    I have a single table in which I represent all the role assignments, i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store, as a column, the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have:

        ROLE_ID = 4   PARENT_ROLE_ID = NULL   EMPLOYEE_NUMBER = 213423   COMPANY_LOCATION = CEO
        ROLE_ID = 5   PARENT_ROLE_ID = 4      EMPLOYEE_NUMBER = 838221   COMPANY_LOCATION = ACCOUNTING

    Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in, they should be able to view all the virtual organizations in our company, e.g. the committee members should be able to see the committee organizer and vice versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have a President at the top, 5 departments directly beneath him, and 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments.

    To determine edit access when a user views a virtual organization in our company, I run a query that executes two inline views: A) hierarchically query for all assignments in this virtual organization, using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location, and B) hierarchically retrieve all the assignments the logged-in individual has, using SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean, determined by joining with B), which flags whether the logged-in user has edit access for each record.

    Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing whether:

        test_record_sys_path LIKE individual_record_sys_path || '%'

    Is a materialized view the answer?

    Read the article
