Search Results

Search found 27342 results on 1094 pages for 'sql denali'.

Page 615 of 1094

  • IDENTITY_INSERT is set to off error

    - by kingrichard2005
    I have an MVC web application with a table in the model that I would like to add to. I have the primary key set along with the other data fields, but every time I try to add to the table, I get the following error: "Cannot insert explicit value for identity column in table 'TABLE_NAME' when IDENTITY_INSERT is set to OFF." I'm not sure why this problem is coming up; I have the primary key set as the identity and it is also set to auto-increment in the Visual Studio table designer. Is there any way I can adjust the IDENTITY_INSERT parameter in the table designer in Visual Studio? Or is there some other issue that might be causing this?
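
    A minimal T-SQL sketch of the two usual ways around this error, with illustrative table and column names (not from the original post): either leave the identity column out of the INSERT so SQL Server generates it, or temporarily enable IDENTITY_INSERT for the one statement that must supply an explicit key.

        -- Option 1: omit the identity column and let it generate its own value
        INSERT INTO dbo.MyTable (Name, CreatedOn)
        VALUES ('example', GETDATE());

        -- Option 2: temporarily allow explicit values for the identity column
        SET IDENTITY_INSERT dbo.MyTable ON;
        INSERT INTO dbo.MyTable (Id, Name, CreatedOn)
        VALUES (42, 'example', GETDATE());
        SET IDENTITY_INSERT dbo.MyTable OFF;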

    Read the article

  • How to select all parent objects into a DataContext using a single LINQ query?

    - by too
    I am looking for an answer to a specific problem: fetching a whole LINQ object hierarchy using a single SELECT. At first I tried to fill as many LINQ objects as possible using LoadOptions, but AFAIK that method allows only a single table to be linked per query using LoadWith. So I invented a solution that forcibly sets all parent objects of the entities whose list is to be fetched, but there is a problem of multiple SELECTs going to the database - a single query results in two SELECTs with the same parameters in the same LINQ context. For this question I have simplified the query to the popular invoice example:

        public static class Extensions
        {
            public static IEnumerable<T> ForEach<T>(this IEnumerable<T> collection, Action<T> func)
            {
                foreach (var c in collection) { func(c); }
                return collection;
            }
        }

        public IEnumerable<Entry> GetResults(AppDataContext context, int CustomerId)
        {
            return (
                from entry in context.Entries
                join invoice in context.Invoices on entry.EntryInvoiceId equals invoice.InvoiceId
                join period in context.Periods on invoice.InvoicePeriodId equals period.PeriodId
                // LEFT OUTER JOIN, store is not mandatory
                join store in context.Stores on entry.EntryStoreId equals store.StoreId into condStore
                from store in condStore.DefaultIfEmpty()
                where invoice.InvoiceCustomerId == CustomerId
                orderby entry.EntryPrice descending
                select new { Entry = entry, Invoice = invoice, Period = period, Store = store }
            ).ForEach(x =>
            {
                x.Entry.Invoice = x.Invoice;
                x.Invoice.Period = x.Period;
                x.Entry.Store = x.Store;
            }).Select(x => x.Entry);
        }

    When calling this function and traversing the result set, for example:

        var entries = GetResults(this.Context, CustomerId);
        int withoutStore = 0;
        foreach (var k in entries)
        {
            if (k.EntryStoreId == null) withoutStore++;
        }

    the resulting query to the database looks like this (a single result is fetched):

        SELECT [t0].[EntryId], [t0].[EntryInvoiceId], [t0].[EntryStoreId], [t0].[EntryProductId],
               [t0].[EntryQuantity], [t0].[EntryPrice], [t1].[InvoiceId], [t1].[InvoiceCustomerId],
               [t1].[InvoiceDate], [t1].[InvoicePeriodId], [t2].[PeriodId], [t2].[PeriodName],
               [t2].[PeriodDateFrom], [t4].[StoreId], [t4].[StoreName]
        FROM [Entry] AS [t0]
        INNER JOIN [Invoice] AS [t1] ON [t0].[EntryInvoiceId] = [t1].[InvoiceId]
        INNER JOIN [Period] AS [t2] ON [t2].[PeriodId] = [t1].[InvoicePeriodId]
        LEFT OUTER JOIN (
            SELECT 1 AS [test], [t3].[StoreId], [t3].[StoreName]
            FROM [Store] AS [t3]
        ) AS [t4] ON [t4].[StoreId] = ([t0].[EntryStoreId])
        WHERE (([t1].[InvoiceCustomerId]) = @p0)
        ORDER BY [t0].[EntryPrice] DESC
        -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [186]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1

    followed immediately by a second SELECT that is identical to the first, character for character (same SQL text, same @p0 value, same context line). The question is: why are there two queries, and how can I fetch the LINQ objects without such hacks?

    Read the article

  • How do I perform 'WHERE' on groups of rows?

    - by Drew
    I have a table, which looks like:

        +-----------+----------+
        | person_id | group_id |
        +-----------+----------+
        |     1     |    10    |
        |     1     |    20    |
        |     1     |    30    |
        |     2     |    10    |
        |     2     |    20    |
        |     3     |    10    |
        +-----------+----------+

    I need a query such that only person_ids with groups 10 AND 20 AND 30 are returned (only person_id 1). I am not sure how to do this, as from what I can see it would require me to group the rows by person_id and then select the rows which contain all group_ids. I'm looking for something which will preserve the use of keys without resorting to string operations on group_concat() or such.
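
    A minimal SQL sketch of the usual relational-division approach, assuming (person_id, group_id) pairs are unique and naming the table person_groups for illustration:

        SELECT person_id
        FROM person_groups
        WHERE group_id IN (10, 20, 30)
        GROUP BY person_id
        HAVING COUNT(DISTINCT group_id) = 3;   -- must match all three groups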

    Read the article

  • Analysis Services with Excel as front end - is it possible to get the nicer UI that PowerPivot provides?

    - by AJM
    I have been looking into PowerPivot and concluded that for "self-service BI" and ad hoc building of cubes it has its uses. In particular I like the enhanced UI that you get from using PowerPivot rather than just using a PivotTable hooked up to an Analysis Services data source. However, it seems that hooking up PowerPivot to an existing Analysis Services cube is not a solution for "organisational BI". It is not always desirable to suck millions of rows into Excel at once, and the interface between PowerPivot and Analysis Services is very poor in my book. Hence the question: can an existing Analysis Services solution get the enhanced UI features that PowerPivot brings, without using PowerPivot as the design tool? If PowerPivot is aimed at self-service/personal BI, then it seems bizarre that its UI is better than that of bigger/more costly Analysis Services solutions.

    Read the article

  • The Next-gen Databases

    - by Randin
    I'm learning traditional relational databases (with PostgreSQL), and while doing some research I've come across some new types of databases: CouchDB, Drizzle, and Scalaris, to name a few. What are going to be the next database technologies to deal with?

    Read the article

  • Copying Some Rows from One PostgreSQL Server to Another

    - by whollychao
    I am in need of an application that can periodically transmit select rows from a PostgreSQL database across a network to a second PostgreSQL server. Typically these will be the most recent rows added, pulled and transmitted every 10-30 seconds. The primary servers run in an MS Windows environment with a high-latency, and occasionally intermittent, network connection. Therefore, any application would have to be tolerant of this and ideally automatically reconnect / resend data that could not be transmitted. Due to the environment and the requirements, a full-blown replication package would be unnecessary. I appreciate any help anyone has with this problem.
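
    One possible shape for the periodic transfer, sketched with PostgreSQL's stock dblink extension and run from the receiving side so that a dropped connection simply retries on the next pass; every table, column, and connection detail here is an illustrative assumption:

        -- Run periodically on the receiving server (dblink must be installed there).
        INSERT INTO readings (id, recorded_at, value)
        SELECT t.id, t.recorded_at, t.value
        FROM dblink(
               'host=primary.example.com dbname=main user=reader password=secret',
               'SELECT id, recorded_at, value FROM readings'
             ) AS t(id integer, recorded_at timestamptz, value numeric)
        WHERE t.recorded_at > (SELECT COALESCE(MAX(recorded_at), '-infinity'::timestamptz)
                               FROM readings);
        -- In practice the local high-water mark would be embedded in the remote query
        -- so the whole remote table is not shipped on every pass.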

    Read the article

  • Varchar columns: Nullable or not.

    - by NYSystemsAnalyst
    The database development standards in our organization state that varchar fields should not allow null values. They should have a default value of an empty string (""). I know this makes querying and concatenation easier, but today one of my coworkers asked me why that standard only exists for varchar types and not other data types (int, datetime, etc.). I would like to know if others consider this to be a valid, defensible standard, or if varchar should be treated the same as fields of other data types. I believe this standard is valid for the following reason: I believe that an empty string and a null value, though technically different, are conceptually the same. An empty, zero-length string is a string that does not exist; it has no value. However, a numeric value of 0 is not the same as NULL. For example, if a field called OutstandingBalance has a value of 0, it means there are $0.00 remaining. However, if the same field is NULL, that means the value is unknown. On the other hand, a field called CustomerName with a value of "" is basically the same as a value of NULL because both represent the non-existence of the name. I read somewhere that an analogy for an empty string vs. NULL is that of a blank CD vs. no CD. However, I believe this to be a false analogy because a blank CD still physically exists and still has physical data space that does not have any meaningful data written to it. Basically, I believe a blank CD is the equivalent of a string of blank spaces (" "), not an empty string. Therefore, I believe a string of blank spaces to be an actual value separate from NULL, but an empty string to be the absence of value, conceptually equivalent to NULL. Please let me know if my beliefs regarding variable-length strings are valid, or please enlighten me if they are not. I have read several blogs / arguments regarding this subject, but still do not see a true conceptual difference between NULLs and empty strings.

    Read the article

  • Need help with a conditional SELECT statement

    - by Ethan
    I've got a stored procedure with a select statement, like this:

        SELECT author_ID, author_name, author_bio
        FROM Authors
        WHERE author_ID IN (SELECT author_ID FROM Books)

    This limits results to authors who have book records. This is the Books table:

        Books
        book_ID        INT
        author_ID      INT
        book_title     NVARCHAR
        featured_book  BIT

    What I want to do is conditionally select the ID of the featured book by each author as part of the select statement above, and if none of the books for a given author are featured, select the ID of the first (TOP 1) book by the author from the Books table. How do I approach this?
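
    A minimal T-SQL sketch of one way to approach this with OUTER APPLY; using the lowest book_ID as the "first book" tie-breaker is an assumption:

        SELECT a.author_ID, a.author_name, a.author_bio,
               b.book_ID AS featured_or_first_book_ID
        FROM Authors a
        OUTER APPLY (
            SELECT TOP 1 book_ID
            FROM Books
            WHERE author_ID = a.author_ID
            ORDER BY featured_book DESC, book_ID   -- featured book first, else the first book
        ) b
        WHERE a.author_ID IN (SELECT author_ID FROM Books);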

    Read the article

  • Automatically create a table on a MySQL server based on the date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like: SELECT * FROM data_2010_1 What I have been doing until now is, every time the script executes it does a query for the table, and if it exists, does the work, if it doesn't it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month. Update Based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here's two more questions: If I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten? What are the potential problems with my home-grown method I originally thought up?
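
    MySQL (5.1 and later) does have a built-in equivalent to cron: the Event Scheduler. A minimal sketch, assuming a template table named data_template and a schedule anchored at an arbitrary first-of-month date (both are assumptions, not from the original post); the dynamic table name has to go through a prepared statement, so the event calls a small procedure:

        SET GLOBAL event_scheduler = ON;   -- requires the SUPER privilege

        DELIMITER //
        CREATE PROCEDURE create_month_table()
        BEGIN
          -- build "data_YYYY_M" for the current month from the (assumed) template table
          SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS data_',
                            YEAR(CURDATE()), '_', MONTH(CURDATE()),
                            ' LIKE data_template');
          PREPARE stmt FROM @ddl;
          EXECUTE stmt;
          DEALLOCATE PREPARE stmt;
        END//

        CREATE EVENT monthly_table_event
        ON SCHEDULE EVERY 1 MONTH
        STARTS '2010-06-01 00:00:00'       -- the first of some month, at midnight
        DO CALL create_month_table()//
        DELIMITER ;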

    Read the article

  • Are GUID primary keys bad in theory, or just practice?

    - by Yarin
    Whenever I design a database I automatically start with an auto-generating GUID primary key for each of my tables (excepting look-up tables). I know I'll never lose sleep over duplicate keys, merging tables, etc. To me it just makes sense philosophically that any given record should be unique across all domains, and that that uniqueness should be represented in a consistent way from table to table. I realize it will never be the most performant option, but putting performance aside, I'd like to know if there are philosophical arguments against this practice?

    Read the article

  • When using query syntax in C#, "Enumeration yielded no results". How do I retrieve the output?

    - by Shantanu Gupta
    I have created this query to fetch some results from the database. Here is my table structure and what exactly is happening: DtMapGuestDepartment (Table 1) and DtDepartment (Table 2) are being used.

        var dept_list =
            from map in DtMapGuestDepartment.AsEnumerable()
            where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID")
            join dept in DtDepartment.AsEnumerable()
                on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID")
            select dept.Field<string>("DEPARTMENT_ID");

    I am performing this query on DataTables and expect it to return a DataTable. Here I also want to select distinct departments from Table 1, which will be my next question. Please answer that as well if possible.

    Read the article

  • MYSQL DELETE from a table [closed]

    - by Hossein
    Possible Duplicate: MySQL DELETE in a single table. Hi, I have this table: userurltag(id, user, Url, tag). I want to remove rows that contain URLs that are used by only one user; can someone help me? It seems that DELETE ... (SELECT ...) is not supported in MySQL.
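
    A minimal MySQL sketch of the usual workaround: wrap the subquery in a derived table so the DELETE no longer selects directly from the table it is deleting from (this assumes "used by only one user" means exactly one distinct user per URL):

        DELETE FROM userurltag
        WHERE Url IN (
            SELECT Url FROM (
                SELECT Url
                FROM userurltag
                GROUP BY Url
                HAVING COUNT(DISTINCT `user`) = 1
            ) AS single_user_urls   -- the derived table sidesteps MySQL's self-reference restriction
        );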

    Read the article

  • How do I connect to an MSSQL 2008 database in Java with JDBC?

    - by shuxer
    Hello, I have MSSQL 2008 installed on my local PC, and my Java application needs to connect to an MSSQL database. I am new to MSSQL and I would like some help on creating a user login for my Java application and getting a connection via JDBC. So far I have tried to create a user login for my app and used the following connection string, but it doesn't work at all. Any help and hints will be appreciated. jdbc:jtds:sqlserver://127.0.0.1:1433/dotcms username="shuxer" password="itarator" Thanks in advance.
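
    For the "creating a user login" half, a minimal T-SQL sketch of a SQL-authenticated login mapped to a database user, reusing the credentials and database from the connection string above (the role memberships are an assumption about what the app needs). As for the URL itself, jTDS takes its properties as semicolon-separated pairs, e.g. jdbc:jtds:sqlserver://127.0.0.1:1433/dotcms;user=shuxer;password=itarator, or the credentials can be passed separately to DriverManager.getConnection.

        -- Requires the instance to allow SQL Server authentication (mixed mode).
        CREATE LOGIN shuxer WITH PASSWORD = 'itarator';
        GO
        USE dotcms;
        GO
        CREATE USER shuxer FOR LOGIN shuxer;
        -- Assumption: the application needs plain read/write access.
        EXEC sp_addrolemember 'db_datareader', 'shuxer';
        EXEC sp_addrolemember 'db_datawriter', 'shuxer';
        GO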

    Read the article

  • Linq for two tables

    - by Diana
    I need to do something like this. My two tables have the same signature but different classes, so it is supposed to work, but it is not working:

        var myTable;
        if (booleanVariable == true) {
            myTable = table1;
        } else {
            myTable = table2;
        }
        var myLinq1 = from p in myTable join r in myOtherTable select p;

    In this case, I have to initialize myTable. I have also tried:

        var myTable = table2;
        if (booleanVariable == true) {
            myTable = table1;
        }
        var myLinq1 = from p in myTable join r in myOtherTable select p;

    but then var is of the table2 type, so it can't be changed to the table1 type. I need help; I don't want to copy and paste all the code. The LINQ query is huge, and it's nested with 5 or 6 queries. I also have to do this in 12 different methods. Thanks a lot for your help.

    Read the article

  • How to get an id from the results in two tables

    - by Chris Lively
    Consider an order. An order will have one or more line items. Each line item is for a particular product. Given a filter table with a couple of products, how would I get the order IDs that had at least all of the products listed in the second table?

        table Orders    ( OrderId int )
        table LineItems ( OrderId int, LineItemId int, ProductId int )
        table Filter    ( ProductId int )

    Data:

        Orders
        OrderId
        -------
        1
        2
        3

        LineItems
        OrderId  LineItemId  ProductId
        -------  ----------  ---------
        1        1           401
        1        2           502
        2        3           401
        3        4           401
        3        5           603
        3        6           714

        Filter
        ProductId
        ---------
        401
        603

    Desired result of the query: OrderId 3
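
    A minimal SQL sketch of one common way to express "the order contains at least every product in Filter" (assuming Filter holds no duplicate ProductId values):

        SELECT li.OrderId
        FROM LineItems li
        JOIN Filter f ON f.ProductId = li.ProductId
        GROUP BY li.OrderId
        HAVING COUNT(DISTINCT li.ProductId) = (SELECT COUNT(*) FROM Filter);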

    Read the article

  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from another sub-query. While the two queries look almost the same, the second query (in this sample) runs much slower:

        SELECT user.id, user.first_name
        -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    This query takes 1.2s. EXPLAIN on this query gives:

        id: 1  select_type: PRIMARY             table: user       type: index
               key: first_name   key_len: 152   rows: 141192      Extra: Using where; Using index
        id: 2  select_type: DEPENDENT SUBQUERY  table: education  type: index_subquery
               possible_keys: ref_type,ref_id,institute_id,institute_type,ref_type_2
               key: ref_id       key_len: 4     ref: func  rows: 1  Extra: Using where

    The second query:

        SELECT
        -- user.id
        -- user.first_name
        user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    takes 45s to run, with this EXPLAIN:

        id: 1  select_type: PRIMARY             table: user       type: ALL
               rows: 141192      Extra: Using where
        id: 2  select_type: DEPENDENT SUBQUERY  table: education  type: index_subquery
               possible_keys: ref_type,ref_id,institute_id,institute_type,ref_type_2
               key: ref_id       key_len: 4     ref: func  rows: 1  Extra: Using where

    Why is it slower if I query only by indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.
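
    Not from the original post, but the classic workaround when MySQL turns IN (...) into a dependent subquery is to rewrite it as a join against a derived table; a minimal sketch:

        SELECT user.*
        FROM user
        JOIN (
            SELECT DISTINCT ref_id
            FROM education
            WHERE ref_type = 'user'
              AND institute_id = '58'
              AND institute_type = '1'
        ) e ON e.ref_id = user.id;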

    Read the article

  • Using application roles with DataReader

    - by Shahar
    I have an application that should use an application role from the database. I'm trying to make this work with queries that are actually run using SubSonic (2). To do this, I created my own DataProvider, which inherits from SubSonic's SqlDataProvider. It overrides the CreateConnection function and calls sp_setapprole to set the application role after the connection is created. This part works fine, and I'm able to get data using the application role. The problem comes when I try to unset the application role. I couldn't find any place in the code where my provider is called after the query is done, so I tried to add my own hook by changing the SubSonic code. The problem is that SubSonic uses a data reader: it loads data from the data reader and then closes it. If I unset the application role before the data reader is closed, I get an error saying: "There is already an open DataReader associated with this Command which must be closed first." If I unset the application role after the data reader is closed, I get an error saying "ExecuteNonQuery requires an open and available Connection. The connection's current state is closed." I can't seem to find a way to close the data reader without closing the connection.
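
    For reference, a minimal T-SQL sketch of activating an application role and reverting it on the same connection using the cookie form of sp_setapprole (the role name and password are placeholders); the revert can only run once no data reader is still open on that connection:

        DECLARE @cookie varbinary(8000);

        -- activate the application role for this connection
        EXEC sp_setapprole 'MyAppRole', 'role_password',
             @fCreateCookie = true, @cookie = @cookie OUTPUT;

        -- ... run the application's queries here ...

        -- revert to the original security context
        EXEC sp_unsetapprole @cookie;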

    Read the article

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index - the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the query on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week. My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: Is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table: 1) Truncate aux_table and insert the desired data from the original table; not very efficient, because all the indexes must be re-created. 2) Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table, occasionally dropping older entries. Only copying entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
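
    A minimal MySQL sketch of the second approach; it assumes aux_table mirrors the original's columns and adds a unique key (an assumption beyond the original post), so the overlapping rows re-copied by the >= comparison are simply ignored:

        -- remember the newest timestamp we already copied
        SELECT COALESCE(MAX(ts), '1970-01-01') INTO @high_water FROM aux_table;

        -- copy anything at or after that point; the (assumed) unique key on aux_table
        -- makes the overlapping rows harmless
        INSERT IGNORE INTO aux_table
        SELECT *
        FROM original_table
        WHERE ts >= @high_water;

        -- keep only the last week of data
        DELETE FROM aux_table
        WHERE ts < NOW() - INTERVAL 7 DAY;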

    Read the article

  • SqlServer2008 + expensive union all

    - by Tim Mahy
    Hi all, we have 5 tables over which we should query with user search input through a stored procedure. We do a UNION ALL of the similar data inside a view; because of this the view cannot be materialized. We are not able to change these 5 tables drastically (like creating a 6th table that contains the similar data of the 5 tables and referencing that new one from the 5 tables). The query is rather expensive/slow; what are our other options? It's allowed to think outside the box. Unfortunately I cannot give more information like the table/view/SP definitions because of customer confidentiality... greetings, Tim

    Read the article

  • Querying with foreign key

    - by theactiveactor
    Say I have 2 tables whose structures are as follows:

        tableA
        id | A1 | A2

        tableB
        id | tableA_id (foreign key) | B1

    Entries in A have a one-to-many relationship with entries in B. What kind of query operation would I need to achieve something like: select all objects from tableB where A1 = "foo"? Basically, apply a query on tableA and from those results find the corresponding dependent objects in tableB.
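
    A minimal SQL sketch of the join this describes:

        SELECT b.*
        FROM tableB b
        JOIN tableA a ON a.id = b.tableA_id
        WHERE a.A1 = 'foo';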

    Read the article

  • MDX Year on Year Sales by Months

    - by schone
    Hi all, I'm stuck on an MDX query. I'm trying to retrieve the following result layout:

                                  [Time].[2009]       [Time].[2010]
        [Time].[Months].Members   [Measures].[Sales]  [Measures].[Sales]

    So I would like to compare the sales in 2009 against the sales in 2010, by month. In terms of a chart I would have two series, one for 2009 and one for 2010; the y-axis would be the sales value and the x-axis would be the month. Any ideas on how to approach this? Thanks in advance

    Read the article

  • What are the reasons *not* to use a GUID for a primary key?

    - by Yarin
    Whenever I design a database I automatically start with an auto-generating GUID primary key for each of my tables (excepting look-up tables). I know I'll never lose sleep over duplicate keys, merging tables, etc. To me it just makes sense philosophically that any given record should be unique across all domains, and that that uniqueness should be represented in a consistent way from table to table. I realize it will never be the most performant option, but putting performance aside, I'd like to know if there are philosophical arguments against this practice?

    Read the article
