Search Results

Search found 36773 results on 1471 pages for 'sql statement syntax'.


  • Is there a log showing why a Windows server did not restart SQL Server after a reboot?

    - by MerlinMags
    Our server was rebooted after a Windows Update scheduled for 1am, but after the restart SQL Server did not start up, so our websites were unable to display. Usually this process happens with no manual intervention. Is there a log somewhere which might indicate why the Windows startup process did not start SQL Server again? I've looked in the Event Viewer (Application log) and SQL's own file E:\MSSQL\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG*, but these only contain records of successful startup operations; nothing mentions a failed attempt to start a service or anything like that.

    Read the article

  • MS Access ADP front end and SQL Server back end for field data collection?

    - by Brash Equilibrium
    I am an anthropologist. I am going to the field and will use a netbook to collect survey data. The survey forms will need to allow me to enter data into multiple tables, search tables, allow subforms, and be fast enough to not slow down my interview. I have considered storing the data in a SQL Server Express 2008 R2 server (there will be a lot of data) while using a Microsoft Access data project as a front end. To cut down the number of steps required to collect and store data, I'm considering using the netbook for both data storage and collection (after reading this article about SQL Server on a netbook). My questions are: (1) Is there a simpler solution that is also gratis (gratis because I already have a MS Access license from my workplace, and SQL Server Express is, obviously, free)? (2) Does my idea to store and collect data using the netbook make sense? Thank you.

    Read the article

  • What does SQL Server do if you select more than 1 full backup when doing a restore?

    - by Rob Sobers
    I have a backup file that contains 2 backup sets. Both backup sets are full backups. When I open SQL Server Management Studio and choose "Restore..." and pick the file as my device, it lets me pick both backup sets. The restore operation completes without error, but I'm not sure exactly what SQL Server did. Did it restore the first one, drop the database, and then restore the second one? Will it always let the most recent full backup prevail? It doesn't seem to make sense for SQL Server to even allow you to select more than one full backup.
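
    For what it's worth, here is a sketch of how the backup sets inside a single file can be inspected and restored individually; the file and database names below are placeholders, not anything from the question:

        RESTORE HEADERONLY FROM DISK = N'D:\Backups\MyDatabase.bak';   -- lists each backup set in the file with its Position
        RESTORE DATABASE MyDatabase
            FROM DISK = N'D:\Backups\MyDatabase.bak'
            WITH FILE = 2, REPLACE;                                    -- FILE = the Position of the backup set to restore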

    Read the article

  • How much does SQL Server Web Edition cost? + other related questions

    - by Goma
    Hello. I visited the Microsoft pricing page for SQL Server databases, but it was not that clear to me. I want to know the exact cost of SQL Server Web Edition. Furthermore, I would like to know how I can get it if I am on VPS hosting: should I install it myself, or will the host install it for me? And finally, is there a web host that provides SQL Server Web Edition so that I pay them directly with the hosting package?

    Read the article

  • VBA/SQL recordsets

    - by intruesiive
    The project I'm asking about is for sending an email to teachers asking what books they're using for the classes they're teaching next semester, so that the books can be ordered. I have a query that compares the course number of this upcoming semester's classes to the course numbers of historical textbook orders, pulling out only those classes that are being taught this semester. That's where I get lost. I have a table that contains the following: Professor, Course Number, Year, Book Title. The data looks like this:

        professor   year   course number   title
        smith       13     1111            Pride and Prejudice
        smith       13     1111            The Fountainhead
        smith       13     1222            The Alchemist
        smith       12     1111            Pride and Prejudice
        smith       11     1222            Infinite Jest
        smith       10     1333            The Bible
        smith       13     1333            The Bible
        smith       12     1222            The Alchemist
        smith       10     1111            Moby Dick
        johnson     12     1222            The Tipping Point
        johnson     11     1333            Anna Kerenina
        johnson     10     1333            Everything is Illuminated
        johnson     12     1222            The Savage Detectives
        johnson     11     1333            In Search of Lost Time
        johnson     10     1333            Great Expectations
        johnson     9      1222            Proust on the Shore

    Here's what I need the code to do "on paper": Group the records by professor. Determine every unique course number in that group, and group records by course number. For each unique course number, determine the highest year associated. Then spit out every record with that professor + course number + year combination. With the sample data, the results would be:

        professor   year   course number   title
        smith       13     1111            Pride and Prejudice
        smith       13     1111            The Fountainhead
        smith       13     1222            The Alchemist
        smith       13     1333            The Bible
        johnson     12     1222            The Tipping Point
        johnson     11     1333            Anna Kerenina
        johnson     12     1222            The Savage Detectives
        johnson     11     1333            In Search of Lost Time

    I'm thinking I should make a record set for each teacher, and within that, another record set for each course number. Within the course number record set, I need the system to determine what the highest year number is - maybe store that in a variable? Then pull out every associated record so that if the teacher ordered 3 books the last time they taught that class (whether it was in 2013 or 2012 and so on) all three books display. I'm not sure I'm thinking of record sets in the right way, though. My SQL so far is basic and clearly doesn't work:

        SELECT [All].Professor, [All].Course, Max([All].Year)
        FROM [All]
        GROUP BY [All].Professor, [All].Course;

    Thanks for your help.
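
    A sketch of the join-back approach described above, assuming the table and column names from the query at the end plus a Title column (which is an assumption; untested):

        SELECT b.Professor, b.Course, b.[Year], b.Title
        FROM [All] AS b
        INNER JOIN (SELECT Professor, Course, MAX([Year]) AS MaxYear
                    FROM [All]
                    GROUP BY Professor, Course) AS latest
            ON  b.Professor = latest.Professor
            AND b.Course    = latest.Course
            AND b.[Year]    = latest.MaxYear;

    The derived table finds the highest year per professor + course, and the join back to the base table returns every book row for that combination.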

    Read the article

  • Search for multiple values in an xml column

    - by Yuriy Gettya
    Environment: SQL Server 2012. Primary and secondary (value) indexes are built on the xml column. Say I have a table Message with xml column WordIndex. I also have a table Word which has WordId and WordText. The XML for Message.WordIndex has the following schema:

        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.example.com">
          <xs:element name="wi">
            <xs:complexType>
              <xs:sequence>
                <xs:element maxOccurs="unbounded" name="w">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element maxOccurs="unbounded" name="p" type="xs:unsignedByte" />
                    </xs:sequence>
                    <xs:attribute name="wid" type="xs:unsignedByte" use="required" />
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    and some data to go with it:

        <wi xmlns="http://www.example.com">
          <w wid="1">
            <p>28</p>
            <p>72</p>
            <p>125</p>
          </w>
          <w wid="4">
            <p>89</p>
          </w>
          <w wid="5">
            <p>11</p>
          </w>
        </wi>

    I need to search for multiple values in my xml column WordIndex, either using OR or AND. What I'm doing is fairly rudimentary, since I'm a n00b in XQuery (taken from debug output, hence real values):

        with xmlnamespaces(default 'http://www.example.com')
        select m.Subject, m.MessageId,
               m.WordIndex.query('
                 let $dummy := 0
                 return
                   <word_list>
                     { for $w in /wi/w where $w/@wid=64 return <word wid="64" pos="{data($w/p)}"/> }
                     { for $w in /wi/w where $w/@wid=70 return <word wid="70" pos="{data($w/p)}"/> }
                     { for $w in /wi/w where $w/@wid=63 return <word wid="63" pos="{data($w/p)}"/> }
                   </word_list>
               ') as WordPosition
        from Message as m
        -- more joins go here ...
        where
        -- more conditions go here ...
        and m.WordIndex.exist('/wi/w[@wid=64]') = 1
        and m.WordIndex.exist('/wi/w[@wid=70]') = 1
        and m.WordIndex.exist('/wi/w[@wid=63]') = 1

    How can this be optimized?
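
    One thing worth trying for the OR case, as a sketch: SQL Server's XQuery accepts a sequence on the right-hand side of a general comparison, so a single exist() can cover several wid values (the AND case still needs one exist() per value):

        with xmlnamespaces(default 'http://www.example.com')
        select m.MessageId
        from Message as m
        where m.WordIndex.exist('/wi/w[@wid=(63, 64, 70)]') = 1;  -- true if ANY of the three wids is present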

    Read the article

  • How do I get ELMAH to work with SQL Server (permission problems)

    - by Gary McGill
    I've got ELMAH working on my (Cassini) development server, and was quite happy with it, but now that I'm trying to move everything to my production server (IIS7), the honeymoon looks like being over. I've got past the "gotcha" with IIS7, which frankly could have been better highlighted in the documentation, and if I just use the in-memory log then it works. However, I'm trying to get it to use the SQL Server log (as I do on my development system), and I'm getting an error along the lines of:

        The EXECUTE permission was denied on the object ELMAH_GetErrorsXml

    Well, fine. I know how to grant database permissions, but I'm really struggling to understand which user and which stored procs/tables I need to grant access to. The thing that's really confusing me is that I didn't have to do anything like this to get it to work on my development server. The only difference I can see is that on my development server it seems to connect as NT AUTHORITY\IUSR, whereas on my production server it seems to connect as NT AUTHORITY\NETWORK SERVICE. (It's just using a trusted connection so I've not explicitly configured it to do that - I presume it's to do with the web server.)

    UPDATE: I've since established that because I'm using Cassini, it was actually logging in as me (an admin) and not IUSR, which explains why I didn't get any permission problems.

    On my development server, the IUSR account is a member of the public database role, and has access to the required database (again as "public"). There's no explicit granting of object-level permissions. [See update above - this is irrelevant].

    On my production server, I've added NETWORK SERVICE in exactly the same way (public database role, explicit access to the database as "public"). Yet, I get this permission error. Why?!! [See update above - the only reason I don't get a permission error is because I'm running as an admin].

    And, of course, if the fact that it works locally is just "luck", I will need to know which SPs/tables to grant access to. My guess would be all 3 SPs and not the table, but it would be good (again) to see some documentation that makes this explicit.
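
    As a sketch of the kind of grant involved: the procedure names beyond ELMAH_GetErrorsXml are assumed here to be the stock objects created by ELMAH's SQL script, and NETWORK SERVICE is assumed to have been added as a database user under that name; adjust to whatever actually exists in the database:

        GRANT EXECUTE ON OBJECT::dbo.ELMAH_GetErrorsXml TO [NT AUTHORITY\NETWORK SERVICE];
        GRANT EXECUTE ON OBJECT::dbo.ELMAH_GetErrorXml  TO [NT AUTHORITY\NETWORK SERVICE];
        GRANT EXECUTE ON OBJECT::dbo.ELMAH_LogError     TO [NT AUTHORITY\NETWORK SERVICE];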

    Read the article

  • Unable to execute stored Procedure using Java and JDBC

    - by jwmajors81
    I have been trying to execute a MS SQL Server stored procedure via JDBC today and have been unsuccessful thus far. The stored procedure has 1 input and 1 output parameter. With every combination I use when setting up the stored procedure call in code I get an error stating that the stored procedure couldn't be found. I have provided the stored procedure I'm executing below (NOTE: this is vendor code, so I cannot change it).

        set ANSI_NULLS ON
        set QUOTED_IDENTIFIER ON
        GO
        ALTER PROC [dbo].[spWCoTaskIdGen]
            @OutIdentifier int OUTPUT
        AS
        BEGIN
            DECLARE @HoldPolicyId int
            DECLARE @PolicyId char(14)
            IF NOT EXISTS ( SELECT * FROM UniqueIdentifierGen (UPDLOCK) )
                INSERT INTO UniqueIdentifierGen VALUES (0)
            UPDATE UniqueIdentifierGen SET CurIdentifier = CurIdentifier + 1
            SELECT @OutIdentifier = (SELECT CurIdentifier FROM UniqueIdentifierGen)
        END

    The code looks like:

        CallableStatement statement = connection.prepareCall("{call dbo.spWCoTaskIdGen(?)}");
        statement.setInt(1, 0);
        ResultSet result = statement.executeQuery();

    I get the following error: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'. I have also tried

        CallableStatement statement = connection.prepareCall("{? = call dbo.spWCoTaskIdGen(?)}");
        statement.registerOutParameter(1, java.sql.Types.INTEGER);
        statement.registerOutParameter(2, java.sql.Types.INTEGER);
        statement.executeQuery();

    The above results in: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'. I have also tried:

        CallableStatement statement = connection.prepareCall("{? = call spWCoTaskIdGen(?)}");
        statement.registerOutParameter(1, java.sql.Types.INTEGER);
        statement.registerOutParameter(2, java.sql.Types.INTEGER);
        statement.executeQuery();

    The code above resulted in the following error: Could not find stored procedure 'spWCoTaskIdGen'. Finally, I should also point out the following:

    1. I have used the MS SQL Server Management Studio tool and have been able to successfully run the stored procedure. The SQL generated to execute the stored procedure is provided below:

        GO
        DECLARE @return_value int, @OutIdentifier int
        EXEC @return_value = [dbo].[spWCoTaskIdGen] @OutIdentifier = @OutIdentifier OUTPUT
        SELECT @OutIdentifier as N'@OutIdentifier '
        SELECT 'Return Value' = @return_value
        GO

    2. The code being executed runs with the same user id that was used in point #1 above.

    3. In the code that creates the Connection object I log which database I'm connecting to and the code is connecting to the correct database.

    Any ideas? Thank you very much in advance.
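
    For reference, a quick sanity check that can be run over the same JDBC connection to confirm which database the session is actually in and whether the procedure is visible there (a sketch, nothing vendor-specific):

        SELECT DB_NAME() AS current_database,
               OBJECT_ID(N'dbo.spWCoTaskIdGen', N'P') AS proc_object_id;  -- NULL here means no procedure by that name is visible in this database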

    Read the article

  • SQL Select - adding field to Select is changing the results

    - by nycdan
    I'm stumped by this SQL problem that I suspect will be easy pickings for someone out there. I have a table that contains rows representing several daily lists of ranked items. The relevant fields are as follows: ID, ListID, ItemID, ItemName, ItemRank, Date. I have a query that returns the items that were on a list yesterday but not today (Items Off List) as follows:

        Select ItemID, ListID, ItemName, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list
        From Table
        Group By ItemID, ListID, ItemName
        Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1
        Order By ListID, ItemName, COUNT(ItemName)

    Basically I'm looking for records where the max date is yesterday. It works fine and shows the number of days each item was previously on the list (although not necessarily consecutively, but that's fine for now). The problem is when I try to add ranking to see what yesterday's rank was. I tried the following:

        Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list
        From Table
        Group By ItemID, ListID, ItemName, ranking
        Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1
        Order By ListID, ItemName, ranking, COUNT(ItemName)

    This returns a great deal more records than the previous query, so something isn't right with it. I want the same number of records, but with the ranking included. I can get the rank by doing a self-join with a subquery and getting records where the ItemID occurs yesterday but not today - but then I don't know how to get the Count any more. Appreciation in advance for any help with this.

    ======== SOLVED ==============

        Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list
        from Table T
        Where date = DATEADD("d",-1,convert(varchar(10),getdate(),101))
          and ListID = 1
          and T.ItemID Not In (select T.ItemID
                               from Table T
                               join Table T2 on T.ItemID = T2.ItemID and T.ListID = T2.ListID
                               where T.date = DATEADD("d",-1,convert(varchar(10),getdate(),101))
                                 and T2.date = convert(varchar(10),getdate(),101)
                                 and T.ListID = 1)
        Group by ItemID, ListID, ItemName, ranking

    Basically, what I did was create a subquery that finds all items that appear in both days, and then find items that appeared yesterday but are not in the set of items that appeared both days. Then I was able to do the aggregate function and grouping correctly. I would NOT be surprised if this is more convoluted than necessary, but I understand it and can modify it as needed, and performance doesn't seem to be an issue. Thanks everyone for the assist.

    Read the article

  • Testing for existence using SELECT WHERE HAVING and NOT HAVING in a grouped subset

    - by IanC
    I have data on which I need to count +1 if a particular condition exists or another condition doesn't exist. I'm using SQL Server 2008. I shred the following simplified sample XML into a temp table and validate it:

        <product type="1">
          <param type="1">
            <item mode="0" weight="1" />
          </param>
          <param type="2">
            <item mode="1" weight="1" />
            <item mode="0" weight="0.1" />
          </param>
          <param type="3">
            <item mode="1" weight="0.75" />
            <item mode="1" weight="0.25" />
          </param>
        </product>

    The validation in concern is the following rule: For each product type, for each param type, mode may be 0 & (1 || 2). In other words, there may be a 0(s), but then 1s or 2s are required, or there may be only 1(s) or 2(s). There cannot be only 0s, and there cannot be 1s and 2s. The only part I haven't figured out is how to detect if there are only 0s. This seems like a "not having" problem. The validation code (for this part):

        WITH t1 AS
        (
            SELECT SUM(t.ParamWeight) AS S, COUNT(1) AS C, t.ProductTypeID, t.ParamTypeID, t.Mode
            FROM @t AS t
            GROUP BY t.ProductTypeID, t.ParamTypeID, t.Mode
        ),
        ...
        UNION ALL
        SELECT TOP (1) 1 -- only mode 0 & (1 || 2) is allowed
        FROM t1
        WHERE t1.Mode IN (1, 2)
        GROUP BY t1.ProductTypeID, t1.ParamTypeID
        HAVING COUNT(1) > 1
        UNION ALL
        ...
        )
        SELECT @C = COUNT(1) FROM t2

    This will show if any mode 1s & 2s are mixed, but not if the group contains only a 0. I'm sure there is a simple solution, but it's evading me right now.

    EDIT: I thought of a "cheat" that works perfectly. I added the following to the above:

        SELECT TOP (1) 1 -- only mode 0 & (null || 1 || 2) is allowed
        FROM t1
        GROUP BY t1.ProductTypeID, t1.ParamTypeID
        HAVING SUM(t1.Mode) = 0

    However, I'd still like to know how to do this without cheating.
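
    A hedged sketch of the usual alternative to summing the mode values: count the non-zero modes per group with a CASE inside the HAVING clause (column names as above), which finds groups that contain only 0s without relying on the arithmetic of the mode codes:

        SELECT TOP (1) 1  -- groups that contain only mode 0
        FROM t1
        GROUP BY t1.ProductTypeID, t1.ParamTypeID
        HAVING SUM(CASE WHEN t1.Mode IN (1, 2) THEN 1 ELSE 0 END) = 0;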

    Read the article

  • Unable to add item to dataset in Linq to SQL

    - by Mike B
    I am having an issue adding an item to my dataset in Linq to SQL. I am using the exact same method in other tables with no problem. I suspect I know the problem but cannot find an answer (I also suspect all I really need is the right search term for Google). Please keep in mind this is a learning project (although it is in use in a business). I have posted my code and datacontext below.

    What I am doing is: create a view model (relevant bits are shown) and a simple WPF window that allows editing of 3 properties that are bound to the category object. Category is from the datacontext. Edit works fine but add does not. If I check GetChangeSet() just before the db.SubmitChanges() call there are no adds, edits or deletes. I suspect an issue with the fact that a Category added without a Subcategory would be an orphan, but I cannot seem to find the solution.

    Command code to open window:

        CategoryViewModel vm = new CategoryViewModel();
        AddEditCategoryWindow window = new AddEditCategoryWindow(vm);
        window.ShowDialog();

    ViewModel relevant stuff:

        public class CategoryViewModel : ViewModelBase
        {
            public Category category { get; set; }

            // Constructor used to Edit a Category
            public CategoryViewModel(Int16 categoryID)
            {
                db = new OITaskManagerDataContext();
                category = QueryCategory(categoryID);
            }

            // Constructor used to Add a Category
            public CategoryViewModel()
            {
                db = new OITaskManagerDataContext();
                category = new Category();
            }
        }

    The code for saving changes:

        // Don't close window unless all controls are validated
        if (!vm.IsValid(this)) return;
        var changes = vm.db.GetChangeSet(); // DEBUG
        try
        {
            vm.db.SubmitChanges(ConflictMode.ContinueOnConflict);
        }
        catch (ChangeConflictException)
        {
            vm.db.ChangeConflicts.ResolveAll(RefreshMode.KeepChanges);
            vm.db.SubmitChanges();
        }

    The XAML (edited for brevity):

        <TextBox Text="{Binding category.CatName, Mode=TwoWay, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" />
        <TextBox Text="{Binding category.CatDescription, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" />
        <CheckBox IsChecked="{Binding category.CatIsInactive, Mode=TwoWay}" />

    IssCategory in the Issues table is the old, text-based category. This field is no longer used and will be removed from the database as soon as this is working and pushed live.

    Read the article

  • linq to sql with nservicebus table lock issue

    - by IGoor
    I am building a system using NServiceBus and my data layer is using Linq 2 SQL. The system is made up of 2 services:

    1. Service1 receives messages from NSB. It will query Table1 in my database and insert a record into Table1. If a certain condition is met, a new NSB message is sent to the 2nd service.

    2. Service2 will also update records in Table1 when it receives messages from Service1, and it does some other non-database-related work. Service2 is a long-running process.

    The problem I am having is that the moment Service2 updates a record in Table1, the table is locked. The lock seems to be in place until Service2 has completed all its processing, i.e. the lock is not released after my datacontext is disposed. This causes the query in Service1 to time out. Once Service2 completes processing, Service1 resumes processing again without problem. So for example Service1 code may look like:

        int x = 0;
        using (DataContext db = new DataContext())
        {
            x = (from dp in db.Table1 select dp).Count(); // this line will timeout while service2 is processing Table1
            Table1 t = new Table1();
            t.Data = "test";
            db.Table1.InsertOnSubmit(t);
            db.SubmitChanges();
        }
        if (x % 50 == 0)
            CallService2();

    The code in service2 may look like:

        using (DataContext db = new DataContext())
        {
            Table1 t = db.Table1.Where(t => t.id == myId);
            t.Data = "updated";
            db.SubmitChanges();
        }
        // I would have expected the lock to have been released at this point, but this is not the case.
        DoSomeLongRunningTasks(); // lock will be released once service2 exits

    I don't understand why the lock is not released when the datacontext is disposed in Service2. To get around the problem I have been calling:

        db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");

    and this works, but I am not happy using it. I want to solve this problem properly. Has anyone experienced this sort of problem before, and does anyone know how to solve it? Why is the lock not released after the datacontext has been disposed? Thanks in advance.

    p.s. sorry for the extremely long post.
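
    For diagnosing this, a sketch of a query against the lock DMVs that shows which session is holding locks in the database while Service2 is running (run it in the same database; nothing here is specific to the code above):

        SELECT l.request_session_id, l.resource_type, l.request_mode, l.request_status,
               s.login_name, s.program_name
        FROM sys.dm_tran_locks AS l
        JOIN sys.dm_exec_sessions AS s ON s.session_id = l.request_session_id
        WHERE l.resource_database_id = DB_ID();   -- if Service2's session still appears here after SubmitChanges, an ambient transaction is likely holding the locks open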

    Read the article

  • Problems Enforcing Referential Integrity on SQL Server Tables

    - by SidC
    Hello All, I have a SQL Server 2005 database comprised of Customer, Quote, and QuoteDetail tables. I want/need to enforce referential integrity such that when an insert is made on QuoteDetail, the Quote and Customer tables are also affected. I have tried my best to set up primary/foreign keys on my tables but need some help. Here are the scripts for my tables as they stand now (please don't laugh):

    Customers:

        USE [Diel_inventory]
        GO
        /****** Object: Table [dbo].[Customers] Script Date: 05/08/2010 03:39:04 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Customers](
            [pkCustID] [int] IDENTITY(1,1) NOT NULL,
            [CompanyName] [nvarchar](50) NULL,
            [Address] [nvarchar](50) NULL,
            [City] [nvarchar](50) NULL,
            [State] [nvarchar](2) NULL,
            [ZipCode] [nvarchar](5) NULL,
            [OfficePhone] [nvarchar](12) NULL,
            [OfficeFAX] [nvarchar](12) NULL,
            [Email] [nvarchar](50) NULL,
            [PrimaryContactName] [nvarchar](50) NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([pkCustID] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    Quotes:

        USE [Diel_inventory]
        GO
        /****** Object: Table [dbo].[Quotes] Script Date: 05/08/2010 03:30:46 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[Quotes](
            [pkQuoteID] [int] IDENTITY(1,1) NOT NULL,
            [fkCustomerID] [int] NOT NULL,
            [QuoteDate] [timestamp] NOT NULL,
            [NeedbyDate] [datetime] NULL,
            [QuoteAmt] [decimal](6, 2) NOT NULL,
            [QuoteApproved] [bit] NOT NULL,
            [fkOrderID] [int] NOT NULL,
            CONSTRAINT [PK_Bids] PRIMARY KEY CLUSTERED ([pkQuoteID] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[Quotes] WITH CHECK ADD CONSTRAINT [fkCustomerID] FOREIGN KEY([fkCustomerID]) REFERENCES [dbo].[Customers] ([pkCustID])
        GO
        ALTER TABLE [dbo].[Quotes] CHECK CONSTRAINT [fkCustomerID]

    QuoteDetail:

        USE [Diel_inventory]
        GO
        /****** Object: Table [dbo].[QuoteDetail] Script Date: 05/08/2010 03:31:58 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[QuoteDetail](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [fkQuoteID] [int] NOT NULL,
            [fkCustomerID] [int] NOT NULL,
            [fkPartID] [int] NULL,
            [PartNumber1] [float] NOT NULL, [Qty1] [int] NOT NULL,
            [PartNumber2] [float] NULL, [Qty2] [int] NULL,
            [PartNumber3] [float] NULL, [Qty3] [int] NULL,
            [PartNumber4] [float] NULL, [Qty4] [int] NULL,
            [PartNumber5] [float] NULL, [Qty5] [int] NULL,
            [PartNumber6] [float] NULL, [Qty6] [int] NULL,
            [PartNumber7] [float] NULL, [Qty7] [int] NULL,
            [PartNumber8] [float] NULL, [Qty8] [int] NULL,
            [PartNumber9] [float] NULL, [Qty9] [int] NULL,
            [PartNumber10] [float] NULL, [Qty10] [int] NULL,
            [PartNumber11] [float] NULL, [Qty11] [int] NULL,
            [PartNumber12] [float] NULL, [Qty12] [int] NULL,
            [PartNumber13] [float] NULL, [Qty13] [int] NULL,
            [PartNumber14] [float] NULL, [Qty14] [int] NULL,
            [PartNumber15] [float] NULL, [Qty15] [int] NULL,
            [PartNumber16] [float] NULL, [Qty16] [int] NULL,
            [PartNumber17] [float] NULL, [Qty17] [int] NULL,
            [PartNumber18] [float] NULL, [Qty18] [int] NULL,
            [PartNumber19] [float] NULL, [Qty19] [int] NULL,
            [PartNumber20] [float] NULL, [Qty20] [int] NULL,
            CONSTRAINT [PK_QuoteDetail] PRIMARY KEY CLUSTERED ([ID] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_Customers] FOREIGN KEY ([fkCustomerID]) REFERENCES [dbo].[Customers] ([pkCustID])
        GO
        ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_Customers]
        GO
        ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_PartList] FOREIGN KEY ([fkPartID]) REFERENCES [dbo].[PartList] ([RecID])
        GO
        ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_PartList]
        GO
        ALTER TABLE [dbo].[QuoteDetail] WITH CHECK ADD CONSTRAINT [FK_QuoteDetail_Quotes] FOREIGN KEY([fkQuoteID]) REFERENCES [dbo].[Quotes] ([pkQuoteID])
        GO
        ALTER TABLE [dbo].[QuoteDetail] CHECK CONSTRAINT [FK_QuoteDetail_Quotes]

    Your advice/guidance on how to set these up so that the customer ID in Customers is the same as in Quotes (referential integrity), and that CustomerID is inserted on Quotes and Customers when an insert is made to QuoteDetail, would be much appreciated. Thanks, Sid
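
    Worth noting: foreign keys only validate that a parent row already exists; they never insert one for you. The usual pattern for "insert the parents too" is a small wrapper that inserts the Customer and Quote first and passes the generated keys down, all in one transaction. A sketch, using the key columns above; the procedure name and parameter list are illustrative only:

        CREATE PROCEDURE dbo.AddQuoteWithDetail
            @CompanyName nvarchar(50),
            @QuoteAmt    decimal(6,2),
            @PartNumber1 float,
            @Qty1        int
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRANSACTION;
                DECLARE @CustID int, @QuoteID int;

                INSERT INTO dbo.Customers (CompanyName) VALUES (@CompanyName);
                SET @CustID = SCOPE_IDENTITY();          -- key of the customer just created

                INSERT INTO dbo.Quotes (fkCustomerID, QuoteAmt, QuoteApproved, fkOrderID)
                VALUES (@CustID, @QuoteAmt, 0, 0);
                SET @QuoteID = SCOPE_IDENTITY();         -- key of the quote just created

                INSERT INTO dbo.QuoteDetail (fkQuoteID, fkCustomerID, PartNumber1, Qty1)
                VALUES (@QuoteID, @CustID, @PartNumber1, @Qty1);
            COMMIT TRANSACTION;
        END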

    Read the article

  • SQL Server architecture guidance

    - by Liam
    Hi, we are designing a new version of our existing product on a new schema. It's an internal web application with possibly 100 concurrent users (max). This will run on a SQL Server 2008 database. One of the discussion items recently is whether we should have a single database or split the database for performance reasons across 2 separate databases. The database could grow anywhere from 50-100GB over 5 years. We are developers and not DBAs, so it would be nice to get some general guidance. [I know the answer is not simple as it depends on the schema, archiving policy, amount of data etc.]

    Option 1: Single main database [this is my preferred option]. The plan would be to have all the tables in a single database and possibly to use file groups and partitioning to separate the data, if required, across multiple disks. [Use schemas if appropriate]. This should deal with the performance concerns. One of the comments wrt this was that a single server instance would still be processing this data, so there would still be a processing bottleneck. For reporting we could have a separate reporting DB, but this is still being discussed.

    Option 2: Split the database into 2 separate databases. DB1 - Customers, Accounts, Customer resources etc. DB2 - this would contain the bulk of the data [i.e. vehicle tracking data, financial transaction tables etc]. These tables would typically contain a lot of data. [It could reside on a separate server if required]. This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only transaction-type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive. [I'm aware that this option makes it harder for referential integrity to be enforced.]

    Points for consideration: as we are at the design stage, we can at least make proper use of indexes to deal with performance issues, so that's why option 1 is attractive to me and it's more of a standard approach. For both options we are considering implementing an archiving database.

    Apologies for the long question. In summary, the question is: 1 DB or 2? Thanks in advance, Liam
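
    For the filegroup/partitioning idea in Option 1, a minimal sketch of what that looks like in SQL Server 2008; the database, filegroup, file path and boundary dates are made-up placeholders:

        ALTER DATABASE MyProduct ADD FILEGROUP History;
        ALTER DATABASE MyProduct ADD FILE
            (NAME = 'MyProduct_History', FILENAME = 'E:\Data\MyProduct_History.ndf', SIZE = 10GB)
            TO FILEGROUP History;
        GO
        -- Partition large tables by date so older ranges land on the History filegroup
        CREATE PARTITION FUNCTION pfByYear (datetime)
            AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01');
        CREATE PARTITION SCHEME psByYear
            AS PARTITION pfByYear TO (History, History, [PRIMARY]);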

    Read the article

  • SQL-How to retrieve the correct data using php

    - by Programatt
    I am new to SQL so please excuse my question if it is simple. I have a database with a few tables. One is a users table; the others are application tables that contain the users' preferences for receiving notifications about that application based on the country they are interested in. What I want to do is retrieve the e-mail address of all users that have an interest in that country. I am struggling to think about how to do this. I currently have the following query constructed, and the code to populate the values:

        function check($string) {
            if (isset($_POST[$string])) {
                $print = implode(', ', $_POST[$string]); //Converts an array into a single string
                $imanageSQLArr = Array();
                if (substr_count($print,'Benelux') > 0) { $imanageSQLArr[0] = "checked"; } else { $imanageSQLArr[0] = "off"; }
                if (substr_count($print, 'France') > 0) { $imanageSQLArr[1] = "checked"; } else { $imanageSQLArr[1] = "off"; }
                if (substr_count($print, 'Germany') > 0) { $imanageSQLArr[2] = "checked"; } else { $imanageSQLArr[2] = "off"; }
                if (substr_count($print, 'Italy') > 0) { $imanageSQLArr[3] = "checked"; } else { $imanageSQLArr[3] = "off"; }
                if (substr_count($print, 'Netherlands') > 0) { $imanageSQLArr[4] = "checked"; } else { $imanageSQLArr[4] = "off"; }
                if (substr_count($print, 'Portugal') > 0) { $imanageSQLArr[5] = "checked"; } else { $imanageSQLArr[5] = "off"; }
                if (substr_count($print, 'Spain') > 0) { $imanageSQLArr[6] = "checked"; } else { $imanageSQLArr[6] = "off"; }
                if (substr_count($print, 'Sweden') > 0) { $imanageSQLArr[7] = "checked"; } else { $imanageSQLArr[7] = "off"; }
                if (substr_count($print, 'Switzerland') > 0) { $imanageSQLArr[8] = "checked"; } else { $imanageSQLArr[8] = "off"; }
                if (substr_count($print, 'UK') > 0) { $imanageSQLArr[9] = "checked"; } else { $imanageSQLArr[9] = "off"; }

    and the query:

        $tocheck = $db->prepare(
            "SELECT users.email FROM users,app
             WHERE users.id=app.userid
               AND BENELUX=:BENELUX AND FRANCE=:FRANCE AND GERMANY=:GERMANY AND ITALY=:ITALY
               AND NETHERLANDS=:NETHERLANDS AND PORTUGAL=:PORTUGAL AND SPAIN=:SPAIN
               AND SWEDEN=:SWEDEN AND SWITZERLAND=:SWITZERLAND AND UK=:UK");
        $tocheck->execute($country);
        $row = $tocheck->fetchAll();

    This does retrieve data, but only people whose preferences match EXACTLY what is put (so what they haven't selected is taken into account as much as what they have). Any help would be greatly appreciated.
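
    For comparison, a sketch of the kind of WHERE clause that matches a user when ANY selected country is set, rather than requiring every column to match. The column names and the 'checked' value follow the conventions in the query above, and in practice the OR list would be built up in PHP from only the boxes that were actually ticked:

        SELECT DISTINCT users.email
        FROM users
        JOIN app ON users.id = app.userid
        WHERE (app.FRANCE = 'checked' OR app.SPAIN = 'checked');  -- one OR term per country the sender selected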

    Read the article

  • Issue with SQL query for activity stream/feed

    - by blabus
    I'm building an application that allows users to recommend music to each other, and am having trouble building a query that would return a 'stream' of recommendations that involve both the user themselves, as well as any of the user's friends. This is my table structure:

        Recommendations
        ID   Sender   Recipient   [other columns...]
        --   ------   ---------   ------------------
        r1   u1       u3          ...
        r2   u3       u2          ...
        r3   u4       u3          ...

        Users
        ID   Email   First Name   Last Name   [other columns...]
        --   -----   ----------   ---------   ------------------
        u1   ...     ...          ...         ...
        u2   ...     ...          ...         ...
        u3   ...     ...          ...         ...
        u4   ...     ...          ...         ...

        Relationships
        ID    Sender   Recipient   Status     [other columns...]
        ---   ------   ---------   --------   ------------------
        rl1   u1       u2          accepted   ...
        rl2   u3       u1          accepted   ...
        rl3   u1       u4          accepted   ...
        rl4   u3       u2          accepted   ...

    So for user 'u4' (who is friends with 'u1'), I want to query for a 'stream' of recommendations relevant to u4. This stream would include all recommendations in which either the sender or recipient is u4, as well as all recommendations in which the sender or recipient is u1 (the friend). This is what I have for the query so far:

        SELECT * FROM recommendations
        WHERE recommendations.sender IN (
            SELECT sender FROM relationships WHERE recipient='u4' AND status='accepted'
            UNION
            SELECT recipient FROM relationships WHERE sender='u4' AND status='accepted')
        OR recommendations.recipient IN (
            SELECT sender FROM relationships WHERE recipient='u4' AND status='accepted'
            UNION
            SELECT recipient FROM relationships WHERE sender='u4' AND status='accepted')
        UNION
        SELECT * FROM recommendations
        WHERE recommendations.sender='u4' OR recommendations.recipient='u4'
        GROUP BY recommendations.id
        ORDER BY datecreated DESC

    Which seems to work, as far as I can see (I'm no SQL expert). It returns all of the records from the Recommendations table that would be 'relevant' to a given user. However, I'm now having trouble also getting data from the Users table as well. The Recommendations table has the sender's and recipient's ID (foreign keys), but I'd also like to get the first and last name of each as well. I think I require some sort of JOIN, but I'm lost on how to proceed, and was looking for help on that. (And also, if anyone sees any areas for improvement in my current query, I'm all ears.) Thanks!
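
    A sketch of the double join to Users described above; the columns are assumed here to be called first_name and last_name, since the table layout only shows them as "First Name"/"Last Name":

        SELECT r.*,
               s.first_name AS sender_first_name,    s.last_name AS sender_last_name,
               t.first_name AS recipient_first_name, t.last_name AS recipient_last_name
        FROM recommendations AS r
        JOIN users AS s ON s.id = r.sender      -- join Users once for the sender
        JOIN users AS t ON t.id = r.recipient;  -- and again, under a second alias, for the recipient

    The same pair of joins can be wrapped around the existing stream query by treating it as a derived table.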

    Read the article

  • two sql queries in one place

    - by Luke
    <?php
        $results = mysql_query("SELECT * FROM ".TBL_SUB_RESULTS." WHERE user_submitted != '$_SESSION[username]'
            AND home_user = '$_SESSION[username]' OR away_user = '$_SESSION[username]' ");
        $num_rows = mysql_num_rows($results);
        if ($num_rows > 0) {
            while( $row = mysql_fetch_assoc($results)) {
                extract($row);
                $q = mysql_query("SELECT name FROM ".TBL_FRIENDLY." WHERE id = '$ccompid'");
                while( $row = mysql_fetch_assoc($q)) {
                    extract($row);
    ?>
        <table cellspacing="10" style='border: 1px dotted' width="300" bgcolor="#eeeeee">
          <tr>
            <td><b><? echo $name; ?></b></td>
          </tr><tr>
            <td width="100"><? echo $home_user; ?></td>
            <td width="50"><? echo $home_score; ?></td>
            <td>-</td>
            <td width="50"><? echo $away_score; ?></td>
            <td width="100"><? echo $away_user; ?></td>
          </tr><tr>
            <td colspan="2"><A HREF="confirmresult.php?fixid=<? echo $fix_id; ?>">Accept / Decline</a></td>
          </tr></table><br>
    <?
                }
            }
        } else {
            echo "<b>You currently have no results awaiting confirmation</b>";
        }
    ?>

    I am trying to run two queries as you can see, but they aren't both working. Is there a better way to structure this? I am having a brain freeze! Thanks. OOOH, by the way, my SQL won't stay in this form! I will protect it afterwards.
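
    One alternative worth sketching is to fold the name lookup into the first query with a JOIN, so each row comes back with its friendly name in a single round trip. The table names sub_results and friendly stand in for whatever TBL_SUB_RESULTS and TBL_FRIENDLY actually resolve to, and the literal 'username' stands in for the session value; note the parentheses around the OR, which the original query is missing:

        SELECT r.*, f.name
        FROM sub_results AS r
        JOIN friendly AS f ON f.id = r.ccompid
        WHERE r.user_submitted != 'username'
          AND (r.home_user = 'username' OR r.away_user = 'username');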

    Read the article

  • Why is it giving me an SQL syntax error?

    - by Tibo
    Do you have any idea why I get this:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
        for the right syntax to use near '``, `title` varchar(255) collate latin1_general_ci NOT NULL default ``,' at line 3

    The code is like this (the part I'm having a problem with):

        $sql = 'CREATE TABLE `forum` (
            `postid` bigint(20) NOT NULL auto_increment,
            `author` varchar(255) collate latin1_general_ci NOT NULL default ``,
            `title` varchar(255) collate latin1_general_ci NOT NULL default ``,
            `post` mediumtext collate latin1_general_ci NOT NULL,
            `showtime` varchar(255) collate latin1_general_ci NOT NULL default ``,
            `realtime` bigint(20) NOT NULL default `0`,
            `lastposter` varchar(255) collate latin1_general_ci NOT NULL default ``,
            `numreplies` bigint(20) NOT NULL default `0`,
            `parentid` bigint(20) NOT NULL default `0`,
            `lastrepliedto` bigint(20) NOT NULL default `0`,
            `author_avatar` varchar(30) collate latin1_general_ci NOT NULL default `default`,
            `type` varchar(2) collate latin1_general_ci NOT NULL default `1`,
            `stick` varchar(6) collate latin1_general_ci NOT NULL default `0`,
            `numtopics` bigint(20) NOT NULL default `0`,
            `cat` bigint(20) NOT NULL,
            PRIMARY KEY (`postid`)
        );';
        mysql_query($sql,$con) or die(mysql_error());

    Help would be greatly appreciated!
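
    For comparison, MySQL expects default values to be literals (string defaults in single quotes, numeric defaults bare), not wrapped in backticks, which it reserves for identifiers. A cut-down sketch of the same columns with the usual syntax (table name here is illustrative):

        CREATE TABLE `forum_example` (
            `author`   varchar(255) collate latin1_general_ci NOT NULL default '',
            `realtime` bigint(20) NOT NULL default 0
        );

    Since the statement above sits inside a single-quoted PHP string, the '' would need escaping (\'\') or the PHP string switched to double quotes.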

    Read the article

  • Why can't we use strong ref cursor with dynamic SQL Statement?

    - by Vineet
    Hi all, I am trying to use a strong REF CURSOR with a dynamic SQL statement but it gives an error, while a weak cursor works. Please explain the reason, and please point me to any Oracle documentation that covers how compilation and parsing are done in the Oracle server. This is the error along with the code:

        ERROR at line 6:
        ORA-06550: line 6, column 7:
        PLS-00455: cursor 'EMP_REF_CUR' cannot be used in dynamic SQL OPEN statement
        ORA-06550: line 6, column 2:
        PL/SQL: Statement ignored

        declare
            type ref_cur_type IS REF CURSOR RETURN employees%ROWTYPE; --Creating a strong REF cursor; employees is a table
            emp_ref_cur ref_cur_type;
            emp_rec employees%ROWTYPE;
        BEGIN
            OPEN emp_ref_cur FOR 'SELECT * FROM employees';
            LOOP
                FETCH emp_ref_cur INTO emp_rec;
                EXIT WHEN emp_ref_cur%NOTFOUND;
            END LOOP;
        END;

    Read the article

  • Spatial data in the UK

    - by simonsabin
    I am just loving the fact that the Ordnance Survey has now released a huge amount of data that can be used freely. I've downloaded the Panorama (tm) data http://www.ordnancesurvey.co.uk/oswebsite/products/land-form-panorama-contours/index.html , which is all the contours for the UK. This I've loaded into SQL Server using Safe Computing's FME ( http://www.safe.com/ ). This is because the data is an AutoCAD DXF file and translating that to SQL Server spatial data is not easy. The FME workbench is not...(read more)

    Read the article

  • SQL Saturday Richmond, VA

    - by Mike
    Very excited to announce that I'll be holding 2 sessions at SQL Saturday in VA on April 10th. If there are any frequent readers of SQLTeam.com attending, please make sure to say hi! Topics I'm covering are partitioning & loading data in real time, and an introduction to performance tuning. Hope to see you there! SQL Saturday Richmond Schedule

    Read the article

  • Working with Temporal Data in SQL Server

    - by Dejan Sarka
    My third Pluralsight course, Working with Temporal Data in SQL Server, is published. I am really proud of the second part of the course, where I discuss optimization of temporal queries. This was a nearly impossible task for decades; the first solutions appeared only recently. I present six solutions altogether (and one more that is not a solution), and I invented four of them. http://pluralsight.com/training/Courses/TableOfContents/working-with-temporal-data-sql-server

    Read the article

  • Making Use of Plan Explorer in my own Environment

    - by Jonathan Kehayias
    Back in October 2010, I briefly blogged about the SQL Sentry Plan Explorer in my blog post wrap-up for SQL Bits 7 and how impressed I was with the Alpha demo I saw from Greg Gonzalez ( Blog | Twitter ) while I was at SQLBits 7 in York.  To be 100% honest and transparent, Greg gave me early access to this tool after discussing it at SQLBits 7, and I had the opportunity to test a number of pre-Beta releases, where I was able to offer significant feedback and submit bugs in the...(read more)

    Read the article

  • Some thoughts on the Virtualization Feedback in the SSWUG Newsletters

    - by Jonathan Kehayias
    Last Thursday, March 25, 2010, the topic of virtualization of SQL Server came up in the SSWUG Newsletter, with Steven Wynkoop asking if people's perceptions and experiences have changed since the last time he covered virtualizing SQL Server.  I unfortunately missed the last coverage of this topic, but it appears from the newsletter that there was a general consensus that a low-traffic solution could be fine, but if you had a heavy-hitting application, the net advice was to avoid a virtual environment...(read more)

    Read the article
