Search Results

Search found 40744 results on 1630 pages for 'sql interview questions a'.


  • Sql 2008 r2 DEV install failed

    - by obulay
    I am trying to install SQL 2008 R2 Developer on Windows 2008 STD. The machine previously had SQL 2008 Enterprise installed, which I first uninstalled. I think the uninstall still leaves some crap in the registry and in Program Files. Installation fails in the last step of the process (the actual installation), a couple of minutes into it. I can't insert a picture for you because Server Fault does not allow me to, but the error message is basically: "Installation failed". I re-installed SQL 2008 Enterprise on this box, and we are not going to go to R2 on older servers that previously had SQL 2005 or SQL 2008 on them. Looking at the Windows log:

       Activation context generation failed for "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\SQLServer2008R2\x64\Microsoft.SqlServer.Configuration.SqlServer_ConfigExtension.dll". Dependent Assembly Microsoft.VC80.CRT,processorArchitecture="amd64",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="8.0.50727.4027" could not be found. Please use sxstrace.exe for detailed diagnosis.

    Read the article

  • Allow and restrict remote sql server access

    - by Michel
    Hi, I want to expose my SQL Server instance via the internet. I've been programming ASP.NET against SQL Server for a long time, but for the first time I'm hosting the SQL Server myself instead of on the client's server. So what I want to do is move my SQL Server from my dev machine at home to a virtual server (yet to hire). But of course I don't want just anyone to be able to reach my SQL Server, only a few people. So what I was thinking was to allow only a few IP addresses to access the SQL Server instance. Can anyone tell me how I can expose my SQL Server to the internet and limit access to the instance to only a few IP addresses? And ehm, if you know even better ways to secure it, I'd be happy, because this is the first time for me :) Michel
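    For reference, the first line of defense is usually the server's firewall (open port 1433 only to the allowed addresses); inside SQL Server itself, a logon trigger can additionally refuse connections by client address. A minimal sketch, assuming SQL Server 2005 SP2 or later; the IP list is illustrative, and be careful not to lock yourself out:

       CREATE TRIGGER trg_LimitByIP
       ON ALL SERVER
       FOR LOGON
       AS
       BEGIN
           DECLARE @ip varchar(48);
           SET @ip = EVENTDATA().value('(/EVENT_INSTANCE/ClientHost)[1]', 'varchar(48)');
           -- '<local machine>' is what local/shared-memory connections report
           IF @ip NOT IN ('<local machine>', '203.0.113.10', '203.0.113.11')
               ROLLBACK; -- refuse the connection
       END;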

    Read the article

  • Is there a way to delay compilation of a stored procedure's execution plan?

    - by Ian Henry
    (At first glance this may look like a duplicate of http://stackoverflow.com/questions/421275 or http://stackoverflow.com/questions/414336, but my actual question is a bit different.) Alright, this one's had me stumped for a few hours. My example here is ridiculously abstracted, so I doubt it will be possible to recreate locally, but it provides context for my question (also, I'm running SQL Server 2005). I have a stored procedure with basically two steps: constructing a temp table and populating it with very few rows, then querying a very large table joining against that temp table. It has multiple parameters, but the most relevant is a datetime "@MinDate". Essentially:

       create table #smallTable (ID int)

       insert into #smallTable
       select (a very small number of rows from some other table)

       select *
       from aGiantTable
       inner join #smallTable on #smallTable.ID = aGiantTable.ID
       inner join anotherTable on anotherTable.GiantID = aGiantTable.ID
       where aGiantTable.SomeDateField > @MinDate

    If I just execute this as a normal query, by declaring @MinDate as a local variable and running that, it produces an optimal execution plan that executes very quickly (first joins on #smallTable and then only considers a very small subset of rows from aGiantTable while doing other operations). It seems to realize that #smallTable is tiny, so it would be efficient to start with it. This is good. However, if I make that a stored procedure with @MinDate as a parameter, it produces a completely inefficient execution plan. (I am recompiling it each time, so it's not a bad cached plan... at least, I sure hope it's not.) But here's where it gets weird. If I change the proc to the following:

       declare @LocalMinDate datetime
       set @LocalMinDate = @MinDate --where @MinDate is still a parameter

       create table #smallTable (ID int)

       insert into #smallTable
       select (a very small number of rows from some other table)

       select *
       from aGiantTable
       inner join #smallTable on #smallTable.ID = aGiantTable.ID
       inner join anotherTable on anotherTable.GiantID = aGiantTable.ID
       where aGiantTable.SomeDateField > @LocalMinDate

    Then it gives me the efficient plan! So my theory is this: when executing as a plain query (not as a stored procedure), it waits to construct the execution plan for the expensive query until the last minute, so the query optimizer knows that #smallTable is small and uses that information to give the efficient plan. But when executing as a stored procedure, it creates the entire execution plan at once, thus it can't use this bit of information to optimize the plan. But why does using the locally declared variables change this? Why does that delay the creation of the execution plan? Is that actually what's happening? If so, is there a way to force delayed compilation (if that indeed is what's going on here) even when not using local variables in this way? More generally, does anyone have sources on when the execution plan is created for each step of a stored procedure? Googling hasn't provided any helpful information, but I don't think I'm looking for the right thing. Or is my theory just completely unfounded?

    Edit: Since posting, I've learned of parameter sniffing, and I assume this is what's causing the execution plan to compile prematurely (unless stored procedures indeed compile all at once), so my question remains -- can you force the delay? Or disable the sniffing entirely? The question is academic, since I can force a more efficient plan by replacing the select * from aGiantTable with

       select * from
       (select * from aGiantTable
        where ID in (select ID from #smallTable)) as aGiantTable

    or just sucking it up and masking the parameters, but still, this inconsistency has me pretty curious.
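    For reference, one hedged workaround on SQL Server 2005 is a statement-level recompile hint: it forces the plan for just the expensive statement to be rebuilt at execution time, after #smallTable has been populated, so the optimizer sees the temp table's real row count. A minimal sketch, reusing the query from the question:

       select *
       from aGiantTable
       inner join #smallTable on #smallTable.ID = aGiantTable.ID
       inner join anotherTable on anotherTable.GiantID = aGiantTable.ID
       where aGiantTable.SomeDateField > @MinDate
       option (recompile) -- recompile this one statement on every run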

    Read the article

  • Managing Scripts in Oracle SQL Developer

    - by thatjeffsmith
    You backup your databases, right? You backup your home computer – your media collection, tax documents, bank accounts, etc., right? You backup your handy-dandy SQL scripts, right? Ok, now that I’ve got your head nodding, I want to answer a question I get every so often: How can I manage my scripts in SQL Developer? This is an interesting question. First, it assumes that one SHOULD manage their scripts in their IDE. Now, what I think the question generally gets around to is, how can we:

     - Navigate to our scripts
     - Open them
     - Execute them

     What a good IDE should have is an interface to your existing Version Control System (VCS). SQL Developer supports both Subversion and Git out of the box. You can also download an extension via check-for-updates to get support for CVS. Now, what I’m about to show you COULD be done without versioning and controlling your scripts – but I want to ask you why you wouldn’t want to do this? So, I’m going to proceed and assume that you do INDEED version your scripts already.

     Seeing what scripts you’ve already got in your repository

     This is very straightforward – just open the Team Versions panel, then connect to your repository. It shows you the files in your source control system. Now, I could ‘preview’ said file right away. If I open the file from here, we get a temp file copied down from the server to the local machine. This is a local temp copy of the controlled script – I can read/execute it, but not write to it. And that might be all you need. But if your script calls other scripts, then you’re going to want to check out the server copy of your stuff down to your local SVN working copy directory. That way, when your script calls another script, you’re executing the PRODUCTION APPROVED copies of said scripts. And if you do SPOOL or other file I/O stuff, it will work as expected. To get to those said client copies of your scripts…

     Enter the Files Panel

     The Files panel is accessible from the View menu. You can get to your files one of two ways. If you’ve touched the file recently, you can see it under the Recent tree. Otherwise, you can navigate to your local ‘checked out’ copies of your script(s). Open your local copies, see what’s changed, etc. And I can access the change history and see what’s been touched… What changes am I going to ‘push out’ if I commit this back to the server? Most of us work on teams, yes? This panel also gives me a heads up if someone else is making changes to the same file. I can see the ‘incoming’ changes as well.

     To Sum It Up…

     If I want to get a script to run:

     - do a full get to your local directory
     - open the script(s)

     The Files panel will tell you if your local copy is out of date from the server, and if you have made local changes you’ve forgotten to commit back up to the server and your fellow teammates. Now, if you’re the selfish type and don’t want to share, that’s fine. But you should still be backing up your scripts, and you can still use the Files panel to manage your scripts.

    Read the article

  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been made to the SQL queries in the cube's data source view (DSV). This can be a problem, as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one by one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

     The Trick

     When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV’s XML and obtain the SQL from that. This is when the breakthrough happened. Inspecting the DSV’s XML, I saw that the things I was interested in were called: TableType, DbTableName, FriendlyName, QueryDefinition. Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously and made the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

       using System;
       using System.Data;
       using System.IO;
       using Microsoft.AnalysisServices;

       ... class code removed for clarity

       // connect to the OLAP server
       Server olapServer = new Server();
       olapServer.Connect(config.olapServerName);
       if (olapServer != null)
       {
           // connected to server ok, so obtain reference to the OLAP database
           Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
           if (olapDatabase != null)
           {
               Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
                   config.olapDatabaseName, config.olapServerName));

               // export SQL from each data source view (usually only one, but can be many!)
               foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
               {
                   Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                   // for each table in the DSV, export the SQL in a file
                   foreach (DataTable dt in dsv.Schema.Tables)
                   {
                       Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                       // get name of the table in the DSV;
                       // use the FriendlyName as the user inputs this and therefore has control of it
                       string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                       string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                       // delete the sql file if it exists
                       ... file deletion code removed for clarity

                       // write out the SQL to a file
                       if (dt.ExtendedProperties["TableType"].ToString() == "View")
                       {
                           File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                       }
                       if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                       {
                           File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                       }
                   }
               }
               Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
           }
       }

     Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility is of limited practical value, unless of course you are inheriting a project and want to check whether someone did the implementation correctly.

    Read the article

  • Nested SQL Select statement fails on SQL Server 2000, ok on SQL Server 2005

    - by Jay
    Here is the query:

       INSERT INTO @TempTable
       SELECT UserID, Name,
           Address1 = (SELECT TOP 1 [Address]
                       FROM (SELECT TOP 1 [Address], uo.AddressOrder
                             FROM [UserAddress] ua
                             INNER JOIN UserAddressOrder uo ON ua.UserID = uo.UserID
                             WHERE ua.UserID = u.UserID
                             ORDER BY uo.AddressOrder ASC) q
                       ORDER BY AddressOrder DESC),
           Address2 = (SELECT TOP 1 [Address]
                       FROM (SELECT TOP 2 [Address], uo.AddressOrder
                             FROM [UserAddress] ua
                             INNER JOIN UserAddressOrder uo ON ua.UserID = uo.UserID
                             WHERE ua.UserID = u.UserID
                             ORDER BY uo.AddressOrder ASC) q
                       ORDER BY AddressOrder DESC)
       FROM [User] u

    In this scenario, users have multiple address definitions, with an integer field specifying the preferred order. "Address2" (the second preferred address) attempts to take the top two preferred addresses, order them descending, then take the top one from the result. You might say: just use a subquery which does a SELECT for the record with "2" in the Order field; but the Order values are not contiguous. How can this be rewritten to conform to SQL 2000's limitations? Very much appreciated.
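    For reference, one rewrite that avoids correlated derived tables (which SQL Server 2000 rejects) ranks each address by counting how many of the user's addresses sort at or before it. A minimal sketch under the question's own join, assuming AddressOrder values are unique per user; rank 2 stands in for Address2:

       Address2 = (SELECT ua.[Address]
                   FROM [UserAddress] ua
                   INNER JOIN UserAddressOrder uo ON ua.UserID = uo.UserID
                   WHERE ua.UserID = u.UserID
                     AND (SELECT COUNT(*)
                          FROM UserAddressOrder uo2
                          WHERE uo2.UserID = uo.UserID
                            AND uo2.AddressOrder <= uo.AddressOrder) = 2)

    The count-based rank works even when the AddressOrder values are not contiguous.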

    Read the article

  • Putting data from local SQL database to remote SQL database without remote SQL access enabled (PHP)

    - by Shyam
    Hi, I have a local database, and all the tables are defined. Eventually I need to publish my data remotely, which I can do easily with phpMyAdmin. The problem, however, is that my remote host doesn't allow remote SQL connections at all, so writing a script that does a mysqldump and runs it through a client (which would've been ideal) won't help me here. Since the schema won't change, but the data will, I need some kind of PHP client that works in "reverse". My question is whether such a client exists and what would be recommended to use (from experience). I just need a one-way trip here, from my local database (Rails) to the remote database (supports PHP), preferably as simple and slick as possible. Thank you for your replies, comments and feedback!

    Read the article

  • linq to sql -> join

    - by ile
    SQL query:

       SELECT ArticleCategories.Title AS Category,
              Articles.Title, Articles.[Content], Articles.Date
       FROM ArticleCategories
       INNER JOIN Articles ON ArticleCategories.CategoryID = Articles.CategoryID

    Object repository:

       public class ArticleRepository
       {
           private DB db = new DB();

           //
           // Query Methods

           public IQueryable<Article> FindAllArticles()
           {
               var result = from category in db.ArticleCategories
                            join article in db.Articles
                                on category.CategoryID equals article.CategoryID
                            select new
                            {
                                CategoryID = category.CategoryID,
                                CategoryTitle = category.Title,
                                ArticleID = article.ArticleID,
                                ArticleTitle = article.Title,
                                ArticleDate = article.Date,
                                ArticleContent = article.Content
                            };
               return result;
           }
           ....
       }

    And finally, I get this error:

       Error 1: Cannot implicitly convert type 'System.Linq.IQueryable<anonymous type>' to 'System.Linq.IQueryable<Article>'. An explicit conversion exists (are you missing a cast?) C:\Documents and Settings\ilija\My Documents\Visual Studio 2008\Projects\CMS\CMS\Models\ArticleRepository.cs 29 20 CMS

    Any idea what I did wrong? Thanks, Ile

    Read the article

  • SQL Server 2008 DBNETLIB error

    - by Joseph
    Hi all. Our ASP.NET applications are getting the error below:

       [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied

    But I can connect with Enterprise Manager, Management Studio and Query Analyzer without any issue. These applications had been running without any issue for a long time; for the last week we have been getting this error. If we restart the server it works, then the error comes back after 3 to 4 hours. We are running on Windows 2003 server. I have been searching and haven't found a solution yet. If anybody knows anything about this error, please post the details to resolve it. Thank you in advance, Joseph

    Read the article

  • Updating Xml attributes with new values in a SQL Server 2008 table

    - by SMD
    I have a table in SQL Server 2008 that has some columns. One of these columns is in XML format and I want to update some attributes. For example, my XML column's name is XmlText and its value in the first 5 rows is such as:

       <Identification Name="John" Family="Brown" Age="30" />
       <Identification Name="Smith" Family="Johnson" Age="35" />
       <Identification Name="Jessy" Family="Albert" Age="60" />
       <Identification Name="Mike" Family="Brown" Age="23" />
       <Identification Name="Sarah" Family="Johnson" Age="30" />

    and I want to change all Age attributes that are 30 to 40, such as below:

       <Identification Name="John" Family="Brown" Age="40" />
       <Identification Name="Smith" Family="Johnson" Age="35" />
       <Identification Name="Jessy" Family="Albert" Age="60" />
       <Identification Name="Mike" Family="Brown" Age="23" />
       <Identification Name="Sarah" Family="Johnson" Age="40" />
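    For reference, if the column is the xml data type, XML DML can do this in place. A minimal sketch with an illustrative table name (a varchar column would first need converting to xml):

       UPDATE dbo.MyTable
       SET XmlText.modify('replace value of (/Identification/@Age)[1] with "40"')
       WHERE XmlText.value('(/Identification/@Age)[1]', 'int') = 30;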

    Read the article

  • Table fragmentation in SQL server

    - by Joseph
    Hi, does anybody have an idea about table fragmentation in SQL Server (not index fragmentation)? We have a table; it is the main table, and it doesn't store any data permanently: data comes in and goes out continuously. There is no index on it, because only insert and delete statements run against it frequently. Recently we faced a huge delay in responses from this table. If we selected anything, it took 2 to 5 minutes to return a result, even though there were very few rows. In the end we dropped and recreated the table, and now it's working fine. I'd appreciate any comments on how this happens. Joseph
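    For reference, a heap like this (no indexes at all) can accumulate near-empty pages that every scan still has to read, which would explain the symptoms. A minimal diagnostic sketch, with illustrative database and table names:

       SELECT index_type_desc, page_count, record_count, avg_page_space_used_in_percent
       FROM sys.dm_db_index_physical_stats(
            DB_ID('MyDb'), OBJECT_ID('MyDb.dbo.MainTable'), 0, NULL, 'DETAILED');
       -- index_id 0 = the heap; a large page_count alongside a tiny
       -- record_count means the table is mostly empty pages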

    Read the article

  • To store images in SQL or not?

    - by Brett
    Generally, I had thought it was always better to store images in the filesystem and link to them via a database entry. However, I am trying to optimize my DB design and have a few questions. My images are all really small thumbnails in black and white (not greyscale, but true B&W) and are 70x70 in size. If we take such an image (which is basically a 2D array of 1s and 0s), it can be stored as binary data of approximately 600 bytes each. So my question is whether querying the 600 bytes stored in a DB would be faster than querying a link followed by accessing the filesystem, assuming there are a lot of "image" queries being made. Does anyone have any experience with this area? If it matters, I am using MySQL and MonetDB (separately, but I have the same question for both). Many thanks, Brett

    Read the article

  • How to get MAX value of a version-number (varchar) column in T-SQL

    - by Ogre Psalm33
    I have a table defined like this:

       Column:    Version        Message
       Type:      varchar(20)    varchar(100)
       ----------------------------------------
       Row 1:     2.2.6          Message 1
       Row 2:     2.2.7          Message 2
       Row 3:     2.2.12         Message 3
       Row 4:     2.3.9          Message 4
       Row 5:     2.3.15         Message 5

    I want to write a T-SQL query that will get the message for the MAX version number, where the "Version" column represents a software version number. I.e., 2.2.12 is greater than 2.2.7, and 2.3.15 is greater than 2.3.9, etc. Unfortunately, I can't think of an easy way to do that without using CHARINDEX or some complicated other split-like logic. Running this query:

       SELECT MAX(Version) FROM my_table

    will yield the erroneous result 2.3.9, when it should really be 2.3.15. Any bright ideas that don't get too complex?
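    For reference, one fairly tidy approach leans on PARSENAME, which splits a dotted string into up to four parts. A minimal sketch, assuming every version has exactly three numeric parts:

       SELECT TOP 1 Version, Message
       FROM my_table
       ORDER BY CAST(PARSENAME(Version, 3) AS int) DESC,  -- major
                CAST(PARSENAME(Version, 2) AS int) DESC,  -- minor
                CAST(PARSENAME(Version, 1) AS int) DESC;  -- build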

    Read the article

  • C# WinForms & SQL bindings

    - by vent
    I have a MSSQL database with many tables and relations. I want to bind it to my C# GUI application with text boxes and combo boxes. Which is the best way to do it: (1) everything done manually with my own classes and methods: import data into a dataset or into many separate datatables, then create dictionaries for the combo boxes, manually push data into the text boxes, and build a SQL statement if an update action is invoked; or (2) done automatically by VS: connect my controls with BindingSource and BindingNavigator if needed. I know how to deal with the first one, but if the second way would be better, my question is: how do I achieve it? I haven't dealt with automatically created data bindings, and I don't know how to tell the DataBinding property (Key|Value|Text) about relations and the current row's ID. And what is the method to cancel/update changes in all GUI elements? I need a simple solution for this scenario. It's a quick-and-dirty academic project, which must only "work" and be implemented as fast as possible. Thanks

    Read the article

  • Microsoft SQL 2005 - using the modulo operator

    - by cc0
    So I have a silly problem. I have not used much MSSQL before, or any SQL for that matter. I basically have a minor mathematical problem that I need solved, and I thought modulo would be good. I have a number of dates in the database, but I need them to be rounded off to the closest [dynamic integer] (which could be anything from 0 to 5000000) that will be input as a parameter each time this query is called. So I thought I'd use modulo to find the remainder, then subtract that remainder from the date. If there is a better way, or an integrated function, please let me know! What would be the syntax for that? I've tried a lot of things, but I keep getting error messages saying integers/floats/decimals can't be used with the modulo operator. I tried casting to all kinds of numeric datatypes. Any help would be appreciated.
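    For reference, one way around the type errors is to turn the date into an integer first with DATEDIFF, apply the modulo there, and turn the result back into a date with DATEADD. A minimal sketch, assuming the parameter is a number of seconds and the column is a datetime (names are illustrative):

       DECLARE @Interval int;
       SET @Interval = 3600; -- e.g. round down to the hour

       SELECT DATEADD(second,
                      DATEDIFF(second, '20000101', SomeDate)
                    - DATEDIFF(second, '20000101', SomeDate) % @Interval,
                      '20000101') AS RoundedDate
       FROM dbo.MyTable;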

    Read the article

  • Send bulk email from SQL Server 2008

    - by dbnewbie
    Hi, I am relatively new to databases, so please forgive me if my query sounds trivial. I need to send bulk email to all the contacts listed in a particular table. The body of the email is stored in a .doc file which begins "Dear Mr. _". The SQL Server will read this file, insert the last name of the contact in the blank space, and send the email to the corresponding email address for that last name. Any thoughts on how to proceed with this? Any insights, suggestions or tips will be greatly appreciated. Thanks!
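    For reference, SQL Server 2008's Database Mail can send the messages one by one from a cursor. A minimal sketch, assuming a configured mail profile; the profile, table and column names are illustrative, and the inline template text stands in for the .doc contents:

       DECLARE @template nvarchar(max), @body nvarchar(max),
               @email varchar(255), @lastName varchar(100);
       SET @template = N'Dear Mr. ___, ...'; -- in practice, read from the document

       DECLARE contact_cur CURSOR FOR
           SELECT EmailAddress, LastName FROM dbo.Contacts;
       OPEN contact_cur;
       FETCH NEXT FROM contact_cur INTO @email, @lastName;
       WHILE @@FETCH_STATUS = 0
       BEGIN
           SET @body = REPLACE(@template, N'___', @lastName);
           EXEC msdb.dbo.sp_send_dbmail
                @profile_name = 'MyMailProfile',
                @recipients   = @email,
                @subject      = 'Hello',
                @body         = @body;
           FETCH NEXT FROM contact_cur INTO @email, @lastName;
       END;
       CLOSE contact_cur;
       DEALLOCATE contact_cur;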

    Read the article

  • Compound IDENTITY column in SQL SERVER 2008

    - by Asaf R
    An Orders table has a CustomerId column and an OrderId column. For certain reasons it's important that OrderId is no longer than 2 bytes. There will be several million orders in total, which makes 2 bytes not enough. A customer will have no more than several thousand orders, making 2 bytes enough. The obvious solution is to have (CustomerId, OrderId) be unique rather than (OrderId) itself. The problem is generating the next OrderId for a given customer, preferably without creating a separate table for each customer (even if it contains only an IDENTITY column), in order to keep the upper layers of the solution simple. Q: How would you generate the next OrderId so that (CustomerId, OrderId) is unique but OrderId itself is allowed to have repetitions? Does SQL Server 2008 have built-in functionality for doing this? For lack of a better term, I'm calling it a compound IDENTITY column.
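    For reference, SQL Server 2008 has no built-in compound IDENTITY (as far as I know), but a keyed MAX+1 inside a transaction serializes per customer. A minimal sketch with illustrative names, where the lock hints keep two sessions from allocating the same value:

       DECLARE @CustomerId int, @NextOrderId smallint;
       SET @CustomerId = 42;

       BEGIN TRAN;
       SELECT @NextOrderId = ISNULL(MAX(OrderId), 0) + 1
       FROM dbo.Orders WITH (UPDLOCK, HOLDLOCK)
       WHERE CustomerId = @CustomerId;

       INSERT INTO dbo.Orders (CustomerId, OrderId)
       VALUES (@CustomerId, @NextOrderId);
       COMMIT;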

    Read the article

  • SQL Server Query solution cum Suggestion Required

    - by Nirmal
    Hello All... I have the following scenario in my SQL Server 2005 database. The zipcodes table has the following fields and values (just a sample):

       zipcode    latitude    longitude
       -------    --------    ---------
       65201      123.456     456.789
       65203      126.546     444.444

    and the place table has the following fields and values:

       id    name    zip      latitude    longitude
       --    ----    ---      --------    ---------
       1     abc     65201    NULL        NULL
       2     def     65202    NULL        NULL
       3     ghi     65203    NULL        NULL
       4     jkl     65204    NULL        NULL

    Now, my requirement is: I want to match the zip codes in the place table against the zipcodes table and fill in the available latitude and longitude fields. Some zip codes have no entry in the zipcodes table, so those should remain NULL. And the major issue is that I have more than 5,000,000 records in my DB, so the query needs to cope with that. I have tried some solutions but unfortunately am not getting proper output. Any help would be appreciated...
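    For reference, a set-based UPDATE ... FROM handles this in one pass; a minimal sketch using the question's table names (rows whose zip has no match are simply not touched, so their latitude/longitude stay NULL):

       UPDATE p
       SET p.latitude  = z.latitude,
           p.longitude = z.longitude
       FROM place p
       INNER JOIN zipcodes z ON z.zipcode = p.zip;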

    Read the article

  • Accessing Active Directory Role Membership through LDAP using SQL Server 2005

    - by David Neale
    I would like to get a list of Active Directory users along with the security groups they are members of, using SQL Server 2005 linked servers. I have the query working to retrieve records, but I'm not sure how to access the memberOf attribute (it is a multi-valued LDAP attribute). I have this temporary table to store the information:

       DROP TABLE #ADUSERGROUPS
       CREATE TABLE #ADUSERGROUPS
       (
           sAMAccountName varchar(30),
           UserGroup varchar(50)
       )

    Each group/user association should be one row. This is my SELECT statement:

       SELECT sAMAccountName, memberOf
       FROM OpenQuery(ADSI, '<LDAP://hqdc04/DC=nt,DC=avs>; (&(objectClass=User)(sAMAccountName=9695)(sn=*)(mail=*)(userAccountControl=512)); sAMAccountName,memberOf;subtree')

    I get this error message:

       OLE DB error trace [OLE/DB Provider 'ADSDSOObject' IRowset::GetData returned 0x40eda: Data status returned from the provider: [COLUMN_NAME=memberOf STATUS=DBSTATUS_E_CANTCONVERTVALUE], [COLUMN_NAME=sAMAccountName STATUS=DBSTATUS_S_OK]].
       Msg 7346, Level 16, State 2, Line 2
       Could not get the data of the row from the OLE DB provider 'ADSDSOObject'. Could not convert the data value due to reasons other than sign mismatch or overflow.
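    For reference, the ADSI provider cannot materialize multi-valued attributes such as memberOf in the rowset (hence the DBSTATUS_E_CANTCONVERTVALUE), but it can filter on them. One common workaround is to query one group at a time and tag the rows yourself; a minimal sketch, with an illustrative group DN:

       INSERT INTO #ADUSERGROUPS (sAMAccountName, UserGroup)
       SELECT sAMAccountName, 'Domain Admins'
       FROM OpenQuery(ADSI, '<LDAP://hqdc04/DC=nt,DC=avs>; (&(objectClass=User)(memberOf=CN=Domain Admins,CN=Users,DC=nt,DC=avs)); sAMAccountName;subtree')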

    Read the article

  • Return SQL Query as Array in Powershell

    - by Emo
    I have a SQL 2008 Ent server with the databases "DBOne", "DBTwo", "DBThree" on the server DEVSQLSRV. Here is my PowerShell script:

       $DBNameList = (Invoke-SQLCmd -query "select Name from sysdatabases" -Server DEVSQLSRV)

    This produces my desired list of database names:

       Name
       -----
       DBOne
       DBTwo
       DBThree

    It has been my assumption that anything that is returned as a list is an array in PowerShell. However, when I then try this in PowerShell:

       $DBNameList -contains 'DBTwo'

    it comes back as "False" instead of "True", which leads me to believe that my list is not an actual array. Any idea what I'm missing here? Thanks so much! Emo

    Read the article

  • searching for address matches using sql server 2008 full text search

    - by Sridhar
    Hi, I am not sure how to search for address matches using SQL Server 2008 full text search. This is what I tried, but it doesn't return any records.

       TableA
       ------
       Address1
       Address2
       City
       State
       Zip

    All the above columns in the table are full-text indexed. Let's say the user enters "123 Apple street FL 33647" and I have a record in the table with Address1 = "123", Address2 = "Apple street", City = "Tampa", State = "FL" and Zip = "33647". I would like the query to return this. Can you please let me know how I would do this? Query tried:

       SELECT *
       FROM TableA
       WHERE CONTAINS((Address1, Address2, City, State, zip),
                      N'FORMSOF(THESAURUS, 123AppleStreetFL33647)');

    If I put spaces in the search term, it gives a syntax error. Thanks, sridhar.
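    For reference, CONTAINS over a column list requires the whole condition to match within a single column, so terms split across Address1, City, etc. never qualify. One common approach is to concatenate the fields into a single indexed computed column; a minimal sketch, with the quoting needed once spaces are involved (wrap the columns in ISNULL if they allow NULLs):

       ALTER TABLE TableA ADD FullAddress AS
           (Address1 + ' ' + Address2 + ' ' + City + ' ' + State + ' ' + Zip) PERSISTED;

       -- after adding FullAddress to the full-text index, AND the words together:
       SELECT *
       FROM TableA
       WHERE CONTAINS(FullAddress, '"123" AND "Apple" AND "street" AND "FL" AND "33647"');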

    Read the article

  • Multiple aggregate functions in one SQL query from the same table using different conditions

    - by Eric Ness
    I'm working on creating a SQL query that will pull records from a table based on the value of two aggregate functions. These aggregate functions are pulling data from the same table, but with different filter conditions. The problem that I run into is that the results of the SUMs are much larger than if I only include one SUM function. I know that I can create this query using temp tables, but I'm just wondering if there is an elegant solution that requires only a single query. I've created a simplified version to demonstrate the issue. Here are the table structures:

       EMPLOYEE TABLE
       EMPID
       1
       2
       3

       ABSENCE TABLE
       EMPID    DATE        HOURS_ABSENT
       1        6/1/2009    3
       1        9/1/2009    1
       2        3/1/2010    2

    And here is the query:

       SELECT E.EMPID
             ,SUM(ATOTAL.HOURS_ABSENT) AS ABSENT_TOTAL
             ,SUM(AYEAR.HOURS_ABSENT) AS ABSENT_YEAR
       FROM EMPLOYEE E
       INNER JOIN ABSENCE ATOTAL ON ATOTAL.EMPID = E.EMPID
       INNER JOIN ABSENCE AYEAR ON AYEAR.EMPID = E.EMPID
       WHERE AYEAR.DATE > '1/1/2010'
       GROUP BY E.EMPID
       HAVING SUM(ATOTAL.HOURS_ABSENT) > 10
           OR SUM(AYEAR.HOURS_ABSENT) > 3

    Any insight would be greatly appreciated.
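    For reference, the double join multiplies the rows (each ATOTAL row pairs with each AYEAR row per employee), which is why the SUMs balloon. The usual single-query fix joins the table once and moves the filter into a CASE inside the second SUM; a minimal sketch against the question's tables:

       SELECT E.EMPID
             ,SUM(A.HOURS_ABSENT) AS ABSENT_TOTAL
             ,SUM(CASE WHEN A.DATE > '1/1/2010' THEN A.HOURS_ABSENT ELSE 0 END) AS ABSENT_YEAR
       FROM EMPLOYEE E
       INNER JOIN ABSENCE A ON A.EMPID = E.EMPID
       GROUP BY E.EMPID
       HAVING SUM(A.HOURS_ABSENT) > 10
           OR SUM(CASE WHEN A.DATE > '1/1/2010' THEN A.HOURS_ABSENT ELSE 0 END) > 3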

    Read the article

  • Autoincrementing hierarchical IDs on SQL Server

    - by Ville Koskinen
    Consider this table on SQL Server:

       wordID    aliasID    value
       ===========================
       0         0          'cat'
       1         0          'dog'
       2         0          'argh'
       2         1          'ugh'

    wordID is the id of a word which possibly has aliases. aliasID identifies a specific alias of a word. So above, 'argh' and 'ugh' are aliases of each other. I'd like to be able to insert new words which do not have any aliases without having to query the table for a free wordID value first. Inserting the value 'hey' without specifying a wordID or an aliasID, a row looking like this would be created:

       wordID    aliasID    value
       ===========================
       3         0          'hey'

    Is this possible, and how?
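    For reference, without an IDENTITY on wordID the INSERT can compute the next value itself; a minimal sketch with an illustrative table name, where the lock hints keep two concurrent inserts from picking the same wordID:

       INSERT INTO dbo.Words (wordID, aliasID, value)
       SELECT ISNULL(MAX(wordID), -1) + 1, 0, 'hey'
       FROM dbo.Words WITH (TABLOCKX, HOLDLOCK);
       -- ISNULL handles the empty-table case, yielding wordID 0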

    Read the article

  • SQL Server 2008 database mirroring madness

    - by Dmitri Nesteruk
    I'm trying to get database mirroring to work on SQL Server 2008 between two computers. I checked connectivity, and everything works, but here's what I end up with: on the principal machine, the server can connect to the mirror but refuses to set up a mirroring partnership, claiming it is 'unable to connect'. Weirder things have happened on the mirror. First, the mirror now thinks it's being mirrored. Second, after I delete and restore the mirrored database, it goes into Restoring... mode and just gets stuck there. Any ideas you might have on this are appreciated. Thanks!
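    For reference, the mirror copy is supposed to remain in Restoring... until the partnership is established, so that part is expected. A minimal sketch of the usual sequence, with illustrative names and endpoints:

       -- on the mirror server:
       RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb.bak' WITH NORECOVERY;
       RESTORE LOG MyDb FROM DISK = 'C:\Backups\MyDb.trn' WITH NORECOVERY;
       ALTER DATABASE MyDb SET PARTNER = 'TCP://principal.example.com:5022';

       -- then on the principal server:
       ALTER DATABASE MyDb SET PARTNER = 'TCP://mirror.example.com:5022';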

    Read the article
