Search Results

Search found 14739 results on 590 pages for 'mssql 2008'.


  • Why don’t Office applications get focus when run from another application on Win 2008?

    - by JohanK
    I have several COM Interop examples that, when run on Windows 2008 (Office 2007), always open minimized in the task bar. On Windows 2003 or XP they open the way I want them to. Have there been any changes to how Windows deals with this? Or to Office? I know that I can close windows with CTRL-SHIFT-ALT, and thereby get them to start maximized next time, but for some dialogs this doesn’t work. Is there any way to make them always open maximized or on top? Any clues that could point me in the right direction would be helpful. I have tried with both our VB6 app and a test app in C#.

    Read the article

  • How do I use SQL Server 2008 with Visual Studio?

    - by acidzombie24
    It's a two-part question: how do I use SQL Server 2008, and how do I use it with Visual Studio? I started up a dummy project and, with Server Explorer, tried Create New SQL Server Database and Add Connection, using my computer name (it came from a dropdown) as the server location. When I tried to create the database 'TestDB1' I got an error. I don't understand why. It's a fresh install and I have restarted the computer a few times since then. I haven't messed with Visual Studio or the servers, or even the control options, to disable anything that would have been automatic. So what's with this? -edit- My goals are: 1) create a database; 2) be able to see all the databases that exist on the server; 3) execute SQL queries in the IDE; 4) be able to browse tables. I don't need all of these, but as many as possible would be nice.
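
    If the IDE keeps failing, it can help to prove the instance works outside Visual Studio first. A minimal sketch (assuming you can open a query window against the instance, e.g. via SSMS or sqlcmd; the database name is the one from the question), covering goals 1 and 2:

        -- Goal 1: create the database
        CREATE DATABASE TestDB1;
        GO
        -- Goal 2: list every database that exists on the server
        SELECT name, create_date FROM sys.databases ORDER BY name;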

    Read the article

  • Connecting Visual Studio 2008 SP1 to TFS 2010

    - by Enrique Lima
    Introduction

    You have installed Team Foundation Server 2010 and you are ready to go. Your client is Visual Studio 2008 SP1, and it needs to connect to TFS 2010. Here is the story: the steps to configure Team Explorer are almost the same … meaning, you will open Visual Studio, then go to Team Explorer. At that point you will Add an Existing Project; this is where we connect to TFS. Except, we get an error. Now what?!? We need to install the Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010. Where to get it? From the TFS 2010 installation media, or from Microsoft’s Download Center.

    Update Installation

    We arrive at the Welcome screen for the update; click Next. Next comes the license screen; accept the license by selecting the checkbox, then click Next. The installation process starts at that point. Once it completes, click Finish.

    Second Try

    Time to attempt to connect again. We are back to working with Team Explorer, and adding an existing project. There is a formula to be successful with this:

        protocol://servername:port/tfs/<name of collection>

    where protocol = http or https; servername = your TFS 2010 server; port = 8080 by default, or the custom port you are using; /tfs = I am assuming the default path too; <name of collection> = the name of the collection that was provisioned. Once the values are provided, click OK, then Close. At this point you should see a listing of projects available within the TFS 2010 collection. Select the project and click OK. You will now see it listed in Team Explorer.
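
    For example (the server name here is hypothetical; DefaultCollection is the name TFS 2010 gives the first collection it provisions):

        http://tfsserver2010:8080/tfs/DefaultCollection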

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs every last drop of performance it can get; other times, not so much. We’re in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was that “local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess”. What that means is you should avoid code like this in your stored procs, even though it seems such an intuitively good idea:

        DECLARE @StartDate datetime
        SET @StartDate = '20091125'
        SELECT * FROM Orders WHERE OrderDate = @StartDate

    Instead, you should reference the value directly in the WHERE clause so the Query Optimizer can create a better execution plan:

        SELECT * FROM Orders WHERE OrderDate = '20091125'

    My first thought about this one was that we reference variables in the form of passed-in parameters in WHERE clauses in many of our stored procs. Not to worry, though, because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor’s session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
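
    To see the distinction, a minimal sketch (the procedure name is just for illustration; the Orders table is the one from the post): wrap the same predicate in a stored procedure parameter, and the optimizer can sniff the value at compile time instead of guessing:

        -- Unlike a local variable, @StartDate here IS visible to the
        -- Query Optimizer when the plan is compiled.
        CREATE PROCEDURE dbo.GetOrdersByDate
            @StartDate datetime
        AS
        SELECT * FROM Orders WHERE OrderDate = @StartDate;
        GO
        EXEC dbo.GetOrdersByDate @StartDate = '20091125';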

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts). If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance. There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong. Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan. Today, I am going to look at a case where poor cardinality estimation is Microsoft’s fault, and not yours.

    SQL Server 2005

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007). There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified. The cardinality estimate of 45 rows at the Index Seek is exactly correct. The table is not very large, there are up-to-date statistics associated with the index, so this is as expected.

    The estimate for the Key Lookup is also exactly right. Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row. The plan shows that the Key Lookup is expected to be executed 45 times. The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates. Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here. All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan. The optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup. This is a good optimization – it makes sense to filter rows out as early as possible. Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date. Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right! This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total).
    Workarounds

    One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem, of course):

        CREATE INDEX nc1
        ON Production.TransactionHistory (ProductID)
        INCLUDE (TransactionDate);

    With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected. We could also force the use of the ultimate covering index (the clustered one):

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th WITH (INDEX(1))
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    Summary

    Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal. Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong. It’s not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world) but it easily can be much worse, and there’s not much you can do about it. Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item.

    © 2012 Paul White – All Rights Reserved. twitter: @SQL_Kiwi email: [email protected]
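
    One quick way to see the estimates discussed above for yourself (a minimal sketch; SET STATISTICS PROFILE adds estimated and actual row counts to the query output):

        SET STATISTICS PROFILE ON;
        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';
        SET STATISTICS PROFILE OFF;
        -- Compare the EstimateRows and Rows columns for the Key Lookup operator.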

    Read the article

  • Microsoft SQL Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this grouping. In my case, I have used various versions of Microsoft’s SQL Server (2000, 2005, and 2008 R2) and only recently learned how valuable the system databases really are, when I was preparing to deliver a lecture on "SQL Server 2008 R2 System Databases".

    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. These system databases individually provide specific functionality that allows MS SQL Server to function.

        Name          Database File              Log File
        Master        master.mdf                 mastlog.ldf
        Resource      mssqlsystemresource.mdf    mssqlsystemresource.ldf
        Model         model.mdf                  modellog.ldf
        MSDB          msdbdata.mdf               msdblog.ldf
        Distribution  distmdl.mdf                distmdl.ldf
        TempDB        tempdb.mdf                 templog.ldf

    Master Database: If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, due to the information that it stores. Examples of data stored in the Master database: user logins, linked servers, configuration information, and information on user databases.

    Resource Database: Honestly, I never knew this database even existed until I started to research SQL Server system databases, largely because the Resource database is hidden from users. In fact, its database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all system objects that can be accessed by all other databases. In short, it contains all the system views and stored procedures that appear in every user database regarding system information. One of the many benefits of storing the system views and stored procedures in a single hidden database is that it simplifies upgrading a SQL Server instance; not to mention that maintenance is decreased, since only one code base has to be maintained for all of the system views and procedures.

    Model Database: The Model database, as the name implies, is the model for all new databases created by users. This allows for predefining default database objects for all new databases within an MS SQL Server instance. For example, if every database created by a user needs to have an “Audit” table when it is created, then defining the “Audit” table in the model guarantees that the table will be located in every new database created after the model is altered.

    MSDB Database: The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server itself. The SQL Server Agent uses this database to store job configurations and SQL job schedules, along with SQL alerts and operators. In addition, this database also stores all SQL job parameters along with each job’s execution history. Finally, it is also used to store database backup and maintenance plans, as well as details pertaining to SQL log shipping if it is being used.

    Distribution Database: The Distribution database is only used during replication and stores metadata and history information pertaining to the act of replicating data. Furthermore, when transactional replication is used, this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.

    Tempdb Database: The Tempdb, as the name implies, is used to store temporary data and data objects; examples include temp tables and temp stored procedures. It is important to note that all data and data objects in this database are cleared when SQL Server restarts. This database is also used by SQL Server when it is performing some internal operations. Typically, SQL Server uses this database for large sort and index operations. Finally, this database is used to store row versions if row versioning or snapshot-isolation transactions are being used by SQL Server.

    Additionally, I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.
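
    A quick way to see most of these on your own instance (a minimal sketch; the Resource database will not appear because it is hidden, but its version can be checked via SERVERPROPERTY):

        -- master, tempdb, model and msdb always occupy database_id 1-4
        SELECT database_id, name, create_date
        FROM sys.databases
        WHERE database_id <= 4;

        -- the hidden Resource database only reveals itself indirectly:
        SELECT SERVERPROPERTY('ResourceVersion') AS ResourceDbVersion;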

    Read the article

  • How to use Windows Authentication with SQL Server 2008 Express on a workgroup network?

    - by mbadawi23
    I have two computers running SQL Server 2008 Express: c01 and c02. I set up both for remote connections using Windows Authentication. It worked fine for c02 but not for c01. This is the error message I'm getting:

        TITLE: Connect to Server
        Cannot connect to ACAMP001\SQLEXPRESS.
        ADDITIONAL INFORMATION:
        Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18452&LinkId=20476
        BUTTONS: OK

    I don't know if I'm missing something; here is what I did:
    - Enabled the TCP/IP protocol for clients from SQL Server Configuration Manager.
    - Modified the Windows Firewall exceptions for the respective ports.
    - Started the SQL Browser service as a local service.
    - Added the Windows user to this group: "SQLServerMSSQLUser$c01$SQLEXPRESS".
    - From Management Studio, added "SQLServerMSSQLUser$c01$SQLEXPRESS" to the SQLEXPRESS instance's logins under the Security folder, and granted sysadmin permissions to it.
    - Restarted c01\SQLEXPRESS.
    - Restarted the SQL Browser service.

    There is no domain here; it's only a workgroup. Any help is appreciated, thank you.
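
    One more hedged thing to try (my assumption, not from the original post): in a workgroup, Windows Authentication generally requires the same user name and password to exist as a local account on both machines; the matching local account can then be granted access explicitly. A sketch with a hypothetical account name:

        -- On c01, grant the matching local account access (SQL Server 2008
        -- era syntax: sp_addsrvrolemember rather than ALTER SERVER ROLE):
        CREATE LOGIN [c01\SomeUser] FROM WINDOWS;
        EXEC sp_addsrvrolemember @loginame = N'c01\SomeUser', @rolename = N'sysadmin';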

    Read the article

  • SQL Server 2008: Comparing similar records - Need to still display an ID for a record when the JOIN has no matches

    - by aleppke
    I'm writing a SQL Server 2008 report that will compare genetic test results for animals. A genetic test consists of an animalId, a gene, and a result. Not all animals will have the same genes tested, but I need to be able to display the results side by side for a given set of animals and only include the genes that are present for at least one of the selected animals. My TestResult table has the following data in it:

        animalId  gene  result
        1         a     CC
        1         b     CT
        1         d     TT
        2         a     CT
        2         b     CT
        2         c     TT
        3         a     CT
        3         b     TT
        3         c     CC
        3         d     CC
        3         e     TT

    I need to generate a result set that looks like the following. Note that animal 3 is not being displayed (the user doesn't want to see its results), and neither are results for gene "e", since neither animal 1 nor animal 2 has a result for that gene:

        SireID  SireResult  CalfID  CalfResult  Gene
        1       CC          2       CT          a
        1       CT          2       CT          b
        1       NULL        2       TT          c
        1       TT          2       NULL        d

    But I can only manage to get this:

        SireID  SireResult  CalfID  CalfResult  Gene
        1       CC          2       CT          a
        1       CT          2       CT          b
        NULL    NULL        2       TT          c
        1       TT          NULL    NULL        d

    This is the query I'm using:

        SELECT sire.animalId AS 'SireID'
              ,sire.result AS 'SireResult'
              ,calf.animalId AS 'CalfID'
              ,calf.result AS 'CalfResult'
              ,sire.gene AS 'Gene'
        FROM (SELECT s.animalId, s.result, m1.gene
              FROM (SELECT animalId, result, gene
                    FROM TestResult
                    WHERE animalId IN (1)) s
              FULL JOIN (SELECT DISTINCT gene
                         FROM TestResult
                         WHERE animalId IN (1, 2)) m1
                ON s.gene = m1.gene) sire
        FULL JOIN (SELECT c.animalId, c.result, m2.gene
                   FROM (SELECT animalId, result, gene
                         FROM TestResult
                         WHERE animalId IN (2)) c
                   FULL JOIN (SELECT DISTINCT gene
                              FROM TestResult
                              WHERE animalId IN (1, 2)) m2
                     ON c.gene = m2.gene) calf
          ON sire.gene = calf.gene

    How do I get the SireIDs and CalfIDs to display their values when they don't have a record associated with a particular gene? I was thinking of using COALESCE, but I can't figure out how to specify the correct animalId to pass in. Any help would be appreciated.
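
    A hedged sketch of one way to fill the IDs in (my assumption, not from the original post): because the sire and calf IDs are pinned by the filters, COALESCE can fall back to those literals, and the gene can come from whichever side matched. A plain full join of the two animals' rows seems to produce the desired shape:

        SELECT COALESCE(sire.animalId, 1)     AS 'SireID'
              ,sire.result                    AS 'SireResult'
              ,COALESCE(calf.animalId, 2)     AS 'CalfID'
              ,calf.result                    AS 'CalfResult'
              ,COALESCE(sire.gene, calf.gene) AS 'Gene'
        FROM (SELECT animalId, result, gene
              FROM TestResult WHERE animalId = 1) AS sire
        FULL JOIN (SELECT animalId, result, gene
                   FROM TestResult WHERE animalId = 2) AS calf
          ON sire.gene = calf.gene;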

    Read the article

  • Visual Studio 2008 project file does not load because of an unexpected encoding change.

    - by Xenan
    In our team we have a database project in Visual Studio 2008 which is under source control in Team Foundation Server. Every two weeks or so, after one co-worker checks in, the project file won't load on the other developers' machines. The error message is:

        The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.

    When I look at the project file in Notepad++, the file looks like this:

        ??<NUL?NULxNULmNULlNUL NULvNULeNULrNULsNULiNULoNULnNUL ...

    and so on (you can see <?xml version in this), whereas a normal project file looks like:

        <?xml version="1.0" encoding="utf-16"?>
        ...

    So probably something is wrong with the encoding of the file. This is a problem for us because it turns out to be impossible to get the file encoding correct again. The 'solution' is to throw away the project file and get the last known working version from source control. According to the file, the encoding should be UTF-16. According to Notepad++, the corrupted file is actually UTF-8. My questions are: Why is Visual Studio messing up the encoding of the project file, apparently at random times and on random machines? What should we do to prevent this? When it has happened, is there a possibility to restore the current file in the correct encoding instead of pulling an older version from source control? As a last note: the problem is with one single project file; all other project files don't exhibit this problem.

    Read the article

  • Create an SQL Express 2008 database in C# code, but login fails when trying to connect with a sysadmin account

    - by Andrés Gonzales
    I have a piece of code that creates a SQL Server Express 2008 database at runtime, and then tries to connect to it to execute a database initialization script in Transact-SQL. The code that creates the database is the following:

        private void CreateDatabase()
        {
            using (var connection = new SqlConnection(
                "Data Source=.\\sqlexpress;Initial Catalog=master;" +
                "Integrated Security=true;User Instance=True;"))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    command.CommandText = "CREATE DATABASE " + m_databaseFilename +
                        " ON PRIMARY (NAME=" + m_databaseFilename +
                        ", FILENAME='" + this.m_basePath + m_databaseFilename + ".mdf')";
                    command.ExecuteNonQuery();
                }
            }
        }

    The database is created successfully. After that, I try to connect to the database to run the initialization script, using the following code:

        private void ExecuteQueryFromFile(string filename)
        {
            string queryContent = File.ReadAllText(m_filePath + filename);
            this.m_connectionString = string.Format(
                @"Server=.\SQLExpress; Integrated Security=true;Initial Catalog={0};",
                m_databaseFilename);
            using (var connection = new SqlConnection(m_connectionString))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    command.CommandText = queryContent;
                    command.CommandTimeout = 0;
                    command.ExecuteNonQuery();
                }
            }
        }

    However, the connection.Open() statement fails, throwing the following exception:

        Cannot open database "TestData" requested by the login. The login failed.
        Login failed for user 'MYDOMAIN\myusername'.

    I am completely puzzled by this error, because the account I am trying to connect with has sysadmin privileges, which should allow me to connect to any database (notice that I use a connection to the master database to create the database in the first place).

    Read the article

  • SQL Server 2008: need something like a crosstab query on an XML column?

    - by user1332896
    <abc id="abc1">
      <def id="def1">
        <ghi att='ghi1'>
          <mn id="0742d2ea" name="RF" dt="0" df="3" ty="0" />
          <mn id="64d9a11b" name="CJ" dt="0" df="3" ty="0" />
          <mn id="db72d154" name="FJ" dt="2" df="4" ty="0" />
          <mn id="39af9fa1" name="BS" dt="0" df="2" ty="0" />
        </ghi>
        <jkl att='jkl1'>
          <mn id="0742d2ea" name="RF" dt="1" gl="19" />
          <mn id="64d9a11b" name="CJ" dt="0" gl="6" />
          <mn id="db72d154" name="FJ" dt="0" gl="0" />
          <mn id="39af9fa1" name="BS" dt="0" gl="12" />
          <mn id="ac4f566f" name="DJ" dt="0" gl="9" />
          <mn id="4bf3ba2f" name="RP" dt="0" gl="16" />
          <mn id="db1af021" name="SC" dt="1" gl="10" />
          <mn id="c4c93a2d" name="DN" dt="1" gl="15" />
        </jkl>
      </def>
    </abc>

    I need this output. Is this possible in SQL Server 2008?

        id        name  ghiDT  ghiDF  ghiTY  jklDT  jklGL
        0742d2ea  RF    0      3      0      1      19
        64d9a11b  CJ    0      3      0      0      6
        db72d154  FJ    2      4      0      0      0
        39af9fa1  BS    0      2      0      0      12
        ac4f566f  DJ    0      0      0      0      9
        4bf3ba2f  RP    0      0      0      0      16
        db1af021  SC    0      0      0      1      10
        c4c93a2d  DN    0      0      0      1      15
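
    A hedged sketch of one way to do this (my assumption: the XML lives in an xml variable @x; the same shredding works on an xml column via CROSS APPLY). Shred each branch with nodes()/value(), then full-join the two sets on the id attribute:

        DECLARE @x xml = N'...';  -- the sample document from the question

        WITH ghi AS (
            SELECT m.value('@id',   'varchar(20)') AS id,
                   m.value('@name', 'varchar(10)') AS name,
                   m.value('@dt', 'int') AS ghiDT,
                   m.value('@df', 'int') AS ghiDF,
                   m.value('@ty', 'int') AS ghiTY
            FROM @x.nodes('/abc/def/ghi/mn') AS t(m)
        ), jkl AS (
            SELECT m.value('@id',   'varchar(20)') AS id,
                   m.value('@name', 'varchar(10)') AS name,
                   m.value('@dt', 'int') AS jklDT,
                   m.value('@gl', 'int') AS jklGL
            FROM @x.nodes('/abc/def/jkl/mn') AS t(m)
        )
        SELECT COALESCE(g.id, j.id)     AS id,
               COALESCE(g.name, j.name) AS name,
               ISNULL(g.ghiDT, 0) AS ghiDT,
               ISNULL(g.ghiDF, 0) AS ghiDF,
               ISNULL(g.ghiTY, 0) AS ghiTY,
               ISNULL(j.jklDT, 0) AS jklDT,
               ISNULL(j.jklGL, 0) AS jklGL
        FROM ghi AS g
        FULL JOIN jkl AS j ON j.id = g.id;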

    Read the article

  • How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio?

    - by adamjford
    In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.) This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, it truncates them at 43,679 characters. I've read in various locations on the Internet that you can set Maximum Characters Retrieved for XML Data in Tools > Options > Query Results > SQL Server > Results To Grid to Unlimited, and then perform a query such as this:

        select Convert(xml, Details) from Events where EventID = 13920

    (Note that the data in the column is not XML at all. CONVERTing the column to XML is merely a workaround I found from Googling that someone else has used to get around the limit SSMS has on retrieving data from a text or varchar(MAX) column.) However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error:

        Unable to show XML. The following error happened:
        Unexpected end of file has occurred. Line 5, position 220160.
        One solution is to increase the number of characters retrieved from the server for XML data. To change this setting, on the Tools menu, click Options.

    So, any idea on how to access this data? Would converting the column to varchar(MAX) fix my woes?
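
    For what it's worth, a hedged sketch of the conversion the poster is considering (my assumption about the fix, not something from the article; test on a copy first, since altering a large text column rewrites data):

        ALTER TABLE Events ALTER COLUMN Details varchar(MAX);
        -- SSMS grid limits still apply for display, but the full value
        -- can now be read programmatically; a quick size check:
        SELECT DATALENGTH(Details) AS TotalBytes
        FROM Events WHERE EventID = 13920;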

    Read the article

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for an SQL Server Express instance running a single database? I have an SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs, copy them across the network, and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
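
    A hedged sketch of the "poor man's log shipping" step described above (all names are hypothetical; it assumes the database uses the full recovery model and the standby was seeded with a full backup restored WITH NORECOVERY; Express supports BACKUP/RESTORE but has no Agent, so a batch file driven by Windows Task Scheduler would run these):

        -- On the production server, every 5 minutes:
        BACKUP LOG MyAppDb TO DISK = N'\\standby\share\MyAppDb.trn' WITH INIT;

        -- On the standby server, apply each log and stay restorable:
        RESTORE LOG MyAppDb FROM DISK = N'\\standby\share\MyAppDb.trn' WITH NORECOVERY;

        -- At failover time, bring the standby copy online:
        RESTORE DATABASE MyAppDb WITH RECOVERY;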

    Read the article

  • Why do I get a security warning in Visual Studio 2008 when creating a project?

    - by MikeG
    This is the error; it's basically a security warning. (Here's the text grabbed off the dialog box:)

        Security Warning for WindowsApplication4
        The WindowsApplication4 project file has been customized and could present a security risk by executing custom build steps when opened in Microsoft Visual Studio. If this project came from an untrustworthy source, it could cause damage to your computer or compromise your private information.
        More Details
        Project load options
        - Load project for browsing: Opens the project in Microsoft Visual Studio with increased security. This option allows you to browse the contents of the project, but some functionality, such as IntelliSense, is restricted. When a project is loaded for browsing, actions such as building, cleaning, publishing, or opening designers could still remain unsafe.
        - Load project normally: Opens the project normally in Microsoft Visual Studio. Use this option if you trust the source and understand the potential risks involved. Microsoft Visual Studio does not restrict any project functionality and will not prompt you again for this project.
        Ask me for every project in this solution
        OK / Cancel

    When I click the More Details button I get this:

        Microsoft Visual Studio
        An item referring to the file was found in the project file "C:\Users\mgriffiths\Documents\Visual Studio 2008\ProjectATemp\WindowsApplication4\WindowsApplication4\WindowsApplication4.vbproj". Since this file is located within a system directory, root directory, or network share, it could be harmful to write to this file.
        OK

    Read the article

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data, mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested.***) The data in these tables are geospatial shapes, such as zipcode boundaries. Now, the first 70K-odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Now, is it possible to split the table data into two files, so all rows where DataType != 2 go into File_A and DataType = 2 goes into File_B? This way, when I back up the DB, I can skip adding File_B, so my download is waaaaay smaller. Is this possible? I'm guessing you might be thinking: why not keep them as TWO extra tables? Mainly because in the code the data is conceptually the same; it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one. ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.
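
    A hedged sketch of one way to do the split with table partitioning (the table and column names below are hypothetical; note that partitioning is an Enterprise edition feature in SQL Server 2008 R2, so check your edition first, and that skipping File_B at backup time then becomes a filegroup/partial backup exercise):

        -- Two filegroups, File_A and File_B, are assumed to exist already
        -- via ALTER DATABASE ... ADD FILEGROUP / ADD FILE.
        CREATE PARTITION FUNCTION pfDataType (int)
            AS RANGE LEFT FOR VALUES (1);   -- DataType <= 1 left, 2+ right

        CREATE PARTITION SCHEME psDataType
            AS PARTITION pfDataType TO (File_A, File_B);

        -- Rebuilding the clustered index on the scheme moves the rows:
        CREATE CLUSTERED INDEX cix_Shapes
            ON dbo.Shapes (DataType, ShapeId)
            ON psDataType (DataType);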

    Read the article

  • How to close and open access to SQL Server 2008 in Windows application?

    - by hgulyan
    Hi, I have an MS Access 97 application (but the question is general) working directly with SQL Server 2008 (without an application server or anything). The number of users can be up to 1,000. Windows Authentication is used. The question is: how do I handle modes, so that some users will be allowed to work in read-only mode, and some users won't have access to the DB for some time? My candidate solutions: 1) Use a table with a mode id for every group of users that must work the same way; on Form Load, the application will query that table for the mode id. 2) Use triggers on the tables that must behave according to that mode; the trigger will query the mode value and reject the operation if access is closed or it's in read-only mode. I know these are not the best solutions; that's why I'm asking for your advice. There's one more point: if the mode is changed to "access-is-closed" for a group of users, that group must not be able to query the DB starting from that moment. With the first solution I wrote, it won't work, because a user can be in the application at that moment and no Form Load event will fire. How can I do this? Is there any optimal solution? Thank you. Any help would be appreciated.
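
    For the last requirement, a hedged server-side sketch (my assumption of one approach, not from the original question; the role name is hypothetical): map each group of users to a database role and change the role's permissions when the mode changes, so even sessions already inside the application lose access on their next statement:

        -- Read-only mode for a group:
        DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO GroupA_Role;

        -- "Access closed" mode for the same group:
        DENY SELECT ON SCHEMA::dbo TO GroupA_Role;

        -- Reopen access:
        REVOKE SELECT ON SCHEMA::dbo FROM GroupA_Role;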

    Read the article

  • Is there a free tool which can help visualize the logic of a stored procedure in SQL Server 2008 R2?

    - by Hamish Grubijan
    I would like to be able to plot a call graph of a stored procedure. I am not interested in every detail, and I am not concerned with dynamic SQL (although it would be cool to detect it and maybe skip it or mark it as such). I would like the tool to generate a call tree for me, given the server name, DB name, and stored proc name, which includes:
    - the parent stored procedure;
    - every other stored procedure that is being called as a child of the caller;
    - every table that is being modified (updated or deleted from), as a child of the stored proc which does it.
    Hopefully it is clear what I am after; if not, please do ask. If there is not a tool that can do this, then I would like to try to write one myself. Python 2.6 is my language of choice, and I would like to use standard libraries as much as possible. Any suggestions? EDIT (for the purposes of the bounty): Warning: SQL syntax is COMPLEX. I need something that can parse all kinds of SQL 2008, even if it looks stupid. No corner cases barred :)
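
    A hedged starting point (my suggestion, not from the question): SQL Server 2008 already tracks static dependencies in its catalog, so a tool could seed its tree from there and fall back to parsing only for dynamic SQL. The procedure name below is hypothetical:

        -- Direct callees / referenced tables of one procedure:
        SELECT d.referenced_schema_name,
               d.referenced_entity_name,
               o.type_desc        -- e.g. SQL_STORED_PROCEDURE or USER_TABLE
        FROM sys.sql_expression_dependencies AS d
        LEFT JOIN sys.objects AS o ON o.object_id = d.referenced_id
        WHERE d.referencing_id = OBJECT_ID(N'dbo.MyParentProc');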

    Read the article

  • Fun with upgrading and BCP

    - by DavidWimbush
    I just had trouble using BCP out via xp_cmdshell. Probably serves me right, but that's a different issue. I got a strange error message, 'Unable to resolve column level collations', which turned out to be a bit misleading. I wasted some time comparing the collations of the server, the database, and all the columns in the query. I got so desperate that I even read the Books Online article. Still no joy, but then I tried the interweb. It turns out that calling bcp without qualifying it with a path causes Windows to search the folders listed in the Path environment variable - in that order - and execute the first version of BCP it can find. But when you do an in-place version upgrade, the new paths are added to the end of the Path variable, so you don't get the latest version of BCP by default. To check which version you're getting, execute bcp -v at the command line. The version number will correspond to SQL Server version numbering (e.g. 10.50.n = 2008 R2). To examine and/or edit the Path variable, right-click on My Computer, select Properties, go to the Advanced tab and click the Environment Variables button. If you change the variable you'll have to restart the SQL Server service before it takes effect.
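
    A hedged illustration of the fix from inside SQL Server (the install path shown is the default Binn location for a 2008/2008 R2 instance; adjust for your layout):

        -- Fully qualify bcp so the Path search order can't pick an old version:
        EXEC master..xp_cmdshell
            N'"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\bcp.exe" -v';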

    Read the article

  • Installing Team Foundation Server

    - by vzczc
    What are the best practices in setting up a new instance of TFS 2008 Workgroup edition? Specifically, the constraints are as follows: it must install on an existing Windows Server 2008 64-bit machine, and the TFS application layer is 32-bit only. Should I install SQL Server 2008, SharePoint, and the app layer in a virtual instance of Windows Server 2008 or 2003 (I am already running Hyper-V), or split the layers, with the database on the host OS and the app layer in a virtual machine? Edit: Apparently, splitting the layers is not recommended.

    Read the article

  • Minimum Requirements to Deploy SQL Server Integration Services 2008

    - by Tim
    Hi, I would like to run SSIS 2008 packages on a server that does not have SQL Server 2008 installed on it. I have a simple package to test the concept, but it fails to execute. The return code is 9020, which I have not seen listed as a return code elsewhere. I have copied the following files to the test server that does not have SQL Server 2008 installed on it:

        SelfContainedSample.dtsConfig
        Package.dtsx
        DTExec.exe

    I am attempting to run the package using a batch file. The lines in the batch file that run the package are:

        "%dtexecloc%\dtexec.exe" /FILE "%packagefolder%\Package.dtsx" /CONFIGFILE "%configfolder%\SelfContainedSample.dtsConfig" /CHECKPOINTING OFF /REPORTING E %logfile%
        set rc=%errorlevel%

    I am wondering if there are other requirements that need to be satisfied to run an SSIS 2008 package on a server that does not have SQL Server 2008 on it? The .NET runtime? The SSIS 2008 runtime? Please share your advice if you have a solution or have met this issue before. Thanks, Tim

    Read the article

  • Repeat Customers Each Year (Retention)

    - by spazzie
    I've been working on this and I don't think I'm doing it right. |D Our database doesn't keep track of how many customers we retain, so we looked for an alternate method. It's outlined in this article. It suggests you fill in a table like this (the columns run out to 2012 in the real headers; I'm just saving space):

        Year   Number of Customers   Retained in 2009   % Retained in 2009   Retained in 2010   % Retained in 2010   ...
        2008
        2009
        2010
        2011
        2012
        Total

    It tells you to find the total number of customers you had in your starting year. To do this, I used this query, since our starting year is 2008:

        select YEAR(OrderDate) as 'Year', COUNT(distinct(billemail)) as Customers
        from dbo.tblOrder
        where OrderDate >= '2008-01-01' and OrderDate <= '2008-12-31'
        group by YEAR(OrderDate)

    At the moment we just differentiate our customers by email address. Then you have to search for the same customers who purchased again in later years (ours are 2009, 10, 11, and 12). I came up with this; it should find people who purchased in both 2008 and 2009:

        SELECT YEAR(OrderDate) as 'Year', COUNT(distinct(billemail)) as Customers
        FROM dbo.tblOrder o with (nolock)
        WHERE o.BillEmail IN (SELECT DISTINCT o1.BillEmail
                              FROM dbo.tblOrder o1 with (nolock)
                              WHERE o1.OrderDate BETWEEN '2008-1-1' AND '2009-1-1')
          AND o.BillEmail IN (SELECT DISTINCT o2.BillEmail
                              FROM dbo.tblOrder o2 with (nolock)
                              WHERE o2.OrderDate BETWEEN '2009-1-1' AND '2010-1-1')
          --AND o.OrderDate BETWEEN '2008-1-1' AND '2013-1-1'
          AND o.BillEmail NOT LIKE '%@halloweencostumes.com'
          AND o.BillEmail NOT LIKE ''
        GROUP BY YEAR(OrderDate)

    So I'm just finding the customers who purchased in both those years. Then I'm doing an independent query to find those who purchased in 2008 and 2010, then 08 and 11, and then 08 and 12; each is the same query with the second date range changed. This one finds 2008 and 2010 purchasers:

        SELECT YEAR(OrderDate) as 'Year', COUNT(distinct(billemail)) as Customers
        FROM dbo.tblOrder o with (nolock)
        WHERE o.BillEmail IN (SELECT DISTINCT o1.BillEmail
                              FROM dbo.tblOrder o1 with (nolock)
                              WHERE o1.OrderDate BETWEEN '2008-1-1' AND '2009-1-1')
          AND o.BillEmail IN (SELECT DISTINCT o2.BillEmail
                              FROM dbo.tblOrder o2 with (nolock)
                              WHERE o2.OrderDate BETWEEN '2010-1-1' AND '2011-1-1')
          --AND o.OrderDate BETWEEN '2008-1-1' AND '2013-1-1'
          AND o.BillEmail NOT LIKE '%@halloweencostumes.com'
          AND o.BillEmail NOT LIKE ''
        GROUP BY YEAR(OrderDate)

    So you see, I have a different query for each year comparison; they're all unrelated. In the end I'm just finding people who bought in 2008 and 2009, and then a potentially different group that bought in 2008 and 2010, and so on. For this to be accurate, do I have to use the same grouping of 2008 buyers each time? So they bought in 2009 and 2010 and 2011 and 2012? This is where I'm worried and not sure how to proceed or even find such data. Any advice would be appreciated! Thanks!
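
    A hedged sketch of one way to pin the 2008 cohort once and count it in every later year with a single pass (my own suggestion, not from the article; it assumes BillEmail identifies a customer, as above):

        WITH cohort AS (
            SELECT DISTINCT BillEmail
            FROM dbo.tblOrder
            WHERE OrderDate >= '2008-01-01' AND OrderDate < '2009-01-01'
              AND BillEmail NOT LIKE '%@halloweencostumes.com'
              AND BillEmail <> ''
        )
        SELECT YEAR(o.OrderDate)          AS RetainYear,
               COUNT(DISTINCT o.BillEmail) AS Retained2008Customers
        FROM dbo.tblOrder AS o
        JOIN cohort AS c ON c.BillEmail = o.BillEmail
        WHERE o.OrderDate >= '2009-01-01'
        GROUP BY YEAR(o.OrderDate)
        ORDER BY RetainYear;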

    Read the article

  • Auto-blocking attacking IP addresses

    - by dong
    This is to share my PowerShell code online. I originally asked this question on the MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77 In short, this tries to find attacking IP addresses and then add them to a Firewall block rule. So I suppose:

    1. You are running a Windows Server 2008 facing the Internet.
    2. You need to have some ports open for service, e.g. TCP 21 for FTP, TCP 3389 for Remote Desktop. You can see in my code I'm only dealing with these two, since that's what I opened. You can add further port numbers if you like, but the way to process them might differ from these two.
    3. I strongly suggest you use STRONG passwords and follow all security best practices; this ps1 code is NOT for adding security to your server, but to reduce the nuisance from brute force attacks and make the sysadmin's life easier: i.e. your FTP log won't hold megabytes of nonsense, and your Windows system log will not roll over so fast that it can only tell you what happened last month.
    4. You are comfortable with setting up Windows Firewall rules. In my code, my rule has the name "MY BLACKLIST"; you need to set up a similar one, and set it to BLOCK everything.
    5. My rule is dangerous because it carries the risk of blocking me out as well. I do have a backup plan, i.e. the DELL DRAC5, so that if that happens, I can still remote-console to my server and reset the firewall.
    6. By no means is the code perfect: the coding style, the use of PowerShell skills, the hard-coded parts, all can be improved. It's just that it's good enough for me already. It has been running on my server for more than 7 MONTHS.
    7. The current code still has a problem I didn't solve yet; more on this point after the code. :)

        #Dong Xie, March 2012
        #my simple code to monitor attack and deal with it
        #Windows Server 2008 Logon Type
        #8: NetworkCleartext, i.e. FTP
        #10: RemoteInteractive, i.e. RDP

        $tick = 0;
        "Start to run at: " + (get-date);

        $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";
        $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";

        while($True) {
          $blacklist = @();

          "Running... (tick:" + $tick + ")"; $tick+=1;

          #Port 3389
          $a = @()
          netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `
            $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }
          if ($a.count -gt 0) {
            $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `
              $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique
            foreach ($ip in $a) { if ($ips -contains $ip) {
              if (-not ($blacklist -contains $ip)) {
                $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;
                "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;
                if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}
              }
              }
            }
          }

          #FTP
          $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.
          #Get-EventLog has built-in switch for EventID, Message, Time, etc. but using any of these it will be VERY slow.
          $count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `
              $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;
          if ($count -gt 50) #threshold
          {
             $ips = @();
             $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `
               | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
               | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
               | where {$_.Count -ge 10} | select -ExpandProperty Name;
             $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `
               | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
               | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
               | where {$_.Count -ge 10} | select -ExpandProperty Name;
             $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;

             foreach ($ip in $ips) {
               if (-not ($blacklist -contains $ip)) {
                "Found attacking IP on FTP: " + $ip;
                $blacklist = $blacklist + $ip;
               }
             }
          }

          #Firewall change
          <#
          $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255","");
          #inside $current there is no \r or \n need remove.
          foreach ($ip in $blacklist) {
            if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {
              "Adding this IP into firewall blocklist: " + $ip;
              $c= 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current;
              Invoke-Expression $c;
            }
          }
          #>

          foreach ($ip in $blacklist) {
            $fw=New-object -comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx
            $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?
            if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))
              {"Adding this IP into firewall blocklist: " + $ip;
                 $myrule.RemoteAddresses+=(","+$ip);
              }
          }

          Wait-Event -Timeout 30 #pause 30 secs

        } # end of top while loop.

    Further points:

    1. I suppose the server is listening on port 3389 on server IPs 192.168.100.101 and 192.168.100.102; you need to replace those with your real IPs.
    2. I suppose you Remote Desktop to this server from a workstation with IP 10.0.0.1. Please replace that as well.
    3. The threshold for a 3389 attack is 20; you don't want to block yourself just because you typed your password wrong 3 times. You can change this threshold by your own reasoning.
    4. The FTP check only looks at the log for attacks in the last 5 minutes; you can change that as well.
    5. I suppose the server is serving FTP on both IP addresses and their LOG paths are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly.
    6. The FTP checking code only asks for the last 200 lines of log, and the threshold is 10; change as you wish.
    7. The code runs in a loop; you can set the loop interval on the last line.
    To run this code, copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window, and type powershell.exe -file your_powershell_file_name.ps1. It will start running; you can Ctrl-C to break it. (The original post includes screenshots of the script running and of the moment it detects an attack and adds the firewall rule.)

    Regarding the design of the code:

    1. There are many ways you can detect an attack, but adding an IP to a block rule is no small thing; you need to think hard before doing it. Reasons include: you don't want to block yourself, and you don't want to block your customers/users, i.e. the good guys.
    2. Thus, for each service/port, I double-check. For 3389, the IP first has to show up in netstat.exe, then in the Event log; for FTP, first the Event log, then the FTP log files.
    3. In three places I need to make sure I'm not adding myself to the block rule: -ne with the single IP, and -like with the subnet.

    Now the final bit:

    1. The code will stop working after a while (depending on how heavily you are attacked it could be weeks, months, or days). It will throw a red error message in CMD. Don't panic: it does no harm, but it also no longer blocks new attacks. THE REASON (not confirmed with MS people): the COM object used to manage the firewall will, I think, only accept a list of IP addresses up to a length of around 32KB; once it reaches the limit, you get the error message.
    2. This is in fact my second solution using the COM object. The first solution is still in the comment block for your reference; it uses netsh, and it fails because, being run from CMD, you can only pass it a list of IPs up to 8KB.
    3. I haven't worked out the workaround yet. Some ideas: wrap that RemoteAddresses setting line with error checking, and once it reaches the limit, use the newly detected IPs as the list rather than appending to it. This basically resets your block rule to ground zero and loses the previous bad IPs. That does less harm than it sounds: if, after a certain period has passed, any of these bad IPs still hasn't repented and continues to attack you, it only gets 30 seconds, or 20 guesses at your password, before you block it again. And there is the benefit that a bad IP may return to good hands, and you are not blocking a potential customer, or your CEO's home PC, just because once upon a time it was a zombie. Thus the ZEN of blocking: never block any IP for too long.
    4. But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask them how to set an arbitrary length of IP addresses in a rule (at least from my experience on the forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware). Or, from a programming perspective, you can create a new rule once the old one is full, so you'll have MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, ... etc. Once in a while you can compile them together and start a business selling your blacklist on the market!

    Enjoy the code! p.s. (PowerShell is REALLY REALLY GREAT!)

    Read the article

  • QNTC and Windows Server 2008 R2

    - by Ben
    I am having a really hard time getting an iSeries (AS/400) machine talking to my new Windows Server 2008 R2 box using the QNTC file system on the iSeries. I had similar problems getting it to initially talk to a Windows Server 2003 machine, but enabling the local Guest account on the 2003 box solved that one. No such luck with the new 2008 box. When I do a WRKLNK /QNTC/SVR01 on the iSeries (which should show share listings, and does on any 2003 boxes) all I get is (Cannot find object to match specified name.). I know the iSeries likes the same username and password on the remote server, but unfortunately for us this is not the case. Anyhow, it does currently work with different username/password combinations on a 2003 box. To try and get the wretched things talking, I have made the 2008 server pretty open but the iSeries will not see shares on it. I have enabled the local Guest account, turned Windows firewall off, set the share permissions so Everyone has full control but to no avail. I read something on the internet about the iSeries only being able to handle NTLM authentication (and I understand by default that Server 2008 R2 only uses NTLMv2 and has NTLM disabled), so I made a special group policy for the server and tweaked all Group Policy settings under Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options but the iSeries STILL won't see it. We have a team of programmers who do all the system administration of the iSeries, but they are stumped for ideas on their side, and I'm stumped for ideas on my side. This is driving me crazy now, and if anybody has managed to get an iSeries to talk to Windows Server 2008 R2 using QNTC I would be very appreciative of any suggestions, be it on the Windows side, iSeries settings or even IBM PTF's that might patch anything. The iSeries is running V5R4 and I have *SECOFR privileges on it, if it helps. One final (most important!) note - The programmers think it's my system being tricky, and I think it's theirs - please prove me right :)

    Read the article
