Search Results

Search found 14911 results on 597 pages for '2008 r2'.

Page 10/597 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • SQL Server 2008: If Multiple Values Set In Other Multiple Values Set

    - by AJH
    In SQL, is there any way to accomplish something like this? This is based on a report built in SQL Server Report Builder, where the user can specify multiple text values as a single report parameter. The query for the report grabs all of the values the user selected and stores them in a single variable. I need a way for the query to return only records that have associations to EVERY value the user specified.

        -- Assume there's a table of Elements with thousands of entries.
        -- Now we declare a list of properties for those Elements to be associated with.
        create table #masterTable
        (
            ElementId int,
            Text varchar(10)
        )
        insert into #masterTable (ElementId, Text) values (1, 'Red');
        insert into #masterTable (ElementId, Text) values (1, 'Coarse');
        insert into #masterTable (ElementId, Text) values (1, 'Dense');
        insert into #masterTable (ElementId, Text) values (2, 'Red');
        insert into #masterTable (ElementId, Text) values (2, 'Smooth');
        insert into #masterTable (ElementId, Text) values (2, 'Hollow');
        -- Element 1 is Red, Coarse, and Dense. Element 2 is Red, Smooth, and Hollow.
        -- The real table is actually much, much larger than this; this is just an example.

        -- This is me trying to replicate how SQL Server Report Builder treats
        -- report parameters in its queries. The user selects one, some, all,
        -- or no properties from a list. The written query treats the user's
        -- selections as a single variable called @Properties.

        -- Example scenario 1: User only wants to see Elements that are BOTH Red and Dense.
        select e.*
        from Elements e
        where (@Properties) --ideally a set containing only Red and Dense
            in (select Text from #masterTable where ElementId = e.Id) --ideally a set containing only Red, Coarse, and Dense
        -- Both Red and Dense are within Element 1's properties (Red, Coarse, Dense),
        -- so Element 1 gets returned, but not Element 2.

        -- Example scenario 2: User only wants to see Elements that are BOTH Red and Hollow.
        select e.*
        from Elements e
        where (@Properties) --ideally a set containing only Red and Hollow
            in (select Text from #masterTable where ElementId = e.Id)
        -- Both Red and Hollow are within Element 2's properties (Red, Smooth, Hollow),
        -- so Element 2 gets returned, but not Element 1.

        -- Example scenario 3: User only picked the Red option.
        select e.*
        from Elements e
        where (@Properties) --ideally a set containing only Red
            in (select Text from #masterTable where ElementId = e.Id)
        -- Red is within both Element 1 and Element 2's properties,
        -- so both Element 1 and Element 2 get returned.

    The above syntax doesn't actually work because SQL doesn't allow multiple values on the left side of the "in" comparison. The error returned is: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression." Am I even on the right track here? Sorry if the example looks long-winded or confusing.
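
    A common workaround for this kind of "must match ALL selected values" requirement is relational division: restrict the property table to the selected values, group per element, and require the number of matches to equal the number of values selected. A minimal sketch, assuming @Properties expands to a value list the way SSRS multi-value parameters do, and a hypothetical @PropertyCount parameter fed from the parameter's Count property (e.g. =Parameters!Properties.Count):

        -- Hypothetical sketch: return only Elements associated with EVERY selected property.
        -- In Report Builder, "in (@Properties)" expands to "in ('Red', 'Dense')" etc.
        select e.*
        from Elements e
        where (select count(distinct m.Text)
               from #masterTable m
               where m.ElementId = e.Id
                 and m.Text in (@Properties)) = @PropertyCount;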

    Read the article

  • Assigning Static Public IP Address to Windows Server 2008

    - by Neeti
    Please help a newbie. I am new to Windows Server. I have an IBM server and I have installed Windows Server 2008 R2 on it. My ISP has provided me with a static IP address. How can I assign that to my server? I have a web application hosted on the server which I need to access from the outside world using an internet browser. How can this be achieved? Please let me know if there are any tutorials or step-by-step guides for achieving what I am trying to do.
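
    Beyond the GUI route (Control Panel > Network and Sharing Center > adapter properties > Internet Protocol Version 4), the address can also be set from an elevated command prompt with netsh. A sketch only; the adapter name and all addresses below are placeholders for the values your ISP gave you:

        rem Assign a static IPv4 address, subnet mask, and default gateway (example values).
        netsh interface ipv4 set address name="Local Area Connection" static 203.0.113.10 255.255.255.0 203.0.113.1

        rem Point the adapter at a DNS server (placeholder address).
        netsh interface ipv4 set dnsservers name="Local Area Connection" static 203.0.113.53 primary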

    Read the article

  • New T-SQL Functionality in SQL Server 2008

    - by ejohnson2010
    In my most recent posts I have looked at a few of the new features offered in T-SQL in SQL Server 2008. In this post, I want to take a closer look at some of the smaller additions, but additions that are likely to pack a big punch in terms of efficiency. First let’s talk a little about compound operators. This is a concept that has been around in programming languages for a long time, but has just now found its way into T-SQL. For example, the += operator will add the values to the current variable...(read more)
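
    To make that concrete, a quick sketch of the compound operators together with variable initialization at declaration (also new in SQL Server 2008); the variable name is arbitrary:

        DECLARE @total int = 10;    -- declare-and-initialize, new in SQL Server 2008

        SET @total += 5;            -- shorthand for SET @total = @total + 5  (now 15)
        SET @total -= 3;            -- 12
        SET @total *= 2;            -- 24

        SELECT @total AS result;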

    Read the article

  • How to enable ping in windows firewall in windows server 2008 r2

    - by ybbest
    If you are unable to ping your Windows Server 2008 R2 machine, or if you have a “one way ping problem”, you need to check whether ping is enabled in your Windows Firewall. To enable it, do the following: 1. Go to Control Panel >> Windows Firewall >> Advanced settings. 2. Go to Inbound Rules and enable File and Printer Sharing (Echo Request – ICMPv4-In). After you have done this, your computer will become pingable.
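
    If you prefer the command line (or are on Server Core), the same inbound rule can be enabled with netsh from an elevated prompt. A sketch; the rule name below is the English-language one referenced above, so adjust it on localized installations:

        netsh advfirewall firewall set rule name="File and Printer Sharing (Echo Request - ICMPv4-In)" new enable=yes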

    Read the article

  • SQL 2008 R2 DEV install failed

    - by obulay
    I am trying to install SQL 2008 R2 Developer on Windows 2008 STD. It previously had SQL 2008 Enterprise installed, which I uninstalled first. I think the uninstall still leaves some cruft in the registry and in Program Files. Installation fails at the last step of the process, a couple of minutes in. I can't insert a picture for you because serverfault does not allow me to do so, but it basically says: Error message: "Installation failed". I re-installed SQL 2008 Enterprise on this box, and we are not going to go to R2 on older servers that previously had SQL 2005 or SQL 2008 on them. Looking at the Windows log: Activation context generation failed for "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\SQLServer2008R2\x64\Microsoft.SqlServer.Configuration.SqlServer_ConfigExtension.dll". Dependent Assembly Microsoft.VC80.CRT,processorArchitecture="amd64",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="8.0.50727.4027" could not be found. Please use sxstrace.exe for detailed diagnosis.
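
    The logged error is a side-by-side (WinSxS) failure: Microsoft.VC80.CRT is the Visual C++ 2005 runtime, which the setup extension DLL depends on, so installing the Visual C++ 2005 SP1 redistributable before re-running setup may resolve it. As the message suggests, sxstrace can confirm exactly which assembly lookup fails; a sketch of the usual two-step capture from an elevated prompt (log paths are placeholders):

        rem Start tracing, reproduce the failing setup step, then stop the trace.
        sxstrace trace -logfile:C:\temp\sxstrace.etl

        rem Convert the binary trace into a readable report.
        sxstrace parse -logfile:C:\temp\sxstrace.etl -outfile:C:\temp\sxstrace.txt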

    Read the article

  • Windows 2008 R2 DS in 2003 domain?

    - by 3molo
    Hi, we have a master domain controller running Windows 2003, and now I'd like to set up a new domain controller in a branch office. I really only have access to licenses for Windows 2008 R2 (through licensing.microsoft.com), so the question is whether a newly installed Windows 2008 R2 Standard server can become a domain controller in the existing (2003) domain. First I tried adprep /forestprep on the newly installed 2008 box, but it complained about not being a domain controller. I then tried dcpromo, but it too complained. According to MS documentation, it seems I have to run 'adprep /forestprep' on the master domain controller, and adprep is located on the 2008 installation DVD. Am I on the right track? Is the correct way to mount the 2008 installation DVD on the existing 2003 master controller and run 'adprep /forestprep' there? Will I be able to run dcpromo on the 2008 box once that has been completed? Thanks,
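
    That is the documented sequence: run adprep from the 2008 R2 media on the existing 2003 domain controller (forestprep must run on the schema master), then promote the new server. A sketch, assuming the 2008 R2 DVD is mounted as D:; on a 32-bit 2003 DC use adprep32.exe from the same folder:

        rem On the 2003 schema master, from the 2008 R2 installation media:
        D:\support\adprep\adprep.exe /forestprep

        rem Then prepare each domain that will host a 2008 R2 domain controller:
        D:\support\adprep\adprep.exe /domainprep /gpprep

        rem Finally, run on the new 2008 R2 server:
        dcpromo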

    Read the article

  • I have a server running Windows 2008 R2 Core and it needs to host either SVN or GIT

    - by Jason Adams
    The server allocated as the source repository for our cross-platform projects (both Mac & PC) is running Win2008R2 Core. We're really happy with its stability and we aren't interested in moving over to non-Core. We need to get either SVN or GIT installed on the aforementioned box in the fewest possible steps. We know the advantages/disadvantages of both systems. That being said, we don't care which one we use; we're just looking for the path of least resistance to setting up a repository on a machine running R2 Core.
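
    For what it's worth, Subversion's svnserve runs as a native Windows service, which keeps the footprint small on Core. A sketch, assuming a command-line Subversion build unpacked to C:\svn and repositories under C:\Repositories (both paths are placeholders):

        rem Create a repository.
        C:\svn\bin\svnadmin create C:\Repositories\project

        rem Register svnserve as an auto-starting service (sc requires the space after each "=").
        sc create svnserve binPath= "\"C:\svn\bin\svnserve.exe\" --service -r C:\Repositories" displayname= "Subversion Server" depend= Tcpip start= auto

        net start svnserve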

    Read the article

  • Upgrade to 2008 R2

    - by DavidWimbush
    I don't like it, Carruthers. It's just too quiet. Well, I've done the pre-production server, the main live server and the Reporting/BI server with remarkably little trouble. Pre-production and live were rebuilds. I failed live over to our log shipping standby for the duration, which has a gotcha I blogged about before. When I failed back to the primary live server again, it was very quick to bring the databases online. I understand the databases don't actually get upgraded until you recover them, but there was no noticeable delay. It's gone from 2005 Workgroup - limited to 4GB of memory - to 2008 R2 Standard, so it can now use nearly all of the 30GB in the server. It's so much faster. The reporting/BI server I upgraded in situ. This took a while but, again, went smoothly. Just watch out, because the master database was left at compatibility level 90. Also, the upgrade decided to use the reporting service's credentials for database access when running reports. It didn't preserve the existing credentials and I had to go into the Reporting Configuration Manager to put them back in. Make sure you know what credentials your server is using before you upgrade. All things considered, a fairly painless experience. Now I just have to upgrade and reset our log shipping standby server again!
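
    The compatibility-level gotcha is easy to check for after an upgrade. A quick sketch (100 is the 2008/R2 level; the database name is a placeholder):

        -- List every database's compatibility level after the upgrade.
        SELECT name, compatibility_level
        FROM sys.databases;

        -- Raise any stragglers once you've tested for breaking changes.
        ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 100;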

    Read the article

  • Overwhelmed by complex C#/ASP.NET project in Visual Studio 2008

    - by Darren Cook
    I have been hired as a junior programmer to work on projects that extend existing functionality in a very large, complex solution. The code base consists of C#, ASP.NET, jQuery, javascript, html and xml. I have some knowledge of all these in addition to fair knowledge of object-oriented programming and its fundamental concepts of inheritance, abstraction, polymorphism and encapsulation. I can follow code up through its base classes, interfaces, abstract classes and understand a large part of the code that I read while doing this. However, this solution is so humongous and so many things get tied together whenever I navigate through the code that I feel absolutely overwhelmed. I often find myself unable to fully follow everything that is going on with objects being serialized, large amounts of C# and javascript operating on the same pages and methods being called from template files that consist mainly of markup. I love learning about code, but trying to deal with this really stresses me out. Additionally, I do know that a significant amount of unit testing has been done but I know nothing about unit testing or how to utilize it. Any advice anyone could offer me regarding dealing with a large code base while using Visual Studio 2008 would be greatly appreciated. Are there tools that I can use to help get a handle on what is going on? Perhaps there are things even in Visual Studio that I am not aware of. How can I follow the code to low level functionality in order to get a better grasp of what is going on at a high level?

    Read the article

  • Tip#102: Did you know… How to specify tag specific formatting

    - by The Official Microsoft IIS Site
    Let’s see this with an example. I have the following HTML code on my page. Now, if I format the document by selecting Edit –> Format Document (or Ctrl K, Ctrl D), the document becomes [screenshot]. I want the content inside td to remain on the same line after formatting the document. The following steps show how you can specify tag-specific formatting for the Visual Studio editor: right-click in the editor in an aspx file and select Formatting and Validation... (or alternatively you can go from the Menu...(read more)

    Read the article

  • MS12-070 : Security Updates for all supported versions of SQL Server

    - by AaronBertrand
    This week there was a security release for all supported versions of SQL Server. Each version has 32-bit and 64-bit patches, and each version has GDR (General Distribution Release) and QFE (Quick-Fix Engineering) patches. GDR should be applied if you are at the base (RTM or SP) build for your version, while QFE should be applied if you have installed any cumulative updates after the RTM or SP build. (More details here.) SQL Server 2005: RTM, SP1, SP2, SP3 - not supported; SP4 - GDR = 9.00.5069, ...(read more)
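
    To choose between the GDR and QFE branches you need to know your current build; a quick way to check it (these properties are available on all versions the bulletin covers):

        SELECT SERVERPROPERTY('ProductVersion') AS build,
               SERVERPROPERTY('ProductLevel')   AS service_pack_level,
               SERVERPROPERTY('Edition')        AS edition;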

    Read the article

  • An XEvent a Day (10 of 31) – Targets Week – etw_classic_sync_target

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week – pair_matching, looked at the pair_matching Target in Extended Events and how it could be used to find unmatched Events.  Today’s post will cover the etw_classic_sync_target Target, which can be used to track Events starting in SQL Server, out to the Windows Server OS Kernel, and then back to the Event completion in SQL Server. What is the etw_classic_sync_target Target? The etw_classic_sync_target Target is the target that hooks Extended Events in SQL Server...(read more)
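
    As a rough illustration of how this target is attached, a hedged sketch of a session that sends wait events out to ETW (the log file path is a placeholder, and the option name follows the SQL Server 2008 documentation):

        CREATE EVENT SESSION [etw_waits_demo] ON SERVER
        ADD EVENT sqlos.wait_info
        ADD TARGET package0.etw_classic_sync_target
            (SET default_etw_session_logfile_path = N'C:\ETW\sqletw.etl')
        WITH (MAX_MEMORY = 4096 KB);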

    Read the article

  • An XEvent a Day (11 of 31) – Targets Week – Using Multiple Targets to Debug Orphaned Transactions

    - by Jonathan Kehayias
    Yesterday’s blog post Targets Week – etw_classic_sync_target covered the ETW integration that is built into Extended Events and how the etw_classic_sync_target can be used in conjunction with other ETW traces to provide troubleshooting at a level previously not possible with SQL Server. In today’s post we’ll look at how to use multiple targets to simplify analysis of Event collection. Why Multiple Targets? You might ask why you would want to use multiple Targets in an Event Session with Extended...(read more)
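
    For reference, a hedged sketch of what a two-target session looks like in 2008 R2 syntax (file paths are placeholders; 2008 uses the asynchronous_file_target, which also requires a metadata file):

        CREATE EVENT SESSION [multi_target_demo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer,
        ADD TARGET package0.asynchronous_file_target
            (SET filename = N'C:\XE\demo.xel', metadatafile = N'C:\XE\demo.xem');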

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick-off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine.

    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

        max memory / # of buffers = buffer size

    If it was that simple I wouldn’t be writing this post.

    I’ll take “boundary” for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs.

    Note: This test was run on a 2 core machine using per_cpu partitioning which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)

        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015

    This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 – 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of none, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                  NULL                   NULL                 NULL
        192                    383                  3                      130867               392601
        384                    575                  3                      196403               589209
        576                    767                  3                      261939               785817
        768                    959                  3                      327475               982425
        960                    1151                 3                      393011               1179033
        1152                   1343                 3                      458547               1375641
        1344                   1535                 3                      524083               1572249
        1536                   1727                 3                      589619               1768857
        1728                   1919                 3                      655155               1965465
        1920                   2111                 3                      720691               2162073
        2112                   2303                 3                      786227               2358681
        2304                   2495                 3                      851763               2555289
        2496                   2687                 3                      917299               2751897
        2688                   2879                 3                      982835               2948505
        2880                   3071                 3                      1048371              3145113
        3072                   3263                 3                      1113907              3341721
        3264                   3455                 3                      1179443              3538329
        3456                   3647                 3                      1244979              3734937
        3648                   3839                 3                      1310515              3931545
        3840                   4031                 3                      1376051              4128153
        4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 “steps” within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the 576 row in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn’t likely to cause a problem for most machines.

    Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size, you’ll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you’re getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created.

    If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable to your specific machine and chosen partition mode.
        DECLARE @buf_size_output table
        (
            input_memory_kb bigint,
            total_regular_buffers bigint,
            regular_buffer_size bigint,
            total_buffer_size bigint
        )
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate

        WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER

                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session

                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session

                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                    FROM sys.dm_xe_sessions
                    WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH
            SET @buf_size = @buf_size + 1
        END

        DROP EVENT SESSION buffer_size_test ON SERVER

        SELECT MIN(input_memory_kb) start_memory_range_kb,
               MAX(input_memory_kb) end_memory_range_kb,
               total_regular_buffers, regular_buffer_size, total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike

    Read the article

  • An XEvent a Day (18 of 31) – A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 2)

    - by Jonathan Kehayias
    In yesterday’s blog post A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 1), we looked at what happens when we back up a database in SQL Server.  Today, we are going to use the information we captured to perform some analysis of the Backup information in an attempt to find ways to decrease the time it takes to back up a database.  When I began reviewing the data from the Backup in yesterday’s post, I realized that I had made a mistake in the process and left...(read more)
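
    For context, the capture approach this pair of posts builds on (as I understand it) pairs the backup-related informational trace flags with an Extended Events session on the trace_print event; a hedged sketch:

        -- Emit verbose backup/restore messages (trace flags 3004 and 3605)
        -- and capture them with the sqlserver.trace_print event.
        DBCC TRACEON (3004, 3605, -1);

        CREATE EVENT SESSION [backup_internals] ON SERVER
        ADD EVENT sqlserver.trace_print
        ADD TARGET package0.ring_buffer;

        ALTER EVENT SESSION [backup_internals] ON SERVER STATE = START;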

    Read the article

  • An XEvent a Day (3 of 31) – Managing Event Sessions

    - by Jonathan Kehayias
    Yesterday’s post, Querying the Extended Events Metadata, showed how to discover the objects available for use in Extended Events.  In today’s post, we’ll take a look at the DDL commands that are used to create and manage Event Sessions based on the objects available in the system.  Like other objects inside of SQL Server, there are three DDL commands that are used with Extended Events: CREATE EVENT SESSION, ALTER EVENT SESSION, and DROP EVENT SESSION.  The command names are self...(read more)
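
    To make the lifecycle concrete, a minimal sketch of the three commands (the session name and event choice are arbitrary):

        -- Define a session.
        CREATE EVENT SESSION [demo_session] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer;

        -- Start and stop it.
        ALTER EVENT SESSION [demo_session] ON SERVER STATE = START;
        ALTER EVENT SESSION [demo_session] ON SERVER STATE = STOP;

        -- Remove the definition.
        DROP EVENT SESSION [demo_session] ON SERVER;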

    Read the article

  • An XEvent a Day (13 of 31) – The system_health Session

    - by Jonathan Kehayias
    Today’s post was originally planned for this coming weekend, but it seems I’ve caught whatever bug my kids had over the weekend, so I am changing up today’s blog post with one that is easier to cover and shorter.  If you’ve been running some of the queries from the posts in this series, you have no doubt come across an Event Session running on your server with the name of system_health.  In today’s post I’ll go over this session and provide links to references related to it. When Extended Events...(read more)
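
    If you want to peek at what system_health has collected on your own server, its ring buffer can be read through the session DMVs; a quick sketch:

        -- Pull the system_health ring buffer contents as XML.
        SELECT CAST(t.target_data AS xml) AS session_data
        FROM sys.dm_xe_sessions s
        JOIN sys.dm_xe_session_targets t
            ON s.address = t.event_session_address
        WHERE s.name = 'system_health'
          AND t.target_name = 'ring_buffer';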

    Read the article

  • An XEvent a Day (14 of 31) – A Closer Look at Predicates

    - by Jonathan Kehayias
    When working with SQL Trace, one of my biggest frustrations has been the limitations that exist in filtering.  Using sp_trace_setfilter to establish the filter criteria is a non-trivial task, and it falls short of being able to deliver complex filtering that is sometimes needed to simplify analysis.  Filtering of trace data was performed globally and applied to the trace affecting all of the events being collected.  Extended Events introduces a much better system of filtering using...(read more)
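
    In contrast to SQL Trace's global filters, Extended Events predicates attach to individual events. A hedged sketch (the database_id and duration values are placeholders):

        CREATE EVENT SESSION [predicate_demo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        (
            WHERE (sqlserver.database_id = 5 AND duration > 1000)
        )
        ADD TARGET package0.ring_buffer;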

    Read the article

  • An XEvent a Day (15 of 31) – Tracking Ghost Cleanup

    - by Jonathan Kehayias
    If you don’t know anything about Ghost Cleanup, I recommend highly that you go read Paul Randal’s blog posts Inside the Storage Engine: Ghost cleanup in depth , Ghost cleanup redux , and Turning off the ghost cleanup task for a performance gain .  To my knowledge Paul’s posts are the only things that cover Ghost Cleanup at any level online. In this post we’ll look at how you can use Extended Events to track the activity of Ghost Cleanup inside of your SQL Server.  To do this, we’ll first...(read more)
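
    Event names can vary between builds, so before setting up a session like the one in this post it is worth confirming which ghost-cleanup-related events your instance exposes; a quick metadata query:

        -- Discover ghost-cleanup-related events available on this instance.
        SELECT p.name AS package_name, o.name AS event_name, o.description
        FROM sys.dm_xe_objects o
        JOIN sys.dm_xe_packages p ON o.package_guid = p.guid
        WHERE o.object_type = 'event'
          AND o.name LIKE '%ghost%';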

    Read the article

  • An XEvent a Day (5 of 31) - Targets Week – ring_buffer

    - by Jonathan Kehayias
    Yesterday’s post, Querying the Session Definition and Active Session DMV’s, showed how to find information about the Event Sessions that exist inside a SQL Server and how to find information about the Active Event Sessions that are running inside a SQL Server using the Session Definition and Active Session DMV’s.  With the background information now out of the way, and since this post falls on the start of a new week, I’ve decided to make this Targets Week, where each day we’ll look at a different...(read more)
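
    Since the ring_buffer is today's subject, a hedged sketch of attaching one with its memory option set (the value is a placeholder):

        CREATE EVENT SESSION [ring_buffer_demo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer
            (SET max_memory = 4096);   -- KB of memory the ring buffer may hold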

    Read the article

  • An XEvent a Day (17 of 31) – A Look at Backup Internals and How to Track Backup and Restore Throughput (Part 1)

    - by Jonathan Kehayias
    Today’s post is a continuation of yesterday’s post How Many Checkpoints are Issued During a Full Backup? and the investigation of Database Engine Internals with Extended Events.  In today’s post we’ll look at how Backups work inside of SQL Server and how to track the throughput of Backup and Restore operations.  This post is not going to cover Backups in SQL Server as a topic; if that is what you are looking for, see Paul Randal’s TechNet Article Understanding SQL Server Backups. Yesterday...(read more)

    Read the article

  • Trace Flag 610 – When should you use it?

    - by simonsabin
    Thanks to Marcel van der Holst for providing this great information on the use of Trace Flag 610. This trace flag can be used to get minimal logging into a B-Tree (i.e. a clustered table or an index on a heap) that already has data. It is a trace flag because in testing they found some scenarios where it didn’t perform as well. Marcel explains why below. “ TF610 can be used to get minimal logging in a non-empty B-Tree. The idea is that when you insert a large amount of data, you don't want to...(read more)
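
    As a rough illustration of where the flag sits in a load workflow (the table and column names are hypothetical, and, as the post cautions, test before enabling it in production):

        -- Enable minimal logging into non-empty B-Trees for this session.
        DBCC TRACEON (610);

        -- A bulk insert into a table that already has a clustered index and data.
        INSERT INTO dbo.FactSales (SaleDate, Amount)
        SELECT SaleDate, Amount
        FROM dbo.StagingSales;

        DBCC TRACEOFF (610);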

    Read the article
