Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • SQL Select Upcoming Birthdays

    - by Crob
    I'm trying to write a stored procedure to select employees who have upcoming birthdays. SELECT * FROM Employees WHERE Birthday > @Today AND Birthday < @Today + @NumDays This will not work because the birth year is part of Birthday, so if my birthday was '09-18-1983' it will not fall between '09-18-2008' and '09-25-2008'. Is there a way to ignore the year portion of date fields and just compare month/day? This will be run every Monday morning to alert managers of upcoming birthdays, so it may span New Year's. Here is the working solution that I ended up creating, thanks Kogus: SELECT * FROM Employees WHERE Cast(DATEDIFF(dd, birthdt, getDate()) / 365.25 as int) - Cast(DATEDIFF(dd, birthdt, futureDate) / 365.25 as int) <> 0
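
    An alternative sketch that compares each employee's next birthday anniversary to the window directly (assumes Birthday, @Today and @NumDays have no time-of-day component; Feb 29 birthdays may need special handling):

      SELECT *
      FROM Employees
      WHERE DATEADD(YEAR,
                    DATEDIFF(YEAR, Birthday, @Today)
                    + CASE WHEN DATEADD(YEAR, DATEDIFF(YEAR, Birthday, @Today), Birthday) < @Today
                           THEN 1 ELSE 0 END,
                    Birthday)                          -- next anniversary on or after @Today
            < DATEADD(DAY, @NumDays, @Today);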

    Read the article

  • What scenarios/settings will result in a query on SQL Server (2008) returning stale data

    - by s1mm0t
    Most applications rarely need to display 100% accurate data. For example, if this Stack Overflow question displays that there have been 0 views when there have really been 10, it doesn't really matter. This is one way that the (perceived) performance of applications can be improved: by caching results and therefore sometimes not showing 100% accurate results. There are some cases where the data does need to be 100% accurate, though. So if I run the query select * from Foo I want to be sure that the results are not stale. Now, depending on how my database is set up, other activity on the database, use of transactions and isolation levels, etc., this query may or may not be a true reflection of the world. What scenarios and settings can people think of that will result in this query returning stale results? And given that another connection is partway through a transaction that has updated this table, how can I guarantee that when the above query returns, the results will be accurate?
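
    A few of the usual suspects, sketched in T-SQL (the database name is illustrative; this is not an exhaustive list):

      -- Reads uncommitted ("dirty") data and can miss or double-count rows during concurrent changes:
      SELECT * FROM Foo WITH (NOLOCK);
      SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

      -- Under snapshot-based isolation, readers see the last committed version of each row,
      -- so rows being modified by an open transaction appear as they were before that transaction:
      ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;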

    Read the article

  • Basic SQL Query, I am a newbie

    - by user3530547
    I just started my database and query class on Monday. We met on Monday and just went over the syllabus, and on Wednesday the network at school was down so we couldn't even do the PowerPoint lecture. Right now I am working on my first homework assignment and I am almost finished, but I am having trouble with one question. Here it is... Write a SELECT statement that returns one column from the Customers table named FullName that joins the LastName and FirstName columns. Format the columns with the last name, a comma, a space, and the first name like this: Doe, John Sort the result set by last name in ascending sequence. Return only the contacts whose last name begins with letters from M to Z. Here is what I have so far... USE md0577283 SELECT FirstName,LastName FROM Customers ORDER BY LastName,FirstName My question is how do I format it as LastName, FirstName like the professor wants, and how do I only select names M-Z? If someone could point me in the right direction I would greatly appreciate it. Thank you. PS With all due respect, I didn't ask for the answer, I asked for a nudge in the right direction, so why the downvotes, guys?
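
    A nudge, sketched in T-SQL (check which string functions your course expects):

      SELECT LastName + ', ' + FirstName AS FullName
      FROM Customers
      WHERE LastName LIKE '[M-Z]%'        -- last names starting with M through Z
      ORDER BY LastName ASC;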

    Read the article

  • Creating a SQL lookup

    - by Scott
    I’m in the process of cleaning up a database table. Due to the way some of the data needed to be processed, I now need to go back and perform a “reverse lookup” on the data. For example, a field for one of the records is set to “car” and I need to set that record’s tranportmode field to “1” (for “car”). The lookup tables are already created. I just need to do the reverse lookup part. The cleansed tables will only have the numeric lookup value.
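
    A sketch of one way to do the reverse lookup as a set-based update (the table and column names here are assumptions based on the description, not from the original post):

      UPDATE d
      SET d.tranportmode = l.LookupId              -- e.g. 1 for 'car'
      FROM DataTable d
      INNER JOIN TransportModeLookup l
          ON l.Description = d.tranportmode_text;  -- the text value currently stored, e.g. 'car'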

    Read the article

  • SQL Server dilemma, performance

    - by Woland
    Hello, I am creating an app where users can save options. Which one is better? (1) Save the options into a varchar field in the user table, something like ('1,23,4354,34,3'); the query for this is select * from data where CHARINDEX ( 'L', Providers , 0 ) > 0 (2) Create another table where the user options are stored and just add rows; the query is select * from data where Providers in (select Providers from userdata where userid=100) Thanks for help
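
    A sketch of the normalized (second) approach, which generally indexes and joins better than parsing a comma-separated varchar (table and column names are assumptions):

      -- One row per provider/option a user has selected
      CREATE TABLE UserProvider (
          UserId     int NOT NULL,
          ProviderId int NOT NULL,
          PRIMARY KEY (UserId, ProviderId)
      );

      SELECT d.*
      FROM data d
      WHERE d.Providers IN (SELECT ProviderId FROM UserProvider WHERE UserId = 100);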

    Read the article

  • UPDATE statement wrapped in an IF EXISTS block

    - by formica
    I'm trying to write a DML script that updates a column, but I wanted to make sure the column existed first, so I wrapped it in an IF EXISTS block: IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='Client' AND COLUMN_NAME='IsClarityEnabled') BEGIN UPDATE Client SET IsClarityEnabled = 1 WHERE ClientID = 21 END The weirdness is that it tries to execute the update even if the condition fails: the column doesn't exist, yet the UPDATE statement runs and I get an error. Why? Even stranger is that this does work: IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='Client' AND COLUMN_NAME='IsClarityEnabled') BEGIN EXEC('UPDATE Client SET IsClarityEnabled = 1 WHERE ClientID = 21') END Is there something special about an UPDATE command that causes it to behave this way?
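
    A likely explanation (an editorial note, not from the original post): SQL Server compiles the whole batch before running it, and a reference to a missing column on an existing table fails at compile time, before IF EXISTS is ever evaluated; the EXEC variant works because the dynamic string is only compiled when that branch actually runs. A sketch of the same workaround using sp_executesql with a parameter:

      IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                 WHERE TABLE_NAME = 'Client' AND COLUMN_NAME = 'IsClarityEnabled')
      BEGIN
          EXEC sp_executesql
              N'UPDATE Client SET IsClarityEnabled = 1 WHERE ClientID = @id',
              N'@id int',
              @id = 21;
      END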

    Read the article

  • SQL Server Query with "ELSE:"

    - by Mike D
    I'm maintaining various VB6 projects in which some of the queries passed to the server contain "ELSE:", with the colon used in CASE statements. Can someone tell me what the **ll the colon is used for? It causes errors in SQL 2005 and later, but SQL 2000 works with no complaints. I'd like to just remove it from the code and re-compile, but I'm afraid it'll break 10 other things in the application. Thanks in advance...

    Read the article

  • Select a record with highest amount by joining two tables

    - by user2516394
    I have 2 tables, Sales and Purchase. The Sales table has fields SaleId, Rate, Quantity, Date, CompanyId, UserID; the Purchase table has fields PurchaseId, Rate, Quantity, Date, CompanyId, UserID. I want to select the record from either table that has the highest Rate*Quantity. SELECT SalesId Or PurchaseId FROM Sales,Purchase where Sales.UserId=Purchase.UserId and Sales.CompanyId=Purchase.CompanyId AND Sales.Date=Current date AND Purchase.Date=Current date AND Sales.UserId=1 AND Purchase.UserId=1 AND Sales.CompanyId=1 AND Purchase.CompanyId=1
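
    A sketch of one way to get the single highest-value row across both tables with a UNION (assumes the intent is "today's" rows ignoring time of day; the date cast needs SQL Server 2008 or later):

      SELECT TOP 1 Id, Source, Total
      FROM (
          SELECT SaleId AS Id, 'Sale' AS Source, Rate * Quantity AS Total
          FROM Sales
          WHERE UserId = 1 AND CompanyId = 1 AND CAST([Date] AS date) = CAST(GETDATE() AS date)
          UNION ALL
          SELECT PurchaseId, 'Purchase', Rate * Quantity
          FROM Purchase
          WHERE UserId = 1 AND CompanyId = 1 AND CAST([Date] AS date) = CAST(GETDATE() AS date)
      ) t
      ORDER BY Total DESC;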

    Read the article

  • MS SQL: How to get the newest date in a table with several equal keys

    - by Qohelet
    Unfortunately my knowledge of statements like "group by" and "having" is quite limited, so hopefully you can help me: I have a view -here's an excerpt- (if we have some Europeans here - it's v021 of Winline/Mesonic): ID | Artikelbezeichnung1 | Bez2 | mesoyear _____________________________________________________________________ 1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi | 1344 1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi | 1356 1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi | 1356 1401MA71 | Marga ,Saracena beige,1S,33,3/33,3 | Marazzi | 1344 1401MA71 | Marga ,Saracena beige,1S,33,3/33,3 | Marazzi | 1356 1401MA71 | Marga ,Saracena beige,1S,33,3/33,3 | Marazzi | 1356 2401CR13 | Crista,Mahon rojo,1S,33,3/33,3 | Cristacer | 1332 2401CR13 | Crista,Mahon rojo,1S,33,3/33,3 | Cristacer | 1344 So the ID is not unique and I just need the row with the highest value in "mesoyear". My first solution was: Select c015 as ID, c003 as Artikelbezeichnung1, c074 as Bez2, mesoyear from CWLDATEN_91.dbo.v021 group by c015 having mesoyear = max(mesoyear) But this doesn't work at all... Msg 8121, Level 16, State 1, Line 8 Column 'CWLDATEN_91.dbo.v021.mesoyear' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause. So I just removed the "having" clause and it went "better": Msg 8120, Level 16, State 1, Line 2 Column 'CWLDATEN_91.dbo.v021.c003' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. So I tried to remove the error just by adding things to the "group by". And it worked: Select c015 as ID, c003 as Artikelbezeichnung1, c074 as Bez2, max(mesoyear) from CWLDATEN_91.dbo.v021 group by c015,c003,c074 gives me exactly what I want. But the real Select contains about 24 columns and some calculations as well. Surely the problem can't be solved just by adding all the columns to the "group by"? Can someone please help me find a proper command? Thank you!
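
    A common pattern for "latest row per key" that avoids listing every column in GROUP BY is ROW_NUMBER (a sketch; extend the outer column list to the 24 columns actually needed):

      SELECT ID, Artikelbezeichnung1, Bez2, mesoyear
      FROM (
          SELECT c015 AS ID, c003 AS Artikelbezeichnung1, c074 AS Bez2, mesoyear,
                 ROW_NUMBER() OVER (PARTITION BY c015 ORDER BY mesoyear DESC) AS rn
          FROM CWLDATEN_91.dbo.v021
      ) x
      WHERE rn = 1;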

    Read the article

  • How do I insert into two tables at once in a stored procedure?

    - by user996502
    Doing a project for school, so any help would be great, thank you! I have two tables; how do I insert into both of them so that the rows stay linked? The first table, called Customer, has a primary key called CID that auto-increments: CREATE TABLE [dbo].[Customer]( [CID] [int] IDENTITY(1,1) NOT NULL, [LastName] [varchar](255) NOT NULL, [FirstName] [varchar](255) NOT NULL, [MiddleName] [varchar](255) NULL, [EmailAddress] [varchar](255) NOT NULL, [PhoneNumber] [varchar](12) NOT NULL CONSTRAINT [PK__CInforma__C1F8DC5968DD69DC] PRIMARY KEY CLUSTERED ( And a second table called Employment that has a foreign key linked to the parent table: CREATE TABLE [dbo].[Employment]( [EID] [int] IDENTITY(1,1) NOT NULL, [CID] [int] NOT NULL, [Employer] [varchar](255) NOT NULL, [Occupation] [varchar](255) NOT NULL, [Income] [varchar](25) NOT NULL, [WPhone] [varchar](12) NOT NULL, CONSTRAINT [PK__Employme__C190170BC7827524] PRIMARY KEY CLUSTERED (
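
    A sketch of the usual pattern: insert the parent row, capture its identity value with SCOPE_IDENTITY(), and use that as the foreign key for the child row (the procedure and parameter names are illustrative):

      CREATE PROCEDURE dbo.AddCustomerWithEmployment
          @LastName varchar(255), @FirstName varchar(255), @EmailAddress varchar(255),
          @PhoneNumber varchar(12), @Employer varchar(255), @Occupation varchar(255),
          @Income varchar(25), @WPhone varchar(12)
      AS
      BEGIN
          SET NOCOUNT ON;
          DECLARE @NewCID int;

          INSERT INTO dbo.Customer (LastName, FirstName, EmailAddress, PhoneNumber)
          VALUES (@LastName, @FirstName, @EmailAddress, @PhoneNumber);

          SET @NewCID = SCOPE_IDENTITY();   -- identity value generated by the insert above

          INSERT INTO dbo.Employment (CID, Employer, Occupation, Income, WPhone)
          VALUES (@NewCID, @Employer, @Occupation, @Income, @WPhone);
      END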

    Read the article

  • Field to display Previous 30 Day Total

    - by whytheq
    I've got this table: CREATE TABLE #Data1 ( [Market] VARCHAR(100) NOT NULL, [Operator] VARCHAR(100) NOT NULL, [Date] DATETIME NOT NULL, [Measure] VARCHAR(100) NOT NULL, [Amount] NUMERIC(36,10) NOT NULL, --new calculated fields [DailyAvg_30days] NUMERIC(38,6) NULL DEFAULT 0 ) I've populated all the fields apart from DailyAvg_30days. This field needs to show the total for the preceding 30 days e.g. 1. if Date for a particular record is 2nd Dec then it will be the total for the period 3rd Nov - 2nd Dec inclusive. 2. if Date for a particular record is 1st Dec then it will be the total for the period 2nd Nov - 1st Dec inclusive. My attempt to try to find these totals before updating the table is as follows: SELECT a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount], [DailyAvg_30days] = SUM(b.[Amount]) FROM #Data1 a INNER JOIN #Data1 b ON a.[Market] = b.[Market] AND a.[Operator] = b.[Operator] AND a.[Measure] = b.[Measure] AND a.[Date] >= b.[Date]-30 AND a.[Date] <= b.[Date] GROUP BY a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount] ORDER BY 1,2,4,3 Is this a valid approach or do I need to approach this from a different angle?
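
    An alternative sketch using a correlated subquery, which makes the 30-day window explicit (assumes the dates have no time-of-day component):

      SELECT a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount],
             (SELECT SUM(b.[Amount])
              FROM #Data1 b
              WHERE b.[Market]   = a.[Market]
                AND b.[Operator] = a.[Operator]
                AND b.[Measure]  = a.[Measure]
                AND b.[Date] >  DATEADD(DAY, -30, a.[Date])   -- e.g. 3rd Nov for a 2nd Dec row
                AND b.[Date] <= a.[Date]) AS [DailyAvg_30days]
      FROM #Data1 a
      ORDER BY a.[Market], a.[Operator], a.[Measure], a.[Date];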

    Read the article

  • strange behavior of <> to filter null values

    - by Kerezo
    Hi experts: I have a table called tblAlarm and it has some records like this: I have another table for determining which user has seen which message: Now I want to write a query to show messages that the user has not seen, if the message has not expired (for example, its year is between BeginYear and EndYear, and so on ...). I wrote this query: SELECT * FROM tblAlarms LEFT OUTER JOIN tblUsersAlarms tua ON tblAlarms.Id=tua.MessageID WHERE @CurrentYear BETWEEN tblAlarms.BeginYear AND tblAlarms.EndYear AND @CurrentMonth BETWEEN tblAlarms.BeginMonth AND tblAlarms.EndMonth AND @CurrentDay BETWEEN tblAlarms.BeginDay AND tblAlarms.EndDay AND (@CurrentHour * 60 + @CurrentMinute) BETWEEN tblAlarms.BeginHour*60 + tblAlarms.BeginMinute AND tblAlarms.EndHour*60 + tblAlarms.EndMinute --AND (tua.UserID <> 128 AND tua.UserID IS NULL) and it returns: but if I uncomment the last line it does not return any records. How can I determine which messages a user has not seen? Thanks
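
    A sketch of a likely fix: "tua.UserID <> 128 AND tua.UserID IS NULL" can never both be true for the same row, and filtering the outer table in the WHERE clause defeats the LEFT JOIN. Moving the user filter into the join condition and keeping only the unmatched rows (an anti-join) returns the messages user 128 has not seen (assumes tblUsersAlarms holds one row per user per seen message):

      SELECT a.*
      FROM tblAlarms a
      LEFT OUTER JOIN tblUsersAlarms tua
          ON a.Id = tua.MessageID
         AND tua.UserID = 128
      WHERE tua.MessageID IS NULL
        -- ... plus the existing BeginYear/EndYear and time-window conditions on tblAlarms ...
      ;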

    Read the article

  • Synchronizing non-DB SQL Server objects

    - by DigDoug
    There are a number of tools available for synchronizing Tables, Indexes, Views, Stored Procedures and objects within a database. (We love RedGate here, and throw a lot of money their way). However, I'm having a very difficult time finding tools that will help with Jobs, Logins and Linked Servers. Do these things exist? Am I missing something obvious?

    Read the article

  • SQL Server 05, which is optimal, LIKE %<term>% or CONTAINS() for searching large column

    - by Spud1
    I've got a function written by another developer which I am trying to modify for a slightly different use. It is used by an SP to check if a certain phrase exists in a text document stored in the DB, and returns 1 if the value is found or 0 if it's not. This is the query: SELECT @mres=1 from documents where id=@DocumentID and contains(text, @search_term) The document contains mostly XML, and the search_term is a GUID formatted as an nvarchar(40). This seems to run quite slowly to me (taking 5-6 seconds to execute this part of the process), but in the same script file there is also this version of the above, commented out: SELECT @mres=1 from documents where id=@DocumentID and text like '%' + @search_term + '%' This version runs MUCH quicker, taking 4ms compared to 15ms for the first example. So, my question is why use the first over the second? I assume this developer (who is no longer working with me) had a good reason, but at the moment I am struggling to find it. Is it possibly something to do with the full-text indexing? (This is a dev DB I am working with, so the production version may have better indexing.) I am not that clued up on FTI really, so not quite sure at the moment. Thoughts/ideas?

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here. Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I’ve been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn’t know I needed. Not to say that I wasn’t previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup – well, that’s just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of. This will be the first in a three part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a “Hello World” example. Part 1: Good Enough Is Not Great Part 2: A Model That Works: Branching and Multiple Deployment Environments Part 3: Other Considerations: SQL, Custom Tasks, Etc Good Enough Is Not Great There. I’ve said it. I certainly hope no one on the TFS team is offended, but it’s the truth. Let’s take a look under the hood and understand how it works, and then why it’s not enough to handle real world CD as-is. How it works. (note that I’ve skipped a couple of steps; I already have my accounts set up and something deployed to Azure) The first step is to establish some oAuth magic between your Azure management portal and your TFS Instance. You do this via the management portal. Once it’s done, you have a new build process template in your TFS instance. (Image lifted from the documentation) From here, you’ll get the usual prompts for security, allowing access, etc. But you’ll also get to pick which Solution in your source control to build. Here’s what the bulk of the build definition looks like. All I’ve had to do is add in the solution to build (notice that mine is from a specific branch – Release – more on that later) and I’ve changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it’s all in the cloud and I’m not waiting for my machine to compile and upload the package. (I also had to enable the build definition first – by default it is created in disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You’ll notice also that this build definition also automatically put my code in the Staging slot of my Azure deployment – more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and “production” and “staging” in Azure. 
More on that later too.) That’s it. That’s the default out-of-box experience. Easy, right? But it’s full of room for improvement, so let’s get into that….   The Problems Nothing is perfect (except my code – it’s always perfect), and neither is Continuous Deployment without a bit of work to help it fit your dev team’s process. So what are the issues? Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build –> Stable QA –> Production) I likely have configuration changes between each environment such as database connection strings and this process (and the VIP swap) doesn’t account for this. Yet. Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch – Release – of my code. For the purposes of this example, I have adopted the “basic” branching strategy as defined by the ALM Rangers. This basically establishes a “Main” trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don’t want to roll to production automatically when you merge to the Release branch and check-in (unless you like the thrill of it, and in that case, I like your style, cowboy….). Rather, you have nightly build and QA environments, or if you’ve adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution to Azure deployment target. Issue 3: SQL and other custom tasks. Let’s be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can’t think of an application I have touched in the last 10 years that doesn’t need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they’ve added Data Tier Applications. But let’s be honest with ourselves again: no one really uses it, and it’s not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn’t fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it’s completed. Developers – not just DBAs – also like to work with SQL in SQL tools, not in Visual Studio. I’m really picking on SQL because that’s generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.   The Solutions… ? We’ve taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I’ve overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app – any version, any time, to any environment.

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new extended events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more details, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Event objects (Events & Actions) and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (eg. a trace that you can see in sys.Traces) and converts it into event session DDL. Can you relate? In some ways, SQL Trace and Extended Events is kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine, for example, SQL:Batch Completed fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData field and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An  EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur when ever the parent Event occurs. Actions can do a number of different things for example, there are Actions that collect additional data and, take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map. Event Mapping The table dbo.trace_xe_event_map exists in the master database with the following structure: Column_name Type trace_event_id smallint package_name nvarchar xe_event_name nvarchar By joining this table sys.trace_events using trace_event_id and to the sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event. 
SELECT     t.trace_event_id,     t.name [event_class],     e.package_name,     e.xe_event_name FROM sys.trace_events t INNER JOIN dbo.trace_xe_event_map e     ON t.trace_event_id = e.trace_event_id There are a couple things you’ll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; eg. SP:CacheMiss == sp_cache_miss, and so on. We’ve mostly stuck to a one to one mapping between Event Classes and Events, but there are a few cases where we have combined when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event, you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories; there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. Doing this is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.) Action Mapping The table dbo.trace_xe_action_map exists in the master database with the following structure: Column_name Type trace_column_id smallint package_name nvarchar xe_action_name nvarchar You can find more details by joining this to sys.trace_columns on the trace_column_id field. SELECT     c.trace_column_id,     c.name [column_name],     a.package_name,     a.xe_action_name FROM sys.trace_columns c INNER JOIN    dbo.trace_xe_action_map a     ON c.trace_column_id = a.trace_column_id If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events so there is no need to specify them as Actions. Putting it all together If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typically set of Columns you find associated with any given Event Class in SQL Profiler is not fix, but is determine by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes. 
USE MASTER; GO SELECT DISTINCT    tb.trace_event_id,    te.name AS 'Event Class',    em.package_name AS 'Package',    em.xe_event_name AS 'XEvent Name',    tb.trace_column_id,    tc.name AS 'SQL Trace Column',    am.xe_action_name as 'Extended Events action' FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em    ON te.trace_event_id = em.trace_event_id) LEFT OUTER JOIN sys.trace_event_bindings tb    ON em.trace_event_id = tb.trace_event_id LEFT OUTER JOIN sys.trace_columns tc    ON tb.trace_column_id = tc.trace_column_id LEFT OUTER JOIN dbo.trace_xe_action_map am    ON tc.trace_column_id = am.trace_column_id ORDER BY te.name, tc.name As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session. USE MASTER; GO DECLARE @trace_id int SET @trace_id = 1 SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'    , el.columnid, ec.xe_action_name AS 'action' FROM (sys.fn_trace_geteventinfo(@trace_id) AS el    LEFT OUTER JOIN dbo.trace_xe_event_map AS em       ON el.eventid = em.trace_event_id) LEFT OUTER JOIN dbo.trace_xe_action_map AS ec    ON el.columnid = ec.trace_column_id WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL You’ll notice in the output that the list doesn’t include any of the security audit Event Classes, as I wrote earlier, those were not migrated. But wait…there’s more! If this were an infomercial there’d by some obnoxious guy next to me blogging “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?”  Needless to say, I’d blog back, in an overly excited way, “You bet I can' obnoxious blogger side-kick!” What I’ve got for you here is a Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse t-sql scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail. 
Sample code: TraceToExtendedEventDDL   Installing the procedure Just in case you’re not familiar with installing CLR procedures…once you’ve compile the assembly you can load it using a script like this: -- Context to master USE master GO -- Create the assembly from a shared location. CREATE ASSEMBLY TraceToXESessionConverter FROM 'C:\Temp\TraceToXEventSessionConverter.dll' WITH PERMISSION_SET = SAFE GO -- Create a stored procedure from the assembly. CREATE PROCEDURE CreateEventSessionFromTrace @trace_id int, @session_name nvarchar(max) AS EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent GO Enjoy! -Mike

    Read the article

  • What would be the optimal disk config for SQL Server 2008 R2?

    - by Kev
    We have a new Dell R710 server that came with the following storage configuration: 8 x 146GB SAS 10k 6Gbps disks, 1 x Perc H700 Integrated Controller (2 x 4 disks - 2 ports each supporting 4 disks). 1. What would be the optimal configuration if we were just after performance? 2. What would be the optimal configuration if we were after performance but wanted data resilience? 3. As per 2 above but with a hot standby disk? We plan to run Windows 2008 R2 and SQL Server 2008 R2. Maximising storage capacity isn't a prime concern.

    Read the article

  • Running Jetty under Windows Azure Using RoleEntryPoint in a Worker Role

    - by Shawn Cicoria
    This post is built upon the work of Mario Kosmiskas and David C. Chou’s prior postings – from here: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx  http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx As Mario points out in his post, when you need to have more control over the process that starts, it generally is better left to a RoleEntryPoint capability that as of now, requires the use of a CLR based assembly that is deployed as part of the package to Azure. There were things I liked especially about Mario’s post – specifically, the ability to pull down the JRE and Jetty runtimes at role startup and instantiate the process using the extracted bits.  The way Mario initialized the java process (and Jetty) was to take advantage of a role startup task configured as part of the service definition.  This is a great quick way to kick off processes or tasks prior to your role entry point.  However, if you need access to service configuration values or role events, that’s where RoleEntryPoint comes in.  For this PoC sample I moved the logic for retrieving the bits for the jre and jetty to the worker roles OnStart – in addition to moving the process kickoff to the OnStart method.  The Run method at this point is there to loop and just report the status of the java process. Beyond just making things more parameterized, both Mario’s and David’s articles still form the essence of the approach. The solution that accompanies this post provides all the necessary .NET based Visual Studio project.  In addition, you’ll need: 1. Jetty 7 runtime http://www.eclipse.org/jetty/downloads.php 2. JRE http://www.oracle.com/technetwork/java/javase/downloads/index.html Once you have these the first step is to create archives (zips) of the distributions.  For this PoC, the structure of the archive requires that the root of the archive looks as follows: JRE6.zip jetty---.zip Upload the contents to a storage container (block blob), and for this example I used /archives as the location.  The service configuration has several settings that allow, which is the advantage of using RoleEntryPoint, the ability to provide these things via native configuration support from Azure in a worker role. Storage Explorer You can use development storage for testing this out – the zipped version of the solution is configured for development storage.  When you’re ready to deploy, you update the two settings – 1 for diagnostics and the other for the storage container where the /archives are going to be stored. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="HostedJetty" osFamily="2" osVersion="*"> <Role name="JettyWorker"> <Instances count="1" /> <ConfigurationSettings> <!--<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />--> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="JettyArchive" value="jetty-distribution-7.3.0.v20110203b.zip" /> <Setting name="StartRole" value="true" /> <Setting name="BlobContainer" value="archives" /> <Setting name="JreArchive" value="jre6.zip" /> <!--<Setting name="StorageCredentials" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"/>--> <Setting name="StorageCredentials" value="UseDevelopmentStorage=true" />   For interacting with Storage you can use several tools – one tool that I like is from the Windows Azure CAT team located here: http://appfabriccat.com/2011/02/exploring-windows-azure-storage-apis-by-building-a-storage-explorer-application/  and shown in the prior picture At runtime, during role initialization and startup, Azure will call into your RoleEntryPoint.  At that time the code will do a dynamic pull of the 2 archives and extract – using the Sharp Zip Lib <link> as Mario had demonstrated in his sample.  The only different here is the use of CLR code vs. PowerShell (which is really CLR, but that’s another discussion). At this point, once the 2 zips are extracted, the Role’s file system looks as follows: Worker Role approot From there, the OnStart method (which also does the download and unzip using a simple StorageHelper class) kicks off the Java path and now you have Java! Task Manager Jetty Sample Page A couple of things I’m working on to enhance this is to extract the jre and jetty bits not to the appRoot but to a resource location defined as part of the service definition. ServiceDefinition.csdef <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="HostedJetty" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="JettyWorker"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> <Endpoints> <InputEndpoint name="JettyPort" protocol="tcp" port="80" localPort="8080" /> </Endpoints> <LocalResources> <LocalStorage name="Archives" cleanOnRoleRecycle="false" sizeInMB="100" /> </LocalResources>   As the concept matures a bit, being able to update dynamically the content or jar files as part of a running java solution is something that is possible through continued enhancement of this simple model. The Visual Studio 2010 Solution is located here: HostingJavaSln_NDA.zip

    Read the article

  • How can I grant read-only access to my SQL Server 2008 database?

    - by Adrian Grigore
    Hi, I'm trying to grant read-only access (in other words: select queries only) to a user account on my SQL Server 2008 R2 database. Which rights do I have to grant to the user to make this work? I've tried several kinds of combinations of permissions on the server and the database itself, but in all cases the user could still run update queries or he could not run any queries (not even select) at all. The error message I always got was The server principal "foo" is not able to access the database "bar" under the current security context. Thanks for your help, Adrian
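
    A sketch of the usual pattern (assumes a server login named foo already exists; the error message suggests no user is mapped for it in database bar):

      USE bar;
      CREATE USER foo FOR LOGIN foo;
      EXEC sp_addrolemember 'db_datareader', 'foo';  -- read-only: SELECT on all tables and views in the database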

    Read the article

  • Free online Windows AzureConf this Wednesday

    - by ScottGu
    This Wednesday, November 14th, we’ll be hosting Windows AzureConf – a free online event for and by the Windows Azure community.  It will be streamed online from 8:30am->5:00 PM PST via Channel 9, and you can watch it all for free. I’ll be kicking off the event with a Windows Azure overview in the morning (a great way to learn more about Windows Azure if you haven’t used it yet!), and following my talk the rest of the day will be full of excellent presentations from members of the Windows Azure community.  You can ask questions from them live and I think you’ll find the day an excellent way to learn more about Windows Azure – as well as hear directly from developers building solutions on it today. Click here to learn more about the event and register for free to watch it live.  Hope to see you there! Scott P.S. We will also make the presentations available for download after the event in case you miss them. 

    Read the article

  • Is there a simple way to backup and restore all Microsoft SQL Server database objects related to a particular application?

    - by Nathan Hartley
    I would like to back up not only the databases that belong to a particular application living on a shared server, but also the things that get stored outside of the database: the server accounts, jobs, maintenance plans, and whatever else I can't think of at the moment. This backup should be complete enough that its corresponding restore will recreate the entire application on a different SQL Server. This seems like a problem others must have dealt with in the past. So before I embark on creating custom PowerShell scripts for each application, I have come to ask you... Can you help?

    Read the article
