Search Results

Search found 28517 results on 1141 pages for 'sql und pl sql'.

  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris and I spent many months (many nights, holidays and working days alike) writing the book we would have liked to read when we started working with Analysis Services Tabular. A book that explains how to use Tabular, how to model data with Tabular, how Tabular works internally and how to optimize a Tabular model – all the things you need to start on a real project and end up with a happy customer. You know, we're all consultants after all, so customer satisfaction is really important if we want to be paid for our job! Now the writing is finished, we're in the final stage of editing and reviews, and we look forward to getting our print copy. Its title is very long: Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters:
    01. BISM Architecture
    02. Guided Tour on Tabular
    03. Loading Data Inside Tabular
    04. DAX Basics
    05. Understanding Evaluation Contexts
    06. Querying Tabular
    07. DAX Advanced
    08. Understanding Time Intelligence in DAX
    09. Vertipaq Engine
    10. Using Tabular Hierarchies
    11. Data Modeling in Tabular
    12. Using Advanced Tabular Relationships
    13. Tabular Presentation Layer
    14. Tabular and PowerPivot for Excel
    15. Tabular Security
    16. Interfacing with Tabular
    17. Tabular Deployment
    18. Optimization and Monitoring
    And this is the book cover – have a good read!

    Read the article

  • Runaway version store in tempdb

    - by DavidWimbush
    Today was really a new one. I got back from a week off and found our main production server's tempdb had gone from its usual 200MB to 36GB. Ironically I spent Friday at the most excellent SQLBits VI and one of the sessions I attended was Christian Bolton talking about tempdb issues - including runaway tempdb databases. How just-in-time was that?! I looked into the file growth history and it looks like the problem started when my index maintenance job was chosen as the deadlock victim. (Funny how they almost make it sound like you've won something.) That left tempdb pretty big but for some reason it grew several more times. And since I'd left the file growth at the default 10% (aaargh!) the worse it got the worse it got. The last regrowth event was 2.6GB. Good job I've got Instant Initialization on. Since the Disk Usage report showed it was 99% unallocated I went into the Shrink Files dialogue which helpfully informed me the data file was 250MB.  I'm afraid I've got a life (allegedly) so I restarted the SQL Server service and then immediately ran a script to make the initial size bigger and change the file growth to a number of MB. The script complained that the size was smaller than the current size. Within seconds! WTF? Now I had to find out what was using so much of it. By using the DMV sys.dm_db_file_space_usage I found the problem was in the version store, and using the DMV sys.dm_db_task_space_usage and the Top Transactions by Age report I found that the culprit was a 3rd party database where I had turned on read_committed_snapshot and then not bothered to monitor things properly. Just because something has always worked before doesn't mean it will work in every future case. This application had an implicit transaction that had been running for over 2 hours.
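
    If you ever need to run a similar diagnosis, a sketch along these lines can help – the DMVs are the documented tempdb and transaction views, but the thresholds and column choices here are just illustrative. It shows whether the version store is what is eating tempdb, and which long-running transactions are the likely culprits:

        -- How much of tempdb is version store vs. other usage (sizes in MB)
        SELECT SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
               SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
               SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
               SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_mb
        FROM tempdb.sys.dm_db_file_space_usage;

        -- Oldest active transactions - long runners are the usual version store culprits
        SELECT TOP (10) st.session_id,
               at.transaction_begin_time,
               DATEDIFF(MINUTE, at.transaction_begin_time, GETDATE()) AS age_minutes
        FROM sys.dm_tran_active_transactions at
        INNER JOIN sys.dm_tran_session_transactions st
            ON st.transaction_id = at.transaction_id
        ORDER BY at.transaction_begin_time;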

    Read the article

  • Cool SQL formatter tool

    - by AndyScott
    I have to deal with all types of code written by people from different organizations and different countries, using different languages, so obviously standards differ across these sources. One of the biggest headaches I have come across is how people differ in formatting their SQL statements, specifically stored procs. When you regularly get over 500 lines in a sproc and the code is not formatted correctly, you can get lost trying to figure out where one nested BEGIN begins and another nested END ends. One of my co-workers showed me this site today that does a pretty damn good job of making sense of that type of code: http://www.dpriver.com/pp/sqlformat.htm. This is a free website that offers a box to enter your nasty code; click "Format SQL" and have it cleaned for you. I am sure there are situations where this may not work, but given the code I have been working with recently, it does a really good job. There is a pay version with more options, including a VS add-in, a desktop component (with quick keys to clean text in programs like Notepad), the ability to output in HTML, and other stuff. Heck, I watched a demo where the purchased version took formatted SQL code and turned it into a generic StringBuilder object with the formatting embedded. Yes, this seems like a shameless plug, but no, I have no relation to or knowledge of anyone involved in the development of this product; it just seems useful. Either way, I recommend checking out the free version.

    Read the article

  • Code formatter for SSMS

    - by blakmk
      I was searching recently for a code formatter for T-SQL and I came across this nice little utility that I wanted to share: http://www.wangz.net/cgi-bin/pp/gsqlparser/sqlpp/sqlformat.tpl I've been dealing with a lot of legacy code lately and there is nothing I find more infuriating than unformatted code. This tool seems to work quite well. Just one click and it formats everything nicely. There is also a free web version.

    Read the article

  • File Watcher Task

    The task will detect changes to existing files as well as new files; both actions will cause the file to be found when available. A file is available when the task can open it exclusively. This is important for files that take a long time to be written, such as large files, or those that are just written slowly or delivered via a slow network link. It can also be set to look for existing files first (1.2.4.55). The full path of the found file is returned in up to three ways:
    - The ExecValueVariable of the task. This can be set to any String variable.
    - The OutputVariableName when specified. This can be set to any String variable.
    - The FullPath variable within OnFileFoundEvent. This is a File Watcher Task specific event.
    Advance warning of a file having been detected, but not yet available, is returned through the OnFileWatcherEvent. This event does not always coincide with the completion of the task, as completion and the OnFileFoundEvent are delayed until the file is ready for use. This event indicates that a file has been detected, and that file will now be monitored until it becomes available. The task will only detect and report on the first file that is created or changes; any subsequent changes will be ignored. Task properties and their usages are documented below:

    Filter (String) - Default filter *.* will watch all files. Standard Windows wildcards and patterns can be used to restrict the files monitored.
    FindExistingFiles (Boolean) - Indicates whether the task should check for any existing files that match the path and filter criteria, before starting the file watcher.
    IncludeSubdirectories (Boolean) - Indicates whether changes in subdirectories are accepted or ignored.
    OutputVariableName (String) - The name of the variable into which the full file path found will be written on completion of the task. The variable specified should be of type String.
    Path (String) - Path to watch for new files or changes to existing files. The path is a directory, not a full filename. For a specific file, enter the file name in the Filter property and the directory in the Path property.
    PathInputType (FileWatcherTask.InputType) - Three input types are supported for the path: Connection (a file connection manager, of type existing folder), Direct Input (type the path directly into the UI or set it on the property as a literal string), or Variable (the name of the variable which contains the path).
    Timeout (Integer) - Time in minutes to wait for a file. If no files are detected within the timeout period the task will fail. The default value of 0 means infinite, and will not expire.
    TimeoutAsWarning (Boolean) - The default behaviour is to raise an error and fail the task on timeout. This property allows you to suppress the error on timeout; a warning event is raised instead, and the task succeeds. The default value is false.

    Installation

    The task is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only - finally, you will have to add the task to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items....
    Select the SSIS Control Flow Items tab, and then check the File Watcher Task in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations.

    Downloads

    The File Watcher Task is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    File Watcher Task for SQL Server 2005
    File Watcher Task for SQL Server 2008
    File Watcher Task for SQL Server 2012

    Version History

    SQL Server 2012
    Version 3.0.0.16 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    Version 2.0.0.14 - Fixed user interface bug. A migration problem caused the UI type editors to reference an old SQL 2005 assembly. (17 Nov 2008)
    Version 2.0.0.7 - SQL Server 2008 release. (20 Oct 2008)
    SQL Server 2005
    Version 1.2.6.100 - Fixed UI bug with TimeoutAsWarning property not saving correctly. Improved expression support in UI. File availability detection changed to use a read-only lock, allowing reduced permissions to be used. Corrected installer issue which prevented installation on 64-bit machines with SSIS runtime-only components. (18 Mar 2007)
    Version 1.2.5.73 - Added TimeoutAsWarning property. Gives the ability to suppress the error on timeout; a warning event is raised instead, and the task succeeds. (Task Version 3) (27 Sep 2006)
    Version 1.2.4.61 - Fixed a bug which could cause a loop condition with an unexpected exception such as incorrect file permissions. (20 Sep 2006)
    Version 1.2.4.55 - Added FindExistingFiles property. When true the task will check for an existing file before the file watcher itself actually starts. (Task Version 2) (8 Sep 2006)
    Version 1.2.3.39 - SQL Server 2005 RTM refresh. SP1 compatibility testing. Property type validation improved. (12 Jun 2006)
    Version 1.2.1.0 - SQL Server 2005 IDW 16 Sept CTP. Further UI enhancements, including expression indicator. Fixed bug caused by execution within a loop: subsequent iterations detected the same file as the first iteration. Added IncludeSubdirectories property. Fixed bug when changes were made in subdirectories and a folder change was detected, causing task failure. (Task Version 1) (6 Oct 2005)
    Version 1.2.0.0 - SQL Server 2005 IDW 15 June CTP. Changes made include an enhanced UI, the PathInputType property for greater flexibility with path input, the OutputVariableName property, and the new OnFileFoundEvent event. (7 Sep 2005)
    Version 1.1.2 - Public release. (16 Nov 2004)

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server; we offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the task along the lines of The task with the name "File Watcher Task" and the creation name ... is not registered for use on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer.
    You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference:

    TITLE: Microsoft Visual Studio
    ------------------------------
    The task with the name "File Watcher Task" and the creation name "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask, Konesans.Dts.Tasks.FileWatcherTask, Version=1.2.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b" is not registered for use on this computer. Contact Information: File Watcher Task

    A similar error message can be shown when trying to edit the task if the Microsoft Exception Message Box is not installed. This useful component is installed as part of the SQL Server Management Studio tools, but occasionally, due to the custom options chosen during SQL Server 2005 setup, it may be absent. If you get an error like Could not load file or assembly 'Microsoft.ExceptionMessageBox.. you can manually download and install the missing component. It is available as part of the Feature Pack for SQL Server 2005 release. The feature packs are occasionally updated by Microsoft so you may like to check for a more recent edition, but you can find the Microsoft Exception Message Box download links here - Feature Pack for Microsoft SQL Server 2005 - April 2006. If you encounter this problem on SQL Server 2008, please check that you have installed the SQL Server client components. The component is no longer available as a separate download for SQL Server 2008, as noted in the Microsoft documentation for Deploying an Exception Message Box Application. The full error message is shown below for reference, although note that the Version will change between SQL Server 2005 and SQL Server 2008:

    TITLE: Microsoft Visual Studio
    ------------------------------
    Cannot show the editor for this task.
    ------------------------------
    ADDITIONAL INFORMATION:
    Could not load file or assembly 'Microsoft.ExceptionMessageBox, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. (Konesans.Dts.Tasks.FileWatcherTask)

    Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component? If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Sample Code

    If you want to use the task programmatically then here is some sample code for creating a basic package and configuring the task. It uses a variable to supply the path to watch, and also sets a variable for the OutputVariableName. Once execution is complete it writes out the file found to the console.
        // Sample assumes a reference to the SSIS runtime assembly, e.g.:
        // using Microsoft.SqlServer.Dts.Runtime;

        /// <summary>
        /// Create a package with a File Watcher Task
        /// </summary>
        public void FileWatcherTaskBasic()
        {
            // Create the package
            Package package = new Package();
            package.Name = "FileWatcherTaskBasic";

            // Add variable for input path, the folder to look in
            package.Variables.Add("InputPath", false, "User", @"C:\Temp\");

            // Add variable for the file found, to be used on OutputVariableName property
            package.Variables.Add("FileFound", false, "User", "EMPTY");

            // Add the Task
            package.Executables.Add("Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask, " +
                "Konesans.Dts.Tasks.FileWatcherTask, Version=1.2.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b");

            // Get the task host wrapper
            TaskHost taskHost = package.Executables[0] as TaskHost;

            // Set basic properties
            taskHost.Properties["PathInputType"].SetValue(taskHost, 1); // InputType.Variable
            taskHost.Properties["Path"].SetValue(taskHost, "User::InputPath");
            taskHost.Properties["OutputVariableName"].SetValue(taskHost, "User::FileFound");

        #if DEBUG
            // Save package to disk, DEBUG only
            new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
        #endif

            // Display variable value before execution to check EMPTY
            Console.WriteLine("Result Variable: {0}", package.Variables["User::FileFound"].Value);

            // Execute package
            package.Execute();

            // Display variable value after execution, e.g. C:\Temp\File.txt
            Console.WriteLine("Result Variable: {0}", package.Variables["User::FileFound"].Value);

            // Perform simple check for execution errors
            if (package.Errors.Count > 0)
            {
                foreach (DtsError error in package.Errors)
                {
                    Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
                    Console.WriteLine(" SubComponent : {0}", error.SubComponent);
                    Console.WriteLine(" Description : {0}", error.Description);
                }
            }
            else
            {
                Console.WriteLine("Success - {0}", package.Name);
            }

            // Clean-up
            package.Dispose();
        }

    (Updated installation and troubleshooting sections, and added sample code July 2009)

    Read the article

  • SQL Cruise Alaska 2011

    - by Grant Fritchey
    I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't know what this is: SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics, as well as network with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant: between classes, at night on the cruise ship, out on excursions in Alaskan rainforests and while snorkeling in ocean waters. Here's a rundown of the experience from my point of view. Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what would be useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the Cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts. Cruisers "In" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen. Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcolm (twitter) and Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number – oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an iPad 2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below; note my special SQL Server Central Friday shirt. Thanks Steve (blog|twitter)). Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just Lucky: Grant Fritchey. Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months), both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy).
    Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically, the session was on Tim's Periodic Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day to day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information. Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun". It was all about the different transaction isolation levels and how they work. There is so often confusion in this area and Kendra does a great job of clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking. Wednesday was an all-day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done, we went back to the boat and had a quiet dinner on our own. Thursday we cruised through Glacier Bay and saw at least four different glaciers, including sitting next to the Margerie Glacier for about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn. Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford. Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm thick wet suits (an adventure all by itself) so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well. Saturday we were travelling so there was time for a bunch of classes.
Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this. Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me and for Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.

    Read the article

  • Oracle Alliances & Channels wishes all partners a merry Christmas!

    - by A&C Redaktion
    Finally! The last projects are done, the presents are wrapped, and even in the spheres of Web 2.0 a little calm and Christmas spirit should now settle in. So I don't want to waste many words: a busy year lies behind us, with plenty of challenges to master. Especially in the supposedly peaceful Advent season things get far too hectic. But that makes it all the more important to pause for a moment. We have achieved a great deal and should now, over the holidays, close the office door in our heads as well. Then Christmas can finally come! On behalf of Oracle Alliances & Channels, I would like to thank you for the excellent cooperation. I wish you and your loved ones a merry Christmas and pleasant, relaxing holidays! Warm regards, Silvia Kaske, Senior Director Channel Sales & Alliances, ORACLE Deutschland B.V. & Co. KG

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties Dynamic Management Function returns information about the statistics that currently exist on your database objects. This function takes two parameters, an object_id and a stats_id. Let's have a look at the result set from this function against the AdventureWorks2012.Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats system table.

        SELECT sp.*
        FROM sys.stats s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
        WHERE sp.object_id = OBJECT_ID('Sales.SalesOrderHeader')

    The first two columns returned by this function are the object_id and the stats_id columns. The next column, 'last_updated', gives you the date and the time that a particular statistic was last updated. The next column, 'rows', gives you the total number of rows in the table as of the last statistics update date. The 'rows_sampled' column gives you the number of rows that were sampled to create the statistic. The 'steps' column represents the number of specific value ranges from the statistic histogram. The 'unfiltered_rows' column represents the number of rows before any filters are applied. If a particular statistic is not filtered, the 'unfiltered_rows' column will always equal the 'rows' column. Lastly we have the 'modification_counter' column, which represents the number of modifications to the leading column in a given statistic since the last time the statistic was updated. Probably the most important column from this Dynamic Management Function is the 'last_updated' column. You want to always ensure that you have accurate and updated statistics on your database objects. Accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans. Without accurate and updated statistics, the performance of your SQL Server would likely suffer. For more information about this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA
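
    Putting the 'last_updated' advice into practice, a sketch like the following can flag statistics that may be stale. The seven-day window and 20% churn threshold here are arbitrary assumptions for illustration, not recommendations:

        SELECT OBJECT_NAME(s.object_id) AS table_name,
               s.name AS stats_name,
               sp.last_updated,
               sp.rows,
               sp.modification_counter
        FROM sys.stats s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
        WHERE sp.last_updated < DATEADD(DAY, -7, GETDATE())   -- assumed staleness window
           OR sp.modification_counter > 0.2 * sp.rows         -- assumed churn threshold
        ORDER BY sp.modification_counter DESC;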

    Read the article

  • Remote Access to MSSQL Database From 1&1 Hosting [duplicate]

    - by Zerkey
    This question already has an answer here: How to find web hosting that meets my requirements? 5 answers I just paid $6/month for shared Windows hosting through 1&1 Hosting. I was having trouble connecting to my database from home, so I sent an email to support. I received the following response: As we checked your concern here in our end, please be advised that due to limitation of Shared Hosting services, there is no option to connect the database to your SQL Management Studio or through Visual Studio. It is only possible for Dedicated Server package. You may only access the database using MyLittleAdmin at the Control Panel. A dedicated server is like $200 per month! What is the point of having database access only through a web console? I feel I am missing something here, or maybe the support agent is. Is there a way to access my MS SQL database on their servers through Visual Studio or SQL Management Studio from my machine? If not, is there a web host who allows this for less than $200 a month? EDIT: Marked as duplicate... I'm not asking for a list of web hosts, I'm asking how to remotely connect to my MSSQL database through 1&1's services.

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs every last drop of performance it can get; other times, not so much. We're in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was "local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess". What that means is you should avoid code like this in your stored procs, even though it seems such an intuitively good idea:

        DECLARE @StartDate datetime
        SET @StartDate = '20091125'

        SELECT * FROM Orders WHERE OrderDate = @StartDate

    Instead you should reference the value directly in the WHERE clause so the Query Optimizer can create a better execution plan:

        SELECT * FROM Orders WHERE OrderDate = '20091125'

    My first thought about this one was that we reference variables in the form of passed-in parameters in WHERE clauses in many of our stored procs. Not to worry though, because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor's session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
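
    To make that distinction concrete, here is a minimal sketch (the table and procedure names are hypothetical) of the parameter pattern that stays optimizer-friendly, because a parameter value – unlike a local variable – is visible to the Query Optimizer when it compiles the plan:

        CREATE PROCEDURE dbo.GetOrdersByDate
            @StartDate datetime
        AS
        BEGIN
            -- @StartDate is a parameter, not a local variable, so the
            -- Query Optimizer can sniff its value and estimate rows accurately.
            SELECT *
            FROM Orders
            WHERE OrderDate = @StartDate;
        END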

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed with every parameter set to NULL:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs – or, more accurately, in order to choose when to have the NULLs – you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set shows some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName] …
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    gives us

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , …
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i
            ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;

        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');

        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT *
            FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level; we'll skip these as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:
    avg_fragmentation_in_percent - the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    fragment_count - the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    avg_fragment_size_in_pages - the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    page_count - total number of index or data pages in use.
    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would have a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do to retrieve the data to satisfy the query increases, and performance starts to suffer. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business, and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply (a rough sketch of such a check appears at the end of this post). There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1 * 8 * 1024) * (Col2 / 100)) / Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index.
    Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:
    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - whether the columns used in the index should be analysed, to avoid new records needing to be inserted in the middle of the index and instead always be added to the end.
    * - it's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk. Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
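
    As promised, a minimal sketch of such an 'intelligent' defragmentation check might look like the following - the 5%/30% thresholds echo the commonly cited Books Online guidance, and the page_count filter is an assumption for illustration:

        -- Suggest a defragmentation action per index based on logical fragmentation
        SELECT OBJECT_NAME(ddips.object_id) AS table_name,
               i.name AS index_name,
               ddips.avg_fragmentation_in_percent,
               ddips.page_count,
               CASE
                   WHEN ddips.avg_fragmentation_in_percent < 5  THEN 'do nothing'
                   WHEN ddips.avg_fragmentation_in_percent < 30 THEN 'ALTER INDEX ... REORGANIZE'
                   ELSE 'ALTER INDEX ... REBUILD'
               END AS suggested_action
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN sys.indexes AS i
            ON i.object_id = ddips.object_id AND i.index_id = ddips.index_id
        WHERE ddips.page_count > 100   -- assumption: ignore tiny indexes
        ORDER BY ddips.avg_fragmentation_in_percent DESC;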

    Read the article

  • Tale of an Encrypted SSIS Package in msdb and a Lost Password

    - by Argenis
      Yesterday a Developer at work asked for a copy of an SSIS package in Production so he could work on it (please, dear Reader – withhold judgment on Source Control – I know!). I logged on to the SSIS instance, and when I went to export the package… Oops. I didn't have that password. The DBA who uploaded the package to Production is long gone; my fellow DBA had no idea either - and the Devs returned a cricket sound when queried. So I posed the obligatory question on #SQLHelp and a bunch of folks jumped in – some to help and some to make fun of me (thanks, @SQLSoldier @crummel4 @maryarcia and @sqljoe). I tried their suggestions to no avail… even ran some queries to see if I could figure out how to extract the package XML from the system tables in msdb:

        SELECT CAST(CAST(p.packagedata AS varbinary(max)) AS varchar(max))
        FROM msdb.dbo.sysssispackages p
        WHERE p.name = 'LePackage'

    This just returned a bunch of XML with encrypted data on it. I knew there was a job in SQL Agent scheduled to execute the package, and when I tried to look at details on the job step I got the following: Not very helpful. The password had to be saved somewhere, but where?? All of a sudden I remembered that there was a system table I hadn't queried yet:

        SELECT sjs.command
        FROM msdb.dbo.sysjobs sj
        JOIN msdb.dbo.sysjobsteps sjs ON sj.job_id = sjs.job_id
        WHERE sj.name = 'Run LePackage'

    The result: "Well, that's really secure", I thought to myself. Cheers, -Argenis

    Read the article

  • Oracle Developer Day: "The Oracle Database in Practice"

    - by Ulrike Schwinn (DBA Community)
    In the new year, Oracle Developer Days will once again take place in various cities! In this event, put together specifically by the Database BU, you will learn many tips and tricks from the field and be brought up to date on the following topics:
    - The differences between the editions and their secrets
    - A comprehensive set of base features, even without extra-cost options
    - Performance and scalability in the individual editions
    - Cost and resource savings made easy
    - Security in the database
    - Increasing availability with simple means
    - Handling large volumes of data
    - Cloud technologies in the Oracle Database
    An outlook on the features of the new database version planned for 2013 rounds off the workshop. Dates, agenda, venues and registration can be found here. Register for the event today - attendance is free!

    Read the article

  • Colour coding of the status bar in SQL Server Management Studio - Oh dear

    - by simonsabin
    The new feature in SQL Server 2008 that colour-codes your query window status bar according to the server you are on is great. It's a nice way to distinguish production from development servers. Unfortunately it was pointed out to me by a client recently that it doesn't always work. To me that sort of makes it pointless. It's a bit like having brakes that work some of the time. Are you going to play Russian roulette every time you execute a query? What's more, the colour doesn't change if you change the connection. So you can flip between dev and production servers but your status bar stays the colour you set for the dev server. It really annoys me to find features that sort of work. The reason I initially gave up on SQLPrompt was that it didn't work 100% of the time, and in the time it didn't work I wasted so much effort trying to get it working that I lost more time than if I didn't have it. (I will say that was 2-3 years ago.) If you would like to use this feature but aren't because of these bugs, please vote on them: https://connect.microsoft.com/SQLServer/feedback/details/504418/ssms-make-color-coding-of-query-windows-work-all-the-time https://connect.microsoft.com/SQLServer/feedback/details/361832/update-status-bar-colour-when-changing-connections

    Read the article

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study using one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don't often see in other case studies (which are usually more marketing-oriented). This white paper has the following structure:
    - Requirements (data model, capacity planning, client tool)
    - Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular)
    - Data model optimizations (memory compression, query performance, scalability)
    - Partitioning and processing strategy for near real-time latency
    - Hardware selection (NUMA analysis, Azure VM tests)
    - Scalability tests (estimation of maximum users per node)
    If you are in charge of evaluating Tabular as an analytical engine, or if you have to design your solution based on Tabular, this white paper is a must-read. But if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true: […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers. But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity) – do this. Owen Graupman, inContact. I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical "BI system"), because it is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article

  • A clear advantage with Oracle Enablement 2.0!

    - by A&C Redaktion
    Oracle Enablement 2.0 comprises training offerings that Oracle partners can use efficiently, tailored to their particular work situation, either on site or online. All of these offerings support Oracle partners in developing and expanding their sales strength and in deepening their implementation skills. As an Oracle partner, stay on the ball and check regularly how you can build up the know-how needed for OPN Specialization and the associated assessments within your company. The Oracle Country Enablement Team helps Oracle partners with specialization training and individual advice. You can find up-to-date information on training and specialization on our Enablement Blog, which Frank Lauer and Corry Weick briefly introduce in the video.

    Read the article

  • Tip -> Good short video on Real Application Clusters - Active Data Guard - High Availability

    - by britta wolf
    In this 7-minute video, high availability with Oracle Real Application Clusters (RAC) and Active Data Guard is explained briefly and crisply. The video was produced by our Potsdam colleagues from the DTCC. Well worth a look! We will post further short videos on our Academy blog from time to time. What does the abbreviation DTCC actually stand for? Oracle Direct Technology Customer Center - a department of systems consultants in Potsdam. Our colleagues support customers with Oracle solutions in the database and middleware area. Current challenges are analysed together with the customer, and an ideal solution - along with a roadmap to get there - is developed. This covers explanations of features and functions, IT architecture and, ultimately, the optimal use of Oracle solutions.

    Read the article

  • The tape library that grows with you

    - by A&C Redaktion
    With the StorageTek SL150 Modular Tape Library, Oracle has developed an archiving solution that can grow along with a company. The goals were ambitious: the new tape library had to be not only extremely scalable but also inexpensive, since it is intended as an entry-level library for smaller, growing and mid-size companies. For the launch of the tape library, Oracle presents impressive facts and figures:
    - 75% cheaper to purchase than comparable products
    - space-saving thanks to 40% higher density
    - highest security standards
    - expandable from 30 to as many as 300 slots, and thus 900 terabytes
    - easy operation thanks to an intuitive user interface based on Oracle Fusion Middleware and Oracle Linux
    - installation takes only 30 minutes
    - supports many different system environments
    Partners have the opportunity to offer their own support services for this new member of the Oracle product family. Details of the resell and support requirements can be found here (OPN login required): SL150 Product Overview; Partner Support Option with StorageTek SL150 Modular Tape Library; FAQ - Partner Support Option with StorageTek SL150 Modular Tape Library. The English-language press release for the launch also offers extensive information, and details, from dimensions to power consumption, can be found in the StorageTek SL150 Data Sheet. And of course we don't want to withhold the first reactions to the StorageTek SL150 from the German-language trade press: Speicherguide, IT SecCity, IT Administrator, DOAG

    Read the article

  • 404 Not Found for a PL script that exists!

    - by Abs
    Hello all, I make a GET request to a CGI script and I get a 404 error. However, I am 100% sure that the script is present and that it has permissions:

    -rwxr-xr-x 1 apache apache 6520 Sep 7 03:01 uu_ini_status_audios.pl

    The request URL is: http://mysite.com/cgi-bin/uu_ini_status_audios.pl?tmp_sid=893facacc5dc392ad0f4c91e6a9e8d40&rnd_id=0.12266222834382812 The error I get: The requested URL /cgi-bin/uu_ini_status_audios.pl was not found on this server. This used to work for me before, but I think it stopped working after I restarted Apache, so maybe it is a configuration setting I changed?? I checked the error logs for Apache and PHP and nothing useful was found to help me with my problem! I appreciate any help on this!

    Read the article

  • SQL Saturday Atlanta: Intro To Performance Tuning

    - by Mike Femenella
    I'm looking forward to speaking in Atlanta on the 24th; it will be fun to get back down that way to visit some friends and present two topics that I really enjoy. First, an introduction to performance tuning. Performance tuning is a very wide and deep topic and we're staying close to the surface. I aim this class at newbie SQL users who have less than 2 years of experience. It's all the things I wish someone had told me in my first 2 years about what to look for when the database was slow... or allegedly slow, I should say. We'll cover using Profiler to find slow-performing queries and how to save the data off to a table, as well as a tour of other features; the difference between clustered, non-clustered and covering indexes; how to look at and understand an execution plan (at a high level); and finally the difference between a temp table and a table variable and what the implications are of using either one in your code. That pretty much takes up a full hour. The second presentation, Loading Data in Real Time, is really a presentation about partitioning, but with a twist that we used at work recently to solve a need to load some data quickly and put it into production with minimal downtime. We'll cover partition functions, schemes, $PARTITION, MERGE, sys.partitions, and show some examples of building a set of partitioned tables and using the SWITCH statement to move data from one table to another. Finally we'll cover the differences in partitioning between 2005 and 2008. Hope to see you there! And if you read my blog please introduce yourself!
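
    For readers new to the switch technique mentioned above, a rough sketch of the pattern looks like the following - all object names and boundary values are hypothetical, and details such as indexes, constraints and filegroups matter in real systems:

        -- Partition function and scheme (boundary values are illustrative)
        CREATE PARTITION FUNCTION pf_LoadDate (datetime)
            AS RANGE RIGHT FOR VALUES ('20100401', '20100402');

        CREATE PARTITION SCHEME ps_LoadDate
            AS PARTITION pf_LoadDate ALL TO ([PRIMARY]);

        -- Production table and an identically structured staging table
        CREATE TABLE dbo.Sales (LoadDate datetime NOT NULL, Amount money NOT NULL)
            ON ps_LoadDate (LoadDate);
        CREATE TABLE dbo.Sales_Staging (LoadDate datetime NOT NULL, Amount money NOT NULL)
            ON ps_LoadDate (LoadDate);

        -- Bulk load dbo.Sales_Staging, then swap its partition into production
        -- almost instantly; $PARTITION resolves which partition a value maps to.
        DECLARE @p int;
        SET @p = $PARTITION.pf_LoadDate('20100401');
        ALTER TABLE dbo.Sales_Staging SWITCH PARTITION @p TO dbo.Sales PARTITION @p;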

    Read the article

  • New SQL Azure Development Accelerator Core promotional offer announced

    - by Eric Nelson
    This is (almost) a straight copy and paste but represents an important announcement worthy of a little more “exposure” :-) Starting August 1, 2010, we will release a new SQL Azure Development Accelerator Core promotional offer. This new offer will give you the flexibility to purchase commitment quantities of SQL Azure Business Edition databases independent of other Windows Azure platform services at a deeply discounted monthly price. The offer is valid only for a six month term. You may purchase in 10 GB increments the amount of our Business Edition relational database that you require (each Business Edition database is capable of storing up to 50 GB). The offer price will be $74.95 per 10 GB per month. This promotional offer represents 25% off of our normal consumption rates. Monthly Business Edition relational database usage exceeding the purchased commitment amount and usage for other Windows Azure platform services for this offer will be charged at our normal consumption rates. Please click here for full details of our new SQL Azure Development Accelerator Core offer.
    Related Links:
    - Details of 5GB and 50GB databases have been released
    - http://ukazure.ning.com UK community site
    - Getting started with the Windows Azure Platform
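    For reference, provisioning one of these Business Edition databases was done with plain T-SQL against the SQL Azure master database. A minimal sketch, assuming the SQL Azure provisioning syntax of the time (the database name is hypothetical):

        -- Run while connected to the SQL Azure master database;
        -- EDITION selects the billing edition, MAXSIZE the size cap
        CREATE DATABASE SalesDb (MAXSIZE = 10 GB, EDITION = 'business');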

    Read the article

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there exists an SSIS data flow destination component that enables one to MERGE data into a table rather than INSERT it. It's a common request, so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE destination will be available. That being said, the SSIS team have made available a MERGE destination component via CodePlex, which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary. In the past it has occurred to me that a built-in way to provide MERGE from the SSIS pipeline would be highly valuable. I assume that this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect: BULK MERGE (111 votes at the time of writing) and [SSIS] BULK MERGE Destination (15 votes). If you think these would be useful, feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS, but if you want to perform a minimally logged MERGE using T-SQL, Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet
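    In the meantime, a common workaround is to stage the rows with a regular OLE DB destination and run a MERGE afterwards in an Execute SQL task. A minimal T-SQL sketch of that upsert, with hypothetical table and column names (this illustrates the pattern, not the CodePlex component's actual implementation; MERGE requires SQL Server 2008 or later):

        -- The data flow lands rows in dbo.Customer_Staging;
        -- this statement then folds them into the target table
        MERGE dbo.Customer AS tgt
        USING dbo.Customer_Staging AS src
            ON tgt.CustomerID = src.CustomerID
        WHEN MATCHED THEN
            UPDATE SET tgt.Name  = src.Name,
                       tgt.Email = src.Email
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (CustomerID, Name, Email)
            VALUES (src.CustomerID, src.Name, src.Email);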

    Read the article

  • How do I run the sphere-slicer.pl perl command to make a photo into a sphere?

    - by Mahdi Zenali
    I was looking for a program to slice pictures so I can paste them on a globe (sphere). I found ip-slicer on this website: http://www.bruno.postle.net/2001/ip-slicer/ The problem I have is that I don't know where I should enter the command. For example, after running the program and entering the line "sphere-slicer.pl 16 1000 input.jpg" I get this error: Number found where operator expected at - line 72, near "pl 16" (Do you need to predeclare pl?) Number found where operator expected at - line 72, near "16 1000" (Missing operator before 1000?) Bareword found where operator expected at - line 72, near "1000 input" (Missing operator before input?) This program is written in Perl.

    Read the article

  • Oracle's Cloud Strategy after OOW 2012

    - by Manuel Hossfeld
    At this year's Oracle OpenWorld, "the cloud" was not just a heavily used buzzword but also the occasion for some interesting announcements. If you had neither the time nor the inclination to listen to the corresponding keynotes by Larry Ellison and Thomas Kurian, this article summarizes the essential changes. The first piece of news: Oracle will in future offer all three "flavors" or "layers" of cloud computing. SaaS (Software as a Service), the provision of complete business applications, for example from the E-Business Suite, on a subscription basis, has been available for a while. Apart from the fact that additional and newer components and modules are now offered from Oracle's palette, which has grown even broader through the recent acquisitions, the principle is unchanged. For PaaS (Platform as a Service), the most notable items are the two services already announced last year, the "Database Service" (based on APEX) and the "Java Service" (based on WebLogic), for which concrete packages and prices (roughly $175 to $2,000 per month) are now available, along with the option to sign up at http://cloud.oracle.com. Interestingly, a so-called "Social Service" also belongs to this layer, with which Oracle customers will in future be able to extend their applications in a standardized way with social networking functionality such as microblogging. Also newly announced was a "Developer Service", which is intended to provide source code management through Git repositories as well as wikis and issue tracking. Applications built there with JDeveloper, NetBeans, or Eclipse can then be deployed seamlessly into the Java Service within a very short time. Completely new, and for some certainly surprising, is the IaaS (Infrastructure as a Service) area. This is about providing basic infrastructure components such as storage, compute power (ultimately operating systems / VMs), and messaging / queueing. Exact details and prices for the IaaS offerings are not yet known, but basic information on the storage and messaging services, at least, can already be viewed at http://cloud.oracle.com. The second piece of news: as an alternative to running in the "Oracle Cloud" described above, customers will in future be able to have it built entirely behind their own firewall. In other words, in this offering, called the "Oracle Private Cloud", Oracle builds and operates all components itself, but the data never leaves the customer's premises. The latter is an important consideration, particularly in privacy-sensitive Germany. Since the components used are the same in both cases, moving or extending the private cloud into the public cloud (or back) is possible without changes to the applications. The possibility of a "hybrid cloud", in which parts of an application run behind the customer's own firewall while other parts run in the Oracle Cloud, thus becomes reality.

    Read the article
