Search Results

Search found 10442 results on 418 pages for 'my blog'.


  • Suggestion: ALLFILES option for RESTORE

    - by Greg Low
    The default action when performing a backup is to append to the backup file, yet the default action when restoring a backup is to restore just the first file. I constantly come across customer situations where they are puzzled that they seem to have lost data after they have completed a restore. Invariably, it's just that they haven't restored all the backups contained within a single OS file. This happens most commonly with log backups, but also happens when they have not restored the most recent database backup file.

    It is not trivial to achieve this within simple T-SQL scripts when the number of backup files within the OS file is unknown. It really should be. I'd like to see a FILES=ALLFILES option on the RESTORE command. For RESTORE DATABASE, it should restore the most recent database backup plus any subsequent log files. For RESTORE LOG (which is the most important missing option), it should just restore all relevant log backups that are contained.

    If you agree, you know what to do: please vote: https://connect.microsoft.com/SQLServer/feedback/details/769204/option-to-restore-all-backups-files-within-a-media-set

    Alternately, how would you write a T-SQL command to restore all log backups within a single OS file where the number of files is unknown? Would love to hear creative solutions, because all the ones that I think of are pretty messy and need dynamic SQL.
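    One such messy approach, sketched below: keep attempting RESTORE LOG with an increasing FILE number until the media set runs out of backup sets. The database and file names are hypothetical, and the TRY/CATCH exit condition is deliberately crude — exactly the kind of workaround a FILES=ALLFILES option would make unnecessary.

        DECLARE @db sysname = N'MyDatabase';                                  -- hypothetical
        DECLARE @backupFile nvarchar(260) = N'C:\Backup\MyDatabase_log.bak';  -- hypothetical
        DECLARE @fileNumber int = 1;

        WHILE 1 = 1
        BEGIN
            BEGIN TRY
                -- Apply the next log backup in the media set, if one exists
                RESTORE LOG @db
                    FROM DISK = @backupFile
                    WITH FILE = @fileNumber, NORECOVERY;
                SET @fileNumber += 1;
            END TRY
            BEGIN CATCH
                BREAK;  -- no backup set at this position: all of them are restored
            END CATCH
        END

        -- Bring the database online once all log backups are applied
        RESTORE DATABASE @db WITH RECOVERY;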

    Read the article

  • How to start with PowerPivot for Excel

    - by Marco Russo (SQLBI)
    Now that Office 2010 has been released, many people will start looking for resources to start learning PowerPivot. Of course, the book I’m writing will be helpful when it is published (September 2010), but you can also start with some online content on Microsoft sites. First of all, this is the web site dedicated to PowerPivot: http://www.powerpivot.com/ It contains several videos and demos, and it’s also possible to use a Virtual Lab without installing Office 2010 on your PC. Then, there is...(read more)

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages in my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments, how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms, lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies) I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some regular DBA meetings. I see the way the product succeeds, and I see when it fails.

    Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure it will work in production for any new projects, and then taking the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it produce the same amount of work as the shops that don’t. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I’ve found (surprise, surprise) is that they have fewer production problems. OK, that might seem obvious – but I’ve actually tracked it, and those places that do the testing and follow best practices really do save stress, time and trouble from that effort.

    We all think that’s a good idea, but we just “don’t have time”. OK – but from what I’m seeing, you can gain time if you spend a little up front. You may find that you’re actually already spending the same amount of time that you would spend in doing the testing, you’re just doing it later, at night, under the gun. Food for thought.

    Read the article

  • Speaker Prep Tip: Use the AV Studio Built into that Laptop

    - by merrillaldrich
    Over at erinstellato.com there is a great post this week about tips for new presenters. Ms. Stellato suggests, insightfully, that we record ourselves, which is really a fantastic piece of advice. What’s extra-cool is that today you don’t need any special equipment or expensive software to do just that. This week I “filmed” two run-throughs of my talk for SQL Saturday tomorrow. For me, the timing is the hardest thing – figuring out how much content I can really present in the time allowed without...(read more)

    Read the article

  • DevWeek Slides & Demos available for download

    - by Davide Mauri
    Anyone interested can download Slides, Demos and Demo Database (WhitepagesDB) of my “SQL Server best practices for developers” session here: http://www.davidemauri.it/resources/slide--demos.aspx Happy Downloading! :)

    Read the article

  • "CLR Enabled" is not required to use CLR built-ins

    - by AaronBertrand
    Books Online articles referencing built-in CLR functions (such as FORMAT() ) have a remark similar to the following: "FORMAT relies on the presence of the .NET Framework Common Language Runtime (CLR)." A lot of people seem to interpret this as meaning: "You must enable the sp_configure option 'CLR enabled' in order to use FORMAT()." Some then go on and suggest you run code similar to the following before you play with these functions: EXEC sp_configure 'show advanced options' , 1 ; GO RECONFIGURE...(read more)
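    A quick way to see the point of the post for yourself — a minimal sketch, assuming a default installation where 'clr enabled' has never been turned on:

        -- Show the current setting (0 by default); no RECONFIGURE needed to inspect it
        EXEC sp_configure 'clr enabled';

        -- Built-in CLR functions such as FORMAT() work regardless of that setting
        SELECT FORMAT(SYSDATETIME(), 'yyyy-MM-dd');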

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind.

    Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answers to these types of questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema per customer (a minimal sketch follows below). But maintaining 15,000 schemas can be challenging as well.

    My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
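    A minimal sketch of that schema-per-customer approach, with hypothetical names — each customer gets a user, a schema that user owns, and schema-level permissions so customers cannot see each other's objects:

        CREATE USER Customer0001User WITHOUT LOGIN;
        GO
        CREATE SCHEMA Customer0001 AUTHORIZATION Customer0001User;
        GO
        -- Each customer's objects live in their own schema
        CREATE TABLE Customer0001.Orders
        (
            OrderID   int IDENTITY(1,1) PRIMARY KEY,
            OrderDate datetime2 NOT NULL DEFAULT SYSDATETIME()
        );
        GO
        -- Permissions are granted at the schema boundary
        GRANT SELECT, INSERT, UPDATE, DELETE
            ON SCHEMA::Customer0001 TO Customer0001User;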

    Read the article

  • Redirecting from blogger to custom domain [closed]

    - by mdhar9e
    Possible Duplicate: How to have a blogspot blog in my domain? I have a blog on Blogger named www.myclipta.blogspot.com, which I update regularly. Then I bought a custom domain, myclipta.com. Now I want to redirect from the Blogger domain to my custom domain, but I don't know how to do this. I heard that I have to set DNS name servers and a CNAME record, but I am not able to do this. Can anyone guide me, please?

    Read the article

  • C2C - Customer 2 Cloud Program

    - by Hartmut Wiese
    What's in it for partners? A special webinar for EMEA partners. This post refers to the EMEA CRM Community blog entry here. The new Oracle Customer 2 Cloud (C2C) Program offers sizeable CX Cloud business opportunities for our partners into their existing Siebel, PeopleSoft or Oracle E-Business Suite installed base, leveraging financial incentives that allow customers to switch part of their on-premises solutions' maintenance fees for Cloud subscriptions from the market-leading provider of CX Cloud business solutions. Watch this introduction video to get a first impression of the C2C program, and then join us on Tuesday June 10th at 9am CET (8am UK) to find out how you and your customers can benefit from this program to secure existing Siebel, PeopleSoft or Oracle E-Business Suite accounts while generating new business opportunities. Register here! Added by Hartmut Wiese: JD Edwards is not explicitly mentioned for this program, but I also did not find a remark that it is excluded.

    Read the article

  • TechEd 2014 Day 3

    - by John Paul Cook
    There is some confusion about the durability of data stored in SQL Server in-memory tables, so some review of the concepts is appropriate. The in-memory option is enabled at the database level. Enabling it at the database level only gives you the option to specify the in-memory feature on a table-by-table basis. No existing or new tables will become in-memory tables by default when you enable the feature at the database level. If you choose to make a table an in-memory table, by default it is...(read more)
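    A sketch of both steps, using hypothetical database, filegroup and table names. Enabling the feature at the database level means adding a memory-optimized filegroup; each table then opts in individually, and DURABILITY = SCHEMA_AND_DATA (the default) keeps the data durable, while SCHEMA_ONLY would preserve only the table definition across restarts:

        -- Database level: add a memory-optimized filegroup and a container for it
        ALTER DATABASE MyDatabase
            ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE MyDatabase
            ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
            TO FILEGROUP imoltp_fg;
        GO

        -- Table level: opt in per table and choose the durability setting
        CREATE TABLE dbo.SessionState
        (
            SessionId int NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Payload   varbinary(2000) NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);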

    Read the article

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over—we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2, we have heard your feedback and suggestions, and based on them we have come up with a whole new set of features. Here are some of the highlights: A New Programming Model – a clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server. Tight integration with the Reactive Framework (Rx) – you can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects. High Availability – checkpointing over temporal streams and multiple processes with shared computation. Here is how simple coding can be with the 2.1 programming model:

        class Program
        {
            static void Main(string[] args)
            {
                using (Server server = Server.Create("Default"))
                {
                    // Create an app
                    Application app = server.CreateApplication("app");

                    // Define a simple observable which generates an integer every second
                    var source = app.DefineObservable(() =>
                        Observable.Interval(TimeSpan.FromSeconds(1)));

                    // Define a sink.
                    var sink = app.DefineObserver(() =>
                        Observer.Create<long>(x => Console.WriteLine(x)));

                    // Define a query to filter the events
                    var query = from e in source
                                where e % 2 == 0
                                select e;

                    // Bind the query to the sink and create a runnable process
                    using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                    {
                        Console.WriteLine("Press a key to dispose the process...");
                        Console.ReadKey();
                    }
                }
            }
        }

    That’s how easily you can define a source and a sink, compose a query, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned, you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate it if you could provide feedback on the docs as well—best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface. Regards, The StreamInsight Team

    Read the article

  • StreamInsight will not push feature releases through Microsoft Update going forward

    - by Roman Schindlauer
    Until now, we've released StreamInsight through the Microsoft Download Center and also through Microsoft Update. Going forward, we will release new StreamInsight versions only through the Microsoft Download Center, and use MU only for service packs and security fixes (should any be needed). As a result of this decision, we are pulling the recent StreamInsight 2.1 release from MU; the release is still available in the Download Center. Don’t worry: there’s nothing wrong with the versions we’ve shipped through MU, we’ve just adjusted how we use MU. No action is necessary from our customers as a result of this change, and we are not rolling back any changes to your current installation; if you installed StreamInsight 2.1 recently through Microsoft Update, it will still work fine. Regards, The StreamInsight Team

    Read the article

  • [Speaking] PowerShell at the PASS Summit

    - by AllenMWhite
    Next week is the annual PASS Summit , the event of the year for those of us in the SQL Server community. We get to see our old friends, make new friends, and learn an amazing amount about SQL Server, and it'll be in Seattle, so it's close to the mother ship. I love having Microsoft close, because it's easier to get to know the people who actually make this amazing product we spend our lives working with. This year I'm fortunate to have been selected to present three sessions. One is a regular session...(read more)

    Read the article

  • New code release today - 2011.1.4.2

    - by Steve Tunstall
    Wow, two blog entries in the same day! When I wrote the large 'Quota' blog entry below, I did not realize there would be a micro-code update going out the same evening. So here it is. Code 2011.1.4.2 has just been released. You can get the readme file for it here: https://wikis.oracle.com/display/FishWorks/ak-2011.04.24.4.2+Release+Notes Download it, of course, through the MOS website. It looks like it fixes a pretty nasty bug. Get it if you think it applies to you. Unless you have a great reason NOT to upgrade, I would strongly advise you to upgrade to 2011.1.4.2. Why? Because the readme file says they STRONGLY RECOMMEND YOU ALL UPGRADE TO THIS CODE IMMEDIATELY using LOTS OF CAPITAL LETTERS. That's good enough for me. Be sure to run the health check like the readme tells you to. 

    Read the article

  • Sampling SQL server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low-level events over a period of 3-4 hours.  During analysis of the data I found that a common pattern I was using was to find a batch with a duration that was longer than average and follow all the events it produced.  This pattern got me thinking that I was discarding a substantial amount of event data that had been collected, and that it would be great to be able to reduce the collection overhead on the server if I could still get all activity from some batches.

    In the past I’ve used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mike’s post here).  This isn’t exactly what I want, though, as there would certainly be events from a particular batch that wouldn’t pass the predicate.  What I need is a way to identify streams of work and select, say, one in ten of them to watch, and SQL Server provides just such a mechanism: session_id.  Session_id is a server-assigned integer that is bound to a connection at login and lasts until logout.  So by combining the session_id predicate source and the divides_by_uint64 predicate comparator we can limit collection, and still get all the events in batches for investigation.

        CREATE EVENT SESSION session_10_percent ON SERVER
        ADD EVENT sqlserver.sql_statement_starting
            (WHERE (package0.divides_by_uint64(sqlserver.session_id, 10))),
        ADD EVENT sqlos.wait_info
            (WHERE (package0.divides_by_uint64(sqlserver.session_id, 10))),
        ADD EVENT sqlos.wait_info_external
            (WHERE (package0.divides_by_uint64(sqlserver.session_id, 10))),
        ADD EVENT sqlserver.sql_statement_completed
            (WHERE (package0.divides_by_uint64(sqlserver.session_id, 10)))
        ADD TARGET ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 30 SECONDS, TRACK_CAUSALITY = ON)
        GO

    There we go; event collection is reduced while still providing enough information to find the root of the problem.  By the way, the performance issue turned out to be an I/O issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.
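    A hedged sketch of how you might start that session and pull the sampled events back out of the ring_buffer target — the DMVs are the standard Extended Events views, and the session name matches the definition above:

        -- Start the sampled session
        ALTER EVENT SESSION session_10_percent ON SERVER STATE = START;

        -- Read the captured events back from the ring_buffer target as XML
        SELECT CAST(t.target_data AS xml) AS captured_events
        FROM sys.dm_xe_sessions AS s
        JOIN sys.dm_xe_session_targets AS t
            ON s.address = t.event_session_address
        WHERE s.name = N'session_10_percent'
          AND t.target_name = N'ring_buffer';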

    Read the article

  • Ranking with PowerPivot – a different approach

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting post about a “different approach” in creating a ranking measure with PowerPivot . If you know DAX or you read our book , you will find that a DAX expression can solve the issue. However, such a formula is more complex than necessary. The next version of PowerPivot might have more built-in DAX functions and should solve the ranking need with a simpler formula. In the meantime, it is interesting to know a different approach that relies on Excel skills instead of...(read more)

    Read the article

  • QueryUnit 0.0.0.8 – Trust No One

    - by Davide Mauri
    Yesterday I released an updated version of QueryUnit, version 0.0.0.8. QueryUnit now supports AreNotEqual, Greater, and Less assertions and is more capable of managing string results. I must say that I cannot live anymore without proper unit testing of a BI solution. Just yesterday, one of the unit tests at a customer site failed, revealing a subtle situation where the release of a new version of a custom application would have corrupted the source of the BI data, with a very low chance that anyone would have noticed it for several days. This can happen when you have more than 15 systems that handle the data needed by your BI solution. The key message of this situation is “Trust No One”: if your data hasn’t passed quality testing, it’s not trustworthy. Period. QueryUnit is now officially a hero :) Still no superpowers, but useful above all. http://queryunit.codeplex.com/

    Read the article

  • TechEd 2014 Day 1

    - by John Paul Cook
    Today at TechEd 2014, many people had questions about the in-memory database features in SQL Server 2014. A common question is how an in-memory database is different from having a database on a SQL Server with an amount of RAM far greater than the size of the database. In-memory or memory-optimized tables have different data structures and are accessed differently, using a latch-free and lock-free approach that greatly improves performance. This provides part of the performance improvement. The rest...(read more)

    Read the article

  • Presenting to the New England SQL Server Users Group 10 Jun 2010!

    - by andyleonard
    I am honored to present Applied SSIS Design Patterns to the New England SQL Server Users Group on 10 Jun 2010! This is a reprise of the spotlight session presented at the PASS Summit 2009. Abstract "Design Patterns" is more than a trendy buzz phrase; design patterns are a way of breaking down complex development projects into manageable tasks. They lend themselves to several development methodologies and apply to SSIS development. Chances are you're using your own design patterns now! In this spotlight...(read more)

    Read the article

  • TechEd 2014 Day 4

    - by John Paul Cook
    Many people visiting the SQL Server booth wanted to know how to improve performance. With so much attention being given to COLUMNSTORE and in-memory tables and stored procedures, it is easy to overlook how important tempdb is to performance. Speeding up tempdb I/O improves performance. The best way to do this is to not do the I/O in the first place. With SQL Server 2014, tempdb page management is smarter. Pages are more likely to be released before being unnecessarily flushed to disk. Read more about...(read more)

    Read the article

  • Speaking at PASS (and a plug for two other conferences)

    - by drsql
    So I was notified a few days ago that one of my sessions was selected, and one is an alternate. Luckily, it was the one that I have the most experience with, and the alternate is my latest session that I am really quite happy with after doing it virtually and now at the SQL Saturday in Columbus. The selected session is: Database Design Fundamentals In this session I will give an overview of how to design a database, including the common normal forms and why they should matter to you if you are creating...(read more)

    Read the article

  • Utility Queries–Structure of Tables with Identity Column

    - by drsql
    I have been doing a presentation on sequences of late (the last planned delivery of that presentation was last week, but you should be able to get the gist of things from the slides and the code posted here on my presentation page), and as part of that, I started writing some queries to interrogate the structure of tables. I started with tables using an identity column for some purpose because they are considerably easier to handle than sequences, specifically because the limitations of identity columns make...(read more)
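    In the same spirit, a minimal sketch of such an interrogation query (not taken from the original post): listing every table that has an identity column, along with the column's seed, increment, and current value, via the sys.identity_columns catalog view:

        SELECT s.name  AS schema_name,
               t.name  AS table_name,
               ic.name AS identity_column,
               ic.seed_value,
               ic.increment_value,
               ic.last_value        -- NULL until the first value is generated
        FROM sys.identity_columns AS ic
        JOIN sys.tables  AS t ON ic.object_id = t.object_id
        JOIN sys.schemas AS s ON t.schema_id  = s.schema_id
        ORDER BY s.name, t.name;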

    Read the article

  • Oracle Partner Specialists – Sell & Deliver High Value Products to Customers

    - by Richard Lefebvre
    Do you want to know where to find useful information about partner training and other activities to complete the Oracle Specialization available in the country where you are based? Go to the EMEA partner enablement blog and read the latest information regarding training opportunities for Cloud Services, Applications, Business Intelligence, Middleware, Database 12c, Engineered Systems as well as Server & Storage. Recently, we announced new TestFest events in France, which you can join to pass your own Implementation Assessment within the Specialization category you have already chosen. To find out where and when the next TestFest close to your location will take place, please contact [email protected] or watch out for further announcements of TestFest events in your home country. Return to the EMEA Partner Enablement Blog from time to time to update your own Specialization and join the latest training for Sales, Presales or Implementation Specialists: https://blogs.oracle.com/opnenablement/

    Read the article

  • OpenStack: A starting point to learn more

    - by uwes
    Most of you have heard about OpenStack, the announced integration into Oracle Solaris 11.2, and the OpenStack support for Oracle Linux and Oracle VM. These are two good reasons to start learning more about OpenStack. Ronen Kofman has started a series of articles on his blog (Ronen Kofman's Blog) to provide more knowledge regarding OpenStack. The first article of the series is called "Diving into OpenStack Network Architecture - Part 1". You are invited to follow Ronen through his articles, where he shows how the different pieces come together and provides a bigger picture of the network architecture in OpenStack.

    Read the article

  • Insert Or update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new, but since it’s the second time in a few days that I’ve had to explain it to different customers, I think it’s worth a post. Many times developers would like to insert a new row in a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE: http://dev.mysql.com/doc/refman/5.0/en/replace.html or the INSERT … ON DUPLICATE KEY UPDATE option: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html With SQL Server you can do the very same thing in a more standard way, using the MERGE statement with the support of row constructors. Let’s say that you have this table:

        CREATE TABLE dbo.MyTargetTable
        (
            id INT NOT NULL PRIMARY KEY IDENTITY,
            alternate_key VARCHAR(50) UNIQUE,
            col_1 INT,
            col_2 INT,
            col_3 INT,
            col_4 INT,
            col_5 INT
        )
        GO

        INSERT [dbo].[MyTargetTable] VALUES
        ('GUQNH', 10, 100, 1000, 10000, 100000),
        ('UJAHL', 20, 200, 2000, 20000, 200000),
        ('YKXVW', 30, 300, 3000, 30000, 300000),
        ('SXMOJ', 40, 400, 4000, 40000, 400000),
        ('JTPGM', 50, 500, 5000, 50000, 500000),
        ('ZITKS', 60, 600, 6000, 60000, 600000),
        ('GGEYD', 70, 700, 7000, 70000, 700000),
        ('UFXMS', 80, 800, 8000, 80000, 800000),
        ('BNGGP', 90, 900, 9000, 90000, 900000),
        ('AMUKO', 100, 1000, 10000, 100000, 1000000)
        GO

    If you want to insert or update a row, you can just do that:

        MERGE INTO dbo.MyTargetTable T
        USING
            (SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001))
                Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)) S
        ON
            T.alternate_key = S.alternate_key
        WHEN NOT MATCHED THEN
            INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5)
        WHEN MATCHED AND T.col_1 != S.col_1 THEN
            UPDATE SET
                T.col_1 = S.col_1,
                T.col_2 = S.col_2,
                T.col_3 = S.col_3,
                T.col_4 = S.col_4,
                T.col_5 = S.col_5
        ;

    If you want to insert/update more than one row at once, you can supercharge the idea using Table-Valued Parameters, which you can send straight from your .NET application (a sketch follows below). Easy, powerful and effective.
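    A sketch of that Table-Valued Parameter variant, with hypothetical type and procedure names — the same MERGE, but fed from a READONLY table parameter so one call can upsert any number of rows:

        CREATE TYPE dbo.MyTargetTableType AS TABLE
        (
            alternate_key VARCHAR(50) PRIMARY KEY,
            col_1 INT, col_2 INT, col_3 INT, col_4 INT, col_5 INT
        );
        GO

        CREATE PROCEDURE dbo.UpsertMyTargetTable
            @rows dbo.MyTargetTableType READONLY
        AS
        BEGIN
            -- Upsert every row passed in from the client in a single statement
            MERGE INTO dbo.MyTargetTable AS T
            USING @rows AS S
                ON T.alternate_key = S.alternate_key
            WHEN NOT MATCHED THEN
                INSERT VALUES (S.alternate_key, S.col_1, S.col_2, S.col_3, S.col_4, S.col_5)
            WHEN MATCHED THEN
                UPDATE SET
                    T.col_1 = S.col_1,
                    T.col_2 = S.col_2,
                    T.col_3 = S.col_3,
                    T.col_4 = S.col_4,
                    T.col_5 = S.col_5;
        END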

    Read the article
