Search Results

Search found 7465 results on 299 pages for 'team leadership'.


  • The one feature that would make me invest in SSIS 2012

    - by Peter Larsson
    This week I was invited by Microsoft to give two presentations in Slovenia. My presentations went well, I had good energy, and the audience was interacting with me. When I had some time left over from networking and partying, I attended a few other presentations, at least the ones that were held in English. One of these was "SQL Server Integration Services 2012 - All the News, and More", given by Davide Mauri, a colleague from SolidQ. We started to talk and soon got into the details of the new things in SSIS 2012. All of the official things Davide talked about are good stuff, but for me, the best thing is one he didn't cover in his presentation.

    In versions of SSIS earlier than 2012, it is possible to have a stored procedure act as a data source, as long as it doesn't use a temp table. If it does, you will get an error message from SSIS that "Metadata could not be found". This is still true with SSIS 2012, so the thing I am talking about is not really an SSIS feature, it's a SQL Server 2012 feature: the EXECUTE ... WITH RESULT SETS option. With this, a stored procedure that uses a temp table can deliver its result set to SSIS, as long as you execute the stored procedure from SSIS with the WITH RESULT SETS clause. If you do this, SSIS takes the metadata from the code you write in SSIS and not from the stored procedure! And it's very fast too. In earlier versions, referencing such a stored procedure in SSIS forced SSIS to execute it (which could take hours) just to retrieve the metadata. Now, with WITH RESULT SETS, SSIS 2012 can continue in milliseconds, because you provide the metadata in the RESULT SETS clause yourself. If the data from the stored procedure doesn't match the declared result sets you will get an error anyway, so it makes sense that Microsoft has provided this optimization for us.
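    For reference, a minimal sketch of the syntax the post describes (the procedure name and column list below are invented for illustration, not taken from the post): the EXECUTE statement itself declares the shape of the result set, so SSIS can read the metadata from the declaration instead of running the procedure.

        -- Hypothetical procedure that builds its output in a temp table
        EXEC dbo.usp_GetDailySales @SalesDate = '20120101'
        WITH RESULT SETS
        (
            (
                SalesDate   DATE          NOT NULL,
                CustomerID  INT           NOT NULL,
                TotalAmount DECIMAL(18,2) NOT NULL
            )
        );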

    Read the article

  • Moving from local machine to group web development environment

    - by Djave
    I'm a freelancer who currently builds websites locally, using something like MAMP to test them before pushing them live with FTP. I'm looking at taking on my first employee, and I would need to be able to work on websites with them simultaneously. Can anyone explain, or provide links to, some good documentation on team workflow, or some key phrases I should be googling to get started on my setup? Unlike a lot of the Stack Overflow community, I've never worked in a dev team, large or small, as I'm self-taught, so I just need to know where to start. At present I'm thinking I need an extra computer to use as a server, then use Git or some such to version-control the files on that computer, as well as installing Apache on it so the sites can be viewed from any computer on my current home network. Is this heading down the right track?

    Read the article

  • Error: type or namespace name 'AssemblyKeyFileAttribute' and 'AssemblyKeyFile' could not be found

    To associate an assembly with a strong-name key file so it can be installed in the GAC, we should include the following line after all the imports and before defining the namespace. For VB.NET:  <Assembly: AssemblyKeyFile("c:\path\mykey.snk")>  For C#:  [assembly: AssemblyKeyFile(@"c:\path\mykey.snk")]  However, you might encounter the following two errors when building the assembly for the GAC: 1. The type or namespace name 'AssemblyKeyFileAttribute' could not be found (are you missing a using directive or an assembly reference?) 2. The type or namespace name 'AssemblyKeyFile' could not be found (are you missing a using directive or an assembly reference?) How to resolve these errors: just import the System.Reflection namespace (using System.Reflection; in C#, Imports System.Reflection in VB.NET), which is where AssemblyKeyFileAttribute is defined. That resolves both errors.

    Read the article

  • PASS: Board Q&A at the Summit

    - by Bill Graziano
    The last two years we’ve put the Board in front of the members and taken questions.  We’re going to do that again this year.  It will be in Room 307/308 from 12:15 to 1:30 on Friday. Yes, this time overlaps with the Birds of a Feather Lunch and the start of afternoon sessions – but only partially.  You can attend the Q&A and still get to parts of both of those.  There just isn’t a great time to do this.  Every time overlaps with something. We can’t do it after the last session on Friday.  We can’t fit it between the last session and the evening events on Wednesday or Thursday.  We had some discussion around breakfast time but I didn’t think that was realistic.  This is the least bad time we could come up with.

    Last year we had 60-70 people attend.  These are the specific items that I could work on:

    The first question was whether to increase transparency around individual votes of Board members.  We approved this at the Board meeting the following day.  The only caveat was that if the Board is given confidential information as a basis for their vote then we may not be able to disclose individual votes.  Putting a Director in a position where they can’t publicly defend the reason for their vote is a difficult situation.  Thanks Kendal!

    Can we have a Board member discretionary fund?  As background, I took a couple of people to lunch so we could have a quiet place to talk.  I bought lunch but wasn’t able to expense it back to PASS.  We just don’t have a budget item for things like this.  I think we should.  I would guess the entire Board would like it also.  It was in an earlier version of the budget but came out as part of a cost-cutting move to balance the budget.  I’d like to see it added back in but we’ll have to see.

    I know there were some comments about the elections.  At this point we had created the Election Review Committee.  I’ve already written at length about this process.

    Where does IT work go?  PASS started publishing our internal management reports in December 2010.  You can find them on our Governance page.  These aren’t filtered at all and include a variety of information about IT projects.  The most recent update had roughly a page of updates related to IT.  Lots of the work was related to Summit and the Orator tool that we use to manage speaker submissions.

    There were numerous requests that Tina Turner not be repeated.  Done.  I don’t think we’ll do anything quite like that again.  We had a request for a payment plan for Summit.  We looked into this briefly but didn’t take any action.  We didn’t think the effort was worth the small number of people that would use it.  If you disagree, submit this on our Summit Feedback site and get some votes.

    There were lots of suggestions around the first-timers events – especially from first-timers.  You can find all our current activities related to first-timers at the First Timers page on the Summit web site.  Plus links to 34 (!) blog posts on suggestions for first-timers.  And a big THANK YOU to Confio and Red Gate for sponsoring this.

    I hope you get the chance to attend.  These events are very helpful to me as a Board member.  I like being able to look around the room as comments are being made and see the audience reaction.  It helps me gauge the interest in an idea.

    I’d also like to direct you to the Summit Feedback site.  You can submit and vote on ideas to make the Summit a better experience.  As of right now we have the suggestions from last year still up.  We may reset these prior to the Summit though.

    Read the article

  • Max Degree of Parallelism Server-Side Setting

    - by Tara Kizer
    Recently I opened a case with Microsoft PSS to help us through a severe performance problem on a new system.  As part of that case, the PSS engineer checked our “max degree of parallelism” server-side setting.  It is our standard to use 4 on our production systems that have 16 CPUs (2 sockets, quad-core, hyper-threaded).  The PSS engineer had me run the below query to get Microsoft’s recommended value of “max degree of parallelism” server-side setting for our 16-CPU system: select case when cpu_count / hyperthread_ratio > 8 then 8 else cpu_count / hyperthread_ratio end as optimal_maxdop_setting from sys.dm_os_sys_info; The query returned 2.  I made the change using sp_configure, and it did not resolve our issue.  We have decided to leave it in place for now.   Do you agree with this query?  What are your thoughts on this? If you decide to change your setting to reflect the output of this query, please test it first to ensure there are no negative side effects.
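    For readers who want to apply the same change, this is roughly what the sp_configure step looks like (a sketch, not Tara's exact commands; the value 2 is simply what the query returned for her system, so substitute and test your own value):

        -- 'max degree of parallelism' is an advanced option, so expose it first
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        -- Apply the value suggested by the query above, then re-test the workload
        EXEC sp_configure 'max degree of parallelism', 2;
        RECONFIGURE;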

    Read the article

  • How to determine if you should use full or differential backup?

    - by Peter Larsson
    Or ask yourself, "How much of the database has changed since the last backup?" Here is a simple script that will tell you how much of the database (in percent) has changed since the last backup.

        -- Prepare staging table for all DBCC outputs
        DECLARE @Sample TABLE
                (
                    Col1 VARCHAR(MAX) NOT NULL,
                    Col2 VARCHAR(MAX) NOT NULL,
                    Col3 VARCHAR(MAX) NOT NULL,
                    Col4 VARCHAR(MAX) NOT NULL,
                    Col5 VARCHAR(MAX)
                )

        -- Some intermediate variables for controlling the loop
        DECLARE @FileNum BIGINT = 1,
                @PageNum BIGINT = 6,
                @SQL VARCHAR(100),
                @Error INT,
                @DatabaseName SYSNAME = 'Yoda'

        -- Loop all files to the very end
        WHILE 1 = 1
            BEGIN
                BEGIN TRY
                    -- Build the SQL string to execute
                    SET     @SQL = 'DBCC PAGE(' + QUOTENAME(@DatabaseName) + ', ' + CAST(@FileNum AS VARCHAR(50)) + ', '
                                    + CAST(@PageNum AS VARCHAR(50)) + ', 3) WITH TABLERESULTS'

                    -- Insert the DBCC output into the staging table
                    INSERT  @Sample
                            (
                                Col1,
                                Col2,
                                Col3,
                                Col4
                            )
                    EXEC    (@SQL)

                    -- DCM pages exist at a fixed interval
                    SET     @PageNum += 511232
                END TRY
                BEGIN CATCH
                    -- If error and first DCM page does not exist, all files are read
                    IF @PageNum = 6
                        BREAK
                    ELSE
                        -- If no more DCM pages, increase filenum and start over
                        SELECT  @FileNum += 1,
                                @PageNum = 6
                END CATCH
            END

        -- Delete all records not related to diff information
        DELETE
        FROM    @Sample
        WHERE   Col1 NOT LIKE 'DIFF%'

        -- Split the range
        UPDATE  @Sample
        SET     Col5 = PARSENAME(REPLACE(Col3, ' - ', '.'), 1),
                Col3 = PARSENAME(REPLACE(Col3, ' - ', '.'), 2)

        -- Remove the last parenthesis
        UPDATE  @Sample
        SET     Col3 = RTRIM(REPLACE(Col3, ')', '')),
                Col5 = RTRIM(REPLACE(Col5, ')', ''))

        -- Remove initial information about filenum
        UPDATE  @Sample
        SET     Col3 = SUBSTRING(Col3, CHARINDEX(':', Col3) + 1, 8000),
                Col5 = SUBSTRING(Col5, CHARINDEX(':', Col5) + 1, 8000)

        -- Prepare data outtake
        ;WITH cteSource(Changed, [PageCount])
        AS (
            SELECT      Changed,
                        SUM(COALESCE(ToPage, FromPage) - FromPage + 1) AS [PageCount]
            FROM        (
                            SELECT  CAST(Col3 AS INT) AS FromPage,
                                    CAST(NULLIF(Col5, '') AS INT) AS ToPage,
                                    LTRIM(Col4) AS Changed
                            FROM    @Sample
                        ) AS d
            GROUP BY    Changed
            WITH ROLLUP
        )
        -- Present the final result
        SELECT  COALESCE(Changed, 'TOTAL PAGES') AS Changed,
                [PageCount],
                100.E * [PageCount] / SUM(CASE WHEN Changed IS NULL THEN 0 ELSE [PageCount] END) OVER () AS Percentage
        FROM    cteSource

    Read the article

  • Secret of SQL Trace Duration Column

    - by Dan Guzman
    Why would a trace of long-running queries not show all queries that exceeded the specified duration filter?  We have a server-side SQL Trace that includes RPC:Completed and SQL:BatchCompleted events with a filter on Duration >= 100000.  Nearly all of the queries on this busy OLTP server run in under this 100 millisecond threshold, so any that appear in the trace are candidates for root cause analysis and/or performance tuning.

    After an application experienced query timeouts, the DBA looked at the trace data to corroborate the problem.  Surprisingly, he found no long-running queries in the trace from the application that experienced the timeouts, even though the application’s error log clearly showed the detail of the problem (query text, duration, start time, etc.).  The trace did show, however, that there were hundreds of other long-running queries from different applications during the problem timeframe.  We later determined those queries were blocked by a large UPDATE query against a critical table that was inadvertently run during this busy period.

    So why didn’t the trace include all of the long-running queries?  The reason is that the SQL Trace event duration doesn’t include the time a request was queued while awaiting a worker thread.  Remember that the server was under considerable stress at the time due to the severe blocking episode.  Most of the worker threads were in use by blocked queries and new requests were queued awaiting a worker to free up (a DMV query on the DAC connection will show this queuing: “SELECT scheduler_id, work_queue_count FROM sys.dm_os_schedulers;”).  Technically, those queued requests had not started.  As worker threads became available, queries were dequeued and completed quickly.  These weren’t included in the trace because the duration was under the 100ms duration filter.  The duration reflected the time it took to actually run the query but didn’t include the time spent queued waiting for a worker thread.

    The important point here is that duration is not end-to-end response time.  Duration of RPC:Completed and SQL:BatchCompleted events doesn’t include time before a worker thread is assigned, nor does it include the time required to return the last result buffer to the client.  In other words, duration only includes time after the worker thread is assigned until the last buffer is filled.  But be aware that duration does include the time needed to return intermediate result set buffers back to the client, which is a factor when large query results are returned.  Clients that are slow in consuming result sets can increase the duration value reported by the trace “completed” events.
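    A slightly expanded version of the DMV check quoted above, for anyone who wants more per-scheduler context while the server is under pressure (the extra columns are standard sys.dm_os_schedulers columns; this is a sketch, not part of the original post):

        SELECT  scheduler_id,
                current_tasks_count,
                runnable_tasks_count,
                work_queue_count
        FROM    sys.dm_os_schedulers
        WHERE   status = 'VISIBLE ONLINE';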

    Read the article

  • Repository and Ticket management in a Windows Environment

    - by saifkhan
    I’ve been using AxoSoft’s bug tracking application for a while. Although it is an excellent piece of software, I had some issues with it:

    - It was SLOOOW (both desktop and web). I don’t care what AxoSoft says, I tried multiple servers etc. I’ve been in this field long enough to tell you when something is not right with an app.
    - The cost! It’s not feasible for a small team.

    I must say though, that they have some nice features which are not commonly found in other bug tracking software. I won’t list any here; I would prefer you download and try their app and see for yourself. In my quest to find a replacement, I tried a few. The successor had to satisfy the following:

    - A 99.99% Windows environment.
    - Bug tracking.
    - Ticket management (power users and project managers can open tickets on projects).
    - Repository (I decided to merge bug tracking and the repository to get my team to be more productive).
    - Unlimited users.
    - Cost.

    Being the head of IT security for the firm I work for, moving data offsite was a hard decision to make, but it has turned out to be one I am not regretting so far. My choice was down to Atlassian JIRA and codebaseHQ. I ended up going with the latter… (I still love GreenHopper from Atlassian… it’s freaking cool!) CodebaseHQ is nice and simple and has all the features I needed. I’ve been using them for a few months now and am very happy. Their pricing… well, see for yourself. I was also able to get our SVN data over to codebaseHQ. (Yes, SVN! I don’t go near the Visual SourceSafe thing… it’s not that safe, pardon the pun. I am hearing some nice things about TFS 2010.) We use VisualSVN to access repositories. So if you are a Windows developer (or team), codebaseHQ is worth checking out!

    Read the article

  • Missing Indexes DMV Report, 3 billion Impact!

    - by Tara Kizer
    We’ve been having some major performance issues with one of the applications that I support.  The database is on SQL Server 2005 and is about 150GB in size.  We’ve identified a couple of issues already on the database side.  The first issue is that some query (or maybe several queries) is getting a bad execution plan at some point in time during the day.  When it occurs, database performance comes to a grinding halt.  We know it’s a bad execution plan as running DBCC FREEPROCCACHE immediately resolves the problem system-wide.  As we have not yet identified the problematic query, we’ve put a temporary solution in place that frees the procedure cache on an hourly basis via a SQL Agent job.  This is not ideal, but it is getting us through the day without a major problem.  We are actively working on identifying the problematic query and hope to disable the SQL Agent job soon.

    Earlier this week, we had a major slowdown for one of the processes of this application.  I was unable to find any database performance issues, but I continued to investigate it.  One of the things that I typically do when investigating database performance issues is run the “Missing Indexes DMV Report” (that’s what I call it at least).  When analyzing the output of that report, I immediately dismiss anything under 1 million “Impact” as I want to target the “low-hanging fruit” initially.  When I ran the report earlier this week, I was shocked to find a suggested index with an impact of over 3 billion!

    Do I win a prize for the highest impact?  Has anyone seen a value higher than mine?  My exact value was 3154284120.67765.

    The performance issue from earlier this week ended up being an application problem, but it also brought to light a much-needed index.  I had previously seen this index come up in that report but always with a much lower impact.  I had never considered it as the index’s selectivity is very low.  It’s a composite index with three columns.  The first column is not selective, the first two columns are not selective, and the three columns together are not selective.  In fact, no matter how I order it, the index will not be selective at all.  I briefly discussed this with Kimberly Tripp, and she said that this was okay for covering indexes.  Selectivity is irrelevant for a covering index.  She indicated that she’s even created indexes with gender as the first column in the index.  I’ve got lots to learn still!
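    For context, the report Tara mentions is typically built on the missing-index DMVs. The query below is a common generic formulation rather than her exact script, and the impact expression is one conventional way of weighting the suggestions:

        SELECT TOP (25)
                migs.avg_total_user_cost * migs.avg_user_impact
                    * (migs.user_seeks + migs.user_scans) AS impact,
                mid.statement AS [table],
                mid.equality_columns,
                mid.inequality_columns,
                mid.included_columns
        FROM    sys.dm_db_missing_index_group_stats AS migs
        JOIN    sys.dm_db_missing_index_groups AS mig
                ON migs.group_handle = mig.index_group_handle
        JOIN    sys.dm_db_missing_index_details AS mid
                ON mig.index_handle = mid.index_handle
        ORDER BY impact DESC;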

    Read the article

  • AdventureWorks 2014 Sample Databases Are Now Available

    - by aspiringgeek
      Where in the World is AdventureWorks? Recently, SQL Community feedback from twitter prompted me to look in vain for SQL Server 2014 versions of the AdventureWorks sample databases we’ve all grown to know & love. I searched Codeplex, then used the bing & even the google in an effort to locate them, yet all I could find were samples on different sites highlighting specific technologies, an incomplete collection inconsistent with the experience we users had learned to expect.  I began pinging internally & learned that an update to AdventureWorks wasn’t even on the road map.  Fortunately, SQL Marketing manager Luis Daniel Soto Maldonado (t) lent a sympathetic ear & got the update ball rolling; his direct report Darmodi Komo recently announced the release of the shiny new sample databases for OLTP, DW, Tabular, and Multidimensional models to supplement the extant In-Memory OLTP sample DB.  What Success Looks Like In my correspondence with the team, here’s how I defined success: 1. Sample AdventureWorks DBs hosted on Codeplex showcasing SQL Server 2014’s latest-&-greatest features, including:  In-Memory OLTP (aka Hekaton) Clustered Columnstore Online Operations Resource Governor IO 2. Where it makes sense to do so, consolidate the DBs (e.g., showcasing Columnstore likely involves a separate DW DB) 3. Documentation to support experimenting with these features As Microsoft Senior SDE Bonnie Feinberg (b) stated, “I think it would be great to see an AdventureWorks for SQL 2014.  It would be super helpful for third-party book authors and trainers.  It also provides a common way to share examples in blog posts and forum discussions, for example.”  Exactly.  We’ve established a rich & robust tradition of sample databases on Codeplex.  This is what our community & our customers expect.  The prompt response achieves what we all aim to do, i.e., manifests the Service Design Engineering mantra of “delighting the customer”.  Kudos to Luis’s team in SQL Server Marketing & Kevin Liu’s team in SQL Server Engineering for doing so. Download AdventureWorks 2014 Download your copies of SQL Server 2014 AdventureWorks sample databases here.

    Read the article

  • 3D Display Issue When Using Latest Java Runtime Versions - Patch now available...

    - by [email protected]
    Typically I focus my blog posts on Support process topics, and reserve most of the technical topics for the Support newsletter. This topic, however, warrants a quick mention in the blog since I know it's been affecting many users recently. For customers using the Client/Server Deployment of AutoVue, users that had upgraded their client Java Runtime Environment (JRE) to version 1.6.0_19 or later suddenly noticed that their 3D files were opening blank in AutoVue. This issue was due to a change in JRE version 1.6.0_19, and the AutoVue team now offers a patch to address the issue in AutoVue version 20.0.0. The patch number is 10268316, is available through the My Oracle Support portal, and is described further in KM Note 1104821.1. We'll mention it again in our next Support newsletter, and the AutoVue team will target to roll the same fix into the next available release of the product.

    Read the article

  • Blog...Dead & New Address

    - by Derek Comingore
    Hi Folks! For a while I was attempting (probably the correct word) to keep both this blog as well as my SQL Server Magazine SQL Server BI blog current. Working on growing B.I. Voyage , speaking, writing, and blogging is quite the work load. And so this blog suffered as a result. THANK YOU all who read my blog here on SQLTeam.com, I intend to leave it live for those who might benefit from its content at a later date. My consolidated, single blog can be found at http://www.sqlmag.com/blogs/sqlserverbi.aspx . Cheers, Derek

    Read the article

  • 10 Steps to access Oracle stored procedures from Crystal Reports

    Requirements to access Oracle stored procedures from CR

    The following requirements must be met in order for CR to access an Oracle stored procedure:

    1. You must create a package that defines the REF CURSOR. This REF CURSOR must be strongly bound to a static pre-defined structure (see Strongly Bound REF CURSORs vs Weakly Bound REF CURSORs). This package must be created separately and before the creation of the stored procedure. NOTE: Crystal Reports 9 native connections will support Oracle stored procedures created within packages as well as Oracle stored procedures referencing weakly bound REF CURSORs. Crystal Reports 8.5 native connections will support Oracle stored procedures referencing weakly bound REF CURSORs.
    2. The procedure must have a parameter that is a REF CURSOR type. This is because CR uses this parameter to access and define the result set that the stored procedure returns.
    3. The REF CURSOR parameter must be defined as IN OUT (read/write mode). After the procedure has opened and assigned a query to the REF CURSOR, CR will perform a FETCH call for every row from the query's result. This is why the parameter must be defined as IN OUT.
    4. Parameters can only be input (IN) parameters. CR is not designed to work with OUT parameters.
    5. The REF CURSOR variable must be opened and assigned its query within the procedure.
    6. The stored procedure can only return one record set. The structure of this record set must not change based on parameters.
    7. The stored procedure cannot call another stored procedure.
    8. If using an ODBC driver, it must be the CR Oracle ODBC driver (installed by CR). Other Oracle ODBC drivers (installed by Microsoft or Oracle) may not function correctly.
    9. If you are using the CR ODBC driver, you must ensure that in the ODBC Driver Configuration setup, under the Advanced Tab, the option 'Procedure Return Results' is checked ON.
    10. If you are using the native Oracle driver and using hard-coded date selection within the procedure, the date selection must use either a string representation format of 'YYYY-DD-MM' (i.e. WHERE DATEFIELD = '1999-01-01') or the TO_DATE function with the same format specified (i.e. WHERE DATEFIELD = TO_DATE('1999-01-01','YYYY-MM-DD')). For more information, refer to kbase article C2008023.
    11. Most importantly, this stored procedure must execute successfully in Oracle's SQL*Plus utility.

    If all of these conditions are met, you must next ensure you are using the appropriate database driver. Please refer to the sections in this white paper for a list of acceptable database drivers.

    Read the article

  • Internships available in Oracle Netherlands - this summer

    - by jessica.ebbelaar
    I am Jannie Minnema, Director of Business Operations for Oracle in the Benelux. My career at Oracle started at Oracle headquarters in San Francisco as a Project Manager, building computer-based training products. After spending 3 years in Dubai, my husband and I moved to the USA as he wanted to study for an MBA there. This move kick-started my career as I was working in Silicon Valley during a time of great opportunity. After the USA, I fulfilled numerous roles at Oracle ranging from Project Management to Sales and Marketing. I currently work in the Netherlands, where I am now Director of Business Operations for Oracle in the Benelux and a member of the Dutch Management Team.

    Business Operations advises the Benelux Management Team and focuses on topics such as Corporate Social Responsibility, Customer Satisfaction, Internal Communication, Internal Training and effective usage of Sales Tools and Systems. We are currently also working on how best to introduce a “New Way of Working”. The move to our new office building in 2011 aids in creating the right environment for this. Our goal is to continually improve the organisation. I enjoy working for Oracle because there is never a dull moment, and I am continuously challenged to improve. The environment that I work in changes constantly. Look at all the recent acquisitions; over 60 in the past 3 years! If you, as an Oracle employee, see something that can be done better, like a new service or tool, then combine it with some enthusiasm, make the case for it, and the (Oracle) world changes!

    Internships

    This summer we have a number of internships available, coordinated by the Business Operations team. We very much look forward to welcoming students to our Dutch office. We look at it as an opportunity for both Oracle and the interns to learn from each other. It will definitely result in both parties improving, growing and achieving results! We offer internships related to Sales, Marketing and New Technology. You can find the assignments here. During the internship you will experience what it is like to work for an international and dynamic company, where we work and play hard. Our customers are major Dutch companies and our employees are professionals who compare working at Oracle with playing in a Soccer World Cup final. We offer several internships at the same time, so you will learn and share your experiences with a group of fellow students. If you have any questions related to this article feel free to contact [email protected].  You can find our job opportunities via http://campus.oracle.com

    Read the article

  • SQL Sentry First Impressions

    - by AjarnMark
    After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate. A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server.  Unfortunately, we don’t have a production DBA whose entire job is to monitor and maintain our SQL Servers.  The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession.  I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on the performance.  Like many DBAs, I wrote several of my own tools and used the “built-in tools” like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?).  These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to “prove” that SQL Server performance was acceptable and more importantly had not degraded recently (i.e. historical comparisons).  I really didn’t have anything from a historical comparison perspective, but I was able to show that current performance was acceptable, and deflect attention back onto other components (which in fact turned out to be the real culprit). That experience dramatically illustrated the need for better monitoring tools.  Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server.  Among other tools, I had been using Idera’s SQL Job Manager which is a free tool and did a nice job of showing me job schedules and histories in a nice calendar view.  This worked fairly well, and for the money (did I mention it was free?) it couldn’t be beat.  But it is based on the stored job history in MSDB, and there were other performance problems that we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view.  Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry’s Event Manager.  At the time, I just thought he worked there, but later found out that he founded the company.  When I took a quick look at the features & benefits, the one that really jumped out at me is Chaining and Queueing which sounded like it would really help with our “interdependent jobs on different servers” issue. I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor.  And I must say that I really like what I see so far.  Here are a few highlights: Great Support.  I had two issues getting the trial setup and monitoring a handful of our servers.  
One of which was entirely my fault (missed a security setting in SQL 2008) and the other was mostly my fault (late change to some config settings that were apparently cached and did not get refreshed properly).  In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out what the cause and fix was for each of them.  This left me with a great impression of the company.  Kudos to them! Chaining and Queueing.  While I have not yet activated this feature, I am very excited about the possibilities.  We have jobs on three different instances of SQL Server that have to be run in a certain order, and each has to finish before the next can successfully begin, and I believe this feature will ensure just that.  It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures, or worse, successful completion of completely invalid processing. Calendar View.  I really, really like the Event Manager calendar view where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity.  Very well done, and based on Event Manager’s own database of accumulated historical information rather than querying the source instances every time. Performance Advisor Dashboard History View.  This view let’s me quickly select a date and time range and it displays graphs of key SQL Server and Windows metrics.  This is exactly the thing I needed to answer the “has performance changed recently” question at the beginning of this post. Reporting Services Subscription Jobs with Report Name.  This was a big and VERY pleasant surprise.  If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a Subscription, you will notice that they all have some sort of GUID as the name of the job.  This is really ugly, and really annoying because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you really do not have any indication in the name or the properties of the Job itself, as to what Report that was for.  But with SQL Sentry Event Manager you do.  The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the Report that the Subscription Job is for.  And when you open it to see more details, it shows you the full Reporting Services path to that Report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the Subscription information.  I did not expect this at all, but I sure do like it.  HOORAY! That is just my first impressions from using the tools for a few days.  And I haven’t even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations.  I’ll share that lesson in another blog entry.  But I have to say it again, the combination of Event Manager and Performance Advisor working together have really made me a fan.

    Read the article

  • Source Control and SQL Development &ndash; Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work… The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then you use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and Choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production, and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button (). Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button because this is your first test that your change script is working and you didn’t somehow lose a portion of the change. As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program. So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database. Once you have your initial set of scripts loaded into source control, then making changes, such as altering a stored procedure becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.  
And when you have 800+ procedures like we do, that can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, do the change, save it, and the go back to source control to check in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years. Remember that everything that the SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it) and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier, or not, you now have no excuse for not using source control with your SQL development. * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script was only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process because what you deployed to PROD was not exactly the same as what was tested in TEST, so you effectively have now released untested code into PROD.
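    To make the footnote concrete, here is a minimal sketch of that script layout (the object, column, and role names are invented for illustration): the existence check and DROP come first, then the CREATE PROCEDURE, then the permission grant, so the same file runs cleanly whether or not the procedure already exists in the target database.

        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.usp_GetCustomer') AND type = N'P')
            DROP PROCEDURE dbo.usp_GetCustomer;
        GO

        CREATE PROCEDURE dbo.usp_GetCustomer
            @CustomerID INT
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT CustomerID, CustomerName
            FROM   dbo.Customer
            WHERE  CustomerID = @CustomerID;
        END
        GO

        -- Permissions are assigned in the same file so every environment gets them
        GRANT EXECUTE ON dbo.usp_GetCustomer TO AppReaderRole;
        GO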

    Read the article

  • From DBA to Data Analyst

    - by Denise McInerney
    Cross posted from the PASS Blog There is a lot changing in the data professional’s world these days. More data is being produced and stored. More enterprises are trying to use that data to improve their products and services and understand their customers better. More data platforms and tools seem to be crowding the market. For a traditional DBA this can be a confusing and perhaps unsettling time. It’s also a time that offers great opportunity for career growth. I speak from personal experience. We sometimes refer to the “accidental DBA”, the person who finds herself suddenly responsible for managing the database because she has some other technical skills. While it was not accidental, six months ago I was unexpectedly offered a chance to transition out of my DBA role and become a data analyst. I have since come to view this offer as a gift, though at the time I wasn’t quite sure what to do with it. Throughout my DBA career I’ve gotten support from my PASS friends and colleagues and they were the first ones I turned to for counsel about this new situation. Everyone was encouraging and I received two pieces of valuable advice: first, leverage what I already know about data and second, work to understand the business’ needs. Bringing the power of data to bear to solve business problems is really the heart of the job. The challenge is figuring out how to do that. PASS had been the source of much of my technical training as a DBA, so I naturally started there to begin my Business Intelligence education. Once again the Virtual Chapter webinars, local chapter meetings and SQL Saturdays have been invaluable. I work in a large company where we are fortunate to have some very talented data scientists and analysts. These colleagues have been generous with their time and advice. I also took a statistics class through Coursera where I got a refresher in statistics and an introduction to the R programming language. And that’s not the end of the free resources available to someone wanting to acquire new skills. There are many knowledgeable Business Intelligence and Analytics professionals who teach through their blogs. Every day I can learn something new from one of these experts. Sometimes we plan our next career move and sometimes it just happens. Either way a database professional who follows industry developments and acquires new skills will be better prepared when change comes. Take the opportunity to learn something about the changing data landscape and attend a Business Intelligence, Business Analytics or Big Data Virtual Chapter meeting. And if you are moving into this new world of data consider attending the PASS Business Analytics Conference in April where you can meet and learn from those who are already on that road. It’s been said that “the only thing constant is change.” That’s never been more true for the data professional than it is today. But if you are someone who loves data and grasps its potential you are in the right place at the right time.

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics.  We’ve been running sp_updatestats but apparently it wasn’t sampling enough of the table to get us what we needed.  I have a pretty limited window at night where I can hammer the disks while this runs.  The script below just calls UPDATE STATISTICS on all tables that “need” updating.  It defines need as any table whose statistics are older than the number of days you specify (30 by default).  It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes).  That means it won’t start processing a new table after this time but it might take longer than this to finish what it’s doing.  It always processes the oldest statistics first so it will eventually get to all of them.  It defaults to sample 25% of the table.  I’m not sure that’s a good default but it works for now.  I’ve tested this in SQL Server 2005 and SQL Server 2008.  I liked the way Michelle parameterized her re-index script and I took the same approach.

        CREATE PROCEDURE dbo.UpdateStatistics
        (
            @timeLimit smallint = 60
            ,@debug bit = 0
            ,@executeSQL bit = 1
            ,@samplePercent tinyint = 25
            ,@printSQL bit = 1
            ,@minDays tinyint = 30
        )
        AS
        /*******************************************************************
            Copyright Bill Graziano 2010
        *******************************************************************/
        SET NOCOUNT ON;

        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'

        IF OBJECT_ID('tempdb..#status') IS NOT NULL
            DROP TABLE #status;

        CREATE TABLE #status
        (
            databaseID INT
            , databaseName NVARCHAR(128)
            , objectID INT
            , page_count INT
            , schemaName NVARCHAR(128) NULL
            , objectName NVARCHAR(128) NULL
            , lastUpdateDate DATETIME
            , scanDate DATETIME
            CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED (databaseID, objectID)
        );

        DECLARE @SQL NVARCHAR(MAX);
        DECLARE @dbName NVARCHAR(128);
        DECLARE @databaseID INT;
        DECLARE @objectID INT;
        DECLARE @schemaName NVARCHAR(128);
        DECLARE @objectName NVARCHAR(128);
        DECLARE @lastUpdateDate DATETIME;
        DECLARE @startTime DATETIME;

        SELECT @startTime = GETDATE();

        DECLARE cDB CURSOR READ_ONLY FOR
            SELECT [name] FROM master.sys.databases WHERE database_id > 4

        OPEN cDB
        FETCH NEXT FROM cDB INTO @dbName
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                SELECT @SQL = ' use ' + QUOTENAME(@dbName) + '
                    select DB_ID() as databaseID
                        , DB_NAME() as databaseName
                        , t.object_id
                        , sum(used_page_count) as page_count
                        , s.[name] as schemaName
                        , t.[name] AS objectName
                        , COALESCE(d.stats_date, ''1900-01-01'')
                        , GETDATE() as scanDate
                    from sys.dm_db_partition_stats ps
                    join sys.tables t on t.object_id = ps.object_id
                    join sys.schemas s on s.schema_id = t.schema_id
                    join (
                            SELECT object_id, MIN(stats_date) as stats_date
                            FROM (
                                    select object_id, stats_date(object_id, stats_id) as stats_date
                                    from sys.stats
                                 ) as d
                            GROUP BY object_id
                         ) as d ON d.object_id = t.object_id
                    where ps.row_count > 0
                    group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'') '

                SET ANSI_WARNINGS OFF;
                INSERT #status EXEC (@SQL);
                SET ANSI_WARNINGS ON;
            END
            FETCH NEXT FROM cDB INTO @dbName
        END
        CLOSE cDB
        DEALLOCATE cDB

        DECLARE cStats CURSOR READ_ONLY FOR
            SELECT databaseID
                , databaseName
                , objectID
                , schemaName
                , objectName
                , lastUpdateDate
            FROM #status
            WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
            ORDER BY lastUpdateDate ASC, page_count DESC, [objectName] ASC

        OPEN cStats
        FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
                BEGIN
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***';
                    GOTO __DONE;
                END

                SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName)
                            + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;';

                IF @printSQL = 1
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'

                IF @executeSQL = 1
                BEGIN
                    EXEC (@SQL);
                END
            END
            FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
        END

        __DONE:
        CLOSE cStats
        DEALLOCATE cStats
        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
        GO
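    Assuming the procedure was created as listed above, a quick usage sketch (the parameter values here are only examples): do a dry run that prints the UPDATE STATISTICS statements first, then run it for real inside your maintenance window.

        -- Dry run: print the statements without executing them
        EXEC dbo.UpdateStatistics @printSQL = 1, @executeSQL = 0;

        -- Real run: 45-minute window, 50 percent sample, statistics older than 14 days
        EXEC dbo.UpdateStatistics @timeLimit = 45, @samplePercent = 50, @minDays = 14;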

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new Federal Tax plans is new licensing plans from Microsoft.  In both cases, everyone calculates several numbers.  First, will I pay more or less under this plan?  Second, will my competition pay more or less than now?  Third, will <insert interesting person/company here> pay more or less?  Not that items 2 and 3 are meaningful, that is just how people think. Much like tax plans, the devil is in the details, so let’s see how this looks.  Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx

    First up is a switch from per-socket to per-core licensing.  Anyone who didn’t see something like this coming should rapidly search for a new line of work because you are not paying attention.  The explosion of multi-core processors has made SQL Server a bargain.  Microsoft is in business to make money and the old per-socket model was not going to do that going forward.

    Per-core licensing also simplifies virtualization licensing.  Physical Core = Virtual Core, at least for licensing.  Oversubscribe your processors, that’s your lookout.  You still pay for what is exposed to the VM.  The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow.  The catch is you have to have Software Assurance to make the licenses mobile.  Nice touch there.

    Let’s have a moment of silence for the late, unlamented, largely ignored Workgroup Edition.  To quote the Microsoft FAQ: “Standard becomes our sole edition for basic database needs”.  Considering I haven’t encountered a single instance of SQL Server Workgroup Edition in the wild, I don’t think this will be all that controversial.

    As for pricing, it looks like a wash with current per-socket pricing based on four-core sockets.  Interestingly, that is the minimum core count Microsoft proposes when swapping per-socket licenses for per-core licenses if you are on Software Assurance.  Reading the fine print shows that if you are using more, you will get more core licenses.  From the licensing FAQ:

    15. How do I migrate from processor licenses to core licenses?  What is the migration path? Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter).

    Looks like the folks who invested in the AMD 12-core chips will make out like bandits.

    Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only.  Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set but without needing Enterprise Edition scale (or costs).  No, you don’t get ColumnStore, Compression, or Partitioning in the BI Edition.  Those are Enterprise scale features, ThankYouVeryMuch.  Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8 core server).

    The only part of the message I am missing is whether the current Failover Licensing Policy will change.  Do we need to fully or partially license failover servers?  That is a detail I definitely want to know.

    Read the article

  • Jagran Prakashan Increases Staff Productivity by 40%

    - by Michael Snow
    Jagran Prakashan Increases Staff Productivity by 40%, Launches New IT Projects up to 4x Faster, Enables Mobile Service, and Improves Business Agility Oracle Customer: JPL Location:  Uttar Pradesh, India Industry: Media and Entertainment Employees:  10,000 Annual Revenue:  $100 to $500 Million Jagran Prakashan Ltd. (JPL) is one of India's premier media and communications groups with interests spanning print, advertising, event management, and mobile services for weather, cricket scores, and educational activities. It is a major media enterprise, with 300 locations across 15 states. Its impressive stable of print publications includes Dainik Jagran, the world’s most widely read daily newspaper––with a readership of over 55 million––the country’s leading afternoon dailies, and a range of popular local, bilingual, and English language newspapers. JPL was using multiple systems to manage its business processes. Users were resistant to using multiple passwords for various applications, preferring to continue their less efficient, legacy work practices. In addition, there was no single repository for sharing documents across the organization, such as company announcements or project documents. The company relied on e-mail to disseminate up-to-date company information, often missing employees. It was also time-consuming and difficult for managers to track the status of ongoing assignments or projects because collaboration and document sharing was inefficient and ineffective.With diverse businesses and many geographic locations, JPL needed to implement a centralized and user-friendly enterprise portal to improve document sharing and collaboration and increase business agility. The company implemented Oracle WebCenter Portal to create a dynamic, secure, and intuitive self-service enterprise portal to improve the user experience and increase operating efficiency. It improved staff productivity by 40%, accelerated new IT projects by up to 4x, boosted staff morale, and increased business agility.   Increases Staff Productivity by 40%, Launches New Products up to 2x Faster A word from JPL "With Oracle WebCenter Portal, we gained a dynamic, secure, and intuitive self-service enterprise portal that provided an exceptional user experience and enabled us to engage employees in a collaborative environment. It increased IT staff productivity by 40%, delivered new projects up to 4x faster, and enabled mobile service to improve our business agility.” Sarbani Bhatia, Vice President IT, Jagran Prakashahn Ltd Before implementing Oracle WebCenter Portal, JPL stored project-critical information, such as page planning of daily newspaper editions and the launch of new editions or supplements on individual laptops or in the e-mail system. Collaboration between colleagues was limited to physical meetings, telephone discussions, and e-mail. It was difficult to trace and recover important project documents when a staff member resigned, which represented a significant risk to business continuity. Employees were also averse to multiple passwords and resisted using the systems, affecting staff productivity. With Oracle WebCenter Portal, JPL created a dynamic, secure, and intuitive self-service enterprise portal with business activity streams. The portal allowed users to navigate, discover, and access information, such as advertising rates, requisition approvals, ad-hoc queries, and employee surveys from a single entry point with a single password. 
Managers can also upload important documents, such as new pricing for advertisers or newspaper distributors, and share them through the information and instruction section in the portal. In addition, managers can now easily track and review timelines for projects online rather than gathering information from meetings and e-mails. The company gained the ability to centrally manage information, ensured business continuity, and improved staff productivity by 40%.“In the media industry, news has a very short shelf life, so speed is crucial. Information delayed is like information lost,” said Sarbani Bhatia, vice president IT, Jagran Prakashahn Ltd. “Thanks to Oracle WebCenter Portal’s contextual collaboration tools, we can provide and share feedback for new project launches, such as career or education supplements, up to 2x faster through discussion forums or knowledge groups. Tasks that previously required four months, we now complete in one month.”In addition, the company can broadcast announcements, flash employee birthdays, and promote important events through the message section on the webpage, instead of using the e-mail system. The company can also conduct opinion polls to gauge employee response to organizational issues and improve management decision-making.“With over 10,000 employees across 300 locations, it is critical for management to hear the voice of employees and develop a cohesive organizational culture. Oracle WebCenter Portal enables employees to engage with business processes and systems in a collaborative environment, providing users with an exceptional experience,” Bhatia said. Enables Mobility Access and Increases Business Agility Newspaper advertisements generate the majority of JPL’s revenue. With most sales staff on the move, the company needed to ensure timely approval of print advertisement discounts for specific clients and meet tight publication deadlines.  By integrating Oracle WebCenter Portal seamlessly with its enterprise resource planning (ERP) system and other applications, such as the organizational mass mailing system, business intelligence, and management information system, JPL embedded its approval workflow processes into the enterprise portal and provided users with an integrated and intuitive interface. About 30% of JPL’s sales staff members now have tablets and receive advertising discount approval from managers while in the field and no longer need to return to the office, which has significantly improved efficiency and increased business agility.“Application mobility was critical for sales representatives in the field to meet stringent auditing requirements for online accountability, particularly for our newspaper advertising business. Staff member satisfaction has improved significantly now that the sales team can use tablets to access the portal––a capability we will extend to smart phones in the second stage of the implementation,” Bhatia said. Accelerates Application Development by up to 4x and Cuts Costs by up to 60% With Oracle WebCenter Portal, users can easily create, modify, and upload information to their personalized webpages without IT assistance. By seamlessly integrating Oracle WebCenter Portal with the payroll database, managers can decide which members of their team can access the page and with whom they will share information, a decision based on role or geographical location. 
    A sales representative selling advertising space for a local-language daily newspaper, for example, can upload an updated advertising rate relevant only to that particular publication. Users can also adapt easily to the new platform, thanks to its intuitive design and look, reducing the need for training and lowering resistance to using the system.

    Oracle WebCenter Portal's out-of-the-box reusable components, such as portal pages and templates, provided JPL's developers with a comprehensive and flexible user experience platform and increased the speed of application development. In less than five months, JPL developed more than 55 workflows. The IT team accelerated deployment of new applications by up to 4x, as applications no longer need to be installed on individual machines now that there is a web-based environment.

    "Previously, we would have spent a whole day deploying a new application for each department or location. With a browser-based environment, we have cut costs by up to 60% by reducing deployment time to zero, because our IT team can roll out a new application from a single point, thanks to Oracle WebCenter Portal," Bhatia said.

    Challenges

    • Provide a dynamic, secure, and intuitive self-service enterprise portal to improve staff productivity and ensure business continuity
    • Enable seamless integration with multiple enterprise applications to improve workflow efficiency, including approval of print advertisement discounts, and increase business agility
    • Improve engagement with employees and enable collaboration to enhance management decision-making
    • Accelerate time-to-market for new services, such as new advertising programs

    Solutions

    Oracle Products and Services: Oracle WebCenter Portal 11g

    • Increased staff productivity by 40% and enhanced user satisfaction by enabling employees to easily navigate, discover, and access information from a single, self-service enterprise portal without IT assistance
    • Launched new products, such as career or education supplements, up to 2x faster by enabling peer collaboration and incorporating feedback generated through discussion forums, thanks to Oracle WebCenter Portal's out-of-the-box collaboration tools
    • Accelerated application development by up to 4x by enabling developers to reuse components for managing and deploying new applications in a browser-based environment rather than spending a day installing applications for each department, cutting costs by up to 60%
    • Ensured business continuity by enabling managers to easily track and review project timelines online rather than storing important documents on individual laptops or relying on the e-mail system
    • Increased business agility and operational efficiency by integrating seamlessly with the in-house ERP system and embedding business processes into a single portal
    • Boosted company revenue by enabling sales team members to submit print-advertising discount requests through mobile devices instead of waiting until they return to the office, ensuring timely approval from managers to meet tight publication deadlines
    • Improved management decision-making by enabling employees to easily share and access feedback through opinion polls or forums, boosting staff morale
    • Introduced single sign-on capability and enhanced security by enabling managers to set access levels for staff members based on role or geographical location
    • Reduced the need for staff training and minimized user resistance to the system by providing a dynamic and intuitive user experience

    Why Oracle
    JPL did not consider other products because the company was already using Oracle Database, Enterprise Edition with Real Application Clusters and had a positive experience with Oracle. JPL chose Oracle WebCenter Portal to avoid compatibility issues when integrating with its existing Oracle products and to take advantage of the experience and support of a reputable vendor to ensure business continuity.

    "We chose Oracle because we knew we could rely on its support and experience. In addition, Oracle WebCenter Portal's speed, agility, and mobile access features were a perfect fit for our business requirements," Bhatia said.

    Implementation Process

    JPL launched the enterprise portal to 500 users in the first phase of the project and plans to extend it to 2,000 users when the portal is fully launched. Oracle partner PricewaterhouseCoopers used Oracle Application Development Framework for the initial setup, user training, and the development and design of sample workflows. JPL's internal IT staff then took charge of the implementation, bringing it to completion on budget.

    Partner

    Oracle Partner: PricewaterhouseCoopers (India)

    Read the article

  • How to Hibernate from .NET Apps and How to enable Hibernate in Windows XP

    The use of desktop and laptop computers has increased phenomenally all around the world. This link gives you a picture of the power consumption of various devices we use daily. Hibernate is one of the best ways to reduce power consumption, and it is available by default in Windows Vista and Windows 7. The Hibernate feature lets you shut down the machine without closing your applications; they are restored exactly as they were once you restart the machine. Hibernate is not enabled in Windows XP by default. I have seen many people run their machines for months on end (never switching them off) because they do not want to close the windows or applications running in Windows XP. Below are the steps to enable Hibernate in Windows XP:

    1. Right-click on the desktop.
    2. Click Properties.
    3. Go to the Screen Saver tab.
    4. Click the Power button.
    5. Select the Hibernate tab.
    6. Check the "Enable hibernation" checkbox.
    7. Apply the settings.

    Now when you try to shut down, the "Shut Down Windows" dialog shows a "Hibernate" option, and you can safely close the machine without closing your applications or windows; they will be restored once you turn the machine back on.

    Sometimes you might want to provide this feature programmatically in the applications you develop for Windows, typically in applications where a process needs a long time to complete; download managers are one of the best examples. Below is the code to hibernate the machine from .NET code:

    using System.Windows.Forms;

    namespace CodeKicks.WinApp.Machine
    {
        public static class MyMachineHelper
        {
            public static void DoHibernate()
            {
                // Pass PowerState.Suspend instead to put the machine to sleep:
                // Application.SetSuspendState(PowerState.Suspend, true, false);
                Application.SetSuspendState(PowerState.Hibernate, true, false);
            }
        }
    }

    Read the article

  • SQL Saturday Atlanta: Intro To Performance Tuning

    - by Mike Femenella
    I'm looking forward to speaking in Atlanta on the 24th; it will be fun to get back down that way to visit with some friends and present two topics that I really enjoy. First, an introduction to performance tuning. Performance tuning is a very wide and deep topic, and we're staying close to the surface. I aim this class at new SQL users who have less than two years of experience. It's all the things I wish someone had told me in my first two years about what to look for when the database was slow... or allegedly slow, I should say. We'll cover using Profiler to find slow-performing queries and how to save that data off to a table, plus a tour of other features; the difference between clustered, nonclustered, and covering indexes; how to look at and understand an execution plan (at a high level); and finally the difference between a temp table and a table variable and the implications of using either one in your code. That pretty much takes up a full hour. The second presentation, Loading Data in Real Time, is really a presentation about partitioning, but with a twist we used at work recently to load some data quickly and put it into production with minimal downtime. We'll cover partition functions, partition schemes, $PARTITION, MERGE, and sys.partitions, and show some examples of building a set of partitioned tables and using the SWITCH statement to move data from one table to another. Finally, we'll cover the differences in partitioning between SQL Server 2005 and 2008. Hope to see you there! And if you read my blog, please introduce yourself!
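    To make the partition-switch idea concrete, here is a minimal T-SQL sketch of the kind of pattern the second talk describes: load data into a standalone staging table, then switch it into the partitioned production table with effectively zero downtime. The table, function, and scheme names and the date boundaries below are hypothetical, not taken from the talk, and everything is placed on [PRIMARY] to keep the example short.

    -- Partition function and scheme: one partition per day
    -- (RANGE RIGHT means each boundary value belongs to the partition on its right)
    CREATE PARTITION FUNCTION pfDaily (datetime)
    AS RANGE RIGHT FOR VALUES ('20100401', '20100402', '20100403');

    CREATE PARTITION SCHEME psDaily
    AS PARTITION pfDaily ALL TO ([PRIMARY]);

    -- Partitioned production table
    CREATE TABLE dbo.Sales
    (
        SaleDate datetime NOT NULL,
        Amount   money    NOT NULL
    ) ON psDaily (SaleDate);

    -- Staging table with an identical structure, on the same filegroup, plus a CHECK
    -- constraint that matches the boundaries of the partition we will switch into
    CREATE TABLE dbo.Sales_Staging
    (
        SaleDate datetime NOT NULL,
        Amount   money    NOT NULL,
        CONSTRAINT CK_Sales_Staging_SaleDate
            CHECK (SaleDate >= '20100402' AND SaleDate < '20100403')
    ) ON [PRIMARY];

    -- Bulk load dbo.Sales_Staging offline, build matching indexes if needed, then...

    -- $PARTITION shows which partition a given value maps to (3 in this example)
    SELECT $PARTITION.pfDaily('20100402') AS TargetPartition;

    -- The switch is a metadata-only operation, so the new data appears in the
    -- production table almost instantly
    ALTER TABLE dbo.Sales_Staging
        SWITCH TO dbo.Sales PARTITION 3;

    The same mechanism works in reverse (switching an old partition out to an archive table), which is what makes sliding-window loads practical on both SQL Server 2005 and 2008.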

    Read the article

  • How to import in BIDS more than one SSIS package in one shot!

    - by Luca Zavarella
    Have you ever wanted to add more than one existing Integration Services package (e.g., 20 packages) to an SSIS project? You might suppose that the Open dialog supports multi-file selection so you can import more than one file at a time... but the BIDS Open dialog doesn't allow this; you can select only a single file! Hence the valuable time lost importing the packages one at a time. A few days ago I learned a trick that solves the problem, thanks to this post by Matt Masson. Just copy all the packages to import from Windows Explorer (Ctrl+C), then right-click the SSIS Packages folder of the Integration Services project and do a simple paste (Ctrl+V). "Auto-magically," you'll have all those packages imported into your Integration Services project! What can I say... this feature was well hidden!

    Read the article

  • Fixing SSMS Tabs

    - by Tara Kizer
    It never occurred to me that the way SSMS handles tabs could be changed; it's just that the default settings suck. In this blog post, Brent Ozar shows us how to fix SSMS so that the tabs are actually usable and not annoying anymore. I can't love his post enough. It has really helped me become more efficient. I'm always flipping between tabs and could never quickly find the one I needed at a critical moment, but now I can easily find it!

    Read the article
