Search Results

Search found 11015 results on 441 pages for 'explain plan'.


  • Hosting company that does Linux VPS and MS SQL

    - by danielmcq
    I'm looking for hosted solutions, but there are so many companies that finding the right one through a Google search is a bit overwhelming. Ideally I would like a hosting company with the following options:
    - Linux VPSs - individual VPSs should be fairly cheap, since I plan on putting one or two services per VPS, i.e. a web server on one (httpd and ColdFusion), an SVN server on another, etc.
    - Managed MS SQL databases - my company already has data in MS SQL databases and a lot of ColdFusion code written that has MS SQL-specific commands in it.
    - Individually purchased dedicated IP addresses
    - Preferably located in the North America region
    My plan would be to set up one Linux VPS as a gateway/firewall/VPN server and have all of my traffic routed through it, so that my other servers would not use up bandwidth by talking to each other. The trick is also finding a company that does Linux VPS AND MS SQL databases. Does anybody know of any hosting companies that offer what I'm looking for? Let me know if I need to add more details.

    Read the article

  • How to free a static member variable in C++?

    - by user299831
    Can anybody explain how to free the memory of a static member variable? In my understanding it can only be freed once all instances of the class are destroyed. I am a little bit helpless at this point... Some code to explain it:

        class ball {
        private:
            static SDL_Surface *ball_image;
        };

        //FIXME: how to free the static variable?
        SDL_Surface* ball::ball_image = SDL_LoadBMP("ball.bmp");
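    A minimal sketch of one possible answer, assuming SDL 1.2, where SDL_FreeSurface is the matching release call for SDL_LoadBMP: since a static member belongs to the class rather than to any instance, it is released explicitly rather than in a destructor. The quit() helper below is an illustrative name, not part of the original question.

        #include <SDL/SDL.h>

        class ball {
        private:
            static SDL_Surface *ball_image;
        public:
            // A static member outlives every instance, so a destructor is
            // the wrong place to release it; free it once, explicitly.
            static void quit() {
                SDL_FreeSurface(ball_image); // no-op if ball_image is NULL
                ball_image = NULL;
            }
        };

        // Loaded at static initialization time, before main() runs,
        // exactly as in the question's code.
        SDL_Surface* ball::ball_image = SDL_LoadBMP("ball.bmp");

        int main(int argc, char *argv[]) {
            // ... use ball instances ...
            ball::quit(); // release the shared surface at shutdown
            return 0;
        }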

    Read the article

  • Will ReadyBoost speed up a secondary partition or hard drive?

    - by Sebastian
    I'm building a new computer and plan on using a few spare USB sticks with ReadyBoost as a disk cache. I'll be running Windows 7 Enterprise SP1 x64. I have a single 2 TB disk, and plan to make a partition for Windows (100 GB) and use the rest for data. I know ReadyBoost will work nicely for the C: drive, but I am unable to find any information on whether it will accelerate a second drive or partition. Just to clarify, I'm NOT trying to use the hard drive instead of the USB sticks; I'm trying to speed up the second partition using the USB sticks. So, will ReadyBoost work for a secondary partition or hard drive?

    Read the article

  • binary operator "<"

    - by md004
    Consider this expression as a "selection" control structure on integer "x": 0 < x < 10, with the intention that the structure returns TRUE if "x" is in the range 1..9. Explain why a compiler should not accept this expression. (In particular, what are the issues regarding the binary operator "<"?) Explain how a prefix operator could be introduced so the expression can be successfully processed.
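    For illustration, here is how the expression behaves in C, which does accept it: the left-associative binary "<" makes the result type-correct but meaningless, which is exactly the trap the exercise is after. The between() function stands in for the prefix operator the question asks about:

        #include <stdio.h>

        /* A prefix "operator" written as a function: it consumes all three
           operands at once, so no boolean intermediate result leaks into a
           numeric comparison. */
        int between(int lo, int x, int hi) {
            return lo < x && x < hi;
        }

        int main(void) {
            int x = 42;

            /* "<" is binary and left-associative, so this parses as
               (0 < x) < 10.  The inner comparison yields 0 or 1, and both
               are less than 10, so the test is true even for x = 42. */
            if (0 < x < 10)
                printf("0 < x < 10 is 'true' for x = %d (not the intended meaning)\n", x);

            if (!between(0, x, 10))
                printf("between(0, %d, 10) correctly reports false\n", x);

            return 0;
        }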

    Read the article

  • Best server for mailing application [closed]

    - by Cyber Junkie
    My application is similar to a reminder service that reminds users of events they scheduled. I'm sending emails to users through a PHP script. I'm not sending one email to multiple recipients; each recipient receives a different message. I plan to use cron jobs every minute and expect the application to send roughly 200 individual emails per hour (for a small user base that may grow). I don't have hosting experience with this type of application. I plan to start on a shared host and move up in the future to a VPS or a dedicated server. Most shared hosts that I looked into allow 50-100 emails per hour, with delays between mailings. Please kindly inform me what I should look for in a web host for this kind of application.
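    For reference, the every-minute cron job the poster describes would look something like the entry below; the script path and log location are hypothetical, and the batching logic lives in the PHP script itself:

        # Run the reminder mailer every minute.  The script decides which
        # reminders are due and sends only those, so roughly 200 individual
        # messages go out per hour rather than in one burst.
        * * * * * /usr/bin/php /var/www/app/cron/send_reminders.php >> /var/log/reminders.log 2>&1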

    Read the article

  • Hardware needed for 2000 users? [closed]

    - by Trcx
    I have a school assignment that is fairly well defined, requiring us to come up with a plan for an environment serving dynamic web applications to 2000 users, which should be able to scale up to six thousand. I have done plenty of research as far as load balancing, redundancy, UPSs, etc., but am having a hard time figuring out how much hardware is actually needed in the way of physical servers, RAM, processing power, etc. The assignment states that there will be a lot of dynamic code, and that email and a database are required, all utilizing the appropriate Microsoft services (MS SQL, Exchange, IIS). I already plan on splitting them out onto separate servers, but can't even fathom the hardware requirements of something that large-scale. Could someone with experience weigh in on this, or point me to some good articles?

    Read the article

  • Alternative to cPanel (for email, etc.)

    - by Dboy1612
    I'm currently setting up a VPS for the first time. The standard I've always worked with before on shared hosting was cPanel, but as the majority of the work I plan on doing from now on will use NodeJS and Python/Flask, I'd like to avoid needing to install Apache/MySQL/PHP. What would be my best bet to help manage a mail server other than cPanel? Or even other specific server settings that may come in handy later? I plan on using Ubuntu, if that counts for anything.

    Read the article

  • Dependency Properties

    - by developer
    Hi all, can anybody explain to me what a dependency property is in WPF and what its use is? I know there are a lot of tutorials on Google for it, but they only teach how to create a dependency property. I am confused as to where I would use one. I mean, would I use it in XAML? If anybody could explain it to me in simple terms, that would be great.
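    A minimal sketch of a custom dependency property (the class and property names are made up for illustration). The key idea: the value is stored in WPF's property system rather than in a private field, which is what lets XAML bind, style, and animate it:

        using System.Windows;
        using System.Windows.Controls;

        public class TemperatureGauge : Control
        {
            // Registration with the WPF property system.  This, not the CLR
            // wrapper below, is what makes the property bindable, styleable,
            // and animatable from XAML.
            public static readonly DependencyProperty CurrentReadingProperty =
                DependencyProperty.Register(
                    "CurrentReading",           // name used in XAML
                    typeof(double),             // property type
                    typeof(TemperatureGauge),   // owner type
                    new PropertyMetadata(0.0)); // default value

            // Conventional CLR wrapper so code-behind can use normal syntax.
            public double CurrentReading
            {
                get { return (double)GetValue(CurrentReadingProperty); }
                set { SetValue(CurrentReadingProperty, value); }
            }
        }

    In XAML you could then write <local:TemperatureGauge CurrentReading="{Binding Outdoor}" />; a binding like that only works because CurrentReading is a dependency property, which is the usual answer to "where would I use it".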

    Read the article

  • Session Regeneration

    - by slier
    Hi guys, I have some confusion about session handling in PHP. I know how to regenerate a new session ID in PHP by using session_regenerate_id(); the problem is I don't know why I should regenerate it. I've been googling for some time, to no avail; no one explains why I need to generate a new session ID. Can someone explain why I need to regenerate the session ID? Thanks in advance.
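    The usual answer is session fixation: if an attacker can plant or learn a session ID before the session gains privileges, regenerating the ID at login locks them out. A minimal sketch, where authenticate() is a placeholder for whatever credential check the application really performs:

        <?php
        session_start();

        // Hypothetical login handler; authenticate() stands in for the
        // application's real credential check.
        if (authenticate($_POST['user'], $_POST['password'])) {
            // Discard the pre-login ID so a fixated or stolen ID does not
            // become a logged-in session.
            session_regenerate_id(true); // true = delete the old session data
            $_SESSION['user'] = $_POST['user'];
        }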

    Read the article

  • Java threads for the beginner

    - by Boba
    I've been trying to explain Java threading to a colleague who has never been exposed to multi-threaded applications, but apparently I'm not a very good teacher. Can anyone recommend a good online or offline resource that can explain threading in a simple, step-by-step manner? I know it's a complex topic, but surely there exists an article, book, or other explanation that can result in an "Aha! I get it, finally!" moment.
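    Not a resource recommendation, but a sketch of the first example most introductions build on: the Runnable/Thread split, start() versus run(), and join(). If the colleague can predict this program's output, the core model has landed:

        public class HelloThreads {
            public static void main(String[] args) throws InterruptedException {
                // The task (Runnable) is handed to the worker (Thread).
                Thread worker = new Thread(new Runnable() {
                    public void run() {
                        System.out.println("Hello from " + Thread.currentThread().getName());
                    }
                });

                worker.start(); // runs concurrently; calling run() directly would not
                worker.join();  // block until the worker finishes

                System.out.println("Back on " + Thread.currentThread().getName());
            }
        }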

    Read the article

  • What exactly is a delegate in C#?

    - by Learning.net
    Hi all, can somebody explain the use of delegates? I know a delegate is used to invoke methods at run time, but what exactly does that mean? Can somebody explain it with a simple example that will help a newcomer understand delegates better?
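    A minimal sketch (the Logger name and the two methods are invented for illustration): a delegate is a type-safe reference to a method, so which method runs is decided at run time by whatever the delegate currently points to:

        using System;

        class Program
        {
            // A delegate type: can reference any method taking a string
            // and returning void.
            delegate void Logger(string message);

            static void ToConsole(string message)
            {
                Console.WriteLine(message);
            }

            static void ToUpperCase(string message)
            {
                Console.WriteLine(message.ToUpper());
            }

            static void Main()
            {
                Logger log = ToConsole; // point the delegate at a method
                log("hello");           // prints: hello

                log = ToUpperCase;      // re-point it at run time
                log("hello");           // prints: HELLO

                log += ToConsole;       // delegates can be multicast
                log("again");           // prints: AGAIN, then: again
            }
        }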

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Limitation of the View 12

    - by pinaldave
    I have previously written on the subject SQL SERVER – The Limitations of the Views – Eleven and more…. That was indeed a very popular series, and I received lots of feedback on the topic. Today we are going to discuss something very interesting as well. During my recent performance tuning seminar in Hyderabad, I presented on the subject of views, and one of the attendees asked a question: if we create a table, create a view on top of it, and then create an index on the view, will that index be used when querying the view? The answer is: not always! (There is only one specific condition under which it will be used; we will write about that in the next post.) Let us see a test case. Our script will do the following:

        USE tempdb
        GO
        IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
            DROP VIEW [dbo].[SampleView]
        GO
        IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
            DROP TABLE [dbo].[mySampleTable]
        GO
        -- Create sample table
        CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
        INSERT INTO mySampleTable (ID1, ID2, SomeData)
        SELECT TOP 100000
            ROW_NUMBER() OVER (ORDER BY o1.name),
            ROW_NUMBER() OVER (ORDER BY o2.name),
            o2.name
        FROM sys.all_objects o1 CROSS JOIN sys.all_objects o2
        GO
        -- Create view
        CREATE VIEW SampleView WITH SCHEMABINDING AS
        SELECT ID1, ID2, SomeData FROM dbo.mySampleTable
        GO
        -- Create index on view
        CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView] (ID2 ASC)
        GO
        -- Select from view
        SELECT ID1, ID2, SomeData FROM SampleView
        GO

    Let us check the execution plan for the last SELECT statement. You can see from the execution plan that even though we are querying the view, and the view has an index, that index is not really used. In the next post, we will see the significance of this and where the view's index can be helpful. Meanwhile, I encourage you to read my view series: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQL View, T SQL, Technology
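    A guess at the condition the author defers to the next post, offered as a hedged hint rather than as his answer: outside Enterprise Edition, SQL Server does not match indexed views automatically, so the view's index is typically only used when the query forces it with the NOEXPAND table hint:

        -- Hedged sketch: force the optimizer to use the view's clustered
        -- index instead of expanding the view to its base table.
        SELECT ID1, ID2, SomeData
        FROM SampleView WITH (NOEXPAND)
        GO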

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, a declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance.

    O/R mapping overhead

    Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving data first:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
                product.UnitPrice = unitPrice; // UPDATE...
                database.SubmitChanges();
            }
        }

    Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (SqlConnection connection = new SqlConnection(
                "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
            using (SqlCommand command = new SqlCommand(
                @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
                connection))
            {
                command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
                command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
                connection.Open();
                command.Transaction = connection.BeginTransaction();
                command.ExecuteNonQuery(); // UPDATE...
                command.Transaction.Commit();
            }
        }

    The above imperative code specifies the "how to do" details, with better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.ExecuteCommand(
                    "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
                    unitPrice, id); // arguments must match the placeholders in order
            }
        }

    Or just create a stored procedure:

        CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
        (
            @ProductID INT,
            @UnitPrice MONEY
        )
        AS
        BEGIN
            BEGIN TRANSACTION
            UPDATE [dbo].[Products]
            SET [UnitPrice] = @UnitPrice
            WHERE [ProductID] = @ProductID
            COMMIT TRANSACTION
        END

    and map it as a method of NorthwindDataContext (explained in this post):

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.UpdateProductUnitPrice(id, unitPrice);
            }
        }

    As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, case by case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

    Data retrieving overhead

    After the O/R-mapping-specific issue, now look into issues specific to LINQ to SQL, for example, the performance of the data retrieving process. The previous post explained that the SQL translation and execution is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline. It consists of about 15 steps to translate a C# expression tree into a SQL statement, which can be categorized as:

    - Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
    - Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
    - Flatten: figure out the hierarchy of the query;
    - Rewrite: for SQL Server 2000, if needed;
    - Reduce: remove the unnecessary information from the tree;
    - Format: generate the SQL statement string;
    - Parameterize: figure out the parameters; for example, a reference to a local variable should become a parameter in the SQL;
    - Materialize: execute the reader and convert the results back into typed objects.

    So for each data retrieval, even one that looks simple:

        private static Product[] RetrieveProducts(int productId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                return database.Products.Where(product => product.ProductID == productId)
                    .ToArray();
            }
        }

    LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

    Compiled query

    When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

        internal static class CompiledQueries
        {
            private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
                CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
                    database.Products.Where(product => product.ProductID == productId).ToArray());

            internal static Product[] RetrieveProducts(
                this NorthwindDataContext database, int productId)
            {
                return _retrieveProducts(database, productId);
            }
        }

    The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the query is translated only once in multi-threading scenarios.

    Static SQL / stored procedures without translating

    Another way to avoid the translation overhead is to use static SQL or stored procedures, as in the examples above. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures), LINQ to SQL (Part 7: Updating our Database using Stored Procedures), and LINQ to SQL (Part 8: Executing Custom SQL Expressions).

    Data changing overhead

    Looking into the data updating process, it also requires a lot of work:

    - Begin the transaction;
    - Process the changes (ChangeProcessor): walk through the objects to identify the changes, and determine the order of the changes;
    - Execute the changes: LINQ queries may be needed to execute the changes; as in the first example in this article, an object may need to be retrieved before being changed, in which case the whole data retrieving process above happens again;
    - If there is user customization, execute it; for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer.

    It is important to keep this overhead in mind.

    Bulk deleting / updating

    Another thing to be aware of is bulk deleting:

        private static void DeleteProducts(int categoryId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.Products.DeleteAllOnSubmit(
                    database.Products.Where(product => product.CategoryID == categoryId));
                database.SubmitChanges();
            }
        }

    The expected SQL would be something like:

        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
        COMMIT TRANSACTION

    However, as mentioned above, the actual SQL retrieves the entities, and then deletes them one by one:

        -- Retrieves the entities to be deleted:
        exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

        -- Deletes the retrieved entities one by one:
        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        -- ...
        COMMIT TRANSACTION

    And the same goes for bulk updating. This is really not effective, and one needs to be aware of it. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement in an INNER JOIN:

        exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0] INNER JOIN (
        SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0) AS [j1]
        ON ([j0].[ProductID] = [j1].[ProductID])', -- the primary key
        N'@p0 int',@p0=9

    Query plan overhead

    The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure whether it is a bug): LINQ to SQL internally uses ADO.NET, but it does not set SqlParameter.Size for variable-length arguments, like arguments of NVARCHAR type, etc. So for two queries with the same SQL but arguments of different lengths:

        using (NorthwindDataContext database = new NorthwindDataContext())
        {
            database.Products.Where(product => product.ProductName == "A")
                .Select(product => product.ProductID).ToArray();

            // The same SQL and argument type, different argument length.
            database.Products.Where(product => product.ProductName == "AA")
                .Select(product => product.ProductID).ToArray();
        }

    Pay attention to the argument length in the translated SQL:

        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'
        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

    Here is the overhead: the first query's cached query plan is not reused by the second one:

        SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql]
        FROM sys.syscacheobjects
        INNER JOIN sys.dm_exec_cached_plans
            ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

    They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:

        internal static class SqlTypeSystem
        {
            private abstract class ProviderBase : TypeSystemProvider
            {
                protected int? GetLargestDeclarableSize(SqlType declaredType)
                {
                    SqlDbType sqlDbType = declaredType.SqlDbType;
                    if (sqlDbType <= SqlDbType.Image)
                    {
                        switch (sqlDbType)
                        {
                            case SqlDbType.Binary:
                            case SqlDbType.Image:
                                return 8000;
                        }
                        return null;
                    }
                    if (sqlDbType == SqlDbType.NVarChar)
                    {
                        return 4000; // max length for NVARCHAR
                    }
                    if (sqlDbType != SqlDbType.VarChar)
                    {
                        return null;
                    }
                    return 8000;
                }
            }
        }

    In the above example, the translated SQL becomes:

        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'
        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

    so that they reuse the same cached query plan: now the [usecounts] column is 2.

    Read the article

  • Bulletin Board System with tagging, email notification

    - by user678220
    I am looking for a nice BBS system, Bulletin Board System, discussion board, or nice in-company communication platform. About 30 people are joining our project, and we would like to share ideas with each other on that platform: we can post questions and concerns related to the project and respond to one another. Here is the list of functionality I want:

    - Tagging of threads, e.g. Announcement, Finance, Legal, Idea; one thread can have multiple tags.
    - Members can turn email notifications for new comments on or off, per tag; e.g. one member receives email related to "Announcement" but not "Finance".
    - Thread owners can change a thread's tags at any time.
    - Threads can be of several types: a thread can be a "vote" thread, where everyone can vote their opinion, or an "action plan" thread, in which "who" will do "what" is recorded; by viewing all "action plan" threads, all action plans needed in the company are visualized.

    Read the article

  • Merging Waterfall and Agile – Getting the Worst of Both Worlds

    - by Nick Harrison
    Many people have seen and appreciate the elegance and practicality of agile methodologies. Sadly, there is still not widespread adoption. There is still push-back from many directions and from many different sources:

    - Some people don't understand how it is supposed to work.
    - Some people don't believe that it could possibly work.
    - Some people mistakenly believe that it is just code for a lazy project team trying to wiggle out of structure.
    - Some people mistakenly believe that it can work only with a very small, highly trained team.
    - Some people are afraid of the control that they feel they will be losing.

    I have seen some people try to merge agile and waterfall hoping to achieve the best of both worlds. Unfortunately, the reality is that you end up with the worst of both worlds. And they both can get pretty bad.

    Another Sad Reality

    Some people, in an effort to get buy-in for following an agile methodology, have attempted to merge these two practices. Sometimes this may stem from trying to assuage individual fears of losing relevance. Sometimes it may be to meet contractual obligations or to fulfill regulatory requirements. Sometimes they may not know better. These two approaches to software development cannot coexist on the same project. Let's review the main tenets of the Agile Manifesto:

    - Individuals and interactions over processes and tools
    - Working software over comprehensive documentation
    - Customer collaboration over contract negotiation
    - Responding to change over following a plan

    That is, while there is value in the items on the right, we value the items on the left more. Meanwhile, the main tenets of the waterfall approach could be summarized as:

    - Processes and procedures over individuals
    - Comprehensive documentation proves that the software works
    - Well-defined contracts and negotiations protect the customer relationship
    - If the plan is made right, there should be no change

    Merging these two approaches will always end badly.

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots. I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year. Mike's directives were "Show us a cool trick or process you developed…It doesn't have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept. I've done a lot of reading and watching on SQL Server internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques. It's an itch I get every few months, and, well, it sure beats workin'.

    I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance. It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;)

    This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works. It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals. I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it. Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information. And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.)

    Once you download and install Strings.exe you can run it from the command line. For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM):

        cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn"
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt

    I've limited myself to DLLs and EXEs that have "sql" in their names. There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings. You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish. (If you think that's bad, just try opening the raw DLL or EXE file in Notepad. And by the way, don't do this in production, or even on a running instance of SQL Server.)

    Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there. As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting. I'm boring like that. Sometimes though, having these files available can lead to some incredible learning experiences. For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons. He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne". Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated.

    Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts. Once I found which file it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed. As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons. And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! - and am curious about the PDW compilation and Cloud DB replication reasons.

    One reason completely stumped me: NoParallelHekatonPlan. What the heck is a hekaton? Google and Wikipedia were vague, and the top results were not in English. I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure. I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton. Here's what I found:

        hekaton_slow_param_passing
        Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path
        The reason why Hekaton parameter passing code took the slow code path
        hekaton_slow_param_pass_reason
        sp_deploy_hekaton_database
        sp_undeploy_hekaton_database
        sp_drop_hekaton_database
        sp_checkpoint_hekaton_database
        sp_restore_hekaton_database
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp

    Interesting! The first 4 entries (in red) mention parameters and "slow code". Could this be the foundation of the mythical DBCC RUNFASTER command? Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance? Are these special, super-undocumented DIB (databases in black)?

    I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions:

        SELECT name FROM sys.all_objects
        WHERE name LIKE '%hekaton%'

        SELECT name FROM sys.all_objects
        WHERE object_definition(OBJECT_ID) LIKE '%hekaton%'

    Which revealed:

        name
        ------------------------
        (0 row(s) affected)

        name
        ------------------------
        sp_createstats
        sp_recompile
        sp_updatestats
        (3 row(s) affected)

    Hmm. Well that didn't find much. Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading. I know I'd fall asleep without it.)

    OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton". sp_createstats:

        -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions
        -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables

    OK, that's interesting, let's go looking down a little further:

        ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table

    Wellllll, that tells us a few new things:

    - There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!)
    - They are not standard user tables and probably not in memory (UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post.)
    - The OBJECTPROPERTY function has an undocumented TableIsInMemory option

    Let's check out sp_recompile:

        -- (3) Must not be a Hekaton procedure.

    And once again go a little further:

        if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND
            ObjectProperty(@objid, 'IsInlineFunction') = 0 AND
            ObjectProperty(@objid, 'IsView') = 0 AND
            -- Hekaton procedure cannot be recompiled
            -- Make them go through schema version bumping branch, which will fail
            ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)

    And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc. (If you experiment with this you'll find this option returns null; I think it only works when called from a system object.) This is neat!

    Sadly sp_updatestats doesn't reveal anything new; the comments about hekaton are the same as sp_createstats. But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for:

        SELECT name, object_definition(OBJECT_ID)
        FROM sys.all_objects
        WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'

    I'll leave that to you as more homework. I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features. That seems to be really good advice!

    Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server! Does this mean we can access the source code for SQL Server? As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit. While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server. And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones. Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!) And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else. It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring!

    So what did we learn? We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server. We've even learned how to use command-line utilities! We are now 1337 h4X0rz! (Not really. I hate that leetspeak crap.)

    Although, I must confess I might've gone too far with the "conspiracy" part of this post. I apologize for that, it's just my overactive imagination. There's really no hidden agenda or conspiracy regarding SQL Server internals. It's not The Matrix. It's not like you'd find anything like that in there:

        Attach Matrix Database
        DM_MATRIX_COMM_PIPELINES
        MATRIXXACTPARTICIPANTS
        dm_matrix_agents

    Alright, enough of this paranoid ranting! Microsoft are not really evil! It's not like they're The Borg from Star Trek:

        ALTER FEDERATION DROP
        ALTER FEDERATION SPLIT
        DROP FEDERATION

    #tsql2sday

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I've had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should consider when determining which practices are most appropriate in a given context. I wanted to write up my thoughts on this subject in a little more detail, so here we go. If you compare any two software projects (specifically comparing their codebases) you'll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I'm specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick "drag-and-drop coding". For this blog post I'm going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won't bother going into them again here, but there are also (unnecessary) costs to over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill if a one-time-use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small, trivial application today, but these things always grow and stick around for many years, and then you're stuck maintaining a big ball of mud. I think this is a cop-out. Sure, you can't always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn't mean you can't manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times).

    My thought is that we should be making a conscious decision around the start of each project about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity - How complex is the application functionality?
    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic, expected to be in use for many years to come?

    Of course this isn't an exact science. You can't say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using.
    Rather, my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity based on the needs of that project. To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event, all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand, using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event's logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:

    - Importance - If this application were to stop working the business doesn't grind to a halt, revenue doesn't stop, and in fact our customers wouldn't even notice, since it isn't a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don't have any data loss).
    - Complexity - The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately; precisely the task that Access (and/or Excel) can do with minimal custom development required.
    - Life-Expectancy - For this specific project we're only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let's assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for the application. That may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case, the life-expectancy changed, but let's assume the importance and complexity didn't change all that much.
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly be able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a projects code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Financial planning among the works of Peggy Guggenheim

    - by user812481
    On June 22nd, in the fantastic setting of the Palazzo Venier dei Leoni in Venice, the CFO Executive meeting & event on Cash Flow Planning & Optimization took place. The event opened with a networking lunch, which allowed guests to enjoy the fantastic view from the palazzo's panoramic terrace overlooking the Grand Canal. During the sessions, Oracle and Reply Consulting, partners of the event, spoke about corporate finance strategy and the value of integrated economic, financial and balance-sheet planning. Thanks to the participation of Banca IMI, the topics of Business Plans, Sensitivity Analysis and Covenant Tests in structured finance transactions could be explored in depth. AITI (Associazione Italiana Tesorieri d'Impresa) closed the sessions by giving a 360° view of financial planning, explaining the strategic path required for the capital flows that support the business. Here is the list of presentations:

    - The value of integrated economic-financial-balance-sheet planning for the CFO in corporate governance processes - Lorenzo Mariani, Partner - Reply Consulting
    - Business Plans, Sensitivity Analysis and Covenant Tests in structured finance transactions: applications in the credit-granting and risk-monitoring phases - Gianluca Vittucci, Head of Structured Finance, Banca dei Territori - Banca IMI
    - From corporate finance strategy to operational planning: a complete and integrated view of the economic-financial-balance-sheet planning process - Edilio Rossi, EPM Business Development Manager, Italy - Oracle EMEA
    - Financial planning: a strategic path for optimizing capital flows toward business growth, and a fundamental process in relations with the banking system - Giovanni Ceci, AITI board member and Temporary Finance Manager - Associazione Italiana Tesorieri d'Impresa

    To view all the presentations, follow us on SlideShare. To see all the photos from the day, click here.

    Read the article

  • Where is the SQL Azure Development Environment

    - by BuckWoody
    Recently I posted an entry explaining that you can develop in Windows Azure without having to connect to the main service on the Internet, using the Software Development Kit (SDK), which installs two emulators - one for compute and the other for storage. That brought up the question of the same kind of thing for SQL Azure. The short answer is that there isn't one. While we'll keep making the development experience for all versions of SQL Server, including SQL Azure, easier to write against, you can simply treat it as another edition of SQL Server. For instance, many of us use the SQL Server Developer Edition - which in versions up to 2008 is actually the Enterprise Edition - to develop our code. We might write that code against all kinds of environments, from SQL Express through Enterprise Edition. We know which features work on a certain edition, what T-SQL it supports and so on, and develop accordingly. We then test on the actual platform to ensure the code runs as expected. You can simply fold SQL Azure into that same development process. When you're ready to deploy, if you're using SQL Server Management Studio 2008 R2 or higher, you can script out the database when you're done as a SQL Azure script (with change notifications where needed) by selecting the right "Engine Type" on the scripting panel. (Thanks to David Robinson for pointing this out and my co-worker Rick Shahid for the screen-shot - saved me firing up a VM this morning!) Will all this change? Will SSMS, "Data Dude" and other tools change to include SQL Azure? Well, I don't have a specific roadmap for those tools, but we're making big investments in Windows Azure and SQL Azure, so I can say that as time goes on, it will get easier. For now, make sure you know what features are and are not included in SQL Azure, and what T-SQL is supported. Here are a couple of references to help:

    General Guidelines and Limitations: http://msdn.microsoft.com/en-us/library/ee336245.aspx
    Transact-SQL Supported by SQL Azure: http://msdn.microsoft.com/en-us/library/ee336250.aspx
    SQL Azure Learning Plan: http://blogs.msdn.com/b/buckwoody/archive/2010/12/13/windows-azure-learning-plan-sql-azure.aspx

    Read the article

  • A Model for Planning Your Oracle BPM 10g Migration by Kris Nelson

    - by JuergenKress
    As the Oracle SOA Suite and BPM Suite 12c products enter beta, many of our clients are starting to discuss migrating from the Oracle 10g or prior platforms. With the BPM Suite 11g, Oracle introduced a major change in architecture, with a strong focus on integration with SOA and an entirely new technology stack. In addition, there were fresh new UIs and a renewed business focus, with an improved Process Composer and features like Adaptive Case Management. While very beneficial to both technology and the business, the fundamental change in architecture does pose clear migration challenges for clients who have made investments in the 10g platform. Some of the key challenges facing 10g customers include:

    - Managing in-process instance migration and running multiple process engines
    - Migration of User Interfaces and other code within the environment that may not be automated
    - Growing or finding technical staff with both 10g and 12c experience
    - Managing migration projects while continuing to move the business forward and meet day-to-day responsibilities

    As a former practitioner in a mixed 10g/11g shop, I wrestled with many of these challenges as we tried to plan ahead for the migration. Luckily, there is migration tooling on the way from Oracle and several approaches you can use in planning your migration efforts. In addition, you already have a defined and visible process on the current platform, which will be invaluable as you migrate.

    A Migration Model

    This model presents several options across a value and investment spectrum. The goal of the AVIO Migration Model is to kick-start discussions within your company and assist in creating a plan of action to take advantage of the new platform. As with all models, this is a framework for discussion, and certain processes or situations may not fit. Please contact us if you have specific questions or want to discuss migration efforts in your situation. Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Kris Nelson,ACM,Adaptive Case Management,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

    Read the article

  • The newest OPN Competency Center (OPN CC) enhancements are now available

    - by mseika
    The newest OPN Competency Center (OPN CC) enhancements are now available. This release is focused on a new look and simplified navigation, and on Resell Competency Tracking functionality. Some of the key features released include:

    1. New Look and Feel with Simplified Navigation
    Users are now one click away from the most valuable resources. Additionally, there are now focused areas which allow users to navigate more effectively. Users can review their individual achievements, create a dedicated Training Plan to broaden their knowledge, or use the new Company Corner to view their company's achievements. Your view as an Oracle employee has been modified, and the Company Corner will provide links to allow access to Partner Workgroups and other links specific to your partner data.

    2. Resell Competency Tracker
    This new functionality has been created to allow partners to track their progress toward becoming Resell Authorized. The Resell Competency Tracker highlights those Knowledge Zones where additional requirements must be achieved before Distribution Rights are granted, and allows partners to track their progress. This tracker is available to all users badged to a company ID, and also to internal Oracle employees who have existing access to Partner Workgroups.

    3. Enhanced Training Manager Functionality
    The existing Training Manager has been enhanced to allow partners to create Workgroups that are focused either on the competency requirements for becoming Specialized, or on the competency requirements needed to apply for resell authorization.

    Please mark your calendars and plan on joining an internal demonstration of these features and enhancements: Wednesday, September 26 @ 7am & 8pm PT. A public Beehive web conference has been scheduled. Intercall: 5210981 / 2423

    Read the article

  • Planning for the Recovery

    - by john.orourke(at)oracle.com
    As we plan for 2011, there are many positive signs in the global economy, but also some lingering issues. Planning is no longer about extrapolating past performance and adjusting for growth. It is now about constantly testing the temperature of the water, formulating scenarios, assessing risk and assigning probabilities. So how does one plan for recovery and improve forecast accuracy in such a volatile environment? Here are some suggestions from a recent article I wrote, which was published in the December Financial Planning & Analysis (FP&A) newsletter from the AFP (Association for Financial Professionals):

    - Increase the frequency of forecasting
    - Get more line managers involved in the planning and forecasting process
    - Re-consider what's being measured, i.e. key financial and operational metrics
    - Incorporate risk and probability into forecasts
    - Reduce reliance on spreadsheets; leverage packaged EPM applications

    To learn more about these best practices, check out the FP&A section of the AFP website and register to receive the FP&A newsletter. AFP recently launched a new topic area focused on the FP&A function and items of interest to this group of finance professionals. In addition to the quarterly FP&A newsletter, AFP will be publishing articles, running webinars, and will have an FP&A track in their annual conference, which is in Boston next November. Brian Kalish, AFP's Finance Lead, is hoping this initiative creates a valuable networking and information-sharing resource for FP&A professionals. Here's a link to the FP&A page on the AFP web site: http://www.afponline.org/pub/res/topics/topics_fpa.html If you register on the site you can access and subscribe to the FP&A newsletter and other resources. Best of luck in your planning for 2011 and beyond!

    Read the article
