Search Results

Search found 3874 results on 155 pages for 'differed execution'.


  • Using Lambdas for return values in Rhino.Mocks

    - by PSteele
    In a recent StackOverflow question, someone showed some sample code they’d like to be able to use. The particular syntax they used isn’t supported by Rhino.Mocks, but it was an interesting idea that I thought could be easily implemented with an extension method. Background: When stubbing a method return value, Rhino.Mocks supports the following syntax:

        dependency.Stub(s => s.GetSomething()).Return(new Order());

    The method signature is generic and therefore you get compile-time type checking that the object you’re returning matches the return value defined by the “GetSomething” method. You could also have Rhino.Mocks execute arbitrary code using the “Do” method:

        dependency.Stub(s => s.GetSomething()).Do((Func<Order>) (() => new Order()));

    This requires the cast though. It works, but isn’t as clean as the original poster wanted. They showed a simple example of something they’d like to see:

        dependency.Stub(s => s.GetSomething()).Return(() => new Order());

    Very clean, simple and no casting required. While Rhino.Mocks doesn’t support this syntax, it’s easy to add it via an extension method. The Rhino.Mocks “Stub” method returns an IMethodOptions<T>. We just need to accept a Func<T> and use that as the return value. At first, this would seem straightforward:

        public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
        {
            opts.Return(factory());
            return opts;
        }

    And this would work and would provide the syntax the user was looking for. But the problem with this is that you lose the late-bound semantics of a lambda. The Func<T> is executed immediately and stored as the return value. At the point you’re setting up your mocks and stubs (the “Arrange” part of “Arrange, Act, Assert”), you may not want the lambda executing – you probably want it delayed until the method is actually executed and Rhino.Mocks plugs in your return value. So let’s make a few small tweaks:

        public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
        {
            opts.Return(default(T)); // required for Rhino.Mocks on non-void methods
            opts.WhenCalled(mi => mi.ReturnValue = factory());
            return opts;
        }

    As you can see, we still need to set up some kind of return value or Rhino.Mocks will complain as soon as it intercepts a call to our stubbed method. We use the “WhenCalled” method to set the return value equal to the execution of our lambda. This gives us the delayed execution we’re looking for and a nice syntax for lambda-based return values in Rhino.Mocks. Technorati Tags: .NET, Rhino.Mocks, Mocking, Extension Methods

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backups to tape via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps perform backup and restore jobs via media management software such as Oracle Secure Backup (OSB). There are other storage managers, like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), which are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NetBackup. If we are using tapes for backup, ensure that the tape library and tape drives are compatible. Test Setup: 1. Install NB 7.5 master and media servers in Linux OS (NB 7.1 can also be used, but for testing purposes I used NB 7.5). 2. Install MEB 3.8, also in Linux OS. 3. Install the NB admin console on your Windows desktop and configure the NB master server from there. Note: Ensure that you have root user permission to install NetBackup. Configuration Steps for MEB and NB: Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 on the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console. That is, configure the storage units by specifying the storage unit name, disk type, media server name, etc. Create NetBackup policies that are user selectable, but please make sure that the policy type is "Oracle". Define the clients where MEB will be executed. Sometimes this will be a different host from where MEB is run, or sometimes the same media server to which NB and the tapes are attached. Once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag that tells MEB to stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface. Backup: The following parameters should also be ready for the execution. --sbt-lib-path: Path to the SBT library specific to the NetBackup MMS; the SBT lib for NetBackup is /usr/openv/netbackup/bin/libobk.so64. --sbt-environment: Environment variables that must be defined specific to NetBackup.
In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, and ORACLE_HOME=/export/home2/tmp/hema/mysql-server/.

    ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

Once the backup is completed successfully, it should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.

    ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir

Now apply the logs as usual, shut down the server, perform the restore, restart the server and check the data contents.

    ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log

    ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back

The NB console should show the 'Restore' job as done. If it doesn't, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps of the MEB and NB integration in the whitepaper here.

    Read the article

  • Lies, damned lies, and statistics Part 2

    - by Maria Colgan
    There was huge interest in our OOW session last year on Managing Optimizer Statistics. It seems statistics and the maintenance of them continue to baffle people. In order to help dispel the mysteries surrounding statistics management, we have created a two-part white paper series on Optimizer statistics. Part one of this series was released in November last year and describes in detail, with worked examples, the different concepts of Optimizer statistics. Today we have published part two of the series, which focuses on the best practices for gathering statistics, and examines specific use cases including the fears that surround histograms and statistics management of volatile tables like Global Temporary Tables. Here is a quick look at the Introduction and the start of the paper. You can find the full paper here. Happy Reading!

    Introduction: The Oracle Optimizer examines all of the possible plans for a SQL statement and picks the one with the lowest cost, where cost represents the estimated resource usage for a given plan. In order for the Optimizer to accurately determine the cost for an execution plan, it must have information about all of the objects (tables and indexes) accessed in the SQL statement as well as information about the system on which the SQL statement will be run. This necessary information is commonly referred to as Optimizer statistics. Understanding and managing Optimizer statistics is key to optimal SQL execution. Knowing when and how to gather statistics in a timely manner is critical to maintaining acceptable performance. This whitepaper is the second of a two-part series on Optimizer statistics. The first part of this series, Understanding Optimizer Statistics, focuses on the concepts of statistics and will be referenced several times in this paper as a source of additional information. This paper will discuss in detail when and how to gather statistics for the most common scenarios seen in an Oracle Database. The topics are: how to gather statistics, when to gather statistics, improving the efficiency of gathering statistics, when not to gather statistics, and gathering other types of statistics.

    How to gather statistics: The preferred method for gathering statistics in Oracle is to use the supplied automatic statistics-gathering job. The job collects statistics for all database objects that are missing statistics or have stale statistics, by running an Oracle AutoTask task during a predefined maintenance window. Oracle internally prioritizes the database objects that require statistics, so that the objects which most need updated statistics are processed first. The automatic statistics-gathering job uses the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure, which uses the same default parameter values as the other DBMS_STATS.GATHER_*_STATS procedures. The defaults are sufficient in most cases. However, it is occasionally necessary to change the default value of one of the statistics gathering parameters, which can be accomplished by using the DBMS_STATS.SET_*_PREF procedures.
Parameter values should be changed at the smallest scope possible, ideally on a per-object basis. You can find the full paper here. Happy Reading! +Maria Colgan
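
As a concrete illustration of the SET_*_PREF approach, here is a minimal sketch of setting a per-table preference. The schema, table and preference value below are made up for the example, not taken from the paper; substitute your own.

    BEGIN
      -- Raise the staleness threshold for one volatile table only,
      -- instead of changing the database-wide default.
      DBMS_STATS.SET_TABLE_PREFS(
        ownname => 'SH',            -- hypothetical schema
        tabname => 'SALES_STAGE',   -- hypothetical table
        pname   => 'STALE_PERCENT',
        pvalue  => '5');
    END;
    /

The same preference could be set with DBMS_STATS.SET_SCHEMA_PREFS or DBMS_STATS.SET_DATABASE_PREFS, but, in keeping with the guidance above, the narrower table-level scope is usually the better choice.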

    Read the article

  • Acr.ExtDirect &ndash; Part 1 &ndash; Method Resolvers

    - by Allan Ritchie
    One of the most important things for any open source library, in my opinion, is to be as open as possible while avoiding having your library become invasive to your code/business model design. I personally could never stand marking my business and/or data access code with attributes everywhere. XML also isn’t really a fav with too many people these days since it comes with a startup performance hit and requires runtime compiling. I find that there is a whole ton of communication libraries out there currently requiring this (i.e. WCF, RIA, etc). Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back the [ServiceContract] & [OperationContract] attributes from WCF if you choose. It goes beyond that though; there are 2 other “out-of-the-box” implementations – Convention based & XML Configuration.

    Convention – I don’t actually recommend using this one since it opens up all of your public instance methods to remote execution calls.
    XML Configuration – This isn’t so bad, but it requires you to enter all of your methods and their operation types into the Castle XML configuration &, as I said earlier, XML isn’t the fav these days.

    So what are your options if you don’t like attributes, convention, or XML configuration? Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution.

        public interface IDirectMethodResolver {
            bool IsServiceType(ComponentModel model, Type type);
            string GetNamespace(ComponentModel model);
            string[] GetDirectMethodNames(ComponentModel model);
            DirectMethodType GetMethodType(ComponentModel model, MethodInfo method);
        }

    Now to implement our own method resolver:

        public class TestResolver : IDirectMethodResolver {

            #region IDirectMethodResolver Members

            /// <summary>
            /// Determine if you are calling a service
            /// </summary>
            public bool IsServiceType(ComponentModel model, Type type) {
                return (type.Namespace == "MyBLL.Data");
            }

            /// <summary>
            /// Return the calling name for the client side
            /// </summary>
            public string GetNamespace(ComponentModel model) {
                return model.Name;
            }

            public string[] GetDirectMethodNames(ComponentModel model) {
                switch (model.Name) {
                    case "Products":
                        return new [] {
                            "GetProducts",
                            "LoadProduct",
                            "Save",
                            "Update"
                        };

                    case "Categories":
                        return new [] {
                            "GetProducts"
                        };

                    default:
                        throw new ArgumentException("Invalid type");
                }
            }

            public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) {
                if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update"))
                    return DirectMethodType.FormSubmit;

                else if (method.Name.StartsWith("Load"))
                    return DirectMethodType.FormLoad;

                else
                    return DirectMethodType.Direct;
            }

            #endregion
        }

    And there you have it, your own custom method resolver. Pretty easy and pretty open ended!

    Read the article

  • First Day of Data Integration Track at Oracle OpenWorld 2012

    - by Irem Radzik
    OpenWorld started full speed for us today with a great set of sessions in the Data Integration track. After the exciting keynote session on Oracle Database 12c in the morning; Brad Adelberg, VP of Development for Data Integration products, presented Oracle’s data integration product strategy. His session highlighted the new requirements for data integration to achieve pervasive and continuous access to trusted data. The new requirements and product focus areas presented in this session are: Provide access to any data at any source On premise or on cloud Enable zero downtime operations and maximum performance Leverage real-time data for accurate business insights And ensure high quality data is used across the enterprise During the session Brad walked over how Oracle’s data integration products, Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Data Quality, and Oracle Data Service Integrator, deliver on these requirements and how recent product releases build on this strategy. Soon after Brad’s session we heard from a panel of Oracle GoldenGate customers, St. Jude Medical, Equifax, and Bank of America, how they achieved zero downtime operations using Oracle GoldenGate. The panel presented different use cases of GoldenGate, from Active-Active replication to offloading reporting. Especially St. Jude Medical’s implementation, which involves the alert management system for patients that use their pacemakers, reminded me in some cases downtime of mission-critical systems can be a matter of life or death. It is very comforting to hear that GoldenGate delivers highly-reliable continuous availability for life-saving medical systems. In the afternoon, Nick Wagner from the Product Management team and I followed the customer panel with the review of Oracle GoldenGate 11gR2’s New Features.  Many questions we received from audience were about GoldenGate’s new Integrated Capture for Oracle Database and the enhanced Conflict Management features, as well as how GoldenGate compares to Oracle Streams. In addition to giving details on GoldenGate’s unique capability to capture changed data with a direct integration to the Oracle DBMS engine, we reminded the audience that enhancements to Oracle GoldenGate will continue, while Streams will be primarily maintained. Last but not least, Tim Garrod and Ryan Fonnett from Raymond James presented a unified real-time data integration solution using Oracle Data Integrator and GoldenGate for their operational data store (ODS). The ODS supports application services across the enterprise and providing timely data is a critical requirement. In this solution, Oracle GoldenGate does the log-based change data capture for Oracle Data Integrator’s near real-time data integration between heterogeneous systems. As Raymond James’ ODS supports mission-critical services for their advisors, the project team had to set up this integration environment to be highly available. During the session, Ryan and Tim explained how they use ODI to enable automated process execution and “always-on” integration processes. Their presentation included 2 demonstrations that focused on CDC patterns deployed with ODI and the automated multi-instance execution and monitoring. We are very grateful to Tim and Ryan for their very-well prepared presentation at OpenWorld this year. Day 2 (Tuesday) will be also a busy day in our track. 
In addition to the Fusion Middleware Innovation Awards ceremony at 11:45am at Moscone West 3001, we have the following DI sessions:
Real-World Operational Reporting Customer Panel – 11:45am, Moscone West 3005
Oracle Data Integrator Product Update and Future Strategy – 1:15pm, Moscone West 3005
High-volume OLTP with Oracle GoldenGate: Best Practices from Comcast – 1:15pm, Moscone West 3005
Everything You Need to Know about Monitoring Oracle GoldenGate – 5pm, Moscone West 3005
If you are at OpenWorld please join us in these sessions. For a full review of the data integration track at OpenWorld please see our Focus-On document.

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure its performance or view the SSRS execution log or catalog in detail. Here are a few simple queries that will give you insight into the system that you never had before.

    ACTIVE REPORTS: Have you ever seen your SQL Server performance take a nose dive due to a long-running report? If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server. Running this query will show you which reports are executing at a given time, and WHO is executing them.

        USE ReportServerNative

        SELECT runningjobs.computername,
               runningjobs.requestname,
               runningjobs.startdate,
               users.username,
               Datediff(s, runningjobs.startdate, Getdate()) / 60 AS 'Active Minutes'
        FROM   runningjobs
               INNER JOIN users
                 ON runningjobs.userid = users.userid
        ORDER  BY runningjobs.startdate

    SSRS CATALOG: We have all asked "What was the last thing that changed?", or better yet, "Who in the world did that?!". Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by whom.

        USE ReportServerNative

        SELECT DISTINCT catalog.PATH,
                        catalog.name,
                        users.username   AS [Created By],
                        catalog.creationdate,
                        users_1.username AS [Modified By],
                        catalog.modifieddate
        FROM   catalog
               INNER JOIN users
                 ON catalog.createdbyid = users.userid
               INNER JOIN users AS users_1
                 ON catalog.modifiedbyid = users_1.userid
               INNER JOIN executionlogstorage
                 ON catalog.itemid = executionlogstorage.reportid
        WHERE  ( catalog.name <> '' )

    SSRS EXECUTION LOG: Sometimes we need to know what was happening on the SSRS report server at a given time in the past. This query will help you do just that. You will need to set the timestart and timeend in the WHERE clause to suit your needs.

        USE ReportServerNative

        SELECT catalog.name AS report,
               e.username   AS [User],
               e.timestart,
               e.timeend,
               Datediff(mi, e.timestart, e.timeend) AS 'Time In Minutes',
               catalog.modifieddate AS [Report Last Modified],
               users.username
        FROM   catalog (nolock)
               INNER JOIN executionlogstorage e (nolock)
                 ON catalog.itemid = e.reportid
               INNER JOIN users (nolock)
                 ON catalog.modifiedbyid = users.userid
        WHERE  e.timestart >= Dateadd(s, -1, '03/31/2012')
               AND e.timeend <= Dateadd(DAY, 1, '04/02/2012')

    LONG RUNNING REPORTS: This query will show the longest running reports over a given time period. Note that the ">5" in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set. Adjust the threshold and start/end times to your liking. With this information in hand, you can better optimize your system by tweaking the longest running reports first.

        USE ReportServerNative

        SELECT e.instancename,
               catalog.PATH,
               catalog.name,
               e.username,
               e.timestart,
               e.timeend,
               Datediff(mi, e.timestart, e.timeend) AS [Minutes],
               e.timedataretrieval,
               e.timeprocessing,
               e.timerendering,
               e.[RowCount],
               users_1.username AS createdby,
               CONVERT(VARCHAR(10), catalog.creationdate, 101) AS 'Creation Date',
               users.username   AS modifiedby,
               CONVERT(VARCHAR(10), catalog.modifieddate, 101) AS 'Modified Date'
        FROM   executionlogstorage e
               INNER JOIN catalog
                 ON e.reportid = catalog.itemid
               INNER JOIN users
                 ON catalog.modifiedbyid = users.userid
               INNER JOIN users AS users_1
                 ON catalog.createdbyid = users_1.userid
        WHERE  ( e.timestart > '03/31/2012' )
               AND ( e.timestart <= '04/02/2012' )
               AND Datediff(mi, e.timestart, e.timeend) > 5
               AND catalog.name <> ''
        ORDER  BY [Minutes] DESC

    I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings. I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own. For instance, you may want a query to determine which reports are using which shared data sources. Work smarter, not harder!
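
    As a starting point for that last suggestion, here is a sketch of a shared data source query. This is my own assumption, not from the article: it relies on the DataSource table's Link column pointing at the shared data source's entry in Catalog, and on Type = 2 marking reports, which may vary between SSRS versions, so verify against your own report server database before relying on it.

        USE ReportServerNative

        SELECT reports.PATH,
               reports.name AS report,
               sources.name AS [Shared Data Source],
               sources.PATH AS [Data Source Path]
        FROM   catalog AS reports
               INNER JOIN datasource
                 ON reports.itemid = datasource.itemid   -- data source reference defined on the report
               INNER JOIN catalog AS sources
                 ON datasource.link = sources.itemid     -- the shared data source item it points to
        WHERE  reports.TYPE = 2                          -- assumed: 2 = report in the Catalog table
        ORDER  BY sources.name, reports.PATH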

    Read the article

  • Seizing the Moment with Mobility

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today’s Marketplace The notion of Facebook isn’t new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view the sooner that they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. 
Usually, when you're sitting at your desk working on a big desktop computer, you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila, Vice President, Oracle Applications Development

    Read the article

  • PASS Summit 2011 &ndash; Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011. I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday, which I couldn’t do. On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”. Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012. Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD. Hmpf!

    On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”. In Gail’s session, she went over how to fix bad plans and bad query patterns. Update your stale statistics!

    How to fix bad plans (sketched in T-SQL after this recap):
    - Use local variables – the optimizer can’t sniff them, so it’ll optimize for an “average” value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – the most common value you’ll pass

    How to fix bad query patterns:
    - Don’t use them – ha!
    - Catch-all queries: use dynamic SQL or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing. For the average Joe, she wouldn’t recommend these. Examples are query hints and plan guides.

    While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material. Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future.

    Next up was Dan’s PowerShell session. I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn’t quite grasp these concepts. I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up. I really enjoyed the Excel integration demo. It was very cool watching PowerShell build the spreadsheet in real time. I must look into this more! On a side note, I am jealous of Dan’s hair. Fabulous hair!

    Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for. He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz. I can’t wait to use this at my work, even on systems where I’ve been the primary DBA for years – maybe there’s something I’ve overlooked. We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out. He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don’t have to constantly update it when he makes a change. I think I’ll utilize his update code for some other challenges that we face at my work.
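
    A minimal T-SQL sketch of the plan fixes listed above, using a hypothetical dbo.Orders table (the table, columns, and the 'Open' literal are my own illustrations, not from Gail's demos):

        CREATE PROCEDURE dbo.GetOrdersByStatus
            @Status varchar(20)
        AS
        BEGIN
            -- Fix 1: local variable - the optimizer can't sniff it,
            -- so it optimizes for an "average" value.
            DECLARE @LocalStatus varchar(20) = @Status;
            SELECT OrderID, CustomerID, OrderDate
            FROM   dbo.Orders
            WHERE  OrderStatus = @LocalStatus;

            -- Fix 2: recompile this statement on every execution (CPU hit).
            SELECT OrderID, CustomerID, OrderDate
            FROM   dbo.Orders
            WHERE  OrderStatus = @Status
            OPTION (RECOMPILE);

            -- Fix 3: optimize for the most common value you expect to pass.
            SELECT OrderID, CustomerID, OrderDate
            FROM   dbo.Orders
            WHERE  OrderStatus = @Status
            OPTION (OPTIMIZE FOR (@Status = 'Open'));
        END;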

    Read the article

  • SQL SERVER – Last Two Days to Get FREE Book – Joes 2 Pros Certification 70-433

    - by pinaldave
    Earlier this week we announced that we will be giving away FREE SQL Wait Stats book to everybody who will get SQL Server Joes 2 Pros Combo Kit. We had a fantastic response to the contest. We got an overwhelming response to the offer. We knew there would be a great response but we want to honestly say thank you to all of you for making it happen. Rick and I want to make sure that we express our special thanks to all of you who are reading our books. The offer is still on and there are two more days to avail this offer. We want to make sure that everybody who buys our most selling combo kits, we will send our other most popular SQL Wait Stats book. Please read all the details of the offer here. The books are great resources for anyone who wants to learn SQL Server from fundamentals and eventually go on the certification path of 70-433. Exam 70-433 contains following important subject and the book covers the subject of fundamental. If you are taking the exam or not taking the exam – this book is for every SQL Developer to learn the subject from fundamentals.  Create and alter tables. Create and alter views. Create and alter indexes. Create and modify constraints. Implement data types. Implement partitioning solutions. Create and alter stored procedures. Create and alter user-defined functions (UDFs). Create and alter DML triggers. Create and alter DDL triggers. Create and deploy CLR-based objects. Implement error handling. Manage transactions. Query data by using SELECT statements. Modify data by using INSERT, UPDATE, and DELETE statements. Return data by using the OUTPUT clause. Modify data by using MERGE statements. Implement aggregate queries. Combine datasets. INTERSECT, EXCEPT Implement subqueries. Implement CTE (common table expression) queries. Apply ranking functions. Control execution plans. Manage international considerations. Integrate Database Mail. Implement full-text search. Implement scripts by using Windows PowerShell and SQL Server Management Objects (SMOs). Implement Service Broker solutions. Track data changes. Data capture Retrieve relational data as XML. Transform XML data into relational data. Manage XML data. Capture execution plans. Collect output from the Database Engine Tuning Advisor. Collect information from system metadata. Availability of Book USA - Amazon | India - Flipkart | Indiaplaza Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately none of these resources are unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response time, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it may affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new “self-tuning” system simplifies getting the proper number of threads and utilizing them.

    A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that allows you to manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline for, or prioritize, a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. There is a default work manager that is provided, and it should be sufficient for most applications. However there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response time and long-running processes like batch updates. Be aware that wrong configuration of work managers can lead to a performance penalty when you are expecting an improvement.

    We can define/configure work managers at:
    - Domain level: config.xml
    - Application level: weblogic-application.xml
    - Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application use weblogic.xml)

    We can use the following predefined rules/constraints to manage the work:
    - Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    - Response Time Request Class: Specifies a response time goal in milliseconds.
    - Context Request Class: Assigns request classes to requests based on context information.
    - Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    - Max Threads Constraint: Limits the number of concurrent threads executing requests.
    - Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.

    Let’s create a work manager for our application for long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones your long-running application is deployed to) and finish.
Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and allow them to finish.

    Read the article

  • How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study

    - by Ruud
    This white paper demonstrates the architected latency hiding features of Oracle’s UltraSPARC T2+ and SPARC T4 processors That is the first sentence from this technical white paper, but what does it exactly mean? Let's consider a very simple example, the computation of a = b + c. This boils down to the following (pseudo-assembler) instructions that need to be executed: load @b, r1 load @c, r2 add r1,r2,r3 store r3, @a The first two instructions load variables b and c from an address in memory (here symbolized by @b and @c respectively). These values go into registers r1 and r2. The third instruction adds the values in r1 and r2. The result goes into register r3. The fourth instruction stores the contents of r3 into the memory address symbolized by @a. If we're lucky, both b and c are in a nearby cache and the load instructions only take a few processor cycles to execute. That is the good case, but what if b or c, or both, have to come from very far away? Perhaps both of them are in the main memory and then it easily takes hundreds of cycles for the values to arrive in the registers. Meanwhile the processor is doing nothing and simply waits for the data to arrive. Actually, it does something. It burns cycles while waiting. That is a waste of time and energy. Why not use these cycles to execute instructions from another application or thread in case of a parallel program? That is exactly what latency hiding on the SPARC T-Series processors does. It is a hardware feature totally transparent to the user and application. As soon as there is a delay in the execution, the hardware uses these otherwise idle cycles to execute instructions from another process. As a result, the throughput capacity of the system improves because idle cycles are no longer wasted and therefore more jobs can be run per unit of time. This feature has been in the SPARC T-series from the beginning, so why this paper? The difference with previous publications on this topic is in the amount of detail given. How this all works under the hood is fully explained using two example programs. Starting from the assembly language instructions, it is demonstrated in what way these programs execute. To really see what is happening we go down to the processor pipeline level, where the gaps in the execution are, and show in what way these idle cycles are filled by other copies of the same program running simultaneously. Both the SPARC T4 as well as the older UltraSPARC T2+ processor are covered. You may wonder why the UltraSPARC T2+ is included. The focus of this work is on the SPARC T4 processor, but to explain the basic concept of latency hiding at this very low level, we start with the UltraSPARC T2+ processor because it is architecturally a much simpler design. From the single issue, in-order pipelines of this processor we then shift gears and cover how this all works on the much more advanced dual issue, out-of-order architecture of the T4. The analysis and performance experiments have been conducted on both processors. The results depend on the processor, but in all cases the theoretical estimates are confirmed by the experiments. If you're interested to read a lot more about this and find out how things really work under the hood, you can download a copy of the paper here. A paper like this could not have been produced without the help of several other people. I want to thank the co-author of this paper, Jared Smolens, for his very valuable contributions and our highly inspiring discussions. 
I'm also indebted to Thomas Nau (Ulm University, Germany), Shane Sigler and Mark Woodyard (both at Oracle) for their feedback on earlier versions of this paper. Karen Perkins (Perkins Technical Writing and Editing) and Rick Ramsey at Oracle were very helpful in providing editorial and publishing assistance.

    Read the article

  • SSIS Debugging Tip: Using Data Viewers

    - by Jim Giercyk
    When you have an SSIS package error, it is often very helpful to see the data records that are causing the problem.  After all, if your input has 50,000 records and 1 of them has corrupt data, it can be a chore.  Your execution results will tell you which column contains the bad data, but not which record…..enter the Data Viewer. In this scenario I have created a truncation error.  The input length of [lastname] is 50, but the output table has a length of 15.  When it runs, at least one of the records causes the package to fail.     Now what?  We can tell from our execution results that there is a problem with [lastname], but we have no idea WHICH record?     Let’s identify the row that is actually causing the problem.  First, we grab the oft’ forgotten Row Count shape from our toolbar and connect it to the error output from our input query.  Remember that in order to intercept errors with the error output, you must redirect them.     The Row Count shape requires 1 integer variable.  For our purposes, we will not reference the variable, but it is still required in order for the package to run.  Typically we would use the variable to hold the number of rows in the table and refer back to it later in our process.  We are simply using the Row Count as a “Dead End” for errors.  I called my variable RowCounter.  To create a variable, with no shapes selected, right-click on the background and choose Variable.     Once we have setup the Row Count shape, we can right-click on the red line (error output) from the query, and select Data Viewers.  In the popup, we click the add button and we will see this:     There are other fancier options we can play with, but for now we just want to view the output in a grid.  WE select Grid, then click OK on all of the popup windows to shut them down.  We should now see a grid with a pair of glasses on the error output line.     So, we are ready to catch the error output in a grid and see that is causing the problem!  This time when we run the package, it does not fail because we directed the error to the Row Count.  We also get a popup window showing the error record in a grid.  If there were multiple errors we would see them all.     Indeed, the [lastname] column is longer than 15 characters.  Notice the last column in the grid, [Error Code – Description].  We knew this was a truncation error before we added the grid, but if you have worked with SSIS for any length of time, you know that some errors are much more obscure.  The description column can be very useful under those circumstances! Data viewers can be used any time we want to see the data that is actually in the pipeline;  they stop the package temporarily until we shut them.  Also remember that the Row Count shape can be used as a “Dead End”.  It is useful during development when we want to see the output from a dataflow, but don’t want to update a table or file with the data.  Data viewers are an invaluable tool for both development and debugging.  Just remember to REMOVE THEM before putting your package into production

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This might lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue, but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid this issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
The script to understand the issue and the workaround is listed below:

    set nocount on
    set statistics time off
    set statistics io off
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article

  • The Rise of Project Intelligence and Why It Matters

    - by Melissa Centurio Lopes
    By Amy DeWolf

    Are you doing any of these in your organization? How are you leveraging historical data to forecast projects? There’s a lot going on in government today. The economic pressures agencies feel from the uncertainty of budget cuts and sequestration affect every part of an organization, including the Project Management Office (PMO). The PMO is responsible for monitoring and administering government IT projects. As time goes on, priorities shift, technology advances, and new regulations are imposed, all of which make planning and executing projects more difficult. For example, think about your own projects. How many boxes do you need to check and hoops do you need to jump through to ensure you comply with new regulations? While new regulations and technology advancements can be a good thing, they add an additional layer of complexity to already complex projects. To overcome some of these pressures, particularly new regulations, many in the PMO world are adopting a new approach: Project Intelligence (PI). According to a new Oracle Primavera white paper, The Rise of Project Intelligence: When Project Management is Just Not Enough, “PI uses Business Intelligence methods to leverage historical project data to make more informed decisions and greatly enhance project execution.” Currently, project managers plan and forecast the possible phases in an execution cycle. However, most project managers don’t have the proper tools to do this as effectively as they would like. As the white paper noted, “The underlying deficiencies in most forecasting approaches are that 1) the PM fails in most instances to leverage historical data and 2) the PM doesn’t employ current Business Intelligence tools.” PI seeks to overturn this by combining modeling tools used in Business Intelligence for projects with the understanding of Emotional Intelligence for managing people. Simply put, Project Intelligence is built on four main pillars:

    1. Actively use historical data to forecast project cycles
    2. Understand the intricacies of complex projects
    3. Enhance social and emotional intelligence in projects
    4. Actively use Business Intelligence tools

    Read our complimentary whitepaper and discover the importance of emotional intelligence and best practices for improving projects, specifically in terms of communication.

    Read the article

  • JavaOne 2012: Nashorn Edition

    - by $utils.escapeXML($entry.author)
    As with my JavaOne 2012: OpenJDK Edition post a while back (now updated to reflect the schedule of the talks), I find it convenient to have my JavaOne schedule ordered by subjects of interest. Besides OpenJDK in all its flavors, another subject I find very exciting is Nashorn. I blogged about the various material on Nashorn in the past, and we interviewed Jim Laskey, the Project Lead on Project Nashorn, in the Java Spotlight podcast. So without further ado, here are the JavaOne 2012 talks and BOFs with Nashorn in their title or abstract:

    CON5390 - Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM - Monday, Oct 1, 8:30 AM - 9:30 AM
    There are many implementations of JavaScript, meant to run either on the JVM or standalone as native code. Both approaches have their respective pros and cons. The Oracle Nashorn JavaScript project is based on the former approach. This presentation goes through the performance work that has gone on in Oracle’s Nashorn JavaScript project to date in order to make JavaScript-to-bytecode generation for execution on the JVM feasible. It shows that the new invoke dynamic bytecode gets us part of the way there but may not quite be enough. What other tricks did the Nashorn project use? The presentation also discusses future directions for increased performance for dynamic languages on the JVM, covering proposed enhancements to both the JVM itself and to the bytecode compiler.

    CON4082 - Nashorn: JavaScript on the JVM - Monday, Oct 1, 3:00 PM - 4:00 PM
    The JavaScript programming language has been experiencing a renaissance of late, driven by the interest in HTML5. Nashorn is a JavaScript engine implemented fully in Java on the JVM. It is based on the Da Vinci Machine (JSR 292) and will be available with JDK 8. This session describes the goals of Project Nashorn, gives a top-level view of how it all works, provides the current status, and demonstrates examples of JavaScript and Java working together.

    BOF4763 - Meet the Nashorn JavaScript Team - Tuesday, Oct 2, 4:30 PM - 5:15 PM
    Come to this session to meet the Oracle JavaScript (Project Nashorn) language team.

    BOF6661 - Nashorn, Node, and Java Persistence - Tuesday, Oct 2, 5:30 PM - 6:15 PM
    With Project Nashorn, developers will have a full and modern JavaScript engine available on the JVM. In addition, they will have support for running Node applications with Node.jar. This unique combination of capabilities opens the door for best-of-breed applications combining Node with Java SE and Java EE. In this session, you’ll learn about Node.jar and how it can be combined with Java EE components such as EclipseLink JPA for rich Java persistence. You’ll also hear about all of Node.jar’s mapping, caching, querying, performance, and scaling features.

    CON10657 - The Polyglot Java VM and Java Middleware - Thursday, Oct 4, 12:30 PM - 1:30 PM
    In this session, Red Hat and Oracle discuss the impact of polyglot programming from their own unique perspectives, examining non-Java languages that utilize Oracle’s Java HotSpot VM. You’ll hear a discussion of topics relating to Ruby, Lisp, and Clojure and the intersection of other languages where they may touch upon individual frameworks and projects, and you’ll get perspectives on JavaScript via the Nashorn Project, an upcoming JavaScript engine, developed fully in Java.

    CON5251 - Putting the Metaobject Protocol to Work: Nashorn’s Java Bindings - Thursday, Oct 4, 2:00 PM - 3:00 PM
    Project Nashorn is Oracle’s new JavaScript runtime in Java 8.
Being a JavaScript runtime running on the JVM, it provides integration with the underlying runtime by enabling JavaScript objects to manipulate Java objects, implement Java interfaces, and extend Java classes. Nashorn is invokedynamic-based, and for its Java integration, it does away with the concept of wrapper objects in favor of direct virtual machine linking to Java objects’ methods provided by a metaobject protocol, providing much higher performance than what could be expected from a scripting runtime. This session looks at the details of the integration, a topic of interest to other language implementers on the JVM and a wider audience of developers who want to understand how Nashorn works.That's 6 sessions tooting the Nashorn this year at JavaOne, up from 2 last year.

    Read the article

  • Perspective Is Everything

    - by juanlarios
    Sitting in a window seat on my way back from Seattle, I looked out and saw a large body of water. I was reminded of childhood memories of running as hard as I could through burning hot sand in anticipation of the splash of the ocean. From the window, the water appeared like a sheet draped over the land. I couldn't help but ponder how perspective changes everything.

    Over the last several days I had a chance to attend the MVP Summit in Redmond. I had a great time with fellow MVPs and the SharePoint Product Group. Although I can't say much about what was discussed and what is coming in the future, I want to share some realizations I had while experiencing the MVP Summit.

    The SharePoint product is ever-improving and full of innovation, but it is also a reactionary embodiment of MVP, client, and market feedback. Several features come to mind that clients complain about, where I have felt helpless in informing them that the features are not as mature as they would like. Together, we figure out a way to make it work and deal with the limitations. It became clear that some features have taken on a different purpose in the marketplace from the original vision. The SharePoint Product Group is working hard to react to these changes and make SharePoint better for real-life implementations.

    It is easy to think that SharePoint should be all things to all people. In reality, there are products that are very focused on specific composites; they do that one thing well but severely lack in other areas. It's easy sometimes to say, "What was Microsoft thinking with this feature?" The product group is doing all they can to make the moving pieces better and to deal with the challenges of having all of them work together. Sometimes the features don't fully embody the vision because of those many challenges, but trust me when I say the product group is really focused on delivery and innovation.

    As I was speaking with a fellow MVP throughout the session, we talked about the iPad 2 (ironically announced this past week, during the MVP Summit) and Microsoft's possible product answer; I realized the days of reactionary products from Microsoft are over. Many users will remember Vista and the painful execution of that product, but there has been a lot of success with Windows 7. There was no rush for a reactionary answer to the Nintendo Wii; as a result, a ground-breaking and game-changing product was brought to market: the Xbox Kinect! I can't say much here, but it's safe to say: expect innovation, and execution of products and technology that will change the market instead of reacting to it!

    There are many other things I learned, to do with perspective, technology, and more, that I would love to share, but this is as far as I can go in terms of details. This might not be new to you, or specifically the message that was shared during the summit. These are just my impressions of the event and the spirit of future vision. Great things ahead!

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At that point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors and most applications really need to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!n!) possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it also results in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why.

    Cheers, Tony.
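    As a concrete illustration of the interleaving problem described above, here is a minimal C# sketch (my own illustration, not part of the original editorial): two threads increment one shared counter, and because "counter++" is a separate load, add, and store, their instructions can interleave, updates get lost, and the printed total differs from run to run.

        using System;
        using System.Threading;

        class RaceConditionDemo
        {
            static int counter = 0;

            static void Increment()
            {
                for (int i = 0; i < 1000000; i++)
                {
                    // Racy: another thread can write between the load and the
                    // store of this read-modify-write, and its update is lost.
                    counter++;
                    // A correct (but contended) alternative would be:
                    // Interlocked.Increment(ref counter);
                }
            }

            static void Main()
            {
                Thread t1 = new Thread(Increment);
                Thread t2 = new Thread(Increment);
                t1.Start();
                t2.Start();
                t1.Join();
                t2.Join();

                // Rarely prints 2000000 on a multi-core machine: the outcome is
                // non-deterministic and depends on how the threads interleaved.
                Console.WriteLine("Final counter: " + counter);
            }
        }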

    Read the article

  • Passing a parameter using RelayCommand defined in the ViewModel (from Josh Smith example)

    - by eesh
    I would like to pass a parameter defined in the XAML (View) of my application to the ViewModel class by using the RelayCommand. I followed Josh Smith's excellent article on MVVM and have implemented the following.

    XAML code:

        <Button Command="{Binding Path=ACommandWithAParameter}"
                CommandParameter="Orange"
                HorizontalAlignment="Left"
                Style="{DynamicResource SimpleButton}"
                VerticalAlignment="Top"
                Content="Button"/>

    ViewModel code:

        public RelayCommand _aCommandWithAParameter;

        /// <summary>
        /// Returns a command with a parameter
        /// </summary>
        public RelayCommand ACommandWithAParameter
        {
            get
            {
                if (_aCommandWithAParameter == null)
                {
                    _aCommandWithAParameter = new RelayCommand(
                        param => this.CommandWithAParameter("Apple"));
                }
                return _aCommandWithAParameter;
            }
        }

        public void CommandWithAParameter(String aParameter)
        {
            String theParameter = aParameter;
        }

    I set a breakpoint in the CommandWithAParameter method and observed that aParameter was set to "Apple", not "Orange". This seems obvious, as the method CommandWithAParameter is being called with the literal String "Apple". However, looking up the execution stack, I can see that "Orange", the CommandParameter I set in the XAML, is the parameter value for the RelayCommand implementation of the ICommand Execute interface method. That is, the value of parameter in the method below on the execution stack is "Orange":

        public void Execute(object parameter)
        {
            _execute(parameter);
        }

    What I am trying to figure out is how to create the RelayCommand ACommandWithAParameter property such that it can call the CommandWithAParameter method with the CommandParameter "Orange" defined in the XAML. Is there a way to do this?

    Why do I want to do this? It is part of "on the fly" localization. In my particular implementation, I want to create a SetLanguage RelayCommand that can be bound to multiple buttons. I would like to pass the two-character language identifier ("en", "es", "ja", etc.) as the CommandParameter, defined for each "set language" button in the XAML. I want to avoid having to create a SetLanguageToXXX command for each supported language and hard-coding the two-character language identifier into each RelayCommand in the ViewModel.
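    Since the Execute(object parameter) method shown above simply forwards the CommandParameter to the _execute delegate, one way to get "Orange" through (a sketch based on the RelayCommand from the article, with an explicit cast because the parameter arrives as an object) is to let the lambda use its argument instead of the hard-coded literal:

        public RelayCommand ACommandWithAParameter
        {
            get
            {
                if (_aCommandWithAParameter == null)
                {
                    // "param" receives whatever CommandParameter the XAML supplies,
                    // so this call now sees "Orange" rather than a hard-coded value.
                    _aCommandWithAParameter = new RelayCommand(
                        param => this.CommandWithAParameter((String)param));
                }
                return _aCommandWithAParameter;
            }
        }

    For the localization scenario, the same pattern would let a single (hypothetical) SetLanguage command be bound to many buttons, each with its own CommandParameter ("en", "es", "ja", and so on), so no per-language command is needed.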

    Read the article

  • Stack Overflow Problem in DotNetNuke

    - by Vivek
    Hi, I'm getting this error message when I try to access my website. Can someone please tell me what is going on? Thanks. V

    Server Error in '/' Application.
    Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.]
       AspDotNetStorefrontExcelWrapper.ExcelToXml.SetLicense() +0
       AspDotNetStorefrontCommon.AppLogic.ApplicationStart() +150
       AspDotNetStorefrontDNNComponents.AppStart..cctor() +103

    [TypeInitializationException: The type initializer for 'AspDotNetStorefrontDNNComponents.AppStart' threw an exception.]
       AspDotNetStorefrontDNNComponents.AppStart.Execute() +0
       AspDotNetStorefront.HttpModules.InitializerModule.System.Web.IHttpModule.Init(HttpApplication context) +42
       System.Web.HttpApplication.InitModulesCommon() +65
       System.Web.HttpApplication.InitModules() +43
       System.Web.HttpApplication.InitInternal(HttpContext context, HttpApplicationState state, MethodInfo[] handlers) +729
       System.Web.HttpApplicationFactory.GetNormalApplicationInstance(HttpContext context) +298
       System.Web.HttpApplicationFactory.GetApplicationInstance(HttpContext context) +107
       System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +289

    Version Information: Microsoft .NET Framework Version:2.0.50727.3082; ASP.NET Version:2.0.50727.3082

    Read the article

  • The HTTP verb POST used to access path '/' is not allowed.

    - by Ryan
    The entire error:

    Server Error in '/' Application.
    The HTTP verb POST used to access path '/' is not allowed.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Web.HttpException: The HTTP verb POST used to access path '/' is not allowed.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [HttpException (0x80004005): The HTTP verb POST used to access path '/' is not allowed.]
       System.Web.DefaultHttpHandler.BeginProcessRequest(HttpContext context, AsyncCallback callback, Object state) +2871966
       System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8679410
       System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

    To be honest, I'm not even sure where the error came from. I'm running Visual Studio 2008 through the Virtual Server. I just put a button:

        <asp:Button ID="btnRegister" runat="server" Text="Register" CssClass="bt_register" onclick="btnRegister_Click" />

    on a login user control, and the onclick event is just a simple Response.Redirect:

        Response.Redirect("~/register.aspx");

    Debugging the project, it isn't even hitting the btnRegister_Click method anyway. I'm not sure where to even begin with debugging this error. Any information will help. I can post all the code I have, but like I said, I'm not sure where this error is even being thrown.

    Edit: It has nothing at all to do with the button click event. I got rid of the method and the onclick parameter on the aspx page, and the same error still comes up.

    Problem found: Okay, so this is for a school project and it's a group project. Someone in my group thought it would be a good idea to wrap a form tag around this area, telling it to post. Found it by doing a diff with a revision on Google Code.

    Read the article

  • The trust relationship between the primary domain and the trusted domain failed. ASP.NET 2.0

    - by Dasupalouie
    Has anyone run into this issue? Any help would be appreciated :)

    Server Error in '/CTCWeb' Application.
    The trust relationship between the primary domain and the trusted domain failed.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.SystemException: The trust relationship between the primary domain and the trusted domain failed.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [SystemException: The trust relationship between the primary domain and the trusted domain failed.]
       System.Security.Principal.NTAccount.TranslateToSids(IdentityReferenceCollection sourceAccounts, Boolean& someFailed) +1185
       System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean& someFailed) +44
       System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) +47
       System.Security.Principal.WindowsPrincipal.IsInRole(String role) +101
       System.Web.Configuration.AuthorizationRule.IsTheUserInAnyRole(StringCollection roles, IPrincipal principal) +123
       System.Web.Configuration.AuthorizationRule.IsUserAllowed(IPrincipal user, String verb) +256
       System.Web.Configuration.AuthorizationRuleCollection.IsUserAllowed(IPrincipal user, String verb) +199
       System.Web.Security.UrlAuthorizationModule.OnEnter(Object source, EventArgs eventArgs) +8771980
       System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +68
       System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75

    Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3053

    Read the article

  • asp.net:Invalid temp directory in chart handler configuration [c:\TempImageFiles\].

    - by veda
    I am getting the error "Invalid temp directory in chart handler configuration [c:\TempImageFiles\]." while running my code. Initially I was getting a "No http handler was found for request type 'GET'" error, which I solved by referring to "no http handler". But now I am getting the above error.

    The details of the error:

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.IO.DirectoryNotFoundException: Invalid temp directory in chart handler configuration [c:\TempImageFiles\].
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    The stack trace of this error:

    [DirectoryNotFoundException: Invalid temp directory in chart handler configuration [c:\TempImageFiles\].]
       System.Web.UI.DataVisualization.Charting.ChartHttpHandlerSettings.Inspect() +851
       System.Web.UI.DataVisualization.Charting.ChartHttpHandlerSettings.ParseParams(String parameters) +1759
       System.Web.UI.DataVisualization.Charting.ChartHttpHandlerSettings..ctor(String parameters) +619
       System.Web.UI.DataVisualization.Charting.ChartHttpHandler.InitializeParameters() +237
       System.Web.UI.DataVisualization.Charting.ChartHttpHandler.EnsureInitialized(Boolean hardCheck) +208
       System.Web.UI.DataVisualization.Charting.ChartHttpHandler.EnsureInstalled() +33
       System.Web.UI.DataVisualization.Charting.Chart.GetImageStorageMode() +57
       System.Web.UI.DataVisualization.Charting.Chart.Render(HtmlTextWriter writer) +257
       System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +144
       System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +583
       System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +91
       System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +410
       System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +118
       System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer) +489
       System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +84
       System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) +713
       System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +144
       System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +583
       System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +91
       System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) +91
       System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +410
       System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +118
       System.Web.UI.Control.Render(HtmlTextWriter writer) +60
       System.Web.UI.Page.Render(HtmlTextWriter writer) +66
       System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +144
       System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +583
       System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +91
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +7761

    Can anyone tell me how to solve this problem? Do I have to create the temporary directory manually, or what should I do?
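    For what it's worth, and this is an assumption drawn from the exception text rather than anything stated above: the chart HTTP handler reads its temp-directory setting from the ChartImageHandler key in web.config's appSettings, and it throws this exception when the configured directory does not exist or is not writable by the application pool identity. So either create c:\TempImageFiles\ manually and grant write access, or point the handler at a folder that already exists, roughly like this (c:\TempCharts\ is a hypothetical path; there is also an in-memory storage mode if writing to disk is undesirable):

        <appSettings>
          <!-- "dir" must name a folder that exists and that the application pool
               identity can write to; adjust the path for your server. -->
          <add key="ChartImageHandler"
               value="storage=file;timeout=20;dir=c:\TempCharts\;" />
        </appSettings>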

    Read the article

  • Why am I getting: InvalidOperationException: Failed to map the path '/app42/App_Code/'.

    - by serialhobbyist
    I've been working on a little Silverlight utility which calls a Silverlight web service. It works from my dev machine (XP SP2). I've tried publishing it to a 2008 R2 IIS 7.5 server, and it doesn't work when trying to contact the web service. So I've tried using the WcfTestClient to connect to the web service. That gave an error. So I turned off CustomErrors, used IE, and I get the following. Any idea why? There's no App_Code folder in the app.

    Failed to map the path '/app42/App_Code/'.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.InvalidOperationException: Failed to map the path '/app42/App_Code/'.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [InvalidOperationException: Failed to map the path '/app42/App_Code/'.]
       System.Web.Configuration.ProcessHostConfigUtils.MapPathActual(String siteName, VirtualPath path) +320
       System.Web.Configuration.ProcessHostServerConfig.System.Web.Configuration.IServerConfig.MapPath(IApplicationHost appHost, VirtualPath path) +34
       System.Web.Hosting.MapPathBasedVirtualPathEnumerator..ctor(VirtualPath virtualPath, RequestedEntryType requestedEntryType) +169
       System.Web.Hosting.MapPathBasedVirtualPathCollection.System.Collections.IEnumerable.GetEnumerator() +43
       System.Web.Compilation.CodeDirectoryCompiler.ProcessDirectoryRecursive(VirtualDirectory vdir, Boolean topLevel) +147
       System.Web.Compilation.CodeDirectoryCompiler.GetCodeDirectoryAssembly(VirtualPath virtualDir, CodeDirectoryType dirType, String assemblyName, StringSet excludedSubdirectories, Boolean isDirectoryAllowed) +11196502
       System.Web.Compilation.BuildManager.CompileCodeDirectory(VirtualPath virtualDir, CodeDirectoryType dirType, String assemblyName, StringSet excludedSubdirectories) +185
       System.Web.Compilation.BuildManager.CompileCodeDirectories() +654
       System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +658

    [HttpException (0x80004005): Failed to map the path '/app42/App_Code/'.]
       System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +76
       System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +1012
       System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +1025

    [HttpException (0x80004005): Failed to map the path '/app42/App_Code/'.]
       System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +11301302
       System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +88
       System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +4338644

    Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927

    Read the article

  • How to use SMO.Scripter to generate a "full-script" of DB?

    - by ssg
    What I'm trying to do is a very simple task: I'd like to create a script that generates a database along with its tables, SPs, and UDFs. This is done with a couple of clicks in the SSMS interface. However, db.Script() only scripts CREATE DATABASE.

    OK, so I iterate over the objects one by one and script them individually. Now what I have is an arbitrary order of CREATEs, naturally failing during execution because dependent objects aren't created first. OK, so I set the WithDependencies flag so dependent objects ARE scripted first. However, this produces redundant CREATE scripts for objects that are already created, causes around 20x growth in the SQL file and in generation time, and still hits errors during execution. I don't know if there is a way to mark objects as "already walked in the dependency tree"; it doesn't seem likely.

    I might be missing a bigger picture somewhere, but MSDN recommends "Scripter" to generate scripts like the one I want. I had used the Transfer class before to transfer table definitions, but it fails to create a failsafe script, and it doesn't make sense to use a Transfer object to generate a script anyway. I want to do this the way it should be done, and without losing my faith in SMO.
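    One way around the duplicate CREATEs (a sketch, not a tested solution; the exact overloads and namespaces shift a little between SMO versions) is to let the Scripter discover the dependency tree once, flatten it into creation order with WalkDependencies, and then script each object exactly once with WithDependencies switched off:

        using System;
        using System.Collections.Generic;
        using System.Collections.Specialized;
        using Microsoft.SqlServer.Management.Smo;
        using Microsoft.SqlServer.Management.Sdk.Sfc;

        class ScriptDatabase
        {
            static void Main()
            {
                Server server = new Server("localhost");        // assumed server name
                Database db = server.Databases["MyDatabase"];   // assumed database name

                // Collect the URNs of everything that should end up in the script.
                List<Urn> urns = new List<Urn>();
                foreach (Table t in db.Tables)
                    if (!t.IsSystemObject) urns.Add(t.Urn);
                foreach (StoredProcedure sp in db.StoredProcedures)
                    if (!sp.IsSystemObject) urns.Add(sp.Urn);
                foreach (UserDefinedFunction f in db.UserDefinedFunctions)
                    if (!f.IsSystemObject) urns.Add(f.Urn);

                Scripter scripter = new Scripter(server);
                scripter.Options.WithDependencies = false;      // ordering is handled below

                // Discover dependencies once, then flatten the tree into an ordered
                // list in which every object appears after the objects it depends on.
                DependencyTree tree = scripter.DiscoverDependencies(urns.ToArray(), true);
                DependencyCollection ordered = scripter.WalkDependencies(tree);

                // Script each object exactly once, in dependency order. The walked
                // list can contain parents (such as the database itself) that were
                // not in the original URN list, so filter if necessary.
                foreach (DependencyCollectionNode node in ordered)
                {
                    StringCollection batch = scripter.Script(new Urn[] { node.Urn });
                    foreach (string statement in batch)
                    {
                        Console.WriteLine(statement);
                        Console.WriteLine("GO");
                    }
                }
            }
        }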

    Read the article

  • properties-maven-plugin: Error loading properties-file

    - by yournamehere
    I want to extract all the properties from my pom.xml into a properties file. These are common properties like dependency versions, plugin versions, and directories. I'm using the properties-maven-plugin, but it's not working as I want it to.

    The essential part of my pom.xml:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>properties-maven-plugin</artifactId>
          <version>1.0-alpha-1</version>
          <executions>
            <execution>
              <phase>initialize</phase>
              <goals>
                <goal>read-project-properties</goal>
              </goals>
              <configuration>
                <files>
                  <file>${basedir}/pom.properties</file>
                </files>
              </configuration>
            </execution>
          </executions>
        </plugin>

    Now when I run "mvn properties:read-project-properties" I get the following error:

        [INFO] One or more required plugin parameters are invalid/missing for 'properties:read-project-properties'
        [0] Inside the definition for plugin 'properties-maven-plugin' specify the following:
        <configuration>
          ...
          <files>VALUE</files>
        </configuration>.

    The pom.properties file is located in the same directory as the pom.xml. What can I do to let the properties-maven-plugin read my properties file?
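    A likely explanation, and this is an assumption about how Maven resolves plugin configuration rather than something stated above: configuration nested inside a specific <execution> only applies when that execution runs as part of its bound phase, so a direct command-line invocation such as "mvn properties:read-project-properties" never sees the <files> list. Moving the <configuration> up to the plugin level (or simply running "mvn initialize" so the bound execution fires) should work in both cases. A sketch:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>properties-maven-plugin</artifactId>
          <version>1.0-alpha-1</version>
          <!-- Plugin-level configuration is visible both to the bound execution
               and to direct CLI invocations of the goal. -->
          <configuration>
            <files>
              <file>${basedir}/pom.properties</file>
            </files>
          </configuration>
          <executions>
            <execution>
              <phase>initialize</phase>
              <goals>
                <goal>read-project-properties</goal>
              </goals>
            </execution>
          </executions>
        </plugin>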

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >