Search Results

Search found 837 results on 34 pages for 'jim s'.


  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!
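    A quick way to corroborate the execution-plan comparison is to capture timing and I/O statistics for each variant. The snippet below is my own minimal sketch, not part of the original post; it assumes the same SQLSkills sample database, wraps the grouped SELECT from above, and the same wrapper can be repeated around the sub-query, CROSS APPLY, CTE, and table-variable versions:

    -- Report elapsed time and logical reads for this statement in the Messages tab.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    SELECT  p.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            COUNT(Payment_No) 'Number Of Payments' ,
            AVG(payment_amt) 'Average Payment' ,
            SUM(payment_amt) 'Total Paid'
    FROM    dbo.payment p
            JOIN dbo.member m ON m.member_no = p.member_no
    GROUP BY p.member_no , m.lastname , m.firstname
    HAVING  COUNT(Payment_No) > 1;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;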

    Read the article

  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the .NET 3 Framework, Microsoft introduced the concept of anonymous types, which provide a way to create a quick, compiler-generated types at the point of instantiation.  These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope. Creating an Anonymous Type In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties base on their names, number, types, and order given at initialization.  In addition to just holding these properties, it is also given appropriate overridden implementations for Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing.  Also overridden is an implementation of ToString() which makes it easy to display the contents of an anonymous type instance in a fairly concise manner. To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type.  So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this: 1: var point = new { X = 13, Y = 7 }; Note the similarity between anonymous type initialization and regular initialization.  The main difference is that the compiler generates the type name and the properties (as readonly) based on the names and order provided, and inferring their types from the expressions they are assigned to. It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type.  This is important, because while these two instances share the same anonymous type: 1: // same names, types, and order 2: var point1 = new { X = 13, Y = 7 }; 3: var point2 = new { X = 5, Y = 0 }; These similar ones do not: 1: var point3 = new { Y = 3, X = 5 }; // different order 2: var point4 = new { X = 3, Y = 5.0 }; // different type for Y 3: var point5 = new {MyX = 3, MyY = 5 }; // different names 4: var point6 = new { X = 1, Y = 2, Z = 3 }; // different count Limitations on Property Initialization Expressions The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function.  For example, the following are illegal: 1: // Null can't be used directly. Null reference of what type? 2: var cantUseNull = new { Value = null }; 3:  4: // Anonymous methods cannot be used. 5: var cantUseAnonymousFxn = new { Value = () => Console.WriteLine(“Can’t.”) }; Note that the restriction on null is just that you can’t use it directly as the expression, because otherwise how would it be able to determine the type?  You can, however, use it indirectly assigning a null expression such as a typed variable with the value null, or by casting null to a specific type: 1: string str = null; 2: var fineIndirectly = new { Value = str }; 3: var fineCast = new { Value = (string)null }; All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.  
In these cases, when a field, property, or variable is used alone, and you don’t specify a property name assigned to it, the new property will have the same name.  For example: 1: int variable = 42; 2:  3: // creates two properties named varriable and Now 4: var implicitProperties = new { variable, DateTime.Now }; Is the same type as: 1: var explicitProperties = new { variable = variable, Now = DateTime.Now }; But this only works if you are using an existing field, variable, or property directly as the expression.  If you use a more complex expression then the name cannot be inferred: 1: // can't infer the name variable from variable * 2, must name explicitly 2: var wontWork = new { variable * 2, DateTime.Now }; In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly. ToString() on Anonymous Types One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except actual values instead of expressions as appropriate of course). For example, if you had: 1: var point = new { X = 13, Y = 42 }; And then print it out: 1: Console.WriteLine(point.ToString()); You will get: 1: { X = 13, Y = 42 } While this isn’t necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy to read format. Comparing Anonymous Type Instances Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes.  For example, if we had the following 3 points: 1: var point1 = new { X = 1, Y = 2 }; 2: var point2 = new { X = 1, Y = 2 }; 3: var point3 = new { Y = 2, X = 1 }; If we compare point1 and point2 we’ll see that Equals() returns true because they overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same.   In addition, because all equal objects should have the same hash code, we’ll see that the hash codes evaluate to the same as well: 1: // true, same type, same values 2: Console.WriteLine(point1.Equals(point2)); 3:  4: // true, equal anonymous type instances always have same hash code 5: Console.WriteLine(point1.GetHashCode() == point2.GetHashCode()); However, if we compare point2 and point3 we get false.  Even though the names, types, and values of the properties are the same, the order is not, thus they are two different types and cannot be compared (and thus return false).  And, since they are not equal objects (even though they have the same value) there is a good chance their hash codes are different as well (though not guaranteed): 1: // false, different types 2: Console.WriteLine(point2.Equals(point3)); 3:  4: // quite possibly false (was false on my machine) 5: Console.WriteLine(point2.GetHashCode() == point3.GetHashCode()); Using Anonymous Types Now that we’ve created instances of anonymous types, let’s actually use them.  The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.  
The main thing, once again, to keep in mind is that the properties are readonly, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance: 1: var point = new { X = 13, Y = 42 }; We can get the properties as you’d expect: 1: Console.WriteLine(“The point is: ({0},{1})”, point.X, point.Y); But we cannot alter the property values: 1: // compiler error, properties are readonly 2: point.X = 99; Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope.  The only real choices are to pass them as object or dynamic.  But really that is not the intention of using anonymous types.  If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Type – i.e. a class that contains just properties to hold data with little/no business logic) instead. Given that, why use them at all?  Couldn’t you always just create a POCO to represent every anonymous type you needed?  Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to when to use anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is more long-lived and used outside of the current scope, consider a POCO instead. So what do we mean by intermediate results in a local context?  Well, a classic example would be filtering down results from a LINQ expression.  For example, let’s say we had a List<Transaction>, where Transaction is defined something like: 1: public class Transaction 2: { 3: public string UserId { get; set; } 4: public DateTime At { get; set; } 5: public decimal Amount { get; set; } 6: // … 7: } And let’s say we had this data in our List<Transaction>: 1: var transactions = new List<Transaction> 2: { 3: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m }, 4: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m }, 5: new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m }, 6: new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m }, 7: new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m }, 8: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m }, 9: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m }, 10: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m }, 11: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m }, 12: }; So let’s say we wanted to get the transactions for each day for each user.  That is, for each day we’d want to see the transactions each user performed.  We could do this very simply with a nice LINQ expression, without the need of creating any POCOs: 1: // group the transactions based on an anonymous type with properties UserId and Date: 2: byUserAndDay = transactions 3: .GroupBy(tx => new { tx.UserId, tx.At.Date }) 4: .OrderBy(grp => grp.Key.Date) 5: .ThenBy(grp => grp.Key.UserId); Now, those of you who have attempted to use custom classes as a grouping type before (such as GroupBy(), Distinct(), etc.) 
may have discovered the hard way that LINQ gets a lot of its speed by utilizing not on Equals(), but also GetHashCode() on the type you are grouping by.  Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations or you won’t get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference equality and reference identity based respectively). As we said before, it turns out that anonymous types already do these critical overrides for you.  This makes them even more convenient to use!  Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties. Now, we can look at our results: 1: foreach (var group in byUserAndDay) 2: { 3: // the group’s Key is an instance of our anonymous type 4: Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date); 5:  6: // each grouping contains a sequence of the items. 7: foreach (var tx in group) 8: { 9: Console.WriteLine("\t{0}", tx.Amount); 10: } 11: } And see: 1: Jaime on 06/18/2012 did: 2: -100.00 3: 300.00 4:  5: John on 06/19/2012 did: 6: 300.00 7:  8: Jim on 06/20/2012 did: 9: 900.00 10:  11: Jane on 06/21/2012 did: 12: 200.00 13: -50.00 14:  15: Jim on 06/21/2012 did: 16: 2200.00 17: -1100.00 18:  19: John on 06/21/2012 did: 20: -10.00 Again, sure we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with so many extra lines and been more difficult to maintain if the properties change.  Summary Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result.  While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and us intermediary values for further expressions and analysis. Anonymous types are defined by the compiler based on the number, type, names, and order of properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()) which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc. Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ
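    Building on the grouping example above, anonymous types are also handy for projecting the intermediate aggregates themselves. The following is a small sketch of my own (not from the article) that reuses the Transaction class and transactions list defined above:

    // Project each user/day group into an anonymous type holding summary values.
    var dailyTotals = transactions
        .GroupBy(tx => new { tx.UserId, tx.At.Date })
        .Select(grp => new
        {
            grp.Key.UserId,          // implicit property names from the member accesses
            grp.Key.Date,
            Count = grp.Count(),
            Total = grp.Sum(tx => tx.Amount)
        })
        .OrderBy(day => day.Date)
        .ThenBy(day => day.UserId);

    foreach (var day in dailyTotals)
    {
        // The generated ToString() would also print all four properties at once.
        Console.WriteLine("{0} on {1:MM/dd/yyyy}: {2} transaction(s) totaling {3}",
            day.UserId, day.Date, day.Count, day.Total);
    }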

    Read the article

  • GrapeCity & ComponentOne Merger

    - by Jim Duffy
    Big news in the software component industry today… my good friends over at GrapeCity have unofficially announced (official announcement is tomorrow June 11, 2012) that they have acquired ComponentOne and will be merging the two companies. Yes, the people who bring you Spread.Net and ActiveReports have merged with one of the market’s leading developer component vendors and will continue to do business under the ComponentOne brand. I think this move will propel the company to new heights in the component market and I look forward to working with them as they continue being an industry leader. Congratulations guys! UPDATE: Additional information available here: http://our.componentone.com/2012/06/06/thenewcomponentone/. The new company will be called ComponentOne, a Division of GrapeCity.

    Read the article

  • Visual Studio LightSwitch: Yes, these are the droids you’re looking for

    - by Jim Duffy
    With all the news and focus on the new features coming in Silverlight 5, I thought I’d take a few minutes to remind folks about the work that Microsoft has done on LightSwitch, since the applications created by LightSwitch are Silverlight applications. LightSwitch makes it easier for non-coders to build business applications and easier for coders to maintain them. For those not familiar with LightSwitch, it is a new tool that provides an easier and quicker way for coder and non-coder types alike to create line-of-business applications for the desktop, the web, and the cloud. The target audience for this tool is the power-user types who create Access applications for their organization. While those Access applications fill an immediate need, they typically aren’t very scalable, extendable and/or maintainable by the development staff of the organization. LightSwitch creates applications based on technologies built into Visual Studio, thus making it easier for corporate developers to extend and maintain them. LightSwitch is currently in beta, but it will ultimately become a new addition to the Visual Studio line of products. Go ahead and download the beta to get a better idea of what the product can do for your organization. The LightSwitch Developer Center contains links to download the beta, instructional videos, tutorials, and the LightSwitch Training Kit. Another quality resource for LightSwitch information is the Visual Studio LightSwitch Team Blog. My good friend Beth Massi is on the LightSwitch team and has additional valuable content on her blog. Have a day.

    Read the article

  • How to port email from evolution to thunderbird?

    - by jim
    I updated Ubuntu to 11.10 using the update notification. I am also switching from Xubuntu to Ubuntu with the GNOME interface. I have been using Evolution for years and would like to port the emails to Thunderbird. I have looked at the similar questions with no luck, and at the Thunderbird help on manually importing. Most of these assume that the Evolution file structure is similar to the Thunderbird file structure. When I set up Thunderbird it seems to have imported the contacts from Evolution (and actually removed them from Evolution); however, no mail got transferred. I found the Evolution mail in ~/.local/share/evolution/mail/local. This has folders.db and 3 directories - cur, tmp, and new. Then there are the hidden files and directories. Each directory has three related files with extensions .cmeta, .ibex.index, and .ibex.index.data. Then all the directories have files that seem to contain the individual messages. I have not found any rhyme or reason to the file numbering/naming scheme. Is there a nice way to import these files?

    Read the article

  • Disaster Recovery Plan&ndash;Rebuild System Disk (Dell Server 2900 with PERC RAID controller)

    - by Jim Lahman
    Goal: Since the system disk is a RAID 1 mirrored set, we can rebuild the shadow set by replacing one of the good sets with a blank disk Steps Shutdown and power down server Remove the disk from bay 9, which is part of the system shadow set. Put this disk on the shelf Insert blank/old disk into the empty bay     Label the new disk before inserting it into the empty bay       Power up server During the booting process, the following message appears: “Some configured disks have been removed from your system…”       Press ‘C’ to Load Configuration utility             Press 'Y' to confirm to load the foreign configuration       In this example, the system shadow set is Disk Group 2.  (Before proceeding, confirm this is the disk group in your case).  Expanding the physical disks shows a disk in bay 8 and a missing disk in bay 9.  This is correct.   Now, we have to include the new inserted disk in this group       RAID controller reporting bay 9 is empty       There may be times when the new disk is seen as a foreign disk.  In this case, do the following:     Foreign disk is reported in bay 9 CTRL-N (Next Page) to Foreign Mgt All the disk groups will be displayed.  Typically, the disk group containing the foreign disk will be grey.  To remove the foreign disk Highlight Controller Press F2 Select Foreign Select Clear (do NOT import the configuration!)       Clear the foreign configuration Now the disk can be brought into the system shadow set disk group as a hot spare   To include the newly inserted disk into the system shadowset disk group, it must be brought in as a hot spare Highlight Disk Group 2 (VD Management) Hit F2 Select 'Manage Ded. HS'     Manage dedicated hot swap Select the disk in bay 9 (Hit space bar to select) Tab to 'OK'.  Hit the return key     Select hot spare to bring into RAID 1 mirror set   Rebuild automatically commences     Rebuild in process   Restart now or restart after rebuild is completed

    Read the article

  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
    The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years. Enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due diligence processes as performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstration of basic software functionality. I see a lot of attention paid to the cost element of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance--the ability of a vendor to grow, expand, and develop, and bring its customers along with it. So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. The annual investment has been on a run rate of almost $3 Billion organic product development. Oracle's well-publicized acquisitions and mergers have been supplemental to its R&D. This is important for Higher Education. Another meaningful way to evaluate a company is to look at the tangible track record of enhancement. Consider the Oracle-PeopleSoft enterprise business platform since acquired by Oracle 6 years ago: Product or Technology Enhancement Customer or User Impact Service Oriented Architecture (SOA) 300+ new web services delivered in versions 9.0 & 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services. Constituent Relationship Management PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360 view of the constituent is now delivered, and the concept of a single-stop Student Services Center is now in CRM 9.1 with tight integration to PeopleSoft Campus Solutions. Human Capital Management Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing - Time & Labor - Payroll to serve the labor distribution requirements for Grants / Sponsored Research. Talent Management PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in "Profile Manager", with all new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered configurable selection and on-boarding processes. Interview scheduling, and online job offers are newly delivered processes. Performance Management PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an Organization Chart view, your business goals can flow down to become tangible objectives per employee. Succession Planning / Workforce Development New in HCM 9.0, enhanced in 9.1, is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning, which ultimately increases the integrity of the succession planning process (identify their career needs, plans, preferences, and interests). 
Dashboards / Oracle Business Intelligence Application Suite Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance. Oracle Human Resources Analytics delivers 9 dashboards and over 200 reports. Provide your HR professionals and front-line managers the tools to analyze workforce staffing, retention, productivity, to better source high-quality applicants, and to reduce absence costs. Multi-year Planning and Commitment Control External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed Real Time Budget Checking will provide the customer with an updated snapshot of their budget and encumbrances at any given time. Position Budgeting with Hyperion Hyperion Planning world-class products now include delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion. Web 2.0 features for the latest in usability PeopleSoft 9.1 features a contemporary internet user experience: Partial-page refreshing Drag and drop pagelets New menu structure Navigation pagelets Modal popup message windows Favorites & recently used links Type-ahead Drag and drop grid columns, pop-out grids Portal Workspaces Enterprise 2.0 for your collaborative web communities, using new content management, along with Wikis, blogs, and discussion forums in PeopleSoft Portal 9.1. PeopleTools enhanced by Oracle Fusion Middleware Standards-based tools have been added to the PeopleTools application infrastructure: BI (XML) Publisher, Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle Weblogic Server, Oracle SOA Suite. Hosting for PeopleSoft applications A solid new deployment option: Oracle On Demand remote hosting center for high scalability, security, and continuity of operations. Business Process Outsourcing (BPO) for HCM / Payroll functions Partnership with AT&T provides hosting of HR/Payroll application along with payroll business process operations, and subscription-based service fees (SaaS). AT&T BPO full service includes pay sheet processing, bank and 3rd party file transfer, payroll tax handling, etc. Continuous Delivery Model Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without need to perform upgrade. Golden person data model across all campus applications Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/ Payroll, Student Information System, Housing, Emergency Contact, LMS, CRM. Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, as new functionality is under development for a new version of a PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned on an annual basis in Feature Packs. PeopleSoft just delivered the HCM 2010 Feature Pack and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases. For those of you near New York City The investment and progressive development story described above is the subject of an Oracle road show event on February 9, 2011. 
Charting Your Course with Oracle Applications is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry. THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.

    Read the article

  • .Net Rocks Visual Studio 2010 Road Trip coming to Raleigh, NC May 6th

    - by Jim Duffy
    Listen up .NET developers within 50 miles of Research Triangle Park, NC!  Take out that red, blue, green, black or any other color Sharpie marker you fancy and circle May 6th! Fellow Microsoft Regional Directors Carl Franklin and Richard Campbell are going to be bringing the .Net Rocks Visual Studio 2010 Road Trip to town. What’s that you say, you’ve never been to a .Net Rocks Road Trip event and don’t know what to expect? Let me help with that. I stol… uhhh… I mean I was “inspired” by some content I found on the event information page. “Carl and Richard are loading up the DotNetMobile (a 30 foot RV) and driving to your town again to show off their favorite bits of Visual Studio 2010 and .NET 4.0! Richard talks about Web load testing and Carl talks about Silverlight 4.0 and multimedia. And to make the night even more fun, we’re going to bring a mystery rock star from the Visual Studio world to the event and interview them for a special .NET Rocks Road Trip show series. Along the way we’ll be giving away some great prizes, showing off some awesome technology and having a ton of laughs. So come out to the most fun you can have in a geeky evening – and learn a few things along the way about web load testing and Silverlight 4!“   I know I’ll be there so what are you waiting for? Head over to the event registration page and sign up today! Have a day. :-|

    Read the article

  • Time Travel 101

    - by Jim Duffy
    I’m thinking maybe I should have used Time Crunching 101 as the title instead… or maybe ‘Duh Duffy, where have you been? Everyone knows that!” Ok, so maybe you won’t actually learn how to travel through time from this post but you will learn how to cram more learning into one day. We all know you can’t make it to every conference, every presentation, or every training session. The good news is that many of those events make their content available to either watch online or to download for off-line viewing. The problem is who has time to sit and watch all those presentations in real time? Not me. One trick I use is to view the content at an increased play rate. Why listen to a boring speaker like me drone on for the entire length of the session when you can listen to them drone on in almost half the time. :-) I view nearly all off-line content with Windows Media Player though I’m sure you can implement this idea with any media playback software. The idea is changing the playback speed you view the content at. With Windows Media Player you can change the play speed from the menu system. Once you have the Play Speed Setting panel open you can specify the playback speed. Depending on the content and the presenter I can typically listen between 1.6 and 2.0 times normal speed. My Florida edumacation taught me that playing the video back at twice the speed means I’ll listen to it twice as fast and that means I can view it in almost 1/2 the time.  Too bad it won’t make me twice as smart. :-) I hope this helps you speed your way through more training content. Have a day. :-|

    Read the article

  • What is the Xbox360's D3DRS_VIEWPORTENABLE equivalent on WinXP D3D9?

    - by Jim Buck
    I posted this on Stack Overflow, but of course it should be posted here. I am maintaining a multiplatform codebase for Xbox360 and WinXP. I am seeing an issue on the XP side that appears to be related to D3DRS_VIEWPORTENABLE on the Xbox360 version not having an equivalent on WinXP D3D9. This article had an interesting idea, but the only way to construct an identity matrix is to supply negative numbers to D3DVIEWPORT9::X and D3DVIEWPORT9::Height, but they are unsigned numbers. (I tried to put in negative numbers anyway, but nothing interesting happened.) So, how does one emulate the behavior of D3DRS_VIEWPORTENABLE under WinXP/D3D9? (For clarity, the result I'm seeing is that a 2D screen-aligned quad works fine on Xbox360 but is offset/stretched on WinXP. In fact, the (0, 0) starts in the center of the screen on WinXP instead of in the lower-left corner like on the Xbox360 as a result of applying the viewport transform.) Update: I didn't have an Xbox360 devkit at the time I wrote up this question, but I've since gotten one. I commented out the disabling of the D3DRS_VIEWPORTENABLE state, and the exact same behavior resulted on the Xbox360 as on the WinXP build. So, there must be some DirectX magic to bridge the gap here for emulating D3DRS_VIEWPORTENABLE being turned off on WinXP.

    Read the article

  • Keyboard locking up in Visual Studio 2010

    - by Jim Wang
    One of the initiatives I’m involved with on the ASP.NET and Visual Studio teams is the Tactical Test Team (TTT), which is a group of testers who dedicate a portion of their time to roaming around and testing different parts of the product.  What this generally translates to is a day and a bit a week helping out with areas of the product that have been flagged as risky, or tackling problems that span both ASP.NET and Visual Studio.  There is also a separate component of this effort outside of TTT which is to help with customer scenarios and design. I enjoy being on TTT because it allows me the opportunity to look at the entire product and gain expertise in a wide range of areas.  This week, I’m looking at Visual Studio 2010 performance problems, and this gem with the keyboard in Visual Studio locking up ended up catching my attention. First of all, here’s a link to one of the many Connect bugs describing the problem: Microsoft Connect I like this problem because it really highlights the challenges of reproducing customer bugs.  There aren’t any clear steps provided here, and I don’t know a lot about your environment: not just the basics like our OS version, but also what third party plug-ins or antivirus software you might be running that might contribute to the problem.  In this case, my gut tells me that there is more than one bug here, just by the sheer volume of reports.  Here’s another thread where users talk about it: Microsoft Connect The volume and different configurations are staggering.  From a customer perspective, this is a very clear cut case of basic functionality not working in the product, but from our perspective, it’s hard to find something reproducible: even customers don’t quite agree on what causes the problem (installing ReSharper seems to cause a problem…or does it?). So this then, is the start of a QA investigation. If anybody has isolated repro steps (just comment on this post) that they can provide this will immensely help us nail down the issue(s), but I’ll be doing a multi-part series on my progress and methodologies as I look into the problem.

    Read the article

  • Silverlight Firestarter 2010 Keynote with Scott Guthrie: Silverlight has a bright future!

    - by Jim Duffy
    If you didn’t get chance to watch the Silverlight Firestart event live during the webcast it is available online to view now. If you’re a Silverlight developer or perhaps a shop actively planning on developing a Silverlight application then you’re going to want to watch this video. The Silverlight 5 feature set unveiled during the keynote is fantastic! I particularly like Scott’s approach and comments on the future of Silverlight. I appreciated his open and direct acknowledgment that there has “been a lot of angst on this topic in the last few weeks” and he took the bull by the horns and stated “Let me say up front that there is a Silverlight future, and we think it’s going to be a very bright one.” That comment drew applause from the local audience and in our local viewing event held in Raleigh, NC. Of course my first question was when can we get our grubby little hands on Silverlight 5 and start working with it. The answer unfortunately wasn’t “right now” but they did announce the Silverlight 5 beta will be available in the first half of 2011. Of course the following is pure speculation on my part but I wouldn’t be surprised if they made it available at a certain event in April 2011. Additional information about the Silverlight 5 announcement is available on Scott’s blog. Have a day.

    Read the article

  • Flash is not working in Chrome

    - by Jim Ford
    I have google Chrome 8.0.552.237 on Ubuntu 10.10 64-bit and flash is not working, I have tried a variety of methods to install flash, including Firefox flash-aid and the flash-installer package and nothing is working for me. I have even uninstalled and reinstalled chrome to no avail. I get "missing plugin" message where flash plugin should be in a website. What am I missing? UPDATE I have a variety of plugins returned by jgbelacqua's command: /usr/lib/chromium-browser/plugins/flashplugin-alternative.so /usr/lib/firefox/plugins/flashplugin-alternative.so /usr/lib/flashplugin-installer/libflashplayer.so /usr/lib/iceape/plugins/flashplugin-alternative.so /usr/lib/iceweasel/plugins/flashplugin-alternative.so /usr/lib/midbrowser/plugins/flashplugin-alternative.so /usr/lib/mozilla/plugins/flashplugin-alternative.so /usr/lib/xulrunner/plugins/flashplugin-alternative.so /usr/lib/xulrunner-addons/plugins/flashplugin-alternative.so /usr/share/ubufox/plugins/libflashplayer.so /var/cache/flashplugin-installer/libflashplayer.so I'm not sure which is necessary and which not. I should note tho that my Chromium does have flash and it does work... just not chrome or firefox.

    Read the article

  • How should I update my name server after I installed a new dedicated server?

    - by Jim Thio
    Say I got a dedicated server. The IP is 123.123.123.123. Now I have a domain name, domainname.com, that will be the "main" domain name for that server. Should I set the name servers of domainname.com to ns1.domainname.com and ns2.domainname.com, and add child name servers ns1.domainname.com and ns2.domainname.com that point to that exact IP? Or should I point the name servers to my registrar's name servers, and set an A record for the name server to point to my IP? Which one is right? Obviously I want ns1.domainname.com and ns2.domainname.com to point to my IP so I can then point hundreds of domains to that IP. But how exactly should I do that? Specifically, I simply use cPanel (CentOS with cPanel).

    Read the article

  • Efficiently checking input and firing events

    - by Jim
    I'm writing an InputHandler class in XNA, and there are several different keys considered valid input (all of type Microsoft.XNA.Framework.Input.Keys). For each key, I have three events: internal event InputEvent XYZPressed; internal event InputEvent XYZHeld; internal event InputEvent XYZReleased; where XYZ is the name of the Keys object representing that key. To fire these events, I have the following for each key: if (Keyboard.GetState().IsKeyDown(XYZ)) { if (PreviousKeyState.IsKeyDown(XYZ)) { if (XYZHeld != null) XYZHeld(); } else { if (XYZPressed != null) XYZPressed(); } } else if (PreviousKeyState.IsKeyDown(XYZ)) { if (XYZReleased != null) XYZReleased(); } However, this is a lot of repeated code (the above needs to be repeated for each input key). Aside from being a hassle to write, if any keys are added to/removed from the keys (if functionality is added/removed), a new section needs to be added (or an existing one removed). Is there a cleaner way to do this? Perhaps something along the lines of foreach key check which state it's in fire this key's event for that state where the code does the foreach (automatically checking exactly those keys that "exist") rather than the coder?
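    One way to get the "foreach key" behavior described above is to keep the three events for each key in a small helper class and drive everything from a dictionary keyed on Keys, so adding or removing a tracked key is a one-line change. This is only a sketch of mine under that assumption (it mirrors the InputEvent delegate from the question rather than using the asker's actual code):

    using System.Collections.Generic;
    using Microsoft.Xna.Framework.Input;

    internal delegate void InputEvent();

    internal sealed class KeyEvents
    {
        internal event InputEvent Pressed;
        internal event InputEvent Held;
        internal event InputEvent Released;

        // Fires at most one of the three events based on current and previous key state.
        internal void Fire(bool isDown, bool wasDown)
        {
            if (isDown && wasDown) { if (Held != null) Held(); }
            else if (isDown) { if (Pressed != null) Pressed(); }
            else if (wasDown) { if (Released != null) Released(); }
        }
    }

    internal sealed class InputHandler
    {
        // Every key considered valid input gets one entry; subscribers hook that entry's events.
        private readonly Dictionary<Keys, KeyEvents> trackedKeys = new Dictionary<Keys, KeyEvents>
        {
            { Keys.Up, new KeyEvents() },
            { Keys.Down, new KeyEvents() },
            { Keys.Space, new KeyEvents() },
        };

        private KeyboardState previousKeyState = Keyboard.GetState();

        internal KeyEvents this[Keys key] { get { return trackedKeys[key]; } }

        internal void Update()
        {
            KeyboardState current = Keyboard.GetState();
            foreach (KeyValuePair<Keys, KeyEvents> entry in trackedKeys)
            {
                entry.Value.Fire(current.IsKeyDown(entry.Key), previousKeyState.IsKeyDown(entry.Key));
            }
            previousKeyState = current;
        }
    }

    Subscribing then looks like handler[Keys.Space].Pressed += SomeMethod; and the per-key if/else blocks disappear.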

    Read the article

  • How to speed up this simple mysql query?

    - by Jim Thio
    The query is simple:

    SELECT  TB.ID, TB.Latitude, TB.Longitude,
            111151.29341326*SQRT(pow(-6.185-TB.Latitude,2)+pow(106.773-TB.Longitude,2)*cos(-6.185*0.017453292519943)*cos(TB.Latitude*0.017453292519943)) AS Distance
    FROM    `tablebusiness` AS TB
    WHERE   -6.2767668133836 < TB.Latitude AND TB.Latitude < -6.0932331866164
            AND FoursquarePeopleCount > 5
            AND 106.68123318662 < TB.Longitude AND TB.Longitude < 106.86476681338
    ORDER BY Distance

    See, we just look at all businesses within a rectangle. The table has 1.6 million rows. Within that small rectangle there are only 67,565 businesses. The structure of the table is:

    ID                 varchar(250)  utf8_unicode_ci  NOT NULL
    Email              varchar(400)  utf8_unicode_ci  NULL
    InBuildingAddress  varchar(400)  utf8_unicode_ci  NULL
    Price              int(10)                        NULL
    Street             varchar(400)  utf8_unicode_ci  NULL
    Title              varchar(400)  utf8_unicode_ci  NULL
    Website            varchar(400)  utf8_unicode_ci  NULL
    Zip                varchar(400)  utf8_unicode_ci  NULL
    Rating Star        double                         NULL
    Rating Weight      double                         NULL
    Latitude           double                         NULL
    Longitude          double                         NULL
    Building           varchar(200)  utf8_unicode_ci  NULL
    City               varchar(100)  utf8_unicode_ci  NOT NULL
    OpeningHour        varchar(400)  utf8_unicode_ci  NULL
    TimeStamp          timestamp                      NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
    CountViews         int(11)                        NULL

    The indexes are (all BTREE; index name, indexed column, cardinality):

    PRIMARY (unique)   ID                        1965990
    City               City                      131066
    Building           Building                  21
    OpeningHour        OpeningHour (255)         21
    Email              Email (255)               21
    InBuildingAddress  InBuildingAddress (255)   21
    Price              Price                     21
    Street             Street (255)              982995
    Title              Title (255)               1965990
    Website            Website (255)             491497
    Zip                Zip (255)                 178726
    Rating Star        Rating Star               21
    Rating Weight      Rating Weight             21
    Latitude           Latitude                  1965990
    Longitude          Longitude                 1965990

    The query took forever. I think there has to be something wrong there. Showing rows 0 - 29 (67,565 total, Query took 12.4767 sec).
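    Since none of the single-column indexes above can satisfy the two range predicates at once, a composite index over the bounding-box columns is the usual first experiment; this suggestion (and the index name) is mine, not something from the original question:

    -- Lets the optimizer range-scan Latitude and filter Longitude/FoursquarePeopleCount
    -- from the index instead of visiting 1.6 million full rows.
    ALTER TABLE `tablebusiness`
      ADD INDEX idx_lat_lng_people (Latitude, Longitude, FoursquarePeopleCount);

    -- Longer term, storing the coordinates in a POINT column with a SPATIAL index and
    -- querying with MBRContains() is the approach designed for this bounding-box case.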

    Read the article

  • Keyboard locking up in Visual Studio 2010, Part 2

    - by Jim Wang
    Last week I posted about looking into the keyboard locking up issue in Visual Studio.  So far it looks like not a lot of people have replied to provide concrete repro steps, which confirms my suspicion that this is somewhat of a random issue. So at this point, I have a couple of choices.  I can either wait for somebody in the community to provide a repro of the problem that I can reliably run into, or I can do the work myself. I’m going to do both, so while I’m waiting for more possible bug reports, I’m going to write a tool that models the behavior of a typical Visual Studio user and use that to hopefully isolate the problem. I’ve chosen to go with this path since given the information in the bug reports, it seems people hit the issue with many different configurations in many different scenarios.  This means that me sitting down without any solid repro steps is likely not going to be a good use of time.  Instead, I’m going to go with a model-based testing approach where I will define a series of actions that a user in VS can do, and then proceed to run my model.  I’ll let you guys know how this works out for isolating bugs :) I’m using an internal tool for the model engine and AutoIt for the UI automation (I want something lightweight for a one-off).  One of the challenges will be getting feedback: AutoIt is great at driving, but not so great at understanding what success and failure means.

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs to have every last drop of performance it can get, others not so much. We’re in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was “local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess”. What that means is you should be avoiding code like this in your stored procs even though it seems such an intuitively good idea. DECLARE @StartDate datetime SET @StartDate = '20091125' SELECT * FROM Orders WHERE OrderDate = @StartDate Instead you should be referencing the value directly in the WHERE clause so the Query Optimizer can create a better execution plan. SELECT * FROM Orders WHERE OrderDate = '20091125' My first thought about this one was we reference variables in the form of passed in parameters in WHERE clauses in many of our stored procs. Not to worry though because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor’s session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
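    When a local variable genuinely cannot be avoided (for example, when the date is computed inside the stored procedure), a common workaround is to request a statement-level recompile so the Query Optimizer can see the variable's runtime value. This addition is mine rather than part of Yavor's tip list:

    DECLARE @StartDate datetime
    SET @StartDate = '20091125'

    -- OPTION (RECOMPILE) compiles this statement with the actual value of @StartDate,
    -- trading a recompile on every execution for a better-estimated plan.
    SELECT * FROM Orders WHERE OrderDate = @StartDate OPTION (RECOMPILE)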

    Read the article

  • Here we go again - quest for web hosted forum via javascript

    - by jim
    Hello all. Disclaimer: if this is the wrong location for this question, then please advise me accordingly. Background: I've been using Disqus and IntenseDebate as a 'comments' service for a variety of my sites to great effect, and I love the fact that I get a lot of the Facebook/Twitter integration 'for free', as well as the SEO benefits. Request: to this end, does anyone out there know of similar services that can be used to pull entire forums/threaded discussions into the app in a similar fashion (i.e. via Ajax web services)? Google has been at a loss to turn anything up on this front, and I'm therefore wondering if it's unlikely that such a 'service' exists. Respect: hope this strikes a chord out there... By the way, although I'm using this in ASP.NET MVC, I'm aware that this technology could be used on any platform capable of consuming JavaScript via Ajax, thus the wide spread of 'tags'.

    Read the article

  • Flash is not working in Chrome (Crossover Linux is installed)

    - by Jim Ford
    I have google Chrome 8.0.552.237 on Ubuntu 10.10 64-bit and flash is not working, I have tried a variety of methods to install flash, including Firefox flash-aid and the flash-installer package and nothing is working for me. I have even uninstalled and reinstalled chrome to no avail. I get "missing plugin" message where flash plugin should be in a website. What am I missing? I have a variety of plugins returned by jgbelacqua's command: /usr/lib/chromium-browser/plugins/flashplugin-alternative.so /usr/lib/firefox/plugins/flashplugin-alternative.so /usr/lib/flashplugin-installer/libflashplayer.so /usr/lib/iceape/plugins/flashplugin-alternative.so /usr/lib/iceweasel/plugins/flashplugin-alternative.so /usr/lib/midbrowser/plugins/flashplugin-alternative.so /usr/lib/mozilla/plugins/flashplugin-alternative.so /usr/lib/xulrunner/plugins/flashplugin-alternative.so /usr/lib/xulrunner-addons/plugins/flashplugin-alternative.so /usr/share/ubufox/plugins/libflashplayer.so /var/cache/flashplugin-installer/libflashplayer.so I'm not sure which is necessary and which not. I should note tho that my Chromium does have flash and it does work... just not chrome or firefox.

    Read the article

  • Perl - can't flush STDOUT or STDERR

    - by Jim Salter
    Perl 5.14 from stock Ubuntu Precise repos. Trying to write a simple wrapper to monitor progress on copying from one stream to another:

    use IO::Handle;
    while ($bufsize = read (SOURCE, $buffer, 1048576)) {
        STDERR->printflush ("Transferred $xferred of $sendsize bytes\n");
        $xferred += $bufsize;
        print TARGET $buffer;
    }

    This does not perform as expected (writing a line each time the 1M buffer is read). I end up seeing the first line (with a blank value of $xferred), and then the 7th and 8th lines (on an 8MB transfer). Been pounding my brains out on this for hours - I've read the perldocs, I've read the classic "Suffering from Buffering" article, I've tried everything from select and $|++ to IO::Handle to binmode (STDERR, ":unix") to you name it. I've also tried flushing TARGET with each line using IO::Handle (TARGET->flush). No dice. Has anybody else ever encountered this? I don't have any ideas left. Sleeping one second "fixes" the problem, but obviously I don't want to sleep a second every time I read a buffer just so my progress will output on the screen! FWIW, the problem is exactly the same whether I'm outputting to STDERR or STDOUT.

    Read the article

  • In the Aggregate: How Will We Maintain Legacy Systems?

    - by Jim G.
    NEW YORK - With a blast that made skyscrapers tremble, an 83-year-old steam pipe sent a powerful message that the miles of tubes, wires and iron beneath New York and other U.S. cities are getting older and could become dangerously unstable. July 2007 Story About a Burst Steam Pipe in Manhattan We've heard about software rot and technical debt. And we've heard from the likes of: "Uncle Bob" Martin - Who warned us about "the consequences of making a mess". Michael C. Feathers - Who gave us guidance for 'Working Effectively With Legacy Code'. So certainly the software engineering community is aware of these issues. But I feel like our aggregate society does not appreciate how these issues can plague working systems and applications. As Steve McConnell notes: ...Unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it. If this is true, and I believe that it is, then I fear that governments and businesses may defer regular maintenance and fortification against hackers until it is too late. [Much like NYC and the steam pipes.] My Question: Do you share my concern? And if so, is there a way that we can avoid the software equivalent of NYC and the steam pipes?

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on github with private repositories. We want to use github's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it. I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (as it's basically what every open source project on github does), but we want to be sure that we can track issues and pull requests from ONE source, the organization's repository. So I have a few questions: Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In that case, we would want to institute a policy like that for those developers only. So, which is better? All developers in a single repository, or a fork for everyone? Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even a fork only for less-privileged devs), will the pull-request feature of hub operate on the pull requests from the upstream master repository (the organization's repository), or does it have different behavior? EDIT: I did some testing with issues, forks, and pull requests and found the following: if you create an issue on your organization's repository, then fork the repository from your organization to your own github account, make some changes, and merge them to your fork's master branch, running hub -i <issue #> produces the error "User is not authorized to modify the issue". So, apparently that workflow won't work.

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure its performance or view the SSRS execution log or catalog in detail.  Here are a few simple queries that will give you insight into the system that you never had before.   ACTIVE REPORTS:  Have you ever seen your SQL Server performance take a nose dive due to a long-running report?  If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server.  Running this query will show you which reports are executing at a given time, and WHO is executing them.   USE ReportServerNative SELECT runningjobs.computername,        runningjobs.requestname,        runningjobs.startdate,        users.username,        Datediff(s, runningjobs.startdate, Getdate()) / 60 AS 'Active Minutes' FROM runningjobs INNER JOIN users ON runningjobs.userid = users.userid ORDER BY runningjobs.startdate   SSRS CATALOG:  We have all asked “What was the last thing that changed”, or better yet, “Who in the world did that!”.  Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by whom.   USE ReportServerNative SELECT DISTINCT catalog.PATH,        catalog.name,        users.username AS [Created By],        catalog.creationdate,        users_1.username AS [Modified By],        catalog.modifieddate FROM catalog INNER JOIN users ON catalog.createdbyid = users.userid INNER JOIN users AS users_1 ON catalog.modifiedbyid = users_1.userid INNER JOIN executionlogstorage ON catalog.itemid = executionlogstorage.reportid WHERE ( catalog.name <> '' )   SSRS EXECUTION LOG:  Sometimes we need to know what was happening on the SSRS report server at a given time in the past.  This query will help you do just that.  You will need to set the timestart and timeend in the WHERE clause to suit your needs.   USE ReportServerNative SELECT catalog.name AS report,        e.username AS [User],        e.timestart,        e.timeend,        Datediff(mi, e.timestart, e.timeend) AS 'Time In Minutes',        catalog.modifieddate AS [Report Last Modified],        users.username FROM catalog (nolock)        INNER JOIN executionlogstorage e (nolock) ON catalog.itemid = e.reportid        INNER JOIN users (nolock) ON catalog.modifiedbyid = users.userid WHERE e.timestart >= Dateadd(s, -1, '03/31/2012')        AND e.timeend <= Dateadd(DAY, 1, '04/02/2012')   LONG RUNNING REPORTS:  This query will show the longest running reports over a given time period.  Note that the ">5" in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set.  Adjust the threshold and start/end times to your liking.  With this information in hand, you can better optimize your system by tweaking the longest running reports first.
USE ReportServerNative SELECT e.instancename,        catalog.PATH,        catalog.name,        e.username,        e.timestart,        e.timeend,        Datediff(mi, e.timestart, e.timeend) AS 'Minutes',        e.timedataretrieval,        e.timeprocessing,        e.timerendering,        e.[RowCount],        users_1.username AS createdby,        CONVERT(VARCHAR(10), catalog.creationdate, 101) AS 'Creation Date',        users.username AS modifiedby,        CONVERT(VARCHAR(10), catalog.modifieddate, 101) AS 'Modified Date' FROM executionlogstorage e        INNER JOIN catalog ON e.reportid = catalog.itemid        INNER JOIN users ON catalog.modifiedbyid = users.userid        INNER JOIN users AS users_1 ON catalog.createdbyid = users_1.userid WHERE ( e.timestart > '03/31/2012' )        AND ( e.timestart <= '04/02/2012' )        AND Datediff(mi, e.timestart, e.timeend) > 5        AND catalog.name <> '' ORDER BY [Minutes] DESC        I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings.  I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own.  For instance, you may want a query to determine which reports are using which shared data sources.  Work smarter, not harder!
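    As a starting point for that last idea, here is a rough sketch of a shared-data-source query. It assumes the standard report server catalog schema, in which the datasource table's link column holds the itemid of the shared data source a report points to (and is NULL for embedded connections) and catalog.type = 2 marks reports; verify those details against your own instance before relying on it.

    USE ReportServerNative
    -- Sketch: list each report alongside the shared data source(s) it references.
    -- The inner join on datasource.link skips embedded (report-specific) connections,
    -- because link is NULL for those rows.
    SELECT  rpt.PATH   AS 'Report Path',
            rpt.name   AS 'Report Name',
            ds.name    AS 'Data Source Name',
            sds.PATH   AS 'Shared Data Source Path'
    FROM    catalog rpt
            INNER JOIN datasource ds ON ds.itemid = rpt.itemid
            INNER JOIN catalog sds ON sds.itemid = ds.link
    WHERE   rpt.TYPE = 2       -- 2 = report in the catalog table
    ORDER BY rpt.PATH

    If you also want to see reports whose connections are embedded rather than shared, switching the second join to a LEFT JOIN keeps those rows with a NULL shared path.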

    Read the article

  • Visual Studio 2010 and .NET Framework 4 Training Kit

    - by Jim Duffy
    Now that you’ve had time to download and install Visual Studio 2010, it’s time to start learning about all the new features and capabilities. That’s where this post comes in. Microsoft released the Visual Studio 2010 and .NET Framework 4 Training Kit on the same day Visual Studio 2010 became available to download. It contains presentations, hands-on labs, and demos on a variety of features and framework technologies, including: C# 4, Visual Basic 10, F#, Parallel Extensions, Windows Communication Foundation, Windows Workflow, Windows Presentation Foundation, ASP.NET 4, Windows 7, Entity Framework, ADO.NET Data Services, Managed Extensibility Framework, and Visual Studio Team System. As you can see, the Developer & Platform Evangelism group has gone the extra mile to make sure you have the resources you need to fully leverage the power of Microsoft’s latest version of Visual Studio. Have a day. :-|

    Read the article
