Search Results

Search found 6591 results on 264 pages for 'community voice'.


  • XSLT templates and recursion

    - by user333411
    Hi All, I'm new to XSLT and am having some problems trying to format an XML document which has recursive nodes. Hopefully my XML below shows that all <item> elements are nested within <items>, that an item can have either just attributes or sub-nodes, and that the level to which <item> nodes are nested can be infinitely deep. My XML code:

      <?xml version="1.0" encoding="utf-8" ?>
      <items>
        <item groupID="1" name="Home" url="//" />
        <item groupID="2" name="Guides" url="/Guides/">
          <items>
            <item groupID="26" name="Online-Poker-Guide" url="/Guides/Online-Poker-Guide/">
              <items>
                <item><id>107</id><title><![CDATA[ Poker Betting - Online Poker Betting Structures ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/online-poker-betting-structures ]]></url></item>
                <item><id>114</id><title><![CDATA[ Beginners&#39; Poker - Poker Hand Ranking ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/online-poker-hand-ranking ]]></url></item>
                <item><id>115</id><title><![CDATA[ Poker Terms - 4th Street and 5th Street ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/online-poker-poker-terms ]]></url></item>
                <item><id>116</id><title><![CDATA[ Popular Poker - The Popularity of Texas Hold&#39;em ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/online-poker-popularity-texas-holdem ]]></url></item>
                <item><id>364</id><title><![CDATA[ The Impact of Traditional Poker on Online Poker (and vice versa) ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/online-poker-tradional-vs-online ]]></url></item>
                <item><id>365</id><title><![CDATA[ The Ultimate, Absolute Online Poker Scandal ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/online-poker-scandal ]]></url></item>
              </items>
              <items>
                <item groupID="27" name="Beginners-Poker" url="/Guides/Online-Poker-Guide/Beginners-Poker/">
                  <items>
                    <item><id>101</id><title><![CDATA[ Poker Betting - All-in On the Flop ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-all-in-on-the-flop ]]></url></item>
                    <item><id>102</id><title><![CDATA[ Beginners&#39; Poker - Choosing an Online Poker Room ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-choosing-a-room ]]></url></item>
                    <item><id>105</id><title><![CDATA[ Beginners&#39; Poker - Choosing What Type of Poker to Play ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-choosing-type-to-play ]]></url></item>
                    <item><id>106</id><title><![CDATA[ Online Poker - Different Types of Online Poker ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker ]]></url></item>
                    <item><id>109</id><title><![CDATA[ Online Poker - Opening an Account at an Online Poker Site ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-opening-an-account ]]></url></item>
                    <item><id>111</id><title><![CDATA[ Beginners&#39; Poker - Poker Glossary ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/beginners-poker-glossary ]]></url></item>
                    <item><id>117</id><title><![CDATA[ Poker Betting - What is a Blind? ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-what-is-a-blind ]]></url></item>
                    <item><id>118</id><title><![CDATA[ Poker Betting - What is an Ante? ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/poker-betting-what-is-an-ante ]]></url></item>
                    <item><id>119</id><title><![CDATA[ Beginners Poker - What is Bluffing? ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-bluffing ]]></url></item>
                    <item><id>120</id><title><![CDATA[ Poker Games - What is Community Card Poker? ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-community-card-poker ]]></url></item>
                    <item><id>121</id><title><![CDATA[ Online Poker - What is Online Poker? ]]></title><url><![CDATA[ /Guides/Online-Poker-Guide/Beginners-Poker/online-poker-what-is-online-poker ]]></url></item>
                  </items>
                </item>
              </items>
            </item>
          </items>
        </item>
      </items>

    The XSL code:

      <?xml version="1.0" encoding="ISO-8859-1"?>
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output method="html" indent="yes"/>
        <xsl:template name="loop">
          <xsl:for-each select="items/item">
            <ul>
              <li><xsl:value-of select="@name" /></li>
              <xsl:if test="@name and child::node()">
                <ul>
                  <xsl:for-each select="items/item">
                    <li><xsl:value-of select="@name" />test</li>
                  </xsl:for-each>
                </ul>
                <xsl:call-template name="loop" />
              </xsl:if>
              <xsl:if test="child::node() and not(@name)">
                <xsl:for-each select="/items">
                  <li><xsl:value-of select="id" /></li>
                </xsl:for-each>
              </xsl:if>
            </ul>
          </xsl:for-each>
          <xsl:for-each select="item/items/item">
            <li>hi</li>
          </xsl:for-each>
        </xsl:template>
        <xsl:template match="/" name="test">
          <xsl:call-template name="loop" />
        </xsl:template>
      </xsl:stylesheet>

    I'm trying to write the XSL so that every <items> node renders a <ul> and every <item> node renders an <li>. The XSL needs to be recursive because I can't tell how deep the nested nodes will go. Can anyone help? Regards, Al
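    One way to get the "<items> becomes <ul>, <item> becomes <li>" behaviour is to lean on template matching instead of a named template; the recursion then falls out of apply-templates. A minimal sketch, assuming the element names in the sample above (untested against the full document):

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output method="html" indent="yes"/>
        <!-- Every <items> element becomes a <ul>. -->
        <xsl:template match="items">
          <ul>
            <xsl:apply-templates select="item"/>
          </ul>
        </xsl:template>
        <!-- Every <item> becomes an <li>; nested <items> are handled by the
             template above, so no explicit named-template recursion is needed. -->
        <xsl:template match="item">
          <li>
            <xsl:choose>
              <xsl:when test="@name"><xsl:value-of select="@name"/></xsl:when>
              <xsl:otherwise><xsl:value-of select="title"/></xsl:otherwise>
            </xsl:choose>
            <xsl:apply-templates select="items"/>
          </li>
        </xsl:template>
      </xsl:stylesheet>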

    Read the article

  • Using a Predicate as a key to a Dictionary

    - by Tom Hines
    I really love Linq and Lambda Expressions in C#. I also love certain community forums and programming websites like DaniWeb. A user on DaniWeb posted a question about comparing the results of a game that is like poker (5-card stud), but is played with dice. The question centered around determining what was the winning hand. I looked at the question and issued some comments and suggestions toward a potential answer, but I thought it was a neat homework exercise. [A little explanation] I eventually realized not only could I compare the results of the hands (by name) with a certain construct – I could also compare the values of the individual dice with the same construct. That piece of code eventually became a Dictionary with the KEY as a Predicate<int> and the VALUE a Func<T> that returns a string from another structure that contains the mapping of an ENUM to a string. In one instance, that string is the name of the hand and in another instance, it is a string (CSV) representation of the digits in the hand. An added benefit is that the digits are returned in the order they would be for a proper poker hand. For instance the hand 1,2,5,3,1 would be returned as ONE_PAIR (1,1,5,3,2). [Getting to the point]

      using System;
      using System.Collections.Generic;

      namespace DicePoker
      {
          using KVP_E2S = KeyValuePair<CDicePoker.E_DICE_POKER_HAND_VAL, string>;

          public partial class CDicePoker
          {
              /// <summary>
              /// Magical construction to determine the winner of given hand Key/Value.
              /// </summary>
              private static Dictionary<Predicate<int>, Func<List<KVP_E2S>, string>>
                  map_prd2fn = new Dictionary<Predicate<int>, Func<List<KVP_E2S>, string>>
              {
                  {new Predicate<int>(i => i.Equals(0)), PlayerTie}, // first tie

                  {new Predicate<int>(i => i > 0),
                      (m => string.Format("Player One wins\n1={0}({1})\n2={2}({3})",
                          m[0].Key, m[0].Value, m[1].Key, m[1].Value))},

                  {new Predicate<int>(i => i < 0),
                      (m => string.Format("Player Two wins\n2={2}({3})\n1={0}({1})",
                          m[0].Key, m[0].Value, m[1].Key, m[1].Value))},

                  {new Predicate<int>(i => i.Equals(0)),
                      (m => string.Format("Tie({0}) \n1={1}\n2={2}",
                          m[0].Key, m[0].Value, m[1].Value))}
              };
          }
      }

    When this is called, the code calls the Invoke method of the predicate to return a bool. The first one matching true will have its value invoked.

      private static Func<DICE_HAND, E_DICE_POKER_HAND_VAL> GetHandEval = dh =>
          map_dph2fn[map_dph2fn.Keys.Where(enm2fn => enm2fn(dh)).First()];

    After coming up with this process, I realized (with a little modification) it could be called to evaluate the individual values in the dice hand in the event of a tie.

      private static Func<List<KVP_E2S>, string> PlayerTie = lst_kvp =>
          map_prd2fn.Skip(1)
              .Where(x => x.Key.Invoke(RenderDigits(dhPlayerOne).CompareTo(RenderDigits(dhPlayerTwo))))
              .Select(s => s.Value)
              .First().Invoke(lst_kvp);

    After that, I realized I could now create a program completely without “if” statements or “for” loops!

      static void Main(string[] args)
      {
          Dictionary<Predicate<int>, Action<Action<string>>> main =
              new Dictionary<Predicate<int>, Action<Action<string>>>
          {
              {(i => i.Equals(0)), PlayGame},
              {(i => true), Usage}
          };

          main[main.Keys.Where(m => m.Invoke(args.Length)).First()].Invoke(Display);
      }

    …and there you have it. :) ZIPPED Project
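    For readers who want to try the core idea in isolation, here is a minimal, self-contained sketch of the same predicate-keyed dispatch; the names and messages are illustrative and not from the original project:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class PredicateDispatchDemo
      {
          // Map a Predicate<int> key to the action to take when it matches,
          // instead of writing an if/else-if chain.
          static readonly Dictionary<Predicate<int>, Func<string>> map =
              new Dictionary<Predicate<int>, Func<string>>
              {
                  { i => i > 0,  () => "Player One wins" },
                  { i => i < 0,  () => "Player Two wins" },
                  { i => i == 0, () => "Tie" }
              };

          static void Main()
          {
              int comparison = 1; // e.g. the result of handOne.CompareTo(handTwo)
              // Find the first key whose predicate returns true and invoke its value.
              string outcome = map[map.Keys.First(p => p.Invoke(comparison))].Invoke();
              Console.WriteLine(outcome); // prints "Player One wins"
          }
      }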

    Read the article

  • Goto for the Java Programming Language

    - by darcy
    Work on JDK 8 is well underway, but we thought this late-breaking JEP for another language change for the platform couldn't wait another day before being published.
    Title: Goto for the Java Programming Language Author: Joseph D. Darcy Organization: Oracle. Created: 2012/04/01 Type: Feature State: Funded Exposure: Open Component: core/lang Scope: SE JSR: 901 MR Discussion: compiler dash dev at openjdk dot java dot net Start: 2012/Q2 Effort: XS Duration: S Template: 1.0 Reviewed-by: Duke Endorsed-by: Edsger Dijkstra Funded-by: Blue Sun Corporation
    Summary
    Provide the benefits of the time-tested goto control structure to Java programs. The Java language has a history of adding new control structures over time: the assert statement in 1.4, the enhanced for-loop in 1.5, and try-with-resources in 7. Having support for goto is long overdue and simple to implement since the JVM already has goto instructions.
    Success Metrics
    The goto statement will allow inefficient and verbose recursive algorithms and explicit loops to be replaced with more compact code. The effort will be a success if at least twenty-five percent of the JDK's explicit loops are replaced with gotos. Coordination with IDE vendors is expected to help facilitate this goal.
    Motivation
    The goto construct offers numerous benefits to the Java platform, from increased expressiveness, to more compact code, to providing new programming paradigms to appeal to a broader demographic. In JDK 8, there is a renewed focus on using the Java platform on embedded devices with more modest resources than desktop or server environments. In such contexts, static and dynamic memory footprint is a concern. One significant component of footprint is the code attribute of class files, and certain classes of important algorithms can be expressed more compactly using goto than using other constructs, saving footprint. For example, to implement state machines recursively, some parties have asked for the JVM to support tail calls, that is, to perform a complex transformation with security implications to turn a method call into a goto. Such complicated machinery should not be assumed for an embedded context. A better solution is just to expose to the programmer the desired functionality, goto. The web has familiarized users with a model of traversing links among different HTML pages in a free-form fashion with some state being maintained on the side, such as login credentials, to effect behavior. This is exactly the programming model of goto and code. While in the past this has been derided as leading to "spaghetti code," spaghetti is a tasty and nutritious meal for programmers, unlike quiche. The invokedynamic instruction added by JSR 292 exposes the JVM's linkage operation to programmers. This is a low-level operation that can be leveraged by sophisticated programmers. Likewise, goto is also a low-level operation that should not be hidden from programmers who can use more efficient idioms. Some may object that goto was consciously excluded from the original design of Java as one of the features removed from C and C++. However, the designers of the Java programming language have revisited these removals before. The enum construct was also left out only to be added in JDK 5, and multiple inheritance was left out, only to be added back by the virtual extension methods of Project Lambda. As a living language, the needs of the growing Java community today should be used to judge what features are needed in the platform tomorrow; the language should not be forever bound by the decisions of the past.
    Description
    From its initial version, the JVM has had two instructions for unconditional transfer of control within a method, goto (0xa7) and goto_w (0xc8). The goto_w instruction is used for larger jumps. All versions of the Java language have supported labeled statements; however, only the break and continue statements were able to specify a particular label as a target, with the onerous restriction that the label must be lexically enclosing. The grammar addition for the goto statement is:
      GotoStatement:
        goto Identifier ;
    The new goto statement is similar to break except that the target label can be anywhere inside the method and the identifier is mandatory. The compiler simply translates the goto statement into one of the JVM goto instructions targeting the right offset in the method. Therefore, adding the goto statement to the platform is only a small effort since existing compiler and JVM functionality is reused. Other language changes to support goto include obvious updates to definite assignment analysis, reachability analysis, and exception analysis. Possible future extensions include a computed goto as found in gcc, which would replace the identifier in the goto statement with an expression having the type of a label.
    Testing
    Since goto will be implemented using largely existing facilities, only light levels of testing are needed.
    Impact
    Compatibility: Since goto is already a keyword, there are no source compatibility implications. Performance/scalability: Performance will improve with more compact code. JVMs already need to handle irreducible flow graphs since goto is a VM instruction.
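    For contrast, the labeled-statement support that does exist in shipping Java today is limited to break and continue targeting a lexically enclosing label, as described above; a small runnable fragment:

      // What current Java allows: a label may only be used by break/continue,
      // and only from inside the statement that the label encloses.
      outer:
      for (int i = 0; i < 3; i++) {
          for (int j = 0; j < 3; j++) {
              if (i + j == 3) {
                  continue outer;   // legal: "outer" lexically encloses this point
              }
              System.out.println(i + "," + j);
          }
      }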

    Read the article

  • Adopt-a-JSR for Java EE 7 - Getting Started

    - by arungupta
    Adopt-a-JSR is an initiative started by JUG leaders to encourage JUG members to get involved in a JSR, in order to increase grass-roots participation. This allows JUG members to provide early feedback to specifications before they are finalized in the JCP. The standards in turn become more complete and developer-friendly after getting feedback from a wide variety of audiences. adoptajsr.org provides more details about the logistics and the benefits for you and your JUG. A similar activity was conducted for OpenJDK as well. Markus Eisele also provides a great introduction to the program (in German). Java EE 7 (JSR 342) is scheduled to go final in Q2 2013. There are several new JSRs that are getting included in the platform (e.g. WebSocket, JSON, and Batch), a few existing ones are getting an overhaul (e.g. JAX-RS 2 and JMS 2), and several others are getting minor updates (e.g. JPA 2.1 and Servlets 3.1). Each Java EE 7 JSR can leverage your expertise and would love your JUG to adopt a JSR.
    What does it mean to adopt a JSR? Your JUG is going to identify a particular JSR, or multiple JSRs, that are of interest to the JUG members. This is mostly done by polling/discussing on your local JUG members list. Your JUG will download and review the specification(s) and javadocs for clarity and completeness. The complete set of Java EE 7 specifications, their download links, and EG archives are listed here. glassfish.org/adoptajsr provides specific areas where different specification leads are looking for feedback. Your JUG can then think of a sample application that can be built using the chosen specification(s). An existing use case (from work or a personal hobby project) may be chosen to be implemented instead. This is where your creativity and uniqueness come into play. Most of the implementations are already integrated in GlassFish 4 and others will be integrated soon. You can also explore integration of multiple technologies and provide feedback on the simplicity and ease-of-use of the programming model. Especially look for integration with existing Java EE technologies and see if you find any discrepancies. Report any missing features that may be included in a future release of the specification. The most important part is to provide feedback by filing bugs on the corresponding spec or RI project. Anything that is not clear, either in the spec or the implementation, should be filed as a bug. This is what will ensure that specification and implementation leads are getting the required feedback and improving the quality of the final deliverable of the JSR.
    How do I get started? A simple way to get started is to follow S.M.A.R.T. as explained below.
    Specific: Identify who will be involved and what you would like to accomplish. For example, even though building a sample app will provide real-world validation of the API, because of time constraints you may decide that only reviewing the specification and javadocs can be accomplished. Establish a time frame by which the activities need to be complete.
    Measurable: Define metrics for success. For example, this could be the number of bugs filed. Remember, quality of bugs is more important than quantity of bugs. Define your end goal, for example reviewing 4 chapters of the specification or completing the sample application. Create a dashboard that will highlight your JUG's contribution to this effort.
    Attainable: Make sure JUG members understand the time commitment required for providing feedback. This can vary based upon the level of involvement (any is good!) and the number of specifications picked. adoptajsr.org defines different categories of involvement. Once again, any level of involvement is good. Just reviewing a chapter, a section, or the javadocs for your use case is helpful.
    Relevant: Pick JSRs that JUG members are willing and able to work on. If the JUG members are not interested then they might lose motivation half-way through. The "able" part is tricky as you can always stretch yourself and learn a new skill ;-)
    Time-bound: Define a time table of activities with clearly defined tasks. A tentative time table may look like:
    Dec 25: Discuss and agree upon the specifications with the JUG
    Jan 1: Start Adopt-a-JSR for Java EE 7
    Jan 15: Initial spec reading complete. Keep thinking through the application that will be implemented.
    Jan 22: Early design of the sample application is ready
    Jan 29: JUG members agree upon the application
    Next 4 weeks: Implement the application
    Of course, you'll need to alter this based upon your commitment. Maintaining an activity dashboard will help you monitor and track the progress. Make sure to keep filing bugs throughout the process! 12 JUGs from around the world (SouJava, Campinas JUG, Chennai JUG, London Java Community, BeJUG, Morocco JUG, Peru JUG, Indonesia JUG, Congo JUG, Silicon Valley JUG, Madrid JUG, and Houston JUG) have already adopted one of the Java EE 7 JSRs. I'm already helping some JUGs bootstrap and would love to help your JUG too. What are you waiting for?

    Read the article

  • Visual Studio & TFS 11 – List of extensions and upgrades

    - by terje
    This post is a list of the extensions I recommend for use with Visual Studio 11. It’s coming up all the time – what to install, where are the download sites, last version, etc etc, and thus I thought it better to post it here and keep it updated. The basics are Visual Studio 11 connected to a Team Foundation Server 11. Note that we now are at Beta time, and that also many live in a side-by-side environment with Visual Studio 2010.  The side-by-side is supported by VS 11. However, if you installed a component supporting VS11 before you installed VS11, then you need to reinstall it.  The VSIX installer will understand that it is to apply those only for VS11, and will not touch – nor remove – the same for VS2010. A good example here is the Power Commands. The list is more or less in priority order. The focus is to get a setup which can be used for a complete coding experience for the whole ALM process. The list of course reflects what I use for my work , so it is by no means complete, and for some of the tools there are equally useful alternatives. Many components have not yet arrived with VS11 support.  I will add them as they arrive.  The components directly associated with Visual Studio from Microsoft should be common, see the Microsoft column. If you still need the VS2010 extensions, here they are: The extensions for VS 2010.   Components ready for VS 11, both upgrades and new ones Product Notes Latest Version License Applicable to Microsoft TFS Power Tools Beta 111 Side-by-side with TFS 2010 should work, but remove the Shell Extension from the TFS 2010 power tool first. March 2012(11.0.50321.0) Free TFS integration Yes ReSharper EAP for Beta 11 (updates very often, nearly daily) 7.0.3.261 pr. 16/3/2012 Free as EAP, Licensed later Coding & Quality No Power Commands1 Just reinstall, even if you already have it for VS2010. The reinstall will then apply it to VS 11 1.0.2.3 Free Coding Yes Visualization and Modelling SDK for beta Info here and here. Another download site and info here. Also download from MSDN Subscription site. Requires VS 11 Beta SDK 11 Free now, otherwise Part of MSDN Subscription Modeling Yes Visual Studio 11 Beta SDK Published 16.2.2012     Yes Visual Studio 11 Feedback tool1 Use this to really ease the process of sending bugs back to Microsoft. 1.1 Free as prerelase Visual Studio Yes             #1 Get via Visual Studio’s Tools | Extension Manager (or The Code Gallery). (From Adam : All these are auto updated by the Extension Manager in Visual Studio) #2 Works with ultimate only Components we wait for, not yet in a VS 11 version Product Notes Latest Version License Applicable to Microsoft       Coding Yes Inmeta Build Explorer     Free TFS integration No Build Manager Community Build Manager. Info here from Jakob   Free TFS Integration No Code Contracts Coming real soon   Free Coding & Quality Yes Code Contracts Editor Extensions     Free Coding & Quality Yes Web Std Update     Free Coding (Web) Yes (MSFT) Web Essentials     Free Coding (Web) Yes (MSFT) DotPeek It says up to .Net 4.0, but some tests indicates it seems to be able to handle 4.5. 1.0.0.7999 Free Coding/Investigation No Just Decompile Also says up to .net 4.0   Free Coding/Investigation No dotTrace     Licensed Quality No NDepend   Licensed Quality No tangible T4 editor     Lite version Free (Good enough) Coding (T4 templates) No Pex Moles are now integrated and improved in VS 11 as a new library called Fakes.     
Coding & Unit Testing Yes Components which are now integrated into VS 11 Product Notes Productivity Power Tools Features integrated into VS11, with a few exceptions, I don’t think you will miss those. Fakes  Was Moles in 2010. Fakes is improved and made into a product.  NuGet Manager Included in the install, but still an extension package. Info here. Product installation, upgrades and patches for VS/TFS 11   Product Notes Date Applicable to Visual Studio 11 & TFS 11 Beta This is the beta release, and you are free to download and try it out. March 2012 Visual Studio and TFS SQL Server 2008 R2 SP1 Cumulative Update 4 The TFS 11 requires the CU1 at least, but you should go up to at least CU4, since this update solves a ghost record problem that otherwise may cause your TFS database to not release records the way it should when you clean it up, see this post for more information on that issue.  Oct 2011 SQL Server 2008 R2 SP1

    Read the article

  • Certify August Updates

    - by Sadia2
    We have added some release and platform certifications to MOS Certify.
    Applications: Oracle Demantra Demand Management 7.3.1.5, Oracle Demantra Predictive Trade Planning 7.3.1.5, Oracle Demantra Sales and Operations Planning 7.3.1.5
    Database: Oracle Database Client 12.1.0.1.0, 11.2.0.4.0, Oracle Clusterware 11.2.0.4.0, Oracle Database 11.2.0.4.0, Oracle Real Application Clusters 11.2.0.4.0
    E-Business Suite: Oracle E-Business Suite 12.1.3, Oracle E-Business Suite 12.1.2, Oracle E-Business Suite 12.1.1, Oracle E-Business Suite 12.0.6, Oracle E-Business Suite 11.5.10.2
    Edge Applications: Oracle Transportation Management 6.3.2
    Enterprise Manager: Oracle Application Management Pack for Oracle E-Business Suite 12.1.0.1.0
    Fusion Middleware: Discoverer Administrator 11.1.1.6.0, Discoverer Desktop 11.1.1.6.0, Forms Builder 11.1.1.6.0, Oracle Application Development Framework 11.1.1.6.0, Oracle Application Development Runtime 11.1.1.6.0, Oracle Business Intelligence Publisher 11.1.1.6.0, Oracle Directory Services Manager 11.1.1.6.0, Oracle Forms 11.1.1.6.0, Oracle GoldenGate 11.1.1.1.0, 11.1.1.1.2, 11.1.1.1.1, Oracle GoldenGate Application Adapters 11.1.1.1.1, Oracle Identity and Access Management 11.1.2.0.0, 11.1.2.1.0, Oracle Identity Federation 11.1.1.6.0, Oracle Real-Time Decision Load Generator 11.1.1.7.0, Oracle Real-Time Decision Studio 11.1.1.7.0, Oracle Real-Time Decisions 11.1.1.6.0, Oracle Reports 11.1.1.6.0, Oracle Segmentation Server 11.1.1.6.0, Oracle Virtual Directory 11.1.1.6.0, Oracle Web Cache 11.1.1.6.0, Oracle WebCenter Content Imaging 11.1.1.8.0, Oracle WebCenter Content Inbound Refinery Server 11.1.1.8.0, Oracle WebCenter Content Records 11.1.1.8.0, Oracle WebCenter Content Rights 11.1.1.8.0, Oracle WebCenter Content UI 11.1.1.8.0, Oracle WebCenter Enterprise Capture 11.1.1.8.0, Oracle WebCenter Portal 11.1.1.8.0, Oracle WebCenter Sites 11.1.1.8.0, Oracle WebCenter Sites: CIP for EMC Documentum 11.1.1.8.0, Oracle WebCenter Sites: CIP for File Systems and MS SharePoint 11.1.1.8.0, Oracle WebCenter Sites: Community-Gadgets 11.1.1.8.0, Oracle WebCenter Sites: Explorer 11.1.1.8.0, Oracle WebCenter Universal Content Management 11.1.1.8.0, Reports Builder 11.1.1.6.0, Oracle WebCenter Content Records 11.1.1.8.0, Oracle WebCenter Content Rights 11.1.1.8.0, Oracle WebCenter Content UI 11.1.1.8.0, Oracle WebCenter Sites: Developer Tools 11.1.1.8.0
    FSGBU Insurance Group: Oracle Health Insurance Claims 2.13.3.0.0, 2.13.2.0.0, 2.13.1.0.0
    JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Tools 9.1.3.0, 9.1.2.0, 9.1.0.0
    JD Edwards World: JD Edwards World Service Enablement A93SE, A931SE
    PeopleSoft: PeopleSoft PeopleTools 8.52
    Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel CRM Desktop Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel HI Web Client 8.2.2.2.0, 8.1.1.9.0, Siebel Gateway Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Outlook Add-in Client 8.2.2.2.0, Siebel Remote Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Tools Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Web Server Extension 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0
{mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin;}

    Read the article

  • Planning an Event – SPS NYC

    - by MOSSLover
    I bet some of you were wondering why I am not going to any events for the most part in June and July (aside from volunteering at SPS Chicago).  Well I basically have no life for the next 2 months.  We are approaching the 11th hour of SharePoint Saturday New York City.  The event is slated to have 350 to 400 attendees.  This is the second year in a row I've helped run it with Jason Gallicchio.  It's amazingly crazy how much effort this event requires versus Kansas City.  It's literally 2x the volume of attendees, speakers, and sponsors, plus don't even get me started about volunteers.  So here is a bit of the break down… We have 30+ volunteers that Tasha Scott from the Hampton Roads Area will be managing the day of the event to do things like timing the speakers, handing out food, making sure people who did not sign up don't walk into the event until we get a count for fire code, registering people, watching the sharpies, watching the prizes, making sure attendees get to the right place, opening and closing the partition in the big room, moving chairs, moving furniture, etc… Then there is Jason, Greg, and I who will be making sure that the speakers, sponsors, and everything else are going smoothly in the background.  We need to make sure that everything is set up properly and in the right spot.  We also need to make sure signs are printed, schedules are created, and bags are stuffed with sponsor material.  Plus we need to send out emails to sponsors reminding them to send us the right information to post on the site for charity sessions, send us boxes with material to stuff bags, and we need to make sure that Michael Lotter gets their information for invoicing.  We also need to check that Michael has invoiced everyone and who has paid, because we can't order anything for the event without money.  Once people have paid we need to set up food orders, speaker and volunteer dinners, buy prizes, buy bags, buy speaker/volunteer/organizer shirts, etc… During this process we need all the abstracts from speakers, all the bios, pictures, shirt sizes, and other items so we can create schedules and order items.  We also need to keep track of who is attending the dinner the night before for volunteers and speakers and make sure we don't hit capacity.  Then there is attendee tracking and making sure that we don't hit too many attendees.  We need to make sure that attendees know where to go and what to do.  We have to make all kinds of random supply lists during this process and keep on track with a variety of lists and emails plus conference calls.  All in all it's a lot of work and I am trying to keep track of it all at the top so that we don't duplicate anything or miss anything.  So basically, all in all, if you don't see me around for another month, don't worry, I do exist. Right now if you look at what I'm doing I am traveling every Monday morning and Thursday night back and forth to Washington DC from New Jersey.  Every night I am working on organizational stuff for SharePoint Saturday New York City.  Every Tuesday night we are running an event conference call.  Every weekend I am either with family or my boyfriend and cat trying hard not to touch the event.  So all my time is pretty much work, event, and family/boyfriend.  I have 0 bandwidth for anything in the community.  If you compound that with my severe allergy problems in DC and a doctor's appointment every month plus a new med once a week, I'm lucky I am still standing and walking.
    So basically once July 30th hits, hopefully Jason Gallicchio, Greg Hurlman, and I will be able to breathe a little easier.  In case I forget to do this later: thank you Greg and Jason.  Thank you Tom Daly.  Thank you Michael Lotter.  Thank you Tasha Scott.  Thank you Kevin Griffin.  Thank you all the volunteers, speakers, sponsors, and attendees who will and have made this event a success.  Hopefully, we have enough time until next year to regroup, recharge, and make the event grow bigger in a different venue.  Awesome job everyone; we sold out within 3 days of registration and we still have several weeks to go.  Right now the waitlist is at 49 people with room to grow.  If you attend the event, thank all these people I mentioned above for making it possible.  It's going to be awesome, I know it, but I probably won't remember half of it due to the blur of things that we will all be taking care of the day of the event.  Catch you all at the end of July/early August, when I will attempt to post something useful and clever, possibly while wearing a fez. Technorati Tags: SPS NYC,SharePoint Saturday,SharePoint Saturday New York City

    Read the article

  • When does "proper" programming no longer matter?

    - by Kai Qing
    I've been a full time programmer for about 8 years now. Web based mostly, ranging in weird jobs for clients. Never anything I "want" to do. So my experience is limited to what I've been contracted to do, having no real incentive to master anything in particular. So here's my scenario and ultimately what I wonder about... I've been building an android game in my spare time. It's using the libgdx library so quite a bit of the heavy lifting is done for me. I don't read much of the docs cause unless it's in tutorial format I will just not care, and ultimately most of my questions have already been asked on stackoverflow. I get along fine and my game works as expected... Suspiciously well, even. So much so that I wonder why one should bother to be "proper" when coding if the end result is ultimately the same. To be more specific, I used a hashtable because I wanted something close to an associative array. Human readable key values. In other places to achieve similar things, I use a vector. I know libgdx has vector2 and vector3 classes, but I've never used them. When I come across weird problems and search stackoverflow for help, I see a lot of people just reaming the questions that use a certain datatype when another one is technically "proper." Like using an ArrayList because it does not require defined bounds versus re-defining an int[] with new known boundaries. Or even something trivial like this: for(int i = 0; i < items.length; i ++) { // do something } I know it evaluates item.length on every iteration. I just don't care. I know items will never be more than 15 to 20 items. So why bother caring if I evaluate items.length on every iteration? So I wonder - why does everyone get all up in arms over this? Who cares if I use a less efficient datatype to get the job done? I ran some tests to see how the app performs using the lazy, get it done fast and don't look back method I just described versus the proper, follow the tutorial and use the exact data types suggested by the community. The results: Same thing. Average 45 fps. I opened every app on the phone and galaxy tab. Same deal. No difference. My game is pretty graphic intensive. It's not like it's just a simple thing. I expected it to perform kind of badly since I don't care to optimize image assets or... well, you probably get the idea. I'm making the game for fun. As a joke, really. But in doing so I'm working outside the normal scope of my job, which is to always follow the rules and do it the right way. So to say, I am without bounds here and this has caused me to wonder why I ever really care to be "proper" So I guess my question to you is this: Is there a threshold when it no longer matters to be proper? Is there a lasting, longer term consequence to the lazy, get it done and don't look back route? Is it ok to say - "so long as it gets the job done, I don't care?" Disclaimer: When I program my game, I am almost always drunk. I do it to remember why I got into this stuff to begin with because the monotony of client based web work will make you hate being a programmer. I'm having a blast and my game is not crashing, tests well, performs well, looks good on all devices so far and has no noticeable negative impact on any of my testing devices. I expected failure because I was being so drunkenly careless with my code, but to my surprise, it had no noticeable impact. I am now starting to question the need to be careful. Help me regain the ability to care! ... or explain why it's not a bad thing to not care. 
Secondary disclaimer: I am aware of the benefits of maintainability. For myself and others. Agreed. But it's not like someone happening across my inefficient int[] loop won't know what it does. As an experienced programmer those kinds of things are just clear on sight. I document the complex stuff for myself knowing I was drunk and will probably need a reminder. Those notes would clarify any confusion for someone who might ever gaze upon my ridiculous game - though the reality is that either I maintain it myself or it fades into time. I'm ok with that. But if it doesn't slow the device down, or crash, then crossing the t's and dotting the i's might actually require more time than it's worth.
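    For reference, the "proper" variant of the loop quoted above simply hoists the bound out of the condition; a minimal illustrative sketch (the array name and size are placeholders):

      int[] items = new int[20];   // illustrative data, never more than ~20 entries
      int n = items.length;        // read the length once...
      for (int i = 0; i < n; i++) {
          // ...instead of re-reading items.length on every iteration
      }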

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature which cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a web browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provide the service that uses settings from a previous backup; configure and manage macro and query settings; manage and configure session management; and configure all the global settings of the service.
    Key Benefits of SharePoint Server Access Services
    Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage™ view is yet another feature that can help you easily analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location.
    Build databases effortlessly and quickly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. Find new pre-built templates which you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. Build your databases with new modular components: new Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified: create Navigation Forms and make your frequently used forms and reports more accessible without writing any code or logic.
    Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Easily spot trends and add emphasis to your data, quickly create coordinating database forms and reports, and bring the Web into your database.
    Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive.
    Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression-building experience with IntelliSense®. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not the objects that update your data.
    Key features of Access Services 2010
    - Access database content through a Web browser: Newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new Web databases. Users without an Access client can open Web forms and reports via a browser, and changes are automatically synchronized.
    - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks.
    - Codeless navigation: Use professional-looking, web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic.
    - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases.
    - Simplified formatting: By using Office themes you can create coordinating, professional forms and reports across your database. Simply select a familiar and great-looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.

    Read the article

  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I’d like to add my thoughts to the mix and ask you a question.  But I can’t give you a real understanding without telling you some history first. So far we’ve had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a little different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn’t like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement during the third keynote with the location of the next Summit. There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six month planning cycle by a brand new headquarters staff.  It wasn’t perfect but came off much better than I had dared to hope.  It also moved us out of the cookie cutter conferences that we used to do into a model where we have a lot more control.  I think you’ll all agree that the production values of our last few Summits have been fantastic.  That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft that had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we’d never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities. I’m one that looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we’re in Seattle has value to our attendees.  But that doesn’t tell the whole story.  There’s a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn’t know what it’s like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won’t really know the trade-offs. I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and we can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation we won’t really know. Another benefit that comes with a move is variety or diversity.  I learn more when I’m exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend. 
Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now. And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements for the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB compressed pages are fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well, i.e. we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll always only have the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e. changes are applied to both atomically. Recompression happens when changes are made to the compressed data. In order to minimize recompressions InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages. Now let's talk about the three major improvements that we made in MySQL 5.6.
    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion, and when we generate much more than normal redo we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib then you should set this parameter to false. This is a dynamic parameter.
    Compression Level: You can now set the compression level that zlib should choose to compress the data. The global parameter is innodb_compression_level - the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e. you can change it on the fly.
    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failure depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures then we should try to pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in an attempt to ensure that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well, that is, we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort two configuration variables are exposed. innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic, implies the percentage of compress ops that must fail before we start using padding. Value 0 has a special meaning of disabling the padding. innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic, the maximum percentage of an uncompressed data page that can be reserved as pad.
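    As an illustration (not from the original post), all three of these settings are dynamic and the compressed page size is still chosen per table; a minimal sketch, assuming innodb_file_per_table is enabled and the Barracuda file format is in use:

      -- Dynamic settings discussed above (MySQL 5.6)
      SET GLOBAL innodb_log_compressed_pages = OFF;            -- skip logging full compressed page images
      SET GLOBAL innodb_compression_level = 9;                 -- trade CPU for better compression (default 6)
      SET GLOBAL innodb_compression_failure_threshold_pct = 5; -- start padding after 5% of compress ops fail
      SET GLOBAL innodb_compression_pad_pct_max = 50;          -- reserve at most 50% of the page as pad

      -- The compressed page size is fixed per table at creation time (here 8K)
      CREATE TABLE t (id INT PRIMARY KEY, doc TEXT)
        ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;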

    Read the article

  • Global Perspective: Oracle AppAdvantage Does its Stage Debut in the UK

    - by Tanu Sood
    Global Perspective is a monthly series that brings experiences, business needs and real-world use cases from regions across the globe. This month's feature is a follow-up from last month's Global Perspective note from a well-known ACE Director based in EMEA. My first contribution to this blog was before Oracle Open World and I was quite excited about where this initiative would take me in my understanding of the value of Oracle Fusion Middleware. Rimi Bewtra from the Oracle AppAdvantage team came as promised to the Oracle ACE Director briefings and explained what this initiative was all about and I then asked the directors to take part in the new survey. The story was really well received and then at the SOA advisory board that many of these ACE Directors already take part in there was a further discussion on how this initiative will help customers understand the benefits of adoption. A few days later Rick Beers launched the program at a lunch of invited customer executives which included one from Pella who talked about their projects (a quick recap on that here). I wasn't able to stay for the whole event but what really interested me was that these executives understood the technology but were looking for how they could use it to drive their businesses. Lots of ideas were bubbling up in my head about how we can use this in user groups to help our members, and the timing was fantastic as just three weeks later we had UKOUG_Apps13, our flagship Applications conference in the UK. We had independently been working with Oracle marketing in the UK on an initiative called Apps Transformation to help our members look beyond just the application they use today. We have had a Fusion community page but felt the options open are now much wider than Fusion Applications; there are acquired applications, social, mobility and of course the underlying technology, Oracle Fusion Middleware. I was really pleased to be allowed to give the Oracle AppAdvantage story as a session in our conference and we are planning a special Apps Transformation event in March where I hope the Oracle AppAdvantage team will take part and we will have the results of the survey to discuss. But, life also came full circle for me. In my first post, I talked about Andrew Sutherland and his original theory that Oracle Fusion Middleware adoption had technical drivers. Well, Andrew was a speaker at our event and he gave a potted, tech-talk free update on Oracle Open World. Andrew talked about the Prevailing Technology Winds, and what is driving this today, and he talked about how in the past it was the move from simply automating processes (ERP etc), through the altering of those processes (SOA), and onto consolidation.
The next drivers are around the need to predict, both faster and more accurately; how to better exploit the information that we have available. He went on to talk about The Nexus of Forces: Social, Mobile, Cloud and Information – harnessing these forces of change with Oracle technology. Gartner really likes this concept and if you want to know more you can get their paper here. All this has made me think, and I hope it will make you too. Technology can help us drive our businesses better and understanding your needs can be the first step on your journey, which was the theme of our event in the UK. I spoke to a number of the delegates and I hope to share some of their stories in later posts. If you have a story to share, the survey is at: https://www.surveymonkey.com/s/P335DD3 About the Author: Debra Lilley, Fujitsu Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Debra has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Leader and then Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts Editor’s Note: Debra has kindly agreed to share her musings and experience in a monthly column on the Fusion Middleware blog so do stay tuned…

    Read the article

  • BPM 11g - Dynamic Task Assignment with Multi-level Organization Units

    - by Mark Foster
    I've seen several requirements to have a more granular level of task assignment in BPM 11g based on some value in the data passed to the process. Parametric Roles is normally the first port of call to try to satisfy this requirement, but in this blog we will show how a lot of use-cases can be satisfied by the easier to implement and flexible Organization Unit.
    The Use-Case
    Task assignment is to an approval group containing several users. At runtime, a location value in the input data determines which of the particular users the task is ultimately assigned to. In this case we use the Demo Community referenced in the SOA Admin Guide, and specifically the "LoanAnalyticGroup" which contains three users: "szweig", "mmitch" & "fkafka". In our scenario we would like to assign a task to "szweig" if the input data specifies that the location is "JapanCentral", to "fkafka" if the location is "JapanNorth", to "mmitch" if "JapanSouth", and to all of them if the location is "Japan".
    The Process
    A simple one-human-task process. In the output data association of the "Start" activity we need to set the value of the "Organization Unit" predefined variable based on the input data (note that the predefined variables can only be set on output data associations), and in the output data association of the human activity we reset the "Organization Unit" to empty; it is always good practice to ensure that the Organization Unit will not be used for any subsequent human activities for which we do not require it.
    Set Up the Organization Unit
    Log in to the BPM Workspace with an administrator user (weblogic/welcome1 in our case) and choose the "Administration" option. Within "Roles" assign the "ProcessOwner" swim-lane for our process to "LoanAnalyticGroup". Within "Organization Units" we can model our organization: "Root Organization Unit" as "Japan" and "Child Organization Unit" as "Central", "South" & "North" as shown. As described previously, add user "szweig" to "Central", "mmitch" to "South" and "fkafka" to "North".
    Test the Process
    Invalid Data: First let us test with invalid data in the input to see what the consequences are; here we use "X" as input. Looking at the instance we can see it has errored.
    Organization Unit Root Level Assignment: Now let us see what happens if we have "Japan" in the input data. Looking in the "flow trace" we can see that the task has been assigned, but who has the task been assigned to? Looking in the BPM Workspace for user "szweig", then for "mmitch", and then for "fkafka", we can see that with an Organization Unit at "Root" level we have successfully assigned the task to all users.
    Organization Unit Child Level Assignment: Now let us test with "Japan/North" in the input data. Looking in the "fkafka" workspace we see the task has been assigned (remember, he was associated with "JapanNorth"), but what about the workspace of "szweig"? No tasks assigned, and neither has "mmitch", just as we expected.
    Summary
    We have seen in this blog how to easily implement multi-level dynamic task routing using Organization Units, a common use-case and a simpler solution than Parametric Roles.

    Read the article

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
What exactly is preventing you (or anyone else) from changing their IP address and causing all sorts of headaches for ISPs and other Internet users? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
    The Question
    SuperUser reader Whitemage is curious about what’s preventing him from wantonly changing his IP address and causing trouble: An interesting question was asked of me and I did not know what to answer. So I’ll ask here. Let’s say I subscribed to an ISP and I’m using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access? For the sake of this argument, let’s say that this other IP address is also owned by the same ISP. Also, let’s assume that it’s possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that’s 3 addresses the ISP “loses” to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers are. Could they simply be using static ARPs? ACLs? Other simple mechanisms? Two things to investigate here: why can’t we just go around changing our addresses, and is the assignment process as wasteful as it seems?
    The Answer
    SuperUser contributor Moses offers some insight: Cable modems aren’t like your home router (i.e. they don’t have a web interface with simple point-and-click buttons that any kid can “hack” into). Cable modems are “looked up” and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can’t really be stolen. Cable modems also authenticate and cross-check settings with the ISP’s servers. The server has to tell the modem whether its settings (and location on the cable network) are valid, and simply sets them to what the ISP has configured (bandwidth, DHCP allocations, etc.). For instance, when you tell your ISP “I would like a static IP, please.”, they allocate one to the modem through their servers, and the modem allows you to use that IP. Same with bandwidth changes, for instance. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem.
    Could they simply be using static ARPs? ACLs? Other simple mechanisms? Every ISP is different, both in practice and how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACL and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my pay grade. I only got to work with the technician’s interface and do routine maintenance and service changes.
    What keeps me from changing this IP address to, let’s say, 60.61.62.75 and messing with another consumer’s internet access? Given the above, what keeps you from changing your IP to one that your ISP hasn’t specifically given to you is a server that is instructing your modem what it can and can’t do.
Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, then the server will simply tell your modem that it can’t have it. David Schwartz offers some additional insight with a link to a white paper for the really curious: Most modern ISPs (last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called “reverse path forwarding”. See BCP 38. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
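    The BCP 38 check David Schwartz describes is easy to picture as a lookup: the ISP's edge router only accepts a packet if its source address would be routed back out the interface it arrived on. Below is a minimal, purely illustrative Java sketch of that idea; the routing table, interface names and addresses are hypothetical, and real equipment does this in hardware against its actual routing tables.

        import java.util.Map;

        // Illustrative only: a toy model of the BCP 38 "reverse path forwarding" idea.
        // The prefixes, interface names and packet fields here are hypothetical.
        public class ReversePathCheck {

            // Which customer interface each address is routed to
            private static final Map<String, String> ROUTES = Map.of(
                    "60.61.62.63", "cust-port-17",   // your modem
                    "60.61.62.75", "cust-port-42");  // a neighbour's modem

            // Accept the packet only if its source address routes back out
            // the interface it arrived on.
            static boolean accept(String sourceIp, String arrivedOn) {
                return arrivedOn.equals(ROUTES.get(sourceIp));
            }

            public static void main(String[] args) {
                // Legitimate traffic from your own address
                System.out.println(accept("60.61.62.63", "cust-port-17")); // true
                // Spoofed traffic claiming the neighbour's address is dropped
                System.out.println(accept("60.61.62.75", "cust-port-17")); // false
            }
        }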

    Read the article

  • Application Development: Python or Java (or PHP)

    - by luckysmack
I'm looking to get into application development, such as Facebook or Android apps and games. I am doing this for fun and to learn. Once my skills are up to par I would like to have some side income from the apps, but I'm not banking on living off that (just so you know where I'm coming from and know what my end goals are). Currently I know and am familiar with PHP and frameworks such as cakephp and yii. However, I have been wanting to learn another language to broaden my horizons and to become a better developer. So I have narrowed it down to 2 languages: Python and Java (I can already hear people cringing at the difference in the languages I have chosen, but I have some reasons).
    Python: closer to PHP than Java. Cross-platform ability. Also great as a general scripting language and has many file-system-level benefits that PHP does not. Cleaner syntax, readability, blah blah and the list goes on. Python will work great for cross-platform apps and can be run on many OS's and is supported by Facebook for app development. But there is no support on Android (for full-fledged apps).
    Java: a much more strongly typed language, very robust community and corporate backing. Knowing Java is also good for personal marketability for enterprises, if you're into that. The main benefit here is that Java can write apps natively for Android and the apps can be ported for web versions to play on Facebook. So while I have seen many developers prefer Java of the two, Java has this significant advantage, where I can market my apps in both markets and in the future build more potential income. But like I said it is for fun. While money isn't the goal, it would still be nice.
    PHP: I'm putting this here because I know it already, and I'm sure a case could be made for it. It obviously works great for Facebook but, like Python, does not do so well on Android.
    While it's mostly the realm of 'application development' that appeals to me, I do find Android apps fairly interesting and something that has a ton of potential too. But then again Facebook has a ton more users and the apps can also potentially be more immersive (desktop vs. mobile). So this is why I'm kinda stuck on what route to choose: Python for Facebook and web apps, with likely faster development to production times, or Java, which can be developed for any of the platforms to make apps.
    Side note: I'm not really trying to get into 3D development, mostly 2D. And I also want to make an app with real-time play (WebSockets, etc). Someone mentioned Node.js to me for that but Python seems to be more globally versatile for my goals. So, to anyone that does Facebook or Android development in either language: what do you suggest? Any input is valuable and I do appreciate it. And sorry for being long winded.
    EDIT: as mentioned in one of the answers, my primary goal is gaming. Although I do have some plans for non-gaming apps such as general web-based and desktop-based ones. But gaming is my main goal with the possibility of income.
    EDIT: Another consideration could be Jython: writing Python code which is converted into Java bytecode. This would allow the ability to do Android apps using Python. I could be wrong though, I'm still looking into it.
    Update 1-26-11: I recently acquired a new job which required that I learn .NET using C#. I'm sure some of you are cringing already but I really like the whole system and how it all works together between desktop and web development.
But, as I am still very interested in Python, after some research I have decided I will learn Python as well as the IronPython implementation for .NET. But (again: I know...) since .NET is mostly a Windows thing and not as cross-compatible as I'd like, I will be learning Mono, which is a cross-platform implementation of .NET, where I can use what I learn at work using C# and what I want to learn, Python/IronPython. So while learning and writing C#/.NET @ work I will be learning Python - Mono - IronPython for what I want to do personally. And the benefit of them all being very closely related will help me out a lot, I think. What do you guys think? I almost feel like that should be another question, but there's not much of a question. Either way, you guys gave very helpful input.

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation online, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another. As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.) As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I’m certain that the minute I discard the book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically as the books have a way of taking over significant square footage in our house! Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. As many years that I have working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs, also, is the connection to community and the dialog that can ensue between people with common interests. With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip. 
Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid. My latest books (#9 and #10 which are milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the downloads for the book will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point. Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • "Yes, but that's niche."

    - by Geertjan
    JavaOne 2012 has come to an end though it feels like it hasn't even started yet! What happened, time is a weird thing. Too many things to report on. James Gosling's appearance at the JavaOne community keynote was seen, by everyone (which is quite a lot) of people I talked to, as the highlight of the conference. It was interesting that the software for the Duke's Choice Award winning Liquid Robotics that James Gosling is now part of and came to talk about is a Swing application that uses the WorldWind libraries. It was also interesting that James Gosling pointed out to the conference: "There are things you can't do using HTML." That brings me to the wonderful counter argument to the above, which I spend my time running into a lot: "Yes, but that's niche." It's a killer argument, i.e., it kills all discussions completely in one fell swoop. Kind of when you're talking about someone and then this sentence drops into the conversation: "Yes, but she's got cancer now." Here's one implementation of "Yes, but that's niche": Person A: All applications are moving to the web, tablet, and mobile phone. That's especially true now with HTML5, which is going to wipe away everything everywhere and all applications are going to be browser based. Person B: What about air traffic control applications? Will they run on mobile phones too? And do you see defence applications running in a browser? Don't you agree that there are multiple scenarios imaginable where the Java desktop is the optimal platform for running applications? Person A: Yes, but that's niche. Here's another implementation, though it contradicts the above [despite often being used by the same people], since JavaFX is a Java desktop technology: Person A: Swing is dead. Everyone is going to be using purely JavaFX and nothing else. Person B: Does JavaFX have a docking framework and a module system? Does it have a plugin system?  These are some of the absolutely basic requirements of Java desktop software once you get to high end systems, e.g., banks, defence force, oil/gas services. Those kinds of applications need a web browser and so they love the JavaFX WebView component and they also love the animated JavaFX charting components. But they need so much more than that, i.e., an application framework. Aren't there requirements that JavaFX isn't meeting since it is a UI toolkit, just like Swing is a UI toolkit, and what they have in common is their lack, i.e., natively, of any kind of application framework? Don't people need more than a single window and a monolithic application structure? Person A: Yes, but that's niche. In other words, anything that doesn't fit within the currently dominant philosophy is "niche", for no other reason than that it doesn't fit within the currently dominant philosophy... regardless of the actual needs of real developers. Saying "Yes, but that's niche", kills the discussion completely, because it relegates one side of the conversation to the arcane and irrelevant corners of the universe. You're kind of like Cobol now, as soon as "Yes, but that's niche" is said. What's worst about "Yes, but that's niche" is that it doesn't enter into any discussion about user requirements, i.e., there's so few that need this particular solution that we don't even need to talk about them anymore. Note, of course, that I'm not referring specifically or generically to anyone or anything in particular. 
Just picking up from conversations I've picked up on as I was scurrying around the Hilton's corridors while looking for the location of my next presentation over the past few days. It does, however, mean that there were people thinking "Yes, but that's niche" while listening to James Gosling pointing out that HTML is not the be-all and end-all of absolutely everything. And so this all leaves me wondering: How many applications must be part of a niche for the niche to no longer be a niche? And what if there are multiple small niches that have the same requirements? Don't all those small niches together form a larger whole, one that should be taken seriously, i.e., a whole that is not a niche?

    Read the article

  • Xobni Plus for Outlook [Review]

    - by The Geek
Overview
    Xobni Plus is an addin that will bring a sidebar to Outlook which allows you to search through your inbox and contacts a lot more easily. It provides the ability to search and keep track of your favorite social networks. Searching with Xobni is a lot more powerful than the default search feature in Outlook. It lets you drill down your searches to conversations, email, links, and attachments. It now supports Outlook 2010, both 32 & 64-bit versions.
    Installation & Setup
    Installation is easy; just follow the wizard. After completing the wizard you can tell your friends on Facebook and Twitter that you are now using it. You can also decide to join their Product Improvement Program if you want. After installation, when you open Outlook, Xobni appears as a sidebar on the right side. Don’t worry about it always being in the way, as you can hide it if you need more room for other Outlook functions. After Xobni free is installed, you can upgrade to the Plus version at any time. A new window will open up and you can use your Credit Card, PayPal, or redeem a code if you have one.
    Features & Use
    Where to begin with the number of features available in Xobni Plus? It really has an amazing amount of cool features. Of course you’ll have all of the features of the Free Version which we previously covered…and a lot more. After Xobni is installed you’ll notice a section for it on the Ribbon. From here you can search Xobni, show or hide the Sidebar, and change other options. It allows you to easily keep up with various social networks like Facebook, Twitter, and LinkedIn… Check out email analytics and contact ranks. Click on the Files Exchanged tab to search for specific attachments. Quickly search links exchanged with your contacts. Hover over a link to get a preview of what it entails. It gives you the ability to index all of your Yahoo mail as well, without the need for purchasing Yahoo Plus! Then your Yahoo messages appear in the Xobni sidebar. When you select a contact you can see related messages from your Yahoo account. Easily index all of your mail…including Yahoo mail, for better organization and faster search results. There are several options you can select to change the way Xobni works, from setting up your Yahoo email to indexing options and much more.
    Additional Features of Xobni Plus
    * Advanced Search Capabilities – filter results, Boolean & phrase search, ability to search Appointments & Tasks, Advanced Search Builder
    * Search unlimited PST data files
    * Xobni contacts in the compose screen
    * Find links exchanged with your contacts
    * View calendar appointments
    * One year premium tech support
    * No Ads!
    Performance
    We ran Xobni Plus on Outlook 2010 32-bit on a Dual-Core AMD Athlon system with 4GB of RAM and found it to run quite smoothly. However, we did notice it would sometimes slow down launching Outlook, especially if other apps are running at the same time.
    Product Support
    When you buy a license for Xobni Plus you get a full year of premium tech support. They provide a Questions and Answers page on their site where you can run a search query and answers appear instantly. You can contact support directly as a Plus member through their web form and they advise the turnaround time is 2 business days. However, when we tested it, we received a response within 24 hours. They also provide an FAQ and Community forum, and you can download the Owners Manual in PDF format from the support page.
Conclusion
    Xobni Plus is a very powerful addin for Outlook and includes a lot more features that we didn’t cover in this review. You can download the Xobni free edition, which includes an 8-day free trial of the Plus version. This provides a good way to start getting familiar with it. Then upgrade to Xobni Plus at any time for $29.95. Once you get started, you’ll find the sidebar is nicely laid out and intuitive to use. If you live out of Outlook during the day, Xobni Plus is a great addition for fast and powerful searches. It provides an easy way to keep all of your contacts and messages well organized and easy to find. Xobni Plus works with XP, Vista, and Windows 7 (32 & 64-bit editions) and with Outlook 2003, 2007 and both 32 & 64-bit editions of Outlook 2010.
    Download Xobni Plus
    Download Xobni Free Edition
    Rating
    Installation: 8
    Ease of Use: 8
    Features: 9
    Performance: 8
    Product Support: 8

    Read the article

  • Silverlight Cream Monday WP7 App Review # 2

    - by Dave Campbell
    Today's Review (alphabetic order): GooNews, Grocery Shopping List, Need for Speed, SurfCube, and United Nations News. I'm a day late if these are going to be 'Monday' posts, but there are lots of apps, lots of goodness, and lots of email, so I might try to do 2 a week, we'll see. So once again I've got a small review of 5 apps that are either on my phone or have been. Disclaimers at the end. In this Issue:   GooNews is a very cool app from Shawn Wildermuth (AgiliTrain). I don't know if he uses this as a demo during his instruction, but it definitely serves a purpose... wanna pick up the top news items from Google on a never-ending basis? ... this is it. You can add your own keyword searches, and send stories to InstaPaper or share via email. I like this because it brings me the news quickly and updated, and works great. GooNews is by AgiliTrain and is Free This was a request by the author, and actually surprised me. I'm a big one for lists, but I would have just done a OneNote list to SkyDrive and to my phone. This app is a lot more than that, but will take you some setup to make it be 'yours'. For obvious reasons, there are no unit prices on things, so you have to set that up to get some idea of the cost of what you're shopping for. But if you do that, you'll get a nice total. Lots of thought went into the various categories and you can add your own. There's a bit of animation on the category selection that's nice. He seems to have covered all the bases necessary to use this, even shopping 'plans' that can be saved, and emailing of lists. As I said, I'm more of a raw list person, but if you take the time to set this up, it should work very nicely for you. Grocery Shopping List is by Grocery Shopper and is $0.99 ($1.99 after Feb 1) with a free trial. This was my 2nd commercial game I bought, and the one I've played the most. I ran the trial, thought it worked great, and bought it. I've had a lot of fun with this... there's no gas pedal.. your foot is in the carbeurator from the GO!, and unless you wanna tap the screen and brake like a little girl, just hang onto the steering wheel (the phone), and guide your way through. Hours of fun and challenges here. I like this because it's got some challenge to it, and the cars seem to be very realistic in their reactions. Need for Speed Undercover is by Electronic Arts is $4.99 and has a free trial. SurfCube Browser is another app by the folks that did the GuitarTuner I reviewed on Monday. You have to see SurfCube to believe it. You've probably seen the YouTube video, if not check SilverlightCream number 1017. The app works very solid, and just as the video demonstrates. I downloaded and tried this, and it immediately did 2 things: bought it, and pinned it to my start page. I like this because it's fun to work with, and it works great as a browser. I'm about *this* close to replacing the IE tile on my front page with SurfCube. SurfCube Browser is by Kinabalu Innovation Limited and is $1.99 and has a free trial. Coming in with another News app is United Nations News by Justin Angel. This is definitely a news aggregator for 'grown ups'... news, photos, videos, and radio broadcsts from the international community all in one very slick app. This is an amazingly well thought-out and complete app. Even better yet, Justin has the code on CodePlex. A very well-done International news aggregator. United Nations News is by Justin Angel and is Free. A few disclaimers: Feel free to write me about your app and tell me about it. 
While it would be very cool to receive a whole bunch of xap files to review, at this point, for technical reasons, I'm unable to side-load my device. Since I plan on only doing this one day a week (twice if I find time), and only 5, I may never get caught up, so if you send me some info, be patient. Re: games ... remember I'm old... I'm from the era of Colossal Cave and Zork. Duke-Nukem 2D and Captain Comic were awesome. I don't own an XBOX or any other game system, so take game reviews from my perspective -- who knows, it may be refreshing :) I won't pay for an app or game just to try it. If you expect me to test-drive your app, it's going to have to have a Free Trial. I'm still playing with the format, comments are welcome. I decided I should alphabetize the list today... so there's no order implied Let me know what you think of the idea of doing reviews, or the layout/whatever, and Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding RedGate's decision to no longer offer a free version. Since then, the race has begun for a replacement from a provider that would stand by their promises to the community. Mono has an ongoing free alternative, which has been available for a long time. However, other vendors are stepping up to the plate with their own offerings.
    If Not Reflector, Then What?
    One of these vendors is Telerik. In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector. Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has. Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed. If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site.
    A Tall Tale; Prove It!
    With JustCode, you can view code in the .NET Framework or any other 3rd party library (that isn't well obfuscated). This demo depends on LINQ to Twitter, which you can download from CodePlex.com and create a reference, or install the package online as described in my previous post on NuGet. Regardless of the method, you'll have a project with a reference to LINQ to Twitter. Use a Console Project if you want to follow along with this demo.
    Note: If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4. The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter. You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting.
    Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter:
        var l2tCtx = new TwitterContext();
    If you're following along, add the code above to the Main method, which will look similar to this:
        using LinqToTwitter;

        namespace NuGetInstall
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var l2tCtx = new TwitterContext();
                }
            }
        }
    The code above doesn't really do anything, but it does give me something to show and a way to demonstrate how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key. As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class. Here's the result:
    Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code. The scenario with TwitterContext happens because you don't have the source code to the file. Visual Studio has always done this and you can experiment by selecting any .NET type, i.e. a string type, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier.
    In the previous figure, you only saw prototypes associated with the code, i.e. notice that the default constructor is empty. Again, this is normal because Visual Studio doesn't have the ability to decompile code.
However, that's the purpose of this post: showing you how JustCode fills that gap. To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu. The shortcut keys are Ctrl+1. After a brief pause, accompanied by a progress window, you'll see the metadata expand into full decompiled code. Notice below how the default constructor now has code as opposed to the empty member prototype in the original metadata:
    And Why is This So Different?
    Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works. The navigate-to functionality already exists and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code.
    Telerik is filling the Reflector/Red Gate gap by providing a supported alternative to decompiling code. Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation. Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik. Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter.
    Joe

    Read the article

  • PASS: Bylaw Change 2013

    - by Bill Graziano
PASS launched a Global Growth Initiative in the Summer of 2011 with the appointment of three international Board advisors. Since then we’ve thought and talked extensively about how we make PASS more relevant to our members outside the US and Canada. We’ve collected much of that discussion in our Global Growth site. You can find vision documents, plans, governance proposals, feedback sites, and transcripts of Twitter chats and town hall meetings. We also addressed these plans at the Board Q&A during the 2012 Summit.
    One of the biggest changes coming out of this process is around how we elect Board members. And that requires a change to the bylaws. We published the proposed bylaw changes as a red-lined document so you can clearly see the changes. Our goal in these bylaw changes was to address the changes required by the global growth initiatives, conduct a legal review of the document, and address other minor issues in the document. There are numerous small wording changes throughout the document. For example, we replaced every reference to “The Corporation” with the word “PASS” so it now reads “PASS is organized…”.
    Board Composition
    The biggest change in these bylaw changes is how the Board is composed and elected. This discussion starts in section VI.2. This section now says that some elected directors will come from geographic regions. I think this is the best way to make sure we give all of our members a voice in the leadership of the organization. The key parts of this section are:
        The remaining Directors (i.e. the non-Officer Directors and non-Vendor Appointed Directors) shall be elected by the voting membership (“Elected Directors”). Elected Directors shall include representatives of defined PASS regions (“Regions”) as set forth below (“Regional Directors”) and at minimum one (1) additional Director-at-Large whose selection is not limited by region. Regional Directors shall include, but are not limited to, two (2) seats for the Region covering Canada and the United States of America. Additional Regions for the purpose of electing additional Regional Directors and additional Director-at-Large seats for the purpose of expanding the Board shall be defined by a majority vote of the current Board of Directors and must be established prior to the public call for nominations in the general election. Previously defined Regions and seats approved by the Board of Directors shall remain in effect and can only be modified by a 2/3 majority vote by the then current Board of Directors.
    Currently PASS has six At-Large Directors elected by the members. These changes allow for a Regional Director position that is elected by the members but must come from a particular region. It also stipulates that there must always be at least one Director-at-Large who can come from any region.
    We also understand that PASS is currently a very US-centric organization. Our Summit is held in America, roughly half our chapters are in the US and Canada, and most of the Board members over the last ten years have come from America. We wanted to reflect that by making sure that our US and Canadian volunteers would continue to play a significant role by ensuring that two Regional seats are reserved specifically for Canada and the US. Other than that, the bylaws don’t create any specific regional seats. These rules allow us to create Regional Director seats but don’t require it.
We haven’t fully discussed what the criteria will be in order for a region to have a seat designated for it or how many regions there will be. In our discussions we’ve broadly discussed regions for:
    * United States and Canada
    * Europe, Middle East, and Africa (EMEA)
    * Australia, New Zealand and Asia (also known as Asia Pacific or APAC)
    * Mexico, South America, and Central America (LATAM)
    As you can see, our thinking is that there will be a few large regions. I’ve also considered a non-North America region that we can gradually split into the regions above as our membership grows in those areas. The regions will be defined by a policy document that will be published prior to the elections. I’m hoping that over the next year we can begin to publish more of what we do as Board-approved policy documents.
    While the bylaws only require a single non-region-specific At-Large Director, I would expect we would always have two. That way we can have one in each election. I think it’s important that we always have one seat open that anyone who is eligible to run for the Board can contest. The Board is required to have any regions defined prior to the start of the election process.
    Board Elections – Regional Seats
    We spent a lot of time discussing how the elections would work for these Regional Director seats. Ultimately we decided that the simplest solution is that every PASS member should vote for every open seat. Section VIII.3 reads:
        Candidates who are eligible (i.e. eligible to serve in such capacity subject to the criteria set forth herein or adopted by the Board of Directors) shall be designated to fill open Board seats in the following order of priority on the basis of total votes received: (i) full term Regional Director seats, (ii) full term Director-at-Large seats, (iii) not full term (vacated) Regional Director seats, (iv) not full term (vacated) Director-at-Large seats. For the purposes of clarity, because of eligibility requirements, it is contemplated that the candidates designated to the open Board seats may not receive more votes than certain other candidates who are not selected to the Board.
    We debated whether to have multiple ballots or one single ballot. Multiple-ballot elections get complicated quickly. Let’s say we have a ballot for US/Canada and one for Region 2. After that we’d need a mechanism to merge those two together and come up with the winner of the at-large seat, or have another election for the at-large position. We think the best way to do this is a single ballot, putting the highest vote-getters into the most restrictive seats. Let’s look at an example. There are seats open for Region 1, Region 2 and at-large. The election results are as follows:
    * Candidate A (eligible for Region 1) – 550 votes
    * Candidate B (eligible for Region 1) – 525 votes
    * Candidate C (eligible for Region 1) – 475 votes
    * Candidate D (eligible for Region 2) – 125 votes
    * Candidate E (eligible for Region 2) – 75 votes
    In this case, Candidate A is the winner for Region 1 and is assigned that seat. Candidate D is the winner for Region 2 and is assigned that seat. The at-large seat is filled by the highest remaining vote-getter, which is Candidate B. The key point to understand is that we may have a situation where a person with a lower vote total is elected to a regional seat and a person with a higher vote total is excluded. This will be true whether we had multiple ballots or a single ballot.
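    To make the single-ballot rule concrete, here is a minimal Java sketch (Java 16+) of the seat-assignment logic described in the example above. It uses the hypothetical candidates and vote totals from that example; it is an illustration of the rule, not code PASS uses.

        import java.util.*;

        public class SeatAssignment {
            record Candidate(String name, String eligibleRegion, int votes) {}

            public static void main(String[] args) {
                // Hypothetical results matching the example above
                List<Candidate> results = new ArrayList<>(List.of(
                        new Candidate("A", "Region 1", 550),
                        new Candidate("B", "Region 1", 525),
                        new Candidate("C", "Region 1", 475),
                        new Candidate("D", "Region 2", 125),
                        new Candidate("E", "Region 2", 75)));

                // Work through the candidates from the highest vote total down
                results.sort(Comparator.comparingInt(Candidate::votes).reversed());

                List<String> openRegionalSeats = new ArrayList<>(List.of("Region 1", "Region 2"));
                int openAtLargeSeats = 1;
                Map<String, String> seats = new LinkedHashMap<>(); // seat -> winner

                for (Candidate c : results) {
                    if (openRegionalSeats.remove(c.eligibleRegion())) {
                        seats.put(c.eligibleRegion(), c.name()); // regional seats have priority
                    } else if (openAtLargeSeats > 0) {
                        seats.put("At-Large", c.name());         // then the at-large seat
                        openAtLargeSeats--;
                    }
                }

                // Prints {Region 1=A, At-Large=B, Region 2=D}: D wins Region 2
                // with fewer votes than C, who gets no seat at all.
                System.out.println(seats);
            }
        }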
Board Elections – Vacant Seats
    The other change to the election process is for vacant Board seats. The actual changes are sprinkled throughout the document. Previously we didn’t have a mechanism that allowed for an election of a Board seat that we knew would be vacant in the future. The most common case is when a Board member moves to an Officer role in the middle of their term. One of the key changes is to allow the number of votes members have to match the number of open seats. This allows each voter to express their preference on all open seats. This only applies when we know about the opening prior to the call for nominations. This all means that if there’s a seat that will be open at the start of the next Board term, and we know about it prior to the call for nominations, we can include that seat in the elections. Ultimately, the aim is to have PASS members decide who sits on the Board in as many situations as possible.
    We discussed the option of changing the bylaws to just take the next highest vote-getter in all other cases. I think that’s wrong for the following reasons:
    * All voters aren’t able to express an opinion on all candidates. If there are five people running for three seats, you can only vote for three. You have no way to express your preference between #4 and #5.
    * Different candidates may have different information about the number of seats available. A person may learn that a Board member plans to resign at the end of the year prior to that information being made public. They may understand that the top four vote-getters will end up on the Board while the rest of the members believe there are only three openings. This may affect someone’s decision to run. I don’t think this creates a transparent, fair election.
    * Board members may use their knowledge of the election results to decide whether to remain on the Board or not. Admittedly this one is unlikely, but I don’t want to create a situation where this accusation can be leveled.
    I think the majority of vacancies in the future will be handled through elections. The bylaw section quoted above also indicates that partial term vacancies will be filled after the full term seats are filled.
    Removing Directors
    Section VI.7 on removing directors has always had a clause that allowed members to remove an elected director. We also had a clause that allowed appointed directors to be removed. We added a clause that allows the Board to remove for cause any director with a 2/3 majority vote. The updated text reads:
        Any Director may be removed for cause by a 2/3 majority vote of the Board of Directors whenever in its judgment the best interests of PASS would be served thereby. Notwithstanding the foregoing, the authority of any Director to act as in an official capacity as a Director or Officer of PASS may be suspended by the Board of Directors for cause. Cause for suspension or removal of a Director shall include but not be limited to failure to meet any Board-approved performance expectations or the presence of a reason for suspension or dismissal as listed in Addendum B of these Bylaws.
    The first paragraph is updated and the second and third are unchanged (except cleaning up language). If you scroll down and look at Addendum B of these bylaws you find the following. Cause for suspension or dismissal of a member of the Board of Directors may include:
    * Inability to attend Board meetings on a regular basis.
    * Inability or unwillingness to act in a capacity designated by the Board of Directors.
    * Failure to fulfill the responsibilities of the office.
* Inability to represent the Region elected to represent.
    * Failure to act in a manner consistent with PASS's Bylaws and/or policies.
    * Misrepresentation of responsibility and/or authority.
    * Misrepresentation of PASS.
    * Unresolved conflict of interests with Board responsibilities.
    * Breach of confidentiality.
    The line about the inability to represent your Region is what we added to the bylaws in this revision. We also added a clause to section VII.3 allowing the Board to remove an officer. That clause is much less restrictive. It doesn't require cause and only requires a simple majority:
        The Board of Directors may remove any Officer whenever in their judgment the best interests of PASS shall be served by such removal.
    Other
    There are numerous other small changes throughout the document.
    * Proxy voting. The laws around how members and Board members may vote by proxy are specific in Illinois law. PASS is an Illinois corporation and is subject to Illinois laws. We changed section IV.5 to come into compliance with those laws. Specifically this says you can only vote through a proxy if you have a written proxy through your authorized attorney.
    * English language proficiency. As we increase our global footprint we come across more members that aren't native English speakers. The business of PASS is conducted in English and it's important that our Board members speak English. If we get big enough to afford translators, we may be able to relax this, but right now we need English language skills for effective Board members.
    * Committees. The language around committees in section IX is old and dated. Our lawyers advised us to clean it up. This section specifically applies to any committees that the Board may form outside of portfolios. We removed the term limits, quorum and vacancies clause. We don't currently have any committees that this would apply to. The Nominating Committee is covered elsewhere in the bylaws.
    * Electronic Votes. The change allows the Board to vote via email but the results must be unanimous. This is to conform with Illinois state law.
    * Immediate Past President. There was no mechanism to fill the IPP role if an outgoing President chose not to participate. We changed section VII.8 to allow the Board to invite any previous President to fill the role by majority vote.
    * Nominations Committee. We've opened the language to allow for the transparent election of the Nominations Committee as outlined by the 2011 Election Review Committee.
    * Revocation of Charters. The language surrounding the revocation of charters for local groups was flagged by the lawyers. We have allowed for the local user group to make all necessary payments before any required return of items to PASS is considered.
    * Bylaw notification. We've spent countless meetings working on these bylaws with the intent of not opening them again any time in the near future. Should the bylaws be opened again, we have included a clause ensuring that the PASS membership is involved. I'm proud that the Board has remained committed to transparency and accountability to members. This clause will require that same level of commitment in the future even when all the current Board members have rolled off.
    I think that covers everything. I'd encourage you to look through the red-line document and see the changes. It's helpful to look at the language that's being removed and the language that's being added. I'm happy to answer any questions here or you can email them to [email protected].

    Read the article

  • Session Report - Java on the Raspberry Pi

    - by Janice J. Heiss
On mid-day Wednesday, the always colorful Oracle Evangelist Simon Ritter demonstrated Java on the Raspberry Pi at his session, “Do You Like Coffee with Your Dessert?”. The Raspberry Pi consists of a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. “I don't think there is a single feature that makes the Raspberry Pi significant,” observed Ritter, “but a combination of things really makes it stand out. First, it's $35 for what is effectively a completely usable computer. You do have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM (Advanced RISC Machine and Acorn RISC Machine) processor is noteworthy, because it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. When you add in the enormous community support, it offers a great platform for teaching everyone about computing.” Some 200 enthusiastic attendees were present at the session which had the feel of Simon Ritter sharing a fun toy with friends. The main point of the session was to show what Oracle was doing to support Java on the Raspberry Pi in a way that is entertaining and fun. Ritter pointed out that, in addition to being great for teaching, it’s an excellent introduction to the ARM architecture, and runs well with Java and will get better once it has official hard float support. The possibilities are vast. Ritter explained that the Raspberry Pi Project started in 2006 with the goal of devising a computer to inspire children; it drew inspiration from the BBC Micro literacy project of 1981 that produced a series of microcomputers created by the Acorn Computer company. It was officially launched on February 29, 2012, with a first production of 10,000 boards. There were 100,000 pre-orders in one day; currently about 4,000 boards are produced a day. Ritter described the specification as follows:
    * CPU: ARM 11 core running at 700MHz, Broadcom SoC package; can now be overclocked to 1GHz (without breaking the warranty!)
* Memory: 256MB
    * I/O: HDMI and composite video; 2 x USB ports (Model B only); Ethernet (Model B only); header pins for GPIO, UART, SPI and I2C
    He took attendees through a brief history of ARM Architecture:
    * Acorn BBC Micro (6502 based) – not powerful enough for Acorn’s plans for a business computer
    * Berkeley RISC Project – the UNIX kernel only used 30% of the instruction set of the Motorola 68000; more registers, fewer instructions (register windows); one chip architecture to come from this was… SPARC
    * Acorn RISC Machine (ARM) – 32-bit data, 26-bit address space, 27 registers; the first machine was the Acorn Archimedes
    * Spin-off from Acorn, Advanced RISC Machines
    Next he presented its features:
    * 32-bit RISC architecture – ARM accounts for 75% of embedded 32-bit CPUs today; 6.1 billion chips sold last year (zero manufactured by ARM)
    * Abstract architecture and microprocessor core designs – the Raspberry Pi is ARM11 using the ARMv6 instruction set
    * Low power consumption – good for mobile devices; the Raspberry Pi can be powered from a 700mA 5V-only PSU; the Raspberry Pi does not require a heatsink or fan
    He described the current ARM technology:
    * ARMv6 – ARM 11, ARM Cortex-M
    * ARMv7 – ARM Cortex-A, ARM Cortex-M, ARM Cortex-R
    * ARMv8 (announced) – will support 64-bit data and addressing
    He next gave the Java specifics for ARM, starting with floating point operations:
    * Despite being an ARMv6 processor it does include an FPU – the FPU only became standard as of ARMv7
    * FPU (Hard Float, or HF) is much faster than a software library
    * Linux distros and the Oracle JVM for ARM assume no HF on ARMv6 – a special build of both is needed; a Raspbian distro build is now available; the Oracle JVM is in the works, release date TBD
    Not So RISC – performance improvements:
    * DSP Enhancements
    * Jazelle
    * Thumb / Thumb2 / ThumbEE
    * Floating Point (VFP)
    * NEON
    * Security Enhancements (TrustZone)
    He spent a few minutes going over the challenges of using Java on the Raspberry Pi and covered:
    * Sound
    * Vision
    * Serial (TTL UART)
    * USB
    * GPIO
    To implement sound with Java he pointed out:
    * Sound drivers are now included in new distros
    * Java Sound API – remember to add audio to the user’s groups; some bits work, others not so much
    * Playing (the right format) WAV file works
    * Using MIDI hangs trying to open a synthesizer
    * FreeTTS text-to-speech – should work once sound works properly
    He turned to JavaFX on the Raspberry Pi:
    * Currently internal builds only – will be released as a technology preview soon
    * Work involves an optimal implementation of the Prism graphics engine – X11?
    * Once the JavaFX implementation is completed there will be little of concern to developers – it’s just Java (WORA)
    He explained the basis of the Serial Port:
    * UART provides TTL level signals (3.3V)
    * RS-232 uses 12V signals
    * Use a MAX3232 chip to convert
    * Use this for access to the serial console
    He summarized his key points. The Raspberry Pi is a very cool (and cheap) computer that is great for teaching, a great introduction to ARM that works very well with Java and will work better in the future. The opportunities are limitless. For further info, check out Raspberry Pi User Guide by Eben Upton and Gareth Halfacree. From there, Ritter tried out several fun demos, some of which worked better than others, but all of which were greeted with considerable enthusiasm and support and good humor (even when he ran into some glitches). All in all, this was a fun and lively session.
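    Ritter noted that playing a correctly formatted WAV file works with the Java Sound API on the Pi. As a minimal sketch of that kind of test (the file path is hypothetical, a PCM-encoded WAV is assumed since the talk stressed using "the right format", and the user must be in the audio group as mentioned above):

        import javax.sound.sampled.*;
        import java.io.File;

        // Minimal Java Sound playback test of the kind described in the session notes.
        public class PlayWav {
            public static void main(String[] args) throws Exception {
                File wav = new File("/home/pi/test.wav");   // hypothetical path
                try (AudioInputStream in = AudioSystem.getAudioInputStream(wav)) {
                    AudioFormat format = in.getFormat();
                    SourceDataLine line = AudioSystem.getSourceDataLine(format);
                    line.open(format);
                    line.start();

                    byte[] buffer = new byte[4096];
                    int read;
                    while ((read = in.read(buffer, 0, buffer.length)) > 0) {
                        line.write(buffer, 0, read);         // push samples to the audio line
                    }
                    line.drain();                            // wait for playback to finish
                    line.close();
                }
            }
        }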

    Read the article

  • Loading XML file containing leading zeros with SSIS preserving the zeros

    - by Compudicted
Visiting the MSDN SQL Server Integration Services Forum, oftentimes I could see that people would pop up asking this question: “why am I not able to load an element from an XML file that contains zeros so that the leading/trailing zeros remain intact?”. I started to suspect that such a trivial and often-required operation is perhaps being misunderstood by the developer community. I would also like to add that the whole state of affairs surrounding XML today is probably also going to be increasingly affected by a movement of people who dislike XML in general; many aspects of it, such as XSD and XSLT, invoke a negative reaction at best. Nevertheless, XML is in wide use today and its importance as a bridge between diverse systems is ever increasing. Therefore, I decided to write up an example of loading an arbitrary XML file that contains leading zeros in one of its elements into a table in a SQL Server database using SSIS, so that the leading zeros are preserved, keeping the goal of simplicity in mind.
    To start off, bring up BIDS (running as admin) and add a new Data Flow Task (DFT). This DFT will serve as the container for our XML processing elements (besides, the XML Source is not available anywhere else other than from within the DFT). Double-click your DFT and drag and drop the XML Source component from the Tool Box’s Data Flow Sources. Now, let the fun begin! Being inspired by the upcoming Christmas, I created a simple XML file with one set of data that contains an imaginary SSN for Rudolph containing several leading zeros, like 0000003. This file can be viewed here.
    To configure the XML Source it is of course quite intuitive to point it to our XML file; next, what the XML Source needs is either an embedded schema (XSD) or it can generate one for us. Lacking one, I opted to auto-generate it, and I ended up with an XSD that looked like:
        <?xml version="1.0"?>
        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="XMasEvent">
            <xs:complexType>
              <xs:sequence>
                <xs:element minOccurs="0" name="CaseInfo">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element minOccurs="0" name="ID" type="xs:unsignedByte" />
                      <xs:element minOccurs="0" name="CreatedDate" type="xs:unsignedInt" />
                      <xs:element minOccurs="0" name="LastName" type="xs:string" />
                      <xs:element minOccurs="0" name="FirstName" type="xs:string" />
                      <xs:element minOccurs="0" name="SSN" type="xs:unsignedByte" /> <!-- Becomes string -->
                      <xs:element minOccurs="0" name="DOB" type="xs:unsignedInt" />
                      <xs:element minOccurs="0" name="Event" type="xs:string" />
                      <xs:element minOccurs="0" name="ClosedDate" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>
    As an aside on the XML file: if your XML file does not contain the outer node (<XMasEvent>) then you may end up in a situation where you see just one field in the output. Now please note that the SSN element’s data type was chosen to be unsignedByte (and this is for a reason). The reason stems from the fact that all the figures in the element are digits, which is good, but this is not exactly what we need, because if we attempt to load the data with this XSD then we are going to either get errors on the destination or, most typically, lose the leading zeros. So the next intuitive choice is to change the data type to string.
Besides, if an SSIS package was already created based on this XSD and the data type change was done thereafter, one should reset the metadata by right-clicking the XML Source and choosing “Advanced Editor”, in which there is a refresh button at the bottom left that will do the trick. So far so good, we are ready to load our XML file. Well, actually yes and no: in my experience, some data conversion is typically required. So depending on your data destination you may need to tweak the data types targeted. Let’s add a Data Conversion Task to our DFT. Your package should look like:
    To make the story short, I will only cover the SSN field. In my data source the target SQL table has it as nchar(10) and we chose string in our XSD (yes, this is a big difference); under such circumstances SSIS will complain. So we will go and change the data type of SSN by making it a Unicode String (DT_WSTR), “World String” per se. The conversion should look like:
    A peek at the Metadata:
    We are almost there; now all we need is to configure the destination. For simplicity I chose SQL Server Destination. The mapping is a breeze. Hit F5 and I am able to insert my data into SQL Server now! Checking the zeros – they are all intact!
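    The XSD type choice is the whole story here. As a quick illustration of why numeric typing drops the padding, here is a small Java snippet (not part of the SSIS package; the SSN value is the made-up one from the example) showing what happens when "0000003" is treated as a number versus as a string:

        public class LeadingZeros {
            public static void main(String[] args) {
                String ssn = "0000003";               // value exactly as it appears in the XML element

                int asNumber = Integer.parseInt(ssn); // numeric typing: the padding is gone
                System.out.println(asNumber);         // prints 3

                System.out.println(ssn);              // string typing keeps it: prints 0000003

                // Re-padding afterwards is only possible if the original width is known
                System.out.println(String.format("%07d", asNumber)); // prints 0000003
            }
        }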

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What's the safest and most efficient way to get the job done? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which hold some backups; these backups contain work and personal files. I always had problems when I needed to physically destroy them to make sure no one could reuse them. Breaking them is dangerous – pieces can fly fast and may cause harm. Scratching them badly is what I always do, but it takes a long time and I still managed to read some of the data on the scratched CDs/DVDs. What's the way to physically destroy a CD/DVD safely? How should he approach the problem? The Answer SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles CDs – look online for CD shredders. This is the right option if you end up doing this routinely. I don't do this very often – for small-scale destruction I favour a pair of tin snips – they have enough force to cut through a CD, yet are blunt enough to cause small cracks along the shear line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing through the plastic, and these work magnificently. Do it in a bag, because this generates sparkly bits. There's also the fun, and probably dangerous, way – find yourself an old microwave and microwave them. I would suggest doing this in a well-ventilated area, of course, and not using your mother's good microwave. There are a lot of videos of this on YouTube – such as this one (done in a kitchen… using his mom's microwave). This results in a very thoroughly destroyed CD in every respect. If I were an evil hacker mastermind, this is what I'd do. The other options are better for the rest of us. Another contributor, Keltari, notes that the only safe (and DoD-approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc. CDs and DVDs. If you have a microscope handy, put the disc under it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While that would, technically, completely eliminate the data, it would also leave no record of the disc having existed. And in some places, like the DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it's no different than using sandpaper on the writable side until the data is gone. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here. 
    

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010.  Specifically, I want to generate some code based off of an Entity Framework model and tie it to an actual data source.  I'm not going to take the easy way and tie it to a SQL Server data source, though; I will tie it to an XML data file instead.  Why?  Well, why not?  This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us down a path less travelled and maybe we'll learn a few things along the way.  Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code.  To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) – I don't want the NFL to come down and sue me for using their name in this totally football-related article.  The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played: Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position.  The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

    <MFL>
      <Position/>
      <Position/>
      <Position/>
      <Team>
        <Player>
          <Statistic/>
        </Player>
      </Team>
    </MFL>

    Statistic will be under its associated Player node, and Player will be under its associated Team node – no need to have an Id to reference it if we know it will always fall under its parent.  Position, however, is more of a lookup value that will not have any hierarchical relationship to the player.  In fact, the Position data itself may live in a completely different XML file (something I'd like to play around with), so in any case a player will need to reference its position by Id. So now that we have a simple data model laid out, I would like to generate two things based on it: (1) a class for each entity, with properties corresponding to each entity property, and (2) an IO class with methods to get data for each entity – all instances, by Id, or by parent. Now, my experience with code generation in the past has consisted of writing little apps that use the CodeDOM directly to regenerate code on demand (or using tools like CodeSmith).  Surely there has got to be a more fun way to do this, given that we are using the Entity Framework, which already has built-in code generation for SQL Server support.  Let's start with that built-in stuff to give us a base to work off of.  Right-click anywhere on the canvas of our model and select Add Code Generation Item: Just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more.  I mean a TON more.  Way too much complicated code was generated. Now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to, so let's get ready to do some stumbling through that text template (.tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no IntelliSense, no nothing!  
That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog.  Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in vi (look it up, kids).  Using the Extension Manager (available under the Tools menu), I did a quick search for “tt editor” in the Online Gallery and quickly found the Tangible T4 Editor: Downloading and installing this was a breeze, and after doing so I got color coding and IntelliSense while editing .tt files.  If you will be doing any customizing of .tt files, I highly recommend installing this extension.  Next, we'll see if that is enough help for us to tweak that .tt file to do the kind of code generation that we want.
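    Before tweaking the template, here is a rough, hand-written C# sketch of the kind of output I am ultimately after: one class per entity plus a bare-bones IO class that reads them from the MFL XML with LINQ to XML. The attribute names (Id, Name, PositionId, Season, Touchdowns) are placeholders of my own invention, not something the generated code or the real file dictates:

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    // One plain class per entity in the model.
    public class Position  { public int Id { get; set; } public string Name { get; set; } }
    public class Statistic { public int Season { get; set; } public int Touchdowns { get; set; } }
    public class Player
    {
        public string Name { get; set; }
        public int PositionId { get; set; }                 // lookup by Id, as in the model
        public List<Statistic> Statistics { get; set; }     // nested under the player
    }
    public class Team
    {
        public string Name { get; set; }
        public List<Player> Players { get; set; }           // nested under the team
    }

    // A bare-bones IO class: "get all" per entity, parents populate their children.
    public static class MflIO
    {
        public static List<Position> GetPositions(string path)
        {
            return XDocument.Load(path).Root
                .Elements("Position")
                .Select(p => new Position
                {
                    Id = (int)p.Attribute("Id"),
                    Name = (string)p.Attribute("Name")
                })
                .ToList();
        }

        public static List<Team> GetTeams(string path)
        {
            return XDocument.Load(path).Root
                .Elements("Team")
                .Select(t => new Team
                {
                    Name = (string)t.Attribute("Name"),
                    Players = t.Elements("Player")
                        .Select(p => new Player
                        {
                            Name = (string)p.Attribute("Name"),
                            PositionId = (int)p.Attribute("PositionId"),
                            Statistics = p.Elements("Statistic")
                                .Select(s => new Statistic
                                {
                                    Season = (int)s.Attribute("Season"),
                                    Touchdowns = (int)s.Attribute("Touchdowns")
                                })
                                .ToList()
                        })
                        .ToList()
                })
                .ToList();
        }
    }

    If the template can be coaxed into emitting something along these lines for every entity in the model, the WPF data-binding exercise later on should have very little hand-written plumbing left to do.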

    Read the article
