Search Results

Search found 17047 results on 682 pages for 'architecture design patt'.


  • Finding nuggets in ARC discussions

    - by alanc
    A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they’re back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what’s now the Sun Systems Group division of Oracle. Recently one of these reviews was held (via e-mail discussion) to evaluate a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an “oldfind” program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to work around. In Solaris 11, though, we still ship the find descended from SVR4 as /usr/bin/find, and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the question of whether we should add oldfind, and if so, what we should call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command – for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case, however, it seemed it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind rather than of /usr/bin/find. Thus if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as “gold find” than as “g old find”. One of the concerns we often discuss in ARC is whether a change is likely to be understood by users or whether it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn’t resist putting forth a hypothetical support call for this command:

    “Hello, Oracle Solaris Support, how may I help you?”
    “My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn’t find gold.”
    “Did he get the binutils package too?”
    “No, he just said findutils. Do we need binutils?”
    “Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package.”
    “How much does Oracle charge for that package?”
    “It’s free for Solaris users.”
    “You mean Oracle ships packages of gold to customers for free?”
    “Yes, if you get the binutils package, it includes GNU gold.”
    “New gold? Is that some sort of alchemy, turning stuff into gold?”
    “Not new gold, gold from the GNU project.”
    “Oracle’s taking gold from the GNU project and shipping it to me?”
    “Yes, if you get binutils, that package includes gold along with the other tools from the GNU project.”
    “And GNU doesn’t mind Oracle taking their gold and giving it to customers?”
    “No, GNU is a non-profit whose goal is to share their software.”
    “Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?”
    “Oh, Google donated it to them.”
    “Ah! So Oracle will give me the gold that GNU got from Google!”
    “Yes, if you get the package from us.”
    “How do I get the package with the gold?”
    “Just run pkg install binutils and it will put it on your disk.”
    “We’ve got multiple disks here - which one will it put it on?”
    “The one with the system image - do you know which one that is?”
    “Well, the note from the admin says the system is on the first disk and the users are on the second disk.”
    “Okay, so it should go on the first disk then.”
    “And where will I find the gold?”
    “It will be in the /usr/bin directory.”
    “In the user’s bin? So that’s on the second disk?”
    “No, it would be on the system disk, with the other development tools, like make, as, and what.”
    “So what’s on the first disk?”
    “Well, if the system image is there, the commands should all be there.”
    “All the commands? Not just what?”
    “Right, all the commands that come with the OS, like the shell, ps, and who.”
    “So who’s on the first disk too?”
    “Yes. Did your admin say when he’d be back?”
    “No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me.”
    “I can’t imagine why.”
    “Oh, is why a command too?”
    “No, _why was a Ruby programmer.”
    “Ruby? Do you give those away with the gold too?”
    “Yes, but it comes in the ruby package, not binutils.”
    “Oh, I’ll have to have my admin get that package too! Thanks!”

    Needless to say, we decided this might not be the best idea. Since GNU findutils hasn’t needed a serious bug fix to the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so adding oldfind didn’t seem really necessary, and we passed on including it when we update to the new findutils release. [Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can’t explain why goldfind is the old GNU find, but gold is the new GNU ld.]


  • OIM 11g - Multi Valued attribute reconciliation of a child form

    - by user604275
    This topic gives a brief description of how we can reconcile a child form attribute that is also multi-valued from a flat file. The format of the flat file is (an example):

        ManagementDomain1|Entitlement1|DIRECTORY SERVER,EMAIL
        ManagementDomain2|Entitlement2|EMAIL PROVIDER INSTANCE - UMS,EMAIL VERIFICATION

    In OIM there will be a parent form for the fields Management Domain and Entitlement. Reconciliation will assign Servers (which are multi-valued) to the corresponding Management Domain and Entitlement. In the flat file, multi-valued fields are separated by a comma (,). In the Design Console, create a form with 'Server Name' as a field and make it a child form. Open the corresponding Resource Object and add this field for reconciliation. While adding, choose the 'Multivalued' check box (please find the attached screen shot on how to add it, Child Table.docx). Open the process definition and add the child form fields for reconciliation. Then click the 'Create Reconciliation Profile' button on the Resource Object tab. The API methods used for child form reconciliation are:

    1. reconEventKey = reconOpsIntf.createReconciliationEvent(resObjName, reconData, false);
       'false' here tells that we are creating the recon event for a child table.

    2. reconOpsIntf.providingAllMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, true);
       RECON_FIELD_IN_RO is the field that we added in the Resource Object while adding it for reconciliation (please refer to the screen shot).

    3. reconOpsIntf.addDirectBulkMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, bulkChildDataMapList);
       bulkChildDataMapList is coded as below:

           List<Map> bulkChildDataMapList = new ArrayList<Map>();
           for (int i = 0; i < stokens.length; i++) {
               Map<String, String> attributeMap = new HashMap<String, String>();
               String serverName = stokens[i].toUpperCase();
               attributeMap.put("Server Name", serverName);
               bulkChildDataMapList.add(attributeMap);
           }

    4. reconOpsIntf.finishReconciliationEvent(reconEventKey);

    5. reconOpsIntf.processReconciliationEvent(reconEventKey);

    Now, we have to register the plug-in, import metadata into MDS and then create a scheduled job which will run the reconciliation.
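    Putting the steps together, here is a minimal sketch of what the scheduled task's body could look like. It only rearranges the calls shown above; the flatFileLines input, the parent field names "Management Domain" and "Entitlement", and the loop structure are illustrative assumptions, not taken from an actual connector.

        // Sketch only: raises one child-table recon event per flat-file record.
        // Assumes reconOpsIntf, resObjName and RECON_FIELD_IN_RO are set up as above.
        private void reconcileFromFlatFile(List<String> flatFileLines) throws Exception {
            for (String line : flatFileLines) {
                // e.g. "ManagementDomain1|Entitlement1|DIRECTORY SERVER,EMAIL"
                String[] fields = line.split("\\|");

                Map<String, String> reconData = new HashMap<String, String>();
                reconData.put("Management Domain", fields[0]); // assumed recon field names
                reconData.put("Entitlement", fields[1]);

                // 'false': child table data will be provided for this event
                long reconEventKey = reconOpsIntf.createReconciliationEvent(resObjName, reconData, false);
                reconOpsIntf.providingAllMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, true);

                // Multi-valued servers are comma-separated in the third column
                String[] stokens = fields[2].split(",");
                List<Map> bulkChildDataMapList = new ArrayList<Map>();
                for (String token : stokens) {
                    Map<String, String> attributeMap = new HashMap<String, String>();
                    attributeMap.put("Server Name", token.toUpperCase());
                    bulkChildDataMapList.add(attributeMap);
                }
                reconOpsIntf.addDirectBulkMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, bulkChildDataMapList);

                reconOpsIntf.finishReconciliationEvent(reconEventKey);
                reconOpsIntf.processReconciliationEvent(reconEventKey);
            }
        }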


  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibition, round-tables and many other types of content. As the poster on their stand illustrated, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was a more intimate setting than in previous years too, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community.

    The main themes over the days were CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications. Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations:

    User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. And this doesn't stop with the baked-in UX either: the published Design Patterns are proving popular and are currently being extended to cover things like extending with ADF Mobile and customizing the Simplified UI. More on this to come from us soon.

    The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data, and these can be harnessed to help propel your organization forward. Indeed the emphasis is away from the traditional vendor-prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this, several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as it means customers do not need to decipher Oracle's huge product catalog, and it embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available.

    Several sessions focused on explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relatively small percentage of conference attendees also at OpenWorld (from a show of hands), this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download).

    With the release of E-Business Suite 12.2, the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers. This, coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forwards.

    Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium-sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality available from Fusion HCM and CX are popular extensions that do not have to disrupt those core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regular incremental synchronizations, often without a need for real-time constant communication between systems. With much of this pre-built already, the implementation process is also quite rapid.

    With most people dressed in suits, we wanted to get the conversations going without the traditional English reserve, so we decided to make ourselves a bit more obvious, as the photo below shows. This seemed to be quite successful and helped those interested identify and approach us. Keep a look out for similar again. In fact if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again something we'll be sure to participate in.

    I am hoping to attend the next half of the UKOUG annual conference, Tech13, which focuses more on Oracle technology and where there is more likely to be a larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.


  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming).

    Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather of what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1 ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we can not (easily) prove otherwise.

    So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-mostly caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach.

    The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value.

    The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system).

    In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks).

    The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to state requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
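    To make the two read patterns concrete, here is a minimal sketch of a read-modify-write against a Coherence NamedCache, first pessimistic (lock before reading) and then optimistic (read without locking, re-validate the assumption before writing). The cache name, key type and Integer value are hypothetical, and the sketch assumes the entry already exists.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class ReadPatterns {

            // Pessimistic: a coherent read -- no other locking thread can
            // mutate the entry between our get() and put().
            public static void pessimisticIncrement(Object key) {
                NamedCache cache = CacheFactory.getCache("accounts"); // hypothetical cache
                cache.lock(key, -1); // block until the lock is granted
                try {
                    Integer balance = (Integer) cache.get(key);
                    cache.put(key, balance + 100);
                } finally {
                    cache.unlock(key);
                }
            }

            // Optimistic: a fuzzy read -- state the assumption (the value we
            // read) and retry if it turns out to have been a stale read.
            public static void optimisticIncrement(Object key) {
                NamedCache cache = CacheFactory.getCache("accounts");
                while (true) {
                    Integer assumed = (Integer) cache.get(key);
                    Integer updated = assumed + 100;
                    cache.lock(key, -1);
                    try {
                        // Re-validate the assumption under a short-lived lock
                        if (assumed.equals(cache.get(key))) {
                            cache.put(key, updated);
                            return;
                        }
                        // Stale read: another thread got there first, retry
                    } finally {
                        cache.unlock(key);
                    }
                }
            }
        }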


  • Navigation in a #WP7 application with MVVM Light

    - by Laurent Bugnion
    In MVVM applications, it can be a bit of a challenge to send instructions to the view (for example a page) from a viewmodel. Thankfully, we have good tools at our disposal to help with that. In his excellent series “MVVM Light Toolkit soup to nuts”, Jesse Liberty proposes one approach using the MVVM Light messaging infrastructure. While this works fine, I would like to show here another approach using what I call a “view service”, i.e. an abstracted service that is invoked from the viewmodel, and implemented on the view.

    Multiple kinds of view services

    In fact, I use view services quite often, and even started standardizing them for the Windows Phone 7 applications I work on. If there is interest, I will be happy to show other such view services, for example:

    - Animation service, responsible for starting and stopping animations on the view.
    - Dialog service, in charge of displaying messages to the user and gathering feedback.
    - Navigation service, in charge of navigating to a given page directly from the viewmodel.

    In this article, I will concentrate on the navigation service.

    The INavigationService interface

    In most WP7 apps, the navigation service is used in quite a straightforward way. We want to:

    - Navigate to a given URI.
    - Go back.
    - Be notified when a navigation is taking place, and be able to cancel it.

    The INavigationService interface is quite simple indeed:

        public interface INavigationService
        {
            event NavigatingCancelEventHandler Navigating;
            void NavigateTo(Uri pageUri);
            void GoBack();
        }

    Obviously, this interface can be extended if necessary, but in most of the apps I worked on, I found that this covers my needs.

    The NavigationService class

    It is possible to nicely pack the navigation service into its own class. To do this, we need to remember that all the PhoneApplicationPage instances use the same instance of the navigation service, exposed through their NavigationService property. In fact, in a WP7 application, it is the main frame (RootFrame, of type PhoneApplicationFrame) that is responsible for this task. So, our implementation of the NavigationService class can leverage this. First the class will grab the PhoneApplicationFrame and store a reference to it. Also, it registers a handler for the Navigating event, and forwards the event to the listening viewmodels (if any). Then, the NavigateTo and the GoBack methods are implemented. They are quite simple, because they are in fact just a gateway to the PhoneApplicationFrame. The whole class is as follows:

        public class NavigationService : INavigationService
        {
            private PhoneApplicationFrame _mainFrame;

            public event NavigatingCancelEventHandler Navigating;

            public void NavigateTo(Uri pageUri)
            {
                if (EnsureMainFrame())
                {
                    _mainFrame.Navigate(pageUri);
                }
            }

            public void GoBack()
            {
                if (EnsureMainFrame() && _mainFrame.CanGoBack)
                {
                    _mainFrame.GoBack();
                }
            }

            private bool EnsureMainFrame()
            {
                if (_mainFrame != null)
                {
                    return true;
                }

                _mainFrame = Application.Current.RootVisual as PhoneApplicationFrame;

                if (_mainFrame != null)
                {
                    // Could be null if the app runs inside a design tool
                    _mainFrame.Navigating += (s, e) =>
                    {
                        if (Navigating != null)
                        {
                            Navigating(s, e);
                        }
                    };

                    return true;
                }

                return false;
            }
        }

    Exposing URIs

    I find that it is a good practice to expose each page’s URI as a constant. In MVVM Light applications, a good place to do that is the ViewModelLocator, which already acts like a central point of setup for the views and their viewmodels. Note that in some cases, it is necessary to expose the URL as a string, for instance when a query string needs to be passed to the view. So for example we could have:

        public static readonly Uri MainPageUri = new Uri("/MainPage.xaml", UriKind.Relative);
        public const string AnotherPageUrl = "/AnotherPage.xaml?param1={0}&param2={1}";

    Creating and using the NavigationService

    Normally, we only need one instance of the NavigationService class. In cases where you use an IOC container, it is easy to simply register a singleton instance. For example, I am using a modified version of a super simple IOC container, and so I can register the navigation service as follows:

        SimpleIoc.Register<INavigationService, NavigationService>();

    Then, it can be resolved where needed with:

        SimpleIoc.Resolve<INavigationService>();

    Or (more frequently), I simply declare a parameter on the viewmodel constructor of type INavigationService and let the IOC container do its magic and inject the instance of the NavigationService when the viewmodel is created. On supported platforms (for example Silverlight 4), it is also possible to use MEF. Or, of course, we can simply instantiate the NavigationService in the ViewModelLocator, and pass this instance as a parameter of the viewmodels’ constructors, inject it as a property, etc… Once the instance has been passed to the viewmodel, it can be used, for example with:

        NavigationService.NavigateTo(ViewModelLocator.ComparisonPageUri);

    Testing

    Thanks to the INavigationService interface, navigation can be mocked and tested when the viewmodel is put under unit test. Simply implement and inject a mock class, and assert that the methods are called as they should be by the viewmodel.

    Conclusion

    As usual, there are multiple ways to code a solution answering your needs. I find that view services are a really neat way to delegate view-specific responsibilities such as animation, dialogs and of course navigation to other classes through an abstracted interface. In some cases, such as the NavigationService class exposed here, it is even possible to standardize the implementation and pack it in a class library for reuse. I hope that this sample is useful! Happy coding. Laurent


  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that these queries perform poorly, and only then do developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times: they went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG – Endeca

    In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimise this query are described below.

    Starting point

    The initial query was:

        String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "
            + "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "
            + "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";

        Map<String, Object> params = new HashMap<String, Object>(10);
        // Fill all parameters.
        params.put("iLevelId", xxxx);
        ...

        // Executing filter.
        Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);

    With the small dataset used for development, the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.

    First round of optimizations

    The first optimisation step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:

        globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);

    After adding these indexes the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations

    In this optimisation phase a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal".
    Parameters associated with object attributes of high cardinality should appear at the beginning of the filter; or, more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining the corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values it was more optimal to have a different query, changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant cases:

        String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "
            + "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "
            + "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "
            + "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "
            + "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "
            + "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "
            + "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the value of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations

    The third and final optimization was to introduce a composite index. However, this did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multiple extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used by the LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.
    The composite index was created as follows:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"),
            new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"),
            new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"),
            new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"),
            new ReflectionExtractor("getSalesCompanyId")
        };
        MultiExtractor me = new MultiExtractor(ve);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"),
            new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"),
            new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"),
            new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"),
            new ReflectionExtractor("getSalesCompanyId")
        };
        MultiExtractor me = new MultiExtractor(ve);

        // Fill composite parameters.
        String SalesCompanyId = xxxx;
        ...

        AndFilter composite = new AndFilter(
            new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId,
                                               iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
            new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
        AndFilter finalFilter = new AndFilter(composite,
            new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));

        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index the query improved dramatically, and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index the order of the attributes inside the ValueExtractor was not relevant.
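    As a side note, the explain plan used in the second round of optimizations can be produced with the QueryRecorder aggregator (available in Coherence 3.7.1 and later). A minimal sketch, assuming the same cache and filter variables as above:

        import com.tangosol.util.QueryRecord;
        import com.tangosol.util.aggregator.QueryRecorder;

        // EXPLAIN estimates index effectiveness without executing the query;
        // QueryRecorder.RecordType.TRACE would run it and record actual costs.
        QueryRecord record = (QueryRecord) globalDiscountsCache.aggregate(
            globalDiscountsFilter,
            new QueryRecorder(QueryRecorder.RecordType.EXPLAIN));
        System.out.println(record);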


  • Collaborative Whiteboard using WebSocket in GlassFish 4 - Text/JSON and Binary/ArrayBuffer Data Transfer (TOTD #189)

    - by arungupta
    This blog has published a few posts on using the JSR 356 Reference Implementation (Tyrus), as it is integrated in GlassFish 4 promoted builds:

    - TOTD #183: Getting Started with WebSocket in GlassFish
    - TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    - TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    - TOTD #186: Custom Text and Binary Payloads using WebSocket

    One of the typical use cases for WebSocket is online collaborative games. This Tip Of The Day (TOTD) explains a sample that can be used to build such games easily. The application is a collaborative whiteboard where different shapes can be drawn in multiple colors. The shapes drawn on one browser are automatically drawn on all other peer browsers that are connected to the same endpoint. The shape, color, and coordinates of the image are transferred using a JSON structure. A browser may opt out of sharing the figures. Alternatively any browser can send a snapshot of their existing whiteboard to all other browsers. Take a look at this video to understand how the application works and the underlying code. The complete sample code can be downloaded here. The code behind the application is also explained below.

    The web page (index.jsp) has an HTML5 Canvas as shown:

        <canvas id="myCanvas" width="150" height="150" style="border:1px solid #000000;"></canvas>

    And some radio buttons to choose the color and shape. By default, the shape, color, and coordinates of any figure drawn on the canvas are put in a JSON structure and sent as a message to the WebSocket endpoint. The JSON structure looks like:

        {
            "shape": "square",
            "color": "#FF0000",
            "coords": {
                "x": 31.59999942779541,
                "y": 49.91999053955078
            }
        }

    The endpoint definition looks like:

        @WebSocketEndpoint(value = "websocket",
            encoders = {FigureDecoderEncoder.class},
            decoders = {FigureDecoderEncoder.class})
        public class Whiteboard {

    As you can see, the endpoint has a decoder and encoder registered that decode JSON to a Figure (a POJO class) and vice versa, respectively. The decode method looks like:

        public Figure decode(String string) throws DecodeException {
            try {
                JSONObject jsonObject = new JSONObject(string);
                return new Figure(jsonObject);
            } catch (JSONException ex) {
                throw new DecodeException("Error parsing JSON", ex.getMessage(), ex.fillInStackTrace());
            }
        }

    And the encode method looks like:

        public String encode(Figure figure) throws EncodeException {
            return figure.getJson().toString();
        }

    FigureDecoderEncoder implements both decoder and encoder functionality, but that's purely for convenience. The recommended design pattern is to keep them in separate classes. In certain cases, you may even need only one of them.

    On the client side, the Canvas is initialized as:

        var canvas = document.getElementById("myCanvas");
        var context = canvas.getContext("2d");
        canvas.addEventListener("click", defineImage, false);

    The defineImage method constructs the JSON structure shown above and sends it to the endpoint using websocket.send(). An instant snapshot of the canvas is sent using binary transfer with WebSocket. The WebSocket is initialized as:

        var wsUri = "ws://localhost:8080/whiteboard/websocket";
        var websocket = new WebSocket(wsUri);
        websocket.binaryType = "arraybuffer";

    The important part is to set the binaryType property of WebSocket to arraybuffer. This ensures that any binary transfers using WebSocket are done using ArrayBuffer, as the default type seems to be Blob. The actual binary data transfer is done using the following:

        var image = context.getImageData(0, 0, canvas.width, canvas.height);
        var buffer = new ArrayBuffer(image.data.length);
        var bytes = new Uint8Array(buffer);
        for (var i = 0; i < bytes.length; i++) {
            bytes[i] = image.data[i];
        }
        websocket.send(bytes);

    This comprehensive sample shows the following features of the JSR 356 API:

    - Annotation-driven endpoints
    - Send/receive text and binary payloads in WebSocket
    - Encoders/decoders for custom text payloads

    In addition, it also shows how images can be captured and drawn using HTML5 Canvas in a JSP.

    How could this be turned into an online game? Imagine drawing a Tic-tac-toe board on the canvas with two players playing and others watching. Then you can build access rights and controls within the application itself. Instead of sending a snapshot of the canvas on demand, a new peer joining the game could automatically be transferred the current state as well. Do you want to build this game? I built a similar game a few years ago. Does somebody want to rewrite the game using the WebSocket API? :-)

    Many thanks to Jitu and Akshay for helping through the WebSocket internals! Here are some references for you:

    - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)

    Subsequent blogs will discuss the following topics (not necessarily in that order):

    - Error handling
    - Interface-driven WebSocket endpoint
    - Java client API
    - Client and server configuration
    - Security
    - Subprotocols
    - Extensions
    - Other topics from the API
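    One piece the excerpts above leave out is how the server relays a received figure to the other connected peers. A minimal sketch of that broadcast step, written against the final javax.websocket API (which superseded the early-draft @WebSocketEndpoint annotations used in this sample), could look like the following; the endpoint path is an assumption matching the wsUri above:

        import java.io.IOException;
        import javax.websocket.OnMessage;
        import javax.websocket.Session;
        import javax.websocket.server.ServerEndpoint;

        @ServerEndpoint("/websocket") // assumed path, matching wsUri above
        public class WhiteboardBroadcast {

            @OnMessage
            public void figureReceived(String figureJson, Session session) throws IOException {
                // Relay the JSON figure to every other connected peer so the
                // shape shows up on all whiteboards
                for (Session peer : session.getOpenSessions()) {
                    if (peer.isOpen() && !peer.getId().equals(session.getId())) {
                        peer.getBasicRemote().sendText(figureJson);
                    }
                }
            }
        }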


  • Windows 8/Surface Launch Event Summary

    - by Tim Murphy
    Today was a big day for Microsoft with two separate launch events. The first was for Windows 8 and all of its hardware partners. The second was specifically to introduce the Microsoft Windows 8 Surface tablet. Below are some of the take-aways I got from the webcasts.

    Windows 8 Launch

    The three general areas that Microsoft focused on were the release of the OS itself, the public unveiling of the Windows Store and the new devices available from its hardware partners. The release of the OS focused on the fact that it will be available at midnight tonight for both new PCs and for upgrades. I can’t say that this interested me that much since it was already known to most people. I think what they did show well was how easy the OS really is to use.

    The Windows Store is also not a new feature to those of us who have been running the pre-release versions of Windows 8 or have owned Windows Phone 7 for the past 2 years. What was interesting is that the Windows Store launches with more apps available than any other platform’s store at their respective launch. I think this says a lot about how Microsoft focuses on the ability of developers to create software and make it available. They, of course, were sure to emphasize that the Windows Store has better monetary terms for developers than its competitors.

    They also showed off the fact that Xbox Music streaming is available to all Windows 8 users for free. Couple this with the Bing suite of apps that give you news, weather, sports and finance right out of the box and I think most people will find the environment a joy to use.

    I think the hardware demo, while quick and furious, really showed where Windows shines: CHOICE! They made a statement that over 1000 devices have been certified for Windows 8. They showed tablets, laptops, desktops, all-in-ones and convertibles. Since these devices have industry standard connectors they give a much wider variety of accessories and devices that you can use with them.

    Steve Ballmer then came on stage and tried to see how many times he could use the word “magical”. He focused on how the Windows 8 OS is designed to integrate with SkyDrive, Skype and Outlook.com. He also reinforced that they think Windows 8 is the best choice for the enterprise when it comes to protecting data and integrating across devices including Windows Phone 8. With that we were left to wait for the second event of the day.

    Surface Launch

    The second event of the day started with kids with magnets. Ok, they were adults, but who doesn’t like playing with magnets? Steven Sinofsky detached and reattached the Surface keyboard repeatedly, clearly enjoying himself. It turns out that there are 4 magnets in the cover, 2 for alignment and 2 as connectors.

    They then went on to give us the details on the display. The 10.6” display is optically bonded to the case and is optimized to reduce glare. I think this came through very well in the demonstrations.

    The properties of the case were also a great selling point. The VaporMg casing allowed them to drop the device on stage, on purpose, and continue working. Of course they had to bring out the skateboards made from Surface devices.

    “It just has to feel right” was the reason they gave for many of their design decisions, from the weight and size of the device to the way the kickstand and camera work together. While this gave you the feeling that the whole process was trial and error, you could tell that a lot of science went into the specs. This included making sure that the magnets were strong enough to hold the cover on, yet a 3-year-old could still remove the cover without effort.

    I am glad that they also decided that a USB port would be part of the spec, since it gives so many options. They made the point that this allows Surface to leverage over 420 million existing devices. That works for me.

    The last feature that I really thought was important was the microSD port. Being stuck with the onboard memory has been an aggravation of mine with many of the devices in the market today.

    I think they did a good job of really getting the audience to understand why you want this platform and this particular device. They used personal examples like creating a video of a birthday party and being in it, or the fact that the device was being used to live blog the event and control the lights and presentation. They showed very well that it was not only fun but very capable of getting real work done. Handing out tablets to the crowd didn’t hurt either. In the end I really wanted a Surface even though I really have no need for one on a daily basis. Great job, Microsoft!

    del.icio.us Tags: Windows 8,Win8,Windows 8 Launch


  • Deduping your redundancies

    - by Joerg Moellenkamp
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: A considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, is pointing to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum.

    A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on them and the device does block-level dedup internally, the device may just deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you essentially have three corrupted copies. Three hits with one bullet. This is indeed an interesting problem: a device doing deduplication doesn’t know if a block is important metadata or just a data block. This is the reason why I like deduplication the way it’s done in ZFS. It’s an integrated part, and so important parts don’t get deduplicated away. A disk accessed by a block-level interface doesn’t know anything about the importance of a block. A metadata block is nothing different to its inner mechanism than a normal data block, because there is no way to tell that this one is important and that those redundancies aren’t allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. The point is just that for most implementations you have to activate it by command, whereas certain devices do this by default or by design and you don’t know about it. However, I’m not perfectly sure about that: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys if they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of the data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device is doing dedup internally it may remove your redundancy before it hits the non-volatile storage. You’ve won nothing. You just spend your disk quota on the LUNs in the SAN and make your disk admin happy because of the good dedup ratio. However, you can only fall into this specific “deduped ditto block” trap when your pool consists of a single device, because ZFS writes ditto blocks on different disks when there is more than just one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs this isn’t the use case. Most implementations of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used as the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn’t matter: when you really have to resort to the sZIL (your system went down), it doesn’t matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in L2ARC is corrupt, you simply read it from the pool, and in HSP implementations this is the already mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of doing dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it’s a user-made decision to do so, and so it’s less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more “haunted” by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    However, at the end Robin is correct: it’s yet another point why protecting your data by creating redundancies, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.


  • If I were in a Silverlight focus group, here are ten things I would say.

    - by mbcrump
    Silverlight is a great product right off the shelf. I use it, love it and spend a lot of time helping the community understand it. That, however, doesn’t mean I think it can’t get better. If I were invited to a Microsoft focus group about Silverlight, here are 10 things I would say:

    1. We need more navigation templates. I’ve found four templates that Microsoft has released (Cosmo, Windows 7, Accent and JetPack). This number needs to be around 16. In order to get more people developing for Silverlight, we need to give them a variety of templates to get them off the ground quickly.

    2. Silverlight needs to ship with the next version of Windows. At least version 4 needs to be pre-installed on Windows going forward. It’s small, it runs in its own sandbox and I cannot find a reason for it not to be included.

    3. Silverlight needs to run on more platforms. iOS and Android are the key here. I think Microsoft should shoot for Android first since I believe Android will take the lead in the mobile market (at least for the short term). It would also be great to see Microsoft use Silverlight as the focus on their new tablets / “AppleTV”. I would even invest in getting it working with Kinect.

    4. When creating a new project in Silverlight, we should have the option to create a unit test project. Most Silverlight developers are not unit testing. If this is surprising to you then you need to get out and talk to more developers. I partially blame this on Microsoft. When you create a new ASP.NET MVC application, you simply check a box to create a unit test project. We need the same thing for Silverlight. We should steer the developer in the right direction.

    5. Design patterns such as MVVM need to be easier to implement in Silverlight solutions. I’d go so far as to say that MVVM Light should ship with Visual Studio. With the project/item templates and code snippets, Laurent puts you in the right direction. This is the way it should have been: easy for the 9-to-5 developer to grasp. I believe the majority of developers use code-behind because that’s what is in all the demos provided by Microsoft. They are not trying to write sucky code; it is that they simply don’t know a better way.

    6. XAP files should be obfuscated and unused references deleted by default when in “Release” mode. A better Silverlight experience starts with a smaller XAP file. The less a user has to download, the better, even with the majority of people on broadband. I would also recommend built-in obfuscation by Microsoft. People are paranoid that someone can rename the .xap to .zip and run it through Reflector.

    7. Get rid of the boring install experiences. Here is a great write-up on what I’m talking about. The default “Install Silverlight” and “Loading” screens suck. They suck bad. We need a choice of templates that a professional designer has created.

    8. Silverlight needs to support more image formats. For example, it would be great to use .gifs without converting them to .png.

    9. Switching between Blend 4 and VS2010 to develop a Silverlight application is a pain. Probably one of the biggest issues, and one that I can’t think of a good solution for. It would be nice if VS2012 had the best of both worlds and you never had to leave VS.

    10. We need reporting controls with SSRS included with the Silverlight Toolkit. I can’t think of another control that we need built into the toolkit. It would also be helpful to have export to .xls, .pdf and .doc included with the control.

    I hope that this post will at least get a few people talking. Who knows, Microsoft could be working on these things right now. Thanks for reading!


  • Part 4 of 4: Tips/Tricks for Silverlight Developers.

    - by mbcrump
    Part 1 | Part 2 | Part 3 | Part 4

    I wanted to create a series of blog posts that gets right to the point and is aimed specifically at Silverlight developers. The most important things I want this series to answer are: What is it? Why do I care? How do I do it? I hope that you enjoy this series. Let’s get started:

    Tip/Trick #16)
    What is it? Find out version information about Silverlight and which WebKit it is using by going to http://issilverlightinstalled.com/scriptverify/.
    Why do I care? I’ve had those users where it’s just easier to give them a site and say “copy/paste the line that says User Agent” in order to troubleshoot a Silverlight problem. I’ve also been debugging my own Silverlight applications and needed an easy way to determine if the plugin is disabled or not.
    How do I do it: Simply navigate to http://issilverlightinstalled.com/scriptverify/ and hit the Verify button. Example screenshots are located below. (Screenshots: results from Chrome 7; results from Internet Explorer 8 with Silverlight disabled.)

    Tip/Trick #17)
    What is it? Use lambdas whenever you can.
    Why do I care? It is my personal opinion that code is easier to read using lambdas after you get past the syntax.
    How do I do it: For example, you may write code like the following:

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            // Check and see if we have a newer .XAP file on the server
            Application.Current.CheckAndDownloadUpdateAsync();
            Application.Current.CheckAndDownloadUpdateCompleted +=
                new CheckAndDownloadUpdateCompletedEventHandler(Current_CheckAndDownloadUpdateCompleted);
        }

        void Current_CheckAndDownloadUpdateCompleted(object sender, CheckAndDownloadUpdateCompletedEventArgs e)
        {
            if (e.UpdateAvailable)
            {
                MessageBox.Show(
                    "An update has been installed. To see the updates please exit and restart the application");
            }
        }

    To me this style forces me to look for the other method to see what the code is actually doing. The style located below is much easier to read in my opinion and does the exact same thing.

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            // Check and see if we have a newer .XAP file on the server
            Application.Current.CheckAndDownloadUpdateAsync();
            Application.Current.CheckAndDownloadUpdateCompleted += (s, args) =>
            {
                if (args.UpdateAvailable)
                {
                    MessageBox.Show(
                        "An update has been installed. To see the updates please exit and restart the application");
                }
            };
        }

    Tip/Trick #18)
    What is it? Prevent development web service references from breaking when Visual Studio auto-generates a new port number.
    Why do I care? We have all been there: we are developing a Silverlight application and all of a sudden our development web services break. We check and find out that the local port number that Visual Studio assigned has changed and now we need to update all of our service references. We need a way to stop this.
    How do I do it: This can actually be prevented with just a few mouse clicks. Right-click on your web solution and go to Properties. Click the tab that says Web. You just need to click the radio button and specify a port number. Now you won’t be bothered with that anymore.

    Tip/Trick #19)
    What is it? You can disable the Close button on a ChildWindow.
    Why do I care? I wouldn’t blog about it if I hadn’t seen it: devs trying to override keystrokes to prevent users from closing a ChildWindow.
    How do I do it: A property exists on the ChildWindow called “HasCloseButton”; you simply change that to false and your close button is gone. You can delete the “Cancel” button and add some logic to the OK button if you want the user to respond before proceeding.

    Tip/Trick #20)
    What is it? Clean up your XAML.
    Why do I care? By removing unneeded namespaces, not naming all of your controls and getting rid of designer markup you can improve code quality and readability.
    How do I do it: (This is a 3-in-one tip)

    1) Remove unused designer markup. Have you ever wondered what the following code snippet does?

        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480"

    This code is telling the designer to do something special with this page in “design mode”, specifically the width and the height of the page. When it’s running in the browser it will not use this information; it is actually ignored by the XAML parser. In other words, if you don’t need it then delete it.

    2) If you are not using a namespace then remove it. In the code sample below, I am using ReSharper, which will tell me the ones that I’m not using by graying them out. If you don’t have ReSharper you can look in your XAML and manually remove the unneeded namespaces.

    3) Don’t name a control unless you actually need to refer to it in procedural code. If you name a control you will take a slight performance hit that is totally unnecessary if it’s not being referenced.

        <TextBlock Height="23" Text="TextBlock" />

    That is the end of the series. I hope that you enjoyed it and please check out Part 1 | Part 2 | Part 3 if you’re hungry for more.


  • Viewing the NetBeans Central Registry (Part 2)

    - by Geertjan
    Jens Hofschröer, who has one of the very best NetBeans Platform blogs (if you more or less understand German), who wrote, some time ago, the initial version of the Import Statement Organizer, and who is the main developer of a great gear design & manufacturing tool on the NetBeans Platform in Aachen, commented on my recent blog entry "Viewing the NetBeans Central Registry", where the root Node of the Central Registry is shown in a BeanTreeView, with the words: "I wrapped that Node in a FilterNode to provide the 'position' attribute and the 'file extension'. All Children are wrapped too. Then I used an OutlineView to show these two properties. Great tool to find wrong layer entries." I asked him for the code he describes above and he sent it to me. He discussed it here in his blog, while all the code involved can be read below. The result is as follows, where you can see that the OutlineView shows information that my simple implementation (via a BeanTreeView) kept hidden.

And so here is the definition of the Node:

class LayerPropertiesNode extends FilterNode {

    public LayerPropertiesNode(Node node) {
        super(node, isFolder(node)
                ? Children.create(new LayerPropertiesFactory(node), true)
                : Children.LEAF);
    }

    private static boolean isFolder(Node node) {
        return null != node.getLookup().lookup(DataFolder.class);
    }

    @Override
    public String getDisplayName() {
        return getLookup().lookup(FileObject.class).getName();
    }

    @Override
    public Image getIcon(int type) {
        FileObject fo = getLookup().lookup(FileObject.class);
        try {
            DataObject data = DataObject.find(fo);
            return data.getNodeDelegate().getIcon(type);
        } catch (DataObjectNotFoundException ex) {
            Exceptions.printStackTrace(ex);
        }
        return super.getIcon(type);
    }

    @Override
    public Image getOpenedIcon(int type) {
        return getIcon(type);
    }

    @Override
    public PropertySet[] getPropertySets() {
        Sheet.Set set = Sheet.createPropertiesSet();
        set.put(new PropertySupport.ReadOnly<Integer>(
                "position", Integer.class, "Position", null) {
            @Override
            public Integer getValue() throws IllegalAccessException, InvocationTargetException {
                FileObject fileEntry = getLookup().lookup(FileObject.class);
                Integer posValue = (Integer) fileEntry.getAttribute("position");
                return posValue != null ? posValue : Integer.valueOf(0);
            }
        });
        set.put(new PropertySupport.ReadOnly<String>(
                "ext", String.class, "Extension", null) {
            @Override
            public String getValue() throws IllegalAccessException, InvocationTargetException {
                FileObject fileEntry = getLookup().lookup(FileObject.class);
                return fileEntry.getExt();
            }
        });
        PropertySet[] original = super.getPropertySets();
        PropertySet[] withLayer = new PropertySet[original.length + 1];
        System.arraycopy(original, 0, withLayer, 0, original.length);
        withLayer[withLayer.length - 1] = set;
        return withLayer;
    }

    private static class LayerPropertiesFactory extends ChildFactory<FileObject> {

        private final Node context;

        public LayerPropertiesFactory(Node context) {
            this.context = context;
        }

        @Override
        protected boolean createKeys(List<FileObject> list) {
            FileObject folder = context.getLookup().lookup(FileObject.class);
            FileObject[] children = folder.getChildren();
            List<FileObject> ordered = FileUtil.getOrder(Arrays.asList(children), false);
            list.addAll(ordered);
            return true;
        }

        @Override
        protected Node createNodeForKey(FileObject key) {
            AbstractNode node = new AbstractNode(org.openide.nodes.Children.LEAF,
                    key.isFolder()
                    ? Lookups.fixed(key, DataFolder.findFolder(key))
                    : Lookups.singleton(key));
            return new LayerPropertiesNode(node);
        }
    }
}

Then here is the definition of the Action, which pops up a JPanel displaying an OutlineView:

@ActionID(category = "Tools", id = "de.nigjo.nb.layerview.LayerViewAction")
@ActionRegistration(displayName = "#CTL_LayerViewAction")
@ActionReferences({
    @ActionReference(path = "Menu/Tools", position = 1450, separatorBefore = 1425)
})
@Messages("CTL_LayerViewAction=Display XML Layer")
public final class LayerViewAction implements ActionListener {

    @Override
    public void actionPerformed(ActionEvent e) {
        try {
            Node node = DataObject.find(FileUtil.getConfigRoot()).getNodeDelegate();
            node = new LayerPropertiesNode(node);
            node = new FilterNode(node) {
                @Override
                public Component getCustomizer() {
                    LayerView view = new LayerView();
                    view.getExplorerManager().setRootContext(this);
                    return view;
                }

                @Override
                public boolean hasCustomizer() {
                    return true;
                }
            };
            NodeOperation.getDefault().customize(node);
        } catch (DataObjectNotFoundException ex) {
            Exceptions.printStackTrace(ex);
        }
    }

    private static class LayerView extends JPanel implements ExplorerManager.Provider {

        private final ExplorerManager em;

        public LayerView() {
            super(new BorderLayout());
            em = new ExplorerManager();
            OutlineView view = new OutlineView("entry");
            view.addPropertyColumn("position", "Position");
            view.addPropertyColumn("ext", "Extension");
            add(view);
        }

        @Override
        public ExplorerManager getExplorerManager() {
            return em;
        }
    }
}


  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
    I don't have much formal training in programming and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up being both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them.

Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources).

I have found this tutorial, which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as:

- What parts of my application and what sorts of things aren't necessarily worth testing?
- How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope?
- What is a good naming convention for test methods? (Since apparently the name of the method is the only way I will be able to tell at a glance at the test results table what works in my program and what doesn't.)
- Is it bad to have many asserts in one test method? Apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing.
- If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now?
- If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset, or how to simulate?
- Closely related to the previous point: if a class's main operation happens in a state that the program arrives at only after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing all the other code that brings it to that state along with it?
- In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data.)
- How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas?
- Is there anything like design patterns concerning testing specifically?

(A small illustrative sketch of the naming and granularity questions follows.)
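For illustration only, here is a minimal MSTest sketch of the "one focused assert per test, descriptive name" style the questions above circle around. The class under test and every name in it are invented for the example:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// A trivial class under test, included only so the sketch compiles.
public class PriceCalculator
{
    private readonly System.Collections.Generic.List<decimal> prices =
        new System.Collections.Generic.List<decimal>();

    public void Add(string name, decimal price) { prices.Add(price); }

    public decimal Total()
    {
        decimal total = 0m;
        foreach (var p in prices) total += p;
        return total;
    }
}

[TestClass]
public class PriceCalculatorTests
{
    // Naming pattern: Method_Scenario_ExpectedResult, so a failing
    // test name alone says what broke, without opening the test body.
    [TestMethod]
    public void Total_EmptyBasket_ReturnsZero()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(0m, calculator.Total());
    }

    [TestMethod]
    public void Total_SingleItem_ReturnsItemPrice()
    {
        var calculator = new PriceCalculator();
        calculator.Add("book", 9.99m);
        Assert.AreEqual(9.99m, calculator.Total());
    }
}

With one assert per test, the results table pinpoints the failing behavior directly, which is one common answer to the "many asserts in one method" question.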
    I guess the main themes in what I'd like to learn more about are: (1) what are the overarching principles that should be followed (or at least considered) in every testing effort, and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things quite helpful in keeping my codebases sane. I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, tens of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors.

Summary: So my question is: what are some resources to learn about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and amending discrepancies if it doesn't.


  • Spring to Java EE 6! Migration best practices: a session report from Java Developers Workshop 2012 Summer

    - by ???02
    [The original article is in Japanese; most of its text was lost to character corruption. The recoverable content is summarized in English below.]

The article is a report on the session "Best Practices for Migrating Spring to Java EE 6", held at Java Developers Workshop 2012 Summer in August, covering why and how applications built on the Spring Framework can be moved to Java EE 6. The recoverable points:

- Background: many Spring applications were written years ago against old Spring versions, and keeping them current carries an ongoing upgrade cost; Java EE 6 now covers much of what once made Spring attractive, which is the motivation behind the "Spring to Java EE 6" movement.
- Packaging: a Java EE 6 application deployed to GlassFish can ship as a very small WAR (on the order of 100KB), since the container supplies the infrastructure.
- Dependency injection: the DI style that Spring popularized is available in Java EE 6 as CDI (Contexts and Dependency Injection), with injection points declared via @Inject.
- AOP: the common uses of Spring AOP map to Java EE 6 Interceptors.
- Tooling: Java EE 6 development is supported out of the box by IDEs such as NetBeans and Eclipse.
- Integration testing: Arquillian runs JUnit-based integration tests against containers such as GlassFish and JBoss.

Five migration scenarios were presented:
1. Older Spring applications built on XML configuration, O/R mapping frameworks, and early web frameworks: migrate to the Java EE 6 counterparts such as JSF and JPA.
2. Spring applications combined with proprietary or now-abandoned frameworks (Web MVC, Kodo, and the Play framework are mentioned): replace those pieces with JSF/JPA equivalents rather than continuing to track old APIs.
3. Running Spring and Java EE 6 side by side in the same WAR: Spring Beans can coexist with CDI beans and session beans (CDI/EJB) during a gradual migration.
4. Spring data access layers: Spring JDBC/DAO code can be replaced by EJB and JPA, or JDBC Templates can be kept by exposing them through a CDI producer (a "Template Producer").
5. Spring dependencies declared in pom.xml: trim them from the build so the final WAR no longer carries the Spring stack.

The closing section notes that the session material is available through the Oracle Technology Network content site, which requires a (free) oracle.com account.


  • Xml failing to deserialise

    - by Carnotaurus
    I call a method to get my pages (see GetPages(String xmlFullFilePath)). The FromXElement method is supposed to deserialise the LitePropertyData elements to strongly typed LitePropertyData objects. Instead it fails on the following line:

return (T)xmlSerializer.Deserialize(memoryStream);

and gives the following error:

<LitePropertyData xmlns=''> was not expected.

What am I doing wrong? I have included the methods that I call and the XML data:

public static T FromXElement<T>(this XElement xElement)
{
    using (var memoryStream = new MemoryStream(Encoding.ASCII.GetBytes(xElement.ToString())))
    {
        var xmlSerializer = new XmlSerializer(typeof(T));
        return (T)xmlSerializer.Deserialize(memoryStream);
    }
}

public static List<LitePageData> GetPages(String xmlFullFilePath)
{
    XDocument document = XDocument.Load(xmlFullFilePath);
    List<LitePageData> results = (from record in document.Descendants("row")
                                  select new LitePageData
                                  {
                                      Guid = IsValid(record, "Guid") ? record.Element("Guid").Value : null,
                                      ParentID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null,
                                      Created = Convert.ToDateTime(record.Element("Created").Value),
                                      Changed = Convert.ToDateTime(record.Element("Changed").Value),
                                      Name = record.Element("Name").Value,
                                      ID = Convert.ToInt32(record.Element("ID").Value),
                                      LitePageTypeID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null,
                                      Html = record.Element("Html").Value,
                                      FriendlyName = record.Element("FriendlyName").Value,
                                      Properties = record.Element("Properties") != null
                                          ? record.Element("Properties").Element("LitePropertyData").FromXElement<List<LitePropertyData>>()
                                          : new List<LitePropertyData>()
                                  }).ToList();
    return results;
}

Here is the XML:

<?xml version="1.0" encoding="utf-8"?>
<root>
  <rows>
    <row>
      <ID>1</ID>
      <ImageUrl></ImageUrl>
      <Html>Home page</Html>
      <Created>01-01-2012</Created>
      <Changed>01-01-2012</Changed>
      <Name>Home page</Name>
      <FriendlyName>home-page</FriendlyName>
    </row>
    <row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Guid>edeaf468-f490-4271-bf4d-be145bc6a1fd</Guid>
      <ID>8</ID>
      <Name>Unused</Name>
      <ParentID>1</ParentID>
      <Created>2006-03-25T10:57:17</Created>
      <Changed>2012-07-17T12:24:30.0984747+01:00</Changed>
      <ChangedBy />
      <LitePageTypeID xsi:nil="true" />
      <Html>
        What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility.
        Issues with Bobby and Cynthia: Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short, and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is a useful and commendable tool, the project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia:
        First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions of the W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be "well-formed". The standard that a designer chooses is determined usually by the requirements specification document. This specifies which browsers, and which versions of those browsers, the web page is expected to display correctly in. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality.
        Second, the case of HTML documents is not an isolated one. Standards for XML, XSL, JavaScript, and VBScript are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document.
        Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited number of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because they do not "understand" what the JavaScript does.
      </Html>
      <FriendlyName>unused</FriendlyName>
      <IsDeleted>false</IsDeleted>
      <Properties>
        <LitePropertyData>
          <Description>Image for the page</Description>
          <DisplayEditUI>true</DisplayEditUI>
          <OwnerTab>1</OwnerTab>
          <DisplayName>Image Url</DisplayName>
          <FieldOrder>1</FieldOrder>
          <IsRequired>false</IsRequired>
          <Name>ImageUrl</Name>
          <IsModified>false</IsModified>
          <ParentPageID>3</ParentPageID>
          <Type>String</Type>
          <Value xsi:type="xsd:string">smarter.jpg</Value>
        </LitePropertyData>
        <LitePropertyData>
          <Description>WebItemApplicationEnum</Description>
          <DisplayEditUI>true</DisplayEditUI>
          <OwnerTab>1</OwnerTab>
          <DisplayName>WebItemApplicationEnum</DisplayName>
          <FieldOrder>1</FieldOrder>
          <IsRequired>false</IsRequired>
          <Name>WebItemApplicationEnum</Name>
          <IsModified>false</IsModified>
          <ParentPageID>3</ParentPageID>
          <Type>Number</Type>
          <Value xsi:type="xsd:string">1</Value>
        </LitePropertyData>
      </Properties>
      <Seo>
        <Author>Phil Carney</Author>
        <Classification />
        <Copyright>Carnotaurus</Copyright>
        <Description>(the same accessibility text as in the Html element above)</Description>
        <Keywords>unused</Keywords>
        <Title>unused</Title>
      </Seo>
    </row>
  </rows>
</root>

EDIT: Here are my entities:

public class LitePropertyData
{
    public virtual string Description { get; set; }
    public virtual bool DisplayEditUI { get; set; }
    public int OwnerTab { get; set; }
    public virtual string DisplayName { get; set; }
    public int FieldOrder { get; set; }
    public bool IsRequired { get; set; }
    public string Name { get; set; }
    public virtual bool IsModified { get; set; }
    public virtual int ParentPageID { get; set; }
    public LiteDataType Type { get; set; }
    public object Value { get; set; }
}

[Serializable]
public class LitePageData
{
    public String Guid { get; set; }
    public Int32 ID { get; set; }
    public String Name { get; set; }
    public Int32? ParentID { get; set; }
    public DateTime Created { get; set; }
    public String CreatedBy { get; set; }
    public DateTime Changed { get; set; }
    public String ChangedBy { get; set; }
    public Int32? LitePageTypeID { get; set; }
    public String Html { get; set; }
    public String FriendlyName { get; set; }
    public Boolean IsDeleted { get; set; }
    public List<LitePropertyData> Properties { get; set; }
    public LiteSeoPageData Seo { get; set; }

    /// <summary>
    /// Saves the specified XML full file path.
    /// </summary>
    /// <param name="xmlFullFilePath">The XML full file path.</param>
    public void Save(String xmlFullFilePath)
    {
        XDocument doc = XDocument.Load(xmlFullFilePath);
        XElement demoNode = this.ToXElement<LitePageData>();
        demoNode.Name = "row";
        doc.Descendants("rows").Single().Add(demoNode);
        doc.Save(xmlFullFilePath);
    }
}
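For what it's worth, the error message suggests a root-element mismatch: when asked for a List<LitePropertyData>, XmlSerializer expects a root named ArrayOfLitePropertyData by default, not a bare <LitePropertyData> element. A sketch of one way to test that theory, offered as an assumption rather than a verified fix (XmlRootAttribute and XElement.CreateReader are standard .NET APIs):

// Deserialize the whole <Properties> element as the list root,
// instead of passing its first <LitePropertyData> child.
XElement properties = record.Element("Properties");
var serializer = new XmlSerializer(typeof(List<LitePropertyData>),
                                   new XmlRootAttribute("Properties"));
using (System.Xml.XmlReader reader = properties.CreateReader())
{
    var items = (List<LitePropertyData>)serializer.Deserialize(reader);
}

This keeps the container element as the serializer's root, so each child <LitePropertyData> is read as a list item rather than being treated as an unexpected document root.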


  • Trac vs. Redmine vs. JIRA vs. FogBugz for one-man shop?

    - by kizzx2
    Background: I am a one-man freelancer looking for project management software that can satisfy the following requirements. I have used Trac for about a year now, tried Redmine and FogBugz on Demand for a couple of weeks, and have never tried JIRA. Basically, I'm looking for a piece of software that:

- Facilitates developer-client communication/collaboration
- Does time tracking

Requirements:
- Record time estimates / time tracking
- Clients must be able to create/edit their own tickets/cases
- Clients must not see developer-created tickets/cases (internal)
- Affordable (price) with multiple clients

Nice-to-haves:
- Supports multiple projects in one installation
- Free Eclipse integration (Mylyn)
- Easy time tracking without using the web UI (Trac's post-commit hook or Redmine's commit message scanning)
- Clients can access the wiki
- Export the data to standard formats

My evaluation: Trac can fulfill most of the above requirements, but with so many customizations and plug-ins that it doesn't feel clean. One downside is that the main trunk (0.11) has been around for a year or more, and I still haven't seen much sign of an upgrade coming.

Redmine has the cleanest web UI. Its design philosophy seems the most elegant, with its innovative commit message scanning and so on. However, the current version doesn't seem very mature and stable yet: it doesn't support internal (private) tickets, and the time-tracking commit-message patch doesn't support the trunk version. The good thing about it is that the main trunk still seems to be actively developed.

FogBugz is actually a very well written piece of software. However, the idea of paying $25/month for each client to be able to log in to the system seems a little too far off for an individual developer. The free version lets clients create/view their own cases using email, which is a sub-optimal alternative to a full-fledged list of the user's own cases. That also means clients can't read/write wiki pages. Its time-tracking approach is innovative and good, though, and the built-in report functions are excellent (e.g. evidence-based scheduling). On the other hand, all the Eclipse integrations (Bugclipse, Foglyn) are commercial - yet more investment before I can use my bug tracker! And if I fall back to the web UI, it's not a particularly fast-rendering web service.

JIRA is something I have zero experience with. Can someone with JIRA experience explain why it might be a good fit for this particular situation?

Question: Can we share experience on this? Which specific plugins/customizations would best suit the requirements of this case?


  • Combine config parameters with parameters passed from the command line

    - by Frederik
    I have created an SSIS package that imports a file into a table (simple enough). I have some variables; a few are set in a config file, such as server, database, and import folder. At runtime I want to pass the file name. This is done from a stored procedure using dtexec. When setting the parameters through the config file it works fine, and it also works when setting all parameters in the procedure and passing them with the \Set statement (see below). But when I try to combine the config version with setting parameters on the fly, I get an error referring to the config file path that was set at design time. Has anybody come across this and found a solution for it?

Regards, Frederik

DECLARE @SSISSTR VARCHAR(8000),
        @DataBaseServer VARCHAR(100),
        @DataBaseName VARCHAR(100),
        @PackageFilePath VARCHAR(200),
        @ImportFolder VARCHAR(200),
        @HandledFolder VARCHAR(200),
        @ConfigFilePath VARCHAR(200),
        @SSISreturncode INT;

/* DEBUGGING
DECLARE @FileName VARCHAR(100), @SelectionId INT
SET @FileName = 'Test.csv';
SET @SelectionId = 366;
*/

SET @PackageFilePath = '/FILE "Y:\SSIS\Packages\PostalCodeSelectionImport\ImportPackage.dtsx" ';
SET @DataBaseServer = 'STOSWVUTVDB01\DEV_BSE';
SET @DataBaseName = 'BSE_ODR';
SET @ImportFolder = '\\Stoswvutvbse01\Application\FileLoadArea\ODR\\';
SET @HandledFolder = '\\Stoswvutvbse01\Application\FileLoadArea\ODR\Handled\\';
--SET @ConfigFilePath = '/CONFIGFILE "Y:\SSIS\Packages\PostalCodeSelectionImport\Configuration\DEV_BSE.dtsConfig" ';

-- Now build the "dtexec" command line from the dynamic values
SET @SSISSTR = 'DTEXEC ' + @PackageFilePath; -- + @ConfigFilePath;
SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::SelectionId].Properties[Value];' + CAST(@SelectionId AS VARCHAR(12));
SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::DataBaseServer].Properties[Value];"' + @DataBaseServer + '"';
SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFolder].Properties[Value];"' + @ImportFolder + '" ';
SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::DataBaseName].Properties[Value];"' + @DataBaseName + '" ';
SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFileName].Properties[Value];"' + @FileName + '" ';
SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::HandledFolder].Properties[Value];"' + @HandledFolder + '" ';

-- Now execute the dynamic command via xp_cmdshell.
EXEC @SSISreturncode = xp_cmdshell @SSISSTR;


  • WPF - Overlapping Custom Tabs in a TabControl and ZIndex

    - by Rachel
    Problem: I have a custom tab control using Chrome-shaped tabs that binds to a ViewModel. Because of the shape, the edges overlap a bit. I have a function that sets the TabItem's ZIndex on TabControl_SelectionChanged, which works fine for selecting tabs and for dragging/dropping tabs; however, when I add or close a tab via a relay command, I get unusual results.

Screenshots: default view (http://i193.photobucket.com/albums/z197/Lady53461/tabs_default.jpg), removing tabs (http://i193.photobucket.com/albums/z197/Lady53461/tabs_removing.jpg), adding 2 or more tabs in a row (http://i193.photobucket.com/albums/z197/Lady53461/tabs_adding.jpg).

Code to set ZIndex:

private void PrimaryTabControl_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (e.Source is TabControl)
    {
        TabControl tabControl = sender as TabControl;
        ItemContainerGenerator icg = tabControl.ItemContainerGenerator;
        if (icg.Status == System.Windows.Controls.Primitives.GeneratorStatus.ContainersGenerated)
        {
            foreach (object o in tabControl.Items)
            {
                UIElement tabItem = icg.ContainerFromItem(o) as UIElement;
                Panel.SetZIndex(tabItem, (o == tabControl.SelectedItem ? 100 : 90 - tabControl.Items.IndexOf(o)));
            }
        }
    }
}

By using breakpoints I can see that it is correctly setting the ZIndex to what I want, but the layout is not displaying the changes. I know some of the changes take effect, because if none of them did, the tab edges would be reversed (the right tabs would be drawn on top of the left ones). Clicking a tab correctly sets the ZIndex of all tabs (including the one that should be drawn on top), and dragging/dropping tabs to rearrange them also renders correctly (which removes and reinserts the tab item). The only difference I can think of is that I am using the MVVM design pattern, and the buttons that add/close tabs are relay commands. Does anyone have any idea why this is happening and how I can fix it?

P.S. I did try setting a ZIndex in my ViewModel and binding to it; however, the same thing happens when adding/removing tabs via the relay command.

EDIT: Being a new user I couldn't post images and could only post one link. The images just show what the tabs render like after each scenario: adding more than one tab at a time does not reset the ZIndex of the other recently-added tabs, so they go behind the tab on the right; and closing a tab does not correctly render the ZIndex of the selected tab that replaces it, so it shows up behind the tab on its right.


  • SQLiteDataAdapter Fill exception

    - by Lirik
    I'm trying to use the OleDb CSV parser to load some data from a CSV file and insert it into a SQLite database, but I get an exception with the OleDbDataAdapter.Fill method, and it's frustrating:

An unhandled exception of type 'System.Data.ConstraintException' occurred in System.Data.dll
Additional information: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.

Here is the source code:

public void InsertData(String csvFileName, String tableName)
{
    String dir = Path.GetDirectoryName(csvFileName);
    String name = Path.GetFileName(csvFileName);

    using (OleDbConnection conn =
        new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir +
                            @";Extended Properties=""Text;HDR=No;FMT=Delimited"""))
    {
        conn.Open();
        using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM " + name, conn))
        {
            QuoteDataSet ds = new QuoteDataSet();
            adapter.Fill(ds, tableName);   // <-- Exception here
            InsertData(ds, tableName);     // <-- Inserts the data into my SQLite db
        }
    }
}

class Program
{
    static void Main(string[] args)
    {
        SQLiteDatabase target = new SQLiteDatabase();
        string csvFileName = "D:\\Innovations\\Finch\\dev\\DataFeed\\YahooTagsInfo.csv";
        string tableName = "Tags";
        target.InsertData(csvFileName, tableName);
        Console.ReadKey();
    }
}

The "YahooTagsInfo.csv" file looks like this:

tagId,tagName,description,colName,dataType,realTime
1,s,Symbol,symbol,VARCHAR,FALSE
2,c8,After Hours Change,afterhours,DOUBLE,TRUE
3,g3,Annualized Gain,annualizedGain,DOUBLE,FALSE
4,a,Ask,ask,DOUBLE,FALSE
5,a5,Ask Size,askSize,DOUBLE,FALSE
6,a2,Average Daily Volume,avgDailyVolume,DOUBLE,FALSE
7,b,Bid,bid,DOUBLE,FALSE
8,b6,Bid Size,bidSize,DOUBLE,FALSE
9,b4,Book Value,bookValue,DOUBLE,FALSE

I've tried the following:
1. Removing the first line in the CSV file so it doesn't confuse it for real data.
2. Changing the TRUE/FALSE realTime flag to 1/0.
3. Both 1 and 2 together (i.e. removed the first line and changed the flag).

None of these things helped. One constraint is that the tagId is supposed to be unique. Here is what the table looks like in design view.

Update: I changed the HDR property from HDR=No to HDR=Yes and now it doesn't give me an exception:

OleDbConnection conn =
    new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir +
                        @";Extended Properties=""Text;HDR=Yes;FMT=Delimited""");

I assumed that if HDR=No and I removed the header (i.e. the first line), then it should work; strangely, it didn't. In any case, I'm no longer getting the exception. The new problem is here:

public void InsertData(QuoteDataSet data, String tableName)
{
    using (SQLiteConnection conn = new SQLiteConnection(_connectionString))
    {
        conn.Open();
        using (SQLiteDataAdapter sqliteAdapter = new SQLiteDataAdapter("SELECT * FROM " + tableName, conn))
        {
            Console.WriteLine("Num rows updated is " + sqliteAdapter.Update(data, tableName));
        }
    }
}

Now the number of rows updated is 0... any hints?
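A possible explanation, offered as an untested sketch rather than a confirmed fix: DataAdapter.Fill marks the filled rows as Unchanged by default, so a later Update sees nothing to write, and a SQLiteDataAdapter also needs an InsertCommand (for example from a SQLiteCommandBuilder) before it can push rows. AcceptChangesDuringFill and SQLiteCommandBuilder are standard ADO.NET APIs; the table and connection names are taken from the question above.

// Keep the freshly filled rows in the Added state so Update() inserts them
adapter.AcceptChangesDuringFill = false;
adapter.Fill(ds, tableName);

// ... later, when writing to SQLite:
using (var conn = new SQLiteConnection(_connectionString))
{
    conn.Open();
    using (var sqliteAdapter = new SQLiteDataAdapter("SELECT * FROM " + tableName, conn))
    using (var builder = new SQLiteCommandBuilder(sqliteAdapter)) // supplies INSERT/UPDATE/DELETE commands
    {
        Console.WriteLine("Num rows updated is " + sqliteAdapter.Update(ds, tableName));
    }
}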


  • asp.net mvc2 - controller for master page and code organization

    - by ile
    I've just finished my first ASP.NET MVC (2) CMS. The next step is to build the website that will show data from the CMS's database. This is the website design:

#1 (red box) - displays article categories. ViewModel:

public class CategoriesDisplay
{
    public CategoriesDisplay() { }
    public int CategoryID { set; get; }
    public string CategoryTitle { set; get; }
}

#2 (brown box) - displays the last x articles, skipping those from the green box (#3). ViewModel:

public class ArticleDisplay
{
    public ArticleDisplay() { }
    public int CategoryID { set; get; }
    public string CategoryTitle { set; get; }
    public int ArticleID { set; get; }
    public string ArticleTitle { set; get; }
    public string URLArticleTitle { set; get; }
    public DateTime ArticleDate;
    public string ArticleContent { set; get; }
}

#3 (green box) - displays the last x articles. Uses the same ViewModel as the brown box (#2).

#4 (blue box) - displays a list of upcoming events. Uses dataContext.Model.Event as its ViewModel.

Boxes #1, #2 and #4 will repeat all over the site, and they are part of the master page. So, my question is: what is the best way to move this data from the Model to the Controller and finally to the View pages? Should I:

1. Make a controller for the master page and a ViewModel class that wraps all these classes together, OR
2. Create partial views for each of these boxes and make each of them inherit the appropriate class (if it is even possible to make it work this way), OR
3. Put this repeated code in all controllers and pass the additional data via ViewData, which would probably be the worst way, OR
4. Is there maybe a better and simpler way that I don't see?

Thanks in advance, Ile

EDIT: If your answer is #1, then please explain how to make a controller for the master page!

EDIT 2: This tutorial describes how to pass data to a master page using an abstract class: http://www.asp.net/LEARN/mvc/tutorial-13-cs.aspx. In "Listing 5 - Controllers\MoviesController.cs", the data is retrieved directly from the database using LINQ, not from a repository. So I wonder whether this is just for the tutorial, or whether there is some trick here and a repository can't/shouldn't be used.
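Regarding the first EDIT, one common shape for option #1 is an abstract base controller that loads the shared master-page data once, so every page controller inherits it. This is a sketch under assumed names (ICmsRepository and its methods are invented; any data access returning the ViewModels above would do), not the tutorial's exact code:

using System.Collections.Generic;
using System.Web.Mvc;

// Assumed abstraction over the CMS database.
public interface ICmsRepository
{
    IEnumerable<CategoriesDisplay> GetCategories();
    IEnumerable<ArticleDisplay> GetLatestArticles(int count);
    IEnumerable<object> GetUpcomingEvents(); // dataContext.Model.Event in the question
}

// Every controller that renders the master page derives from this.
// (Constructor injection assumes a DI-aware controller factory; a
// parameterless constructor newing up the repository also works.)
public abstract class MasterPageController : Controller
{
    protected MasterPageController(ICmsRepository repository)
    {
        // Populate the data for the repeated boxes (#1, #2, #4) once.
        ViewData["Categories"] = repository.GetCategories();          // red box
        ViewData["LatestArticles"] = repository.GetLatestArticles(5); // brown box
        ViewData["UpcomingEvents"] = repository.GetUpcomingEvents();  // blue box
    }
}

public class HomeController : MasterPageController
{
    public HomeController(ICmsRepository repository) : base(repository) { }

    public ActionResult Index()
    {
        return View(); // the master page reads the ViewData entries above
    }
}

The master page then casts ViewData["Categories"] and friends back to the corresponding ViewModel collections, so the repeated boxes never appear in individual action methods.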


  • ASP.NET GridView second header row to span main header row

    - by Dana Robinson
    I have an ASP.NET GridView which has columns that look like this:

| Foo | Bar | Total1 | Total2 | Total3 |

Is it possible to create a header on two rows that looks like this?

|     |     |     Totals     |
| Foo | Bar |  1  |  2  |  3 |

The data in each row will remain unchanged, as this is just to pretty up the header and decrease the horizontal space that the grid takes up. The entire GridView is sortable, in case that matters. I don't intend for the added "Totals" spanning column to have any sort functionality.

Edit: Based on one of the articles given below, I created a class which inherits from GridView and adds the second header row in.

namespace CustomControls
{
    public class TwoHeadedGridView : GridView
    {
        protected Table InnerTable
        {
            get
            {
                if (this.HasControls())
                {
                    return (Table)this.Controls[0];
                }
                return null;
            }
        }

        protected override void OnDataBound(EventArgs e)
        {
            base.OnDataBound(e);
            this.CreateSecondHeader();
        }

        private void CreateSecondHeader()
        {
            GridViewRow row = new GridViewRow(0, -1, DataControlRowType.Header, DataControlRowState.Normal);

            TableCell left = new TableHeaderCell();
            left.ColumnSpan = 3;
            row.Cells.Add(left);

            TableCell totals = new TableHeaderCell();
            totals.ColumnSpan = this.Columns.Count - 3;
            totals.Text = "Totals";
            row.Cells.Add(totals);

            this.InnerTable.Rows.AddAt(0, row);
        }
    }
}

In case you are new to ASP.NET like I am, I should also point out that you need to:

1) Register your class by adding a line like this to your web form:

<%@ Register TagPrefix="foo" NameSpace="CustomControls" Assembly="__code"%>

2) Change asp:GridView in your previous markup to foo:TwoHeadedGridView. Don't forget the closing tag.

Another edit: You can also do this without creating a custom class. Simply add an event handler for the DataBound event of your grid like this:

protected void gvOrganisms_DataBound(object sender, EventArgs e)
{
    GridView grid = sender as GridView;
    if (grid != null)
    {
        GridViewRow row = new GridViewRow(0, -1, DataControlRowType.Header, DataControlRowState.Normal);

        TableCell left = new TableHeaderCell();
        left.ColumnSpan = 3;
        row.Cells.Add(left);

        TableCell totals = new TableHeaderCell();
        totals.ColumnSpan = grid.Columns.Count - 3;
        totals.Text = "Totals";
        row.Cells.Add(totals);

        Table t = grid.Controls[0] as Table;
        if (t != null)
        {
            t.Rows.AddAt(0, row);
        }
    }
}

The advantage of the custom control is that you can see the extra header row in the design view of your web form. The event handler method is a bit simpler, though.


  • Nasty mono bug with F#

    - by Aurimas Anskaitis
    Hi, I have this monstrous F# 2.0 program. My Mono version is:

Mono JIT compiler version 2.9 (master/f593354 Sun Dec 26 03:15:55 EET 2010)
Copyright (C) 2002-2010 Novell, Inc and Contributors. www.mono-project.com
        TLS:           __thread
        SIGSEGV:       altstack
        Notifications: epoll
        Architecture:  x86
        Disabled:      none
        Misc:          softdebug
        LLVM:          supported, not enabled.
        GC:            Included Boehm (with typed GC and Parallel Mark)

//---------------------------------------------
module Main

let rec gcd x y =
    if y = 0 then x
    else gcd y (x % y)

let main =
    printfn "%i" (gcd 4 2)

main
//-----------------------------------------------

And the problem is that the output from running the program is as follows:

Stacktrace:

  at (wrapper managed-to-native) System.Reflection.MonoMethodInfo.get_parameter_info (intptr,System.Reflection.MemberInfo) <0xffffffff>
  at System.Reflection.MonoMethodInfo.GetParametersInfo (intptr,System.Reflection.MemberInfo) <0x00013>
  at System.Reflection.MonoCMethod.GetParameters () <0x00015>
  at System.Reflection.MonoCMethod.Invoke (object,System.Reflection.BindingFlags,System.Reflection.Binder,object[],System.Globalization.CultureInfo) <0x00035>
  at System.Reflection.MonoCMethod.Invoke (System.Reflection.BindingFlags,System.Reflection.Binder,object[],System.Globalization.CultureInfo) <0x00024>
  at System.Reflection.ConstructorInfo.Invoke (object[]) <0x0003f>
  at System.Activator.CreateInstance (System.Type,bool) <0x0017c>
  at System.Activator.CreateInstance (System.Type) <0x00012>
  at Microsoft.FSharp.Reflection.FSharpValue.MakeFunction (System.Type,Microsoft.FSharp.Core.FSharpFunc`2<object, object>) <0x00145>
  at Microsoft.FSharp.Core.PrintfImpl.capture@529<b, c, d> (Microsoft.FSharp.Core.FSharpFunc`2, Microsoft.FSharp.Core.FSharpFunc`2<char, Microsoft.FSharp.Core.Unit>, Microsoft.FSharp.Core.FSharpFunc`2,string,int,Microsoft.FSharp.Collections.FSharpList`1<object>,System.Type,int) <0x00147>
  at Microsoft.FSharp.Core.PrintfImpl.gprintf<b, c, d, a> (Microsoft.FSharp.Core.FSharpFunc`2, Microsoft.FSharp.Core.FSharpFunc`2<char, Microsoft.FSharp.Core.Unit>, Microsoft.FSharp.Core.FSharpFunc`2,Microsoft.FSharp.Core.PrintfFormat`4<a, b, c, d>) <0x000dd>
  at Microsoft.FSharp.Core.PrintfModule.kprintf_imperative<a, b, c> (Microsoft.FSharp.Core.FSharpFunc`2,b,Microsoft.FSharp.Core.FSharpFunc`2<char, Microsoft.FSharp.Core.Unit>,Microsoft.FSharp.Core.PrintfFormat`4) <0x00058>
  at Microsoft.FSharp.Core.PrintfModule.PrintFormatToTextWriterThen (Microsoft.FSharp.Core.FSharpFunc`2<Microsoft.FSharp.Core.Unit, TResult>,System.IO.TextWriter,Microsoft.FSharp.Core.PrintfFormat`4) <0x0004d>
  at Microsoft.FSharp.Core.PrintfModule.PrintFormatLineToTextWriter (System.IO.TextWriter,Microsoft.FSharp.Core.PrintfFormat`4) <0x0004d>
  at .$Main.main@ () <0x00042>

Native stacktrace:

        mono() [0x80dc13b]
        mono() [0x811c65b]
        mono() [0x8059a11]
        [0x7af40c]
        mono() [0x8228214]   (this frame repeated eight times)
        mono() [0x8228282]
        mono() [0x822991e]
        mono() [0x822aa9d]
        mono(mono_array_new_specific+0xea) [0x813ba9a]
        mono() [0x81c63a1]
        mono() [0x8149ac8]
        [0xc04328] [0xc042e4] [0xc042be] [0xc0455e] [0xc0451d] [0xc044d8]
        [0xc0349d] [0xc0330b] [0xc02f9e] [0xbfe960] [0xbfe6c6] [0xbfe571]
        [0xbfe4de] [0xbfe44e] [0xbf9d2b]
        [0x9bf0724]   (the same frame repeated several hundred times)

Debug info from gdb:

Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.

=================================================================
Got a SIGSEGV while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application.
=================================================================

Aborted

Is this a problem with Mono, or with F#? By the way, the same function works when using pattern matching instead of "if".


  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a LINQ to SQL entity which has an EntitySet. In my View I display the entity with its properties, plus an editable list for the child entities. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far; it correctly binds the child entities. Now my problem is that I just can't get LINQ to SQL to delete the deleted child entities. It will happily add new ones, but not delete the removed ones. I have enabled cascade delete in the foreign-key relationship, and the LINQ to SQL designer added the DeleteOnNull="true" attribute to the foreign-key relationship. If I manually delete a child entity like this:

myObject.Childs.Remove(child);
context.SubmitChanges();

it will delete the child record from the DB. But I can't get it to work for a model-bound object. I tried the following:

// this does nothing
public ActionResult Update(int id, MyObject obj) // obj now has 4 child entities
{
    var obj2 = _repository.GetObj(id); // obj2 has 6 child entities
    if (TryUpdateModel(obj2)) // it successfully updates obj2 and its childs
    {
        _repository.SubmitChanges(); // nothing happens, records stay in DB
    }
    else
    {
        // .....
    }
    return RedirectToAction("List");
}

and this throws an InvalidOperationException. I have a German OS, so I'm not exactly sure what the error message is in English, but it says something along the lines of the entity needing a version (timestamp column?) or no update-check policies. I have set UpdateCheck="Never" on every column except the primary key column.

public ActionResult Update(MyObject obj)
{
    _repository.MyObjectTable.Attach(obj, true);
    _repository.SubmitChanges(); // never gets here, exception at Attach
}

I've read a lot about similar "problems" with LINQ to SQL, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work like I expect it to? Do I really have to iterate manually through the child entities and delete, update and insert them by hand? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what doesn't. So far I'm disappointed with LINQ to SQL (maybe I just don't get it). Would the Entity Framework or NHibernate be a better choice for this scenario, or would I run into the same problem?
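For what it's worth, a minimal sketch of the manual reconciliation the question asks about, assuming the repository exposes the underlying DataContext as _context; "Child" and "ChildID" are stand-ins for the real child entity and its key, and GetTable/DeleteAllOnSubmit are standard LINQ to SQL APIs:

// Reconcile the posted children against the loaded entity by hand.
var posted = obj.Childs.ToList();    // children from the model binder
var existing = obj2.Childs.ToList(); // children currently in the DB

// Find children that are no longer present in the posted data
var removed = existing
    .Where(c => !posted.Any(p => p.ChildID == c.ChildID))
    .ToList();

foreach (var child in removed)
{
    obj2.Childs.Remove(child); // detach from the parent (DeleteOnNull lets this cascade)
}
_context.GetTable<Child>().DeleteAllOnSubmit(removed); // mark the rows for deletion

_context.SubmitChanges();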


  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception?

Consider a WCF service that is supposed to use PowerShell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for PowerShell, and via PowerShell for MSBuild) rather than by 'shelling out'. This was a specific design decision to avoid command-line hell, as well as to enable passing actual objects into the PowerShell script.

The PowerShell script that calls MSBuild is shown below:

function Run-MSBuild
{
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine")

    $engine = New-Object Microsoft.Build.BuildEngine.Engine
    $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5"

    $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5")
    $project.Load("deploy.targets")
    $project.InitialTargets = "DoStuff"

    #
    # Set some initial Properties & Items
    #

    # Optionally set up some loggers (have also tried it without any loggers)
    $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger
    $engine.RegisterLogger($consoleLogger)

    $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger
    $fileLogger.Parameters = "verbosity=diagnostic"
    $engine.RegisterLogger($fileLogger)

    # Run the build - this is the line that throws a CmdletInvocationException
    $result = $project.Build()

    $engine.Shutdown()
}

When running the above script from a PowerShell command prompt, it all works fine. However, as soon as the script is executed from C#, it fails with the above exception. The C# code being used to call PowerShell is shown below (remoting functionality removed for simplicity's sake):

// Build the DTO object that will be passed to PowerShell
dto = SetupDTO();

RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();
using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig))
{
    runspace.Open();
    IList errors;
    using (var scriptInvoker = new RunspaceInvoke(runspace))
    {
        // The PowerShell script lives in a file that gets compiled as an embedded resource
        TextReader tr = new StreamReader(
            Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource"));
        string script = tr.ReadToEnd();

        // Load the script into the Runspace
        scriptInvoker.Invoke(script);

        // Call the function defined in the script, passing the DTO as an input object
        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
    }
}

Assuming that the issue was related to MSBuild outputting something that the PowerShell runspace can't cope with, I have also tried the following variations on the second .Invoke() call:

var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors);
var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors);
var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null");

I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls being made to it. Do the great and the good of Stack Overflow have any insight that might save my sanity?


  • WPF Formatting Issues - Automatically stretching and resizing?

    - by Adam S
    I'm very new to WPF and XAML. I am trying to design a basic data-entry form. I have used a StackPanel holding four more StackPanels to get the layout I want. Perhaps a Grid would be better for this; I am not sure. Here is an image of my form in action: http://yfrog.com/7gscreenshot1impp

And here is the XAML code that generates it:

<Window x:Class="Test1.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="224" Width="536.762">
  <StackPanel Height="Auto" Name="stackPanel1" Width="Auto" Orientation="Horizontal">
    <StackPanel Height="Auto" Name="stackPanel2" Width="Auto">
      <Label Height="Auto" Name="label1" Width="Auto">Patient Name:</Label>
      <Label Height="Auto" Name="label2" Width="Auto">Physician:</Label>
      <Label Height="Auto" Name="label3" Width="Auto">Insurance:</Label>
      <Label Height="Auto" Name="label4" Width="Auto">Therapy Goals:</Label>
    </StackPanel>
    <StackPanel Height="Auto" Name="stackPanel3" Width="Auto">
      <TextBox Height="Auto" Name="textBox1" Width="Auto" Padding="3" Margin="1" />
      <TextBox Height="Auto" Name="textBox2" Width="Auto" Padding="3" Margin="1" />
      <TextBox Height="Auto" Name="textBox3" Width="Auto" Padding="3" Margin="1" />
      <TextBox Height="Auto" Name="textBox4" Width="Auto" Padding="3" Margin="1" />
    </StackPanel>
    <StackPanel Height="Auto" Name="stackPanel4" Width="Auto">
      <Label Height="Auto" Name="label5" Width="Auto">Date:</Label>
      <Label Height="Auto" Name="label6" Width="Auto">Patient Phone:</Label>
      <Label Height="Auto" Name="label7" Width="Auto">Facility:</Label>
      <Label Height="Auto" Name="label8" Width="Auto">Referring Physician:</Label>
    </StackPanel>
    <StackPanel Height="Auto" Name="stackPanel5" Width="Auto">
      <TextBox Height="Auto" Name="textBox5" Width="Auto" Padding="3" Margin="1" />
      <TextBox Height="Auto" Name="textBox6" Width="Auto" Padding="3" Margin="1" />
      <TextBox Height="Auto" Name="textBox7" Width="Auto" Padding="3" Margin="1" />
      <TextBox Height="Auto" Name="textBox8" Width="Auto" Padding="3" Margin="1" />
    </StackPanel>
  </StackPanel>
</Window>

What I really want is for the text boxes to stretch equally to fill up the horizontal space. I would also like the controls in each vertical StackPanel to 'spread out' evenly as the window is resized vertically. Can any of you experts out there help me out?

