Search Results

Search found 1095 results on 44 pages for 'aspects'.

Page 43 of 44

  • How can I tell the size of my app during development?

    - by Newbyman
    My programming decisions are directly related to how much room I have left, or worse, how much I need to shave off in order to get under the 10 MB limit. I have read that Apple has quietly increased the 3G & EDGE download limit from 10 MB to 20 MB in preparation for the iPad in April. Either way, my real question is: how can I get a rough estimate of how large my app will end up while I'm still in the development phase? Is the file size of my development folder roughly a 1:1 ratio? Is the compressed file size of my development folder a better approximation? My .xcodeproj file is only a couple hundred KB, but the size of my folder is 11.8 MB. I have a .sqlite database, fewer than 20 small PNG images, and a Settings.bundle. The rest are Xcode files I don't recognize, related to build, build for iPhone OS, the simulator, and so on. My source code is rather large, with around 1000 lines in most of the major controllers and, all in all, around 48 .h and .m files, but my Classes folder inside my development folder is less than 800 KB. Digging around inside my build folder, there are lots of iPhone Simulator files and debugging files which I don't think will contribute to the final product. The application file states that it is around 2.3 MB. However, this is such a large difference from the 11.8 MB that I have to wonder if this is just another piece of the equation. I have the app on my device and I'm in the testing phase, so I thought I would try to see how large the working version was on the device by checking in iTunes; however, while my development app is visible on the right-hand side of the iPhone applications screen, there is no information about the app, most importantly its size. I also checked in Organizer: in the lower portion of the screen (Applications) I found my application and selected the drop-down arrow, which gave me "Application Data" and a download arrow button on the right to save a file to my desktop, named with the unique Apple ID. Inside, the folder had three folders (Documents, Library, tmp): Documents had a copy of my .sqlite database, Library had a few more files but nothing obvious or of any size, and tmp was empty. All in all the entire folder was only 164 KB, which tells me this is not the right place to find the size either. I understand that the size is considered to be the size of my binary plus all the additional files and images that I have added. Does anyone have an effective way of gauging how large the binary is, or of relating the development folder size to what the final App Store application size will end up being? I know that questions have been posted with similar aspects, but I could not find any answered post that really described what files to look at or how to determine the size specifically. I know that this question reads like a book, but I just wanted to be specific in conveying exactly what I'm looking for and the attempts I've made so far. *Note: all files are unzipped and still in regular working Xcode order, a single app with no brought-in builds or referenced projects. I'm sure this is straightforward; I just don't know where to look.

    Read the article

  • MVC design in Cocoa - are all 3 always necessary? Also: naming conventions, where to put Controller

    - by Nektarios
    I'm new to MVC, although I've read a lot of papers and information on the web. I know it's somewhat ambiguous and there are many different interpretations of MVC patterns, but the differences seem somewhat minimal. My main question is: are M, V, and C always necessary to be doing this right? I haven't seen anyone address this in anything I've read. Examples (I'm working in Cocoa/Obj-C, although that shouldn't much matter): 1) If I have a simple image on my GUI, or a text entry field that is just for a user's convenience and isn't saved or modified, these both would be V (view), but there's no M (no data and no domain processing going on), and no C to bridge them. So I just have some aspects that are "V" - seems fine. 2) I have 2 different and visible windows that each have a button on them labeled "ACTIVATE FOO" - when a user clicks the button on either, both buttons press in and change to say "DEACTIVATE FOO", and a third window appears with the label "FOO". Clicking the button again will change the button on both windows to "ACTIVATE FOO" and will remove the third "FOO" window. In this case, my V consists of the buttons on both windows, and I guess also the third window (maybe all 3 windows). I definitely have a C; my Controller object will know about these buttons and windows, will get their clicks, and will hold generic state about windows and buttons. However, whether I have 1 button or 10 buttons, or whether my window is called "FOO" or "BAR", doesn't matter. There's no domain knowledge or data here - just control of views. So in this example, I really have "V" and "C" but no "M" - is that ok? 3) Final example, which I am running into the most. I have a text entry field as my View. When I enter text in this, say a number representing gravity, I keep it in a Model that may do things like compute the physics of a ball while taking into account my gravity parameter. Here I have a V and an M, but I don't understand why I would need to add a C - a controller would just accept the signals from the View and pass them along to the Model, and vice versa. Since the C is just a pure passthrough, it's really "junk" code and isn't making things any more reusable, in my opinion. In most situations, when something changes I will need to change the C and M both in nearly identical ways. I realize it's probably an MVC beginner's mistake to think most situations call for only V and M, which leads me into the next subject. 4) In Cocoa / Xcode / IB, I guess my Controllers should always be instantiated objects in IB? That is, I lay out all of my "V" components in IB, and for each collection of View objects (things that are related) I should have an instantiated Controller? And then perhaps my Models should NOT be found in IB, and instead exist only as classes in Xcode that tie in with the Controller code found there. Is this accurate? This could explain why you'd have a Controller that is not really adding value - because you are keeping things consistent. 5) What about naming these things? For my example above about FOO / BAR, maybe something that ends in Controller would be the C, like FancyWindowOpeningController. And for models, should I use a suffix, like GravityBallPhysicsModel, or should I just name them whatever I like? I haven't seen enough code to know what's out there in the wild, and I want to get on the right track early on. Thank you in advance for setting me straight or letting me know I'm on the right track. I feel like I'm starting to get it and most of what I say here makes sense, but validation of my guesses would help me feel confident.
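
    For question 3 above, it can help to see the "pure passthrough" controller written out. The post is about Cocoa, but the shape of the issue is language-agnostic; the following is a minimal, hypothetical C# sketch (GravityView, PhysicsModel, and GravityController are invented names) showing a controller that does nothing but forward a view value to a model - exactly the situation where the C layer feels like junk code:

using System;

// Hypothetical names for illustration; this is C#, not Cocoa.
class PhysicsModel
{
    public double Gravity { get; set; }
    public double BallDropTime(double height) => Math.Sqrt(2 * height / Gravity);
}

class GravityView
{
    // The view only announces that its text field changed.
    public event Action<string> GravityTextChanged;
    public void SimulateUserTyping(string text) => GravityTextChanged?.Invoke(text);
}

// The "pure passthrough" controller from question 3: it only forwards the value,
// so changes to the model's interface tend to ripple into this class as well.
class GravityController
{
    public GravityController(GravityView view, PhysicsModel model)
    {
        view.GravityTextChanged += text =>
        {
            if (double.TryParse(text, out var g))
                model.Gravity = g;
        };
    }
}

class Program
{
    static void Main()
    {
        var model = new PhysicsModel();
        var view = new GravityView();
        new GravityController(view, model);        // wire V to M

        view.SimulateUserTyping("9.81");
        Console.WriteLine(model.BallDropTime(10)); // ~1.43 s for a 10 m drop
    }
}

    The sketch is only meant to make the trade-off concrete: binding the view straight to the model removes this layer, at the cost of coupling the two directly.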

    Read the article

  • How to best propagate changes up a hierarchical structure for binding?

    - by H.B.
    If I have a folder-like structure that uses the composite design pattern and I bind the root folder to a TreeView, it would be quite useful if I could display certain properties that are accumulated from the folder's contents. The question is, how do I best inform the folder that changes occurred in a child element so that the accumulative properties get updated? The context in which I need this is a small RSS feed reader I am trying to make. These are the most important objects and aspects of my model: Composite interface: public interface IFeedComposite : INotifyPropertyChanged { string Title { get; set; } int UnreadFeedItemsCount { get; } ObservableCollection<FeedItem> FeedItems { get; } } FeedComposite (aka Folder): public class FeedComposite : BindableObject, IFeedComposite { private string title = ""; public string Title { get { return title; } set { title = value; NotifyPropertyChanged("Title"); } } private ObservableCollection<IFeedComposite> children = new ObservableCollection<IFeedComposite>(); public ObservableCollection<IFeedComposite> Children { get { return children; } set { children.Clear(); foreach (IFeedComposite item in value) { children.Add(item); } NotifyPropertyChanged("Children"); } } public FeedComposite() { } public FeedComposite(string title) { Title = title; } public ObservableCollection<FeedItem> FeedItems { get { ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>(); foreach (IFeedComposite child in Children) { foreach (FeedItem item in child.FeedItems) { feedItems.Add(item); } } return feedItems; } } public int UnreadFeedItemsCount { get { return (from i in FeedItems where i.IsUnread select i).Count(); } } Feed: public class Feed : BindableObject, IFeedComposite { private string url = ""; public string Url { get { return url; } set { url = value; NotifyPropertyChanged("Url"); } } ... private ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>(); public ObservableCollection<FeedItem> FeedItems { get { return feedItems; } set { feedItems.Clear(); foreach (FeedItem item in value) { AddFeedItem(item); } NotifyPropertyChanged("Items"); } } public int UnreadFeedItemsCount { get { return (from i in FeedItems where i.IsUnread select i).Count(); } } public Feed() { } public Feed(string url) { Url = url; } OK, so here's the thing: if I bind a TextBlock.Text to UnreadFeedItemsCount, there won't be simple notifications when an item is marked unread. One of my approaches has been to handle the PropertyChanged event of every FeedItem, and if the IsUnread property changes I have my Feed raise a notification that the UnreadFeedItemsCount property has changed. With this approach I also need to handle all PropertyChanged events of all Feeds and FeedComposites in the Children of a FeedComposite; from the sound of it, it should be obvious that this is not such a good idea - you need to be very careful that items never get added or removed from any collection without having attached the PropertyChanged event handler first, and things like that. Also: what do I do with the CollectionChanged events, which necessarily also cause a change in the sum of the unread items count? Sounds like more event handling fun. It is such a mess; it would be great if anyone has an elegant solution to this, since I don't want the feed reader to end up as awful as my first attempt years ago, when I didn't even know about data binding...
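
    One way to tame the subscription bookkeeping the post describes is to centralize it in the composite itself: attach handlers whenever the ObservableCollections change, detach them when elements leave, and re-raise a single PropertyChanged for UnreadFeedItemsCount so the TreeView binding updates. The following is a minimal, hypothetical sketch of that wiring (FolderNode and Rewire are invented names; it is not the poster's code or an accepted answer, just the bubbling approach shown compactly):

using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.ComponentModel;

// Hypothetical sketch: a folder node that bubbles changes from items and child
// folders up to its own UnreadFeedItemsCount so a bound TreeView stays current.
public class FeedItem : INotifyPropertyChanged
{
    private bool isUnread = true;
    public bool IsUnread
    {
        get => isUnread;
        set { isUnread = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(IsUnread))); }
    }
    public event PropertyChangedEventHandler PropertyChanged;
}

public class FolderNode : INotifyPropertyChanged
{
    public ObservableCollection<FeedItem> FeedItems { get; } = new ObservableCollection<FeedItem>();
    public ObservableCollection<FolderNode> Children { get; } = new ObservableCollection<FolderNode>();
    public event PropertyChangedEventHandler PropertyChanged;

    public FolderNode()
    {
        FeedItems.CollectionChanged += (s, e) => { Rewire<FeedItem>(e, OnItemChanged); RaiseCountChanged(); };
        Children.CollectionChanged += (s, e) => { Rewire<FolderNode>(e, OnChildChanged); RaiseCountChanged(); };
    }

    public int UnreadFeedItemsCount
    {
        get
        {
            int count = 0;
            foreach (var item in FeedItems) if (item.IsUnread) count++;
            foreach (var child in Children) count += child.UnreadFeedItemsCount;
            return count;
        }
    }

    // Attach/detach PropertyChanged handlers as elements enter or leave a collection.
    // Note: a Reset (Clear) does not report removed items, so a production version
    // would need to track attached handlers explicitly.
    private static void Rewire<T>(NotifyCollectionChangedEventArgs e, PropertyChangedEventHandler handler)
        where T : INotifyPropertyChanged
    {
        if (e.OldItems != null) foreach (T o in e.OldItems) o.PropertyChanged -= handler;
        if (e.NewItems != null) foreach (T n in e.NewItems) n.PropertyChanged += handler;
    }

    private void OnItemChanged(object sender, PropertyChangedEventArgs e)
    {
        if (e.PropertyName == nameof(FeedItem.IsUnread)) RaiseCountChanged();
    }

    private void OnChildChanged(object sender, PropertyChangedEventArgs e)
    {
        if (e.PropertyName == nameof(UnreadFeedItemsCount)) RaiseCountChanged();  // bubble upward
    }

    private void RaiseCountChanged() =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(UnreadFeedItemsCount)));
}

    The detach in Rewire avoids leaking handlers when nodes are removed; as the post notes, that bookkeeping is the real cost of this approach, which is why keeping it in one place matters.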

    Read the article

  • What does a WinForm application need to be designed for usability, and to be robust, clean, and professional?

    - by msorens
    One of the principal problems impeding productivity in software implementation is the classic conundrum of “reinventing the wheel”. Of late I am a .NET developer and even the wonderful wizardry of .NET and Visual Studio covers only a portion of this challenging issue. Below I present my initial thoughts both on what is available and what should be available from .NET on a WinForm, focusing on good usability. That is, aspects of an application exposed to the user and making the user experience easier and/or better. (I do include a couple items not visible to the user because I feel strongly about them, such as diagnostics.) I invite you to contribute to these lists. LIST A: Components provided by .NET These are substantially complete components provided by .NET, i.e. those requiring at most trivial coding to use. “About” dialog -- add it with a couple clicks then customize. Persist settings across invocations -- .NET has the support; just use a few lines of code to glue them together. Migrate settings with a new version -- a powerful one, available with one line of code. Tooltips (and infotips) -- .NET includes just plain text tooltips; third-party libraries provide richer ones. Diagnostic support -- TraceSources, TraceListeners, and more are built-in. Internationalization -- support for tailoring your app to languages other than your own. LIST B: Components not provided by .NET These are not supplied at all by .NET or supplied only as rudimentary elements requiring substantial work to be realized. Splash screen -- a small window present during program startup with your logo, loading messages, etc. Tip of the day -- a mini-tutorial presented one bit at a time each time the user starts your app. Check for available updates -- facility to query a server to see if the user is running the latest version of your app, then provide a simple way to upgrade if a new version is found. Maximize to multiple monitors -- the canonical window allows you to maximize to a single monitor only; in my apps I allow maximizing across multiple monitors with a click. Taskbar notifier -- flash the taskbar when your backgrounded app has new info for the user. Options dialogs -- multi-page dialogs letting the user customize the app settings to his/her own preferences. Progress indicator -- for long running operations give the user feedback on how far there is left to go. Memory gauge -- an indicator (either absolute or percentage) of how much memory is used by your app. LIST C: Stylistic and/or tiny bits of functionality This list includes bits of functionality that are too tiny to merit being called a component, along with stylistic concerns (that admittedly do overlap with the Windows User Experience Interaction Guidelines). Design a form for resizing -- unless you are restricting your form to be a fixed size, use anchors and docking so that it does what is reasonable when enlarged or shrunk by the user. Set tab order on a form -- repeated tab presses by the user should advance from field to field in a logical order rather than the default order in which you added fields. Adjust controls to be aware of operating modes -- When starting a background operation with, for example, a “Go” button, disable that “Go” button until the operation completes. Provide access keys for all menu items (per UXGuide). Provide shortcut keys for commonly used menu items (per UXGuide). Set up some (global or important or common) shortcut keys without associating to menu items. 
Allow some menu items to be invoked with or without modifier keys (shift, control, alt) where the modifier key is useful to vary the operation slightly. Hook up Escape and Enter on child forms to do what is reasonable. Decorate any library classes with documentation-comments and attributes -- this allows Visual Studio to leverage them for Intellisense and property descriptions. Spell check your code! What else would you include?
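
    Two of the List A items above - "Persist settings across invocations" and "Migrate settings with a new version" - are easy to show as the few lines of glue code the list alludes to. Below is a minimal, hypothetical WinForms sketch; the AppSettings class mirrors what the Visual Studio Settings designer would generate, and the WindowSize and UpgradeRequired setting names are invented for illustration:

using System;
using System.Configuration;
using System.Drawing;
using System.Windows.Forms;

// A hand-written equivalent of the designer-generated settings class.
// The setting names (WindowSize, UpgradeRequired) are illustrative only.
public sealed class AppSettings : ApplicationSettingsBase
{
    public static AppSettings Default { get; } = (AppSettings)Synchronized(new AppSettings());

    [UserScopedSetting, DefaultSettingValue("0, 0")]
    public Size WindowSize
    {
        get => (Size)this[nameof(WindowSize)];
        set => this[nameof(WindowSize)] = value;
    }

    [UserScopedSetting, DefaultSettingValue("True")]
    public bool UpgradeRequired
    {
        get => (bool)this[nameof(UpgradeRequired)];
        set => this[nameof(UpgradeRequired)] = value;
    }
}

public class MainForm : Form
{
    public MainForm()
    {
        // "Migrate settings with a new version": copy values saved by a previous
        // version the first time this version runs (the one-liner is Upgrade()).
        if (AppSettings.Default.UpgradeRequired)
        {
            AppSettings.Default.Upgrade();
            AppSettings.Default.UpgradeRequired = false;
            AppSettings.Default.Save();
        }

        // "Persist settings across invocations": restore on load, save on close.
        Load += (s, e) =>
        {
            if (!AppSettings.Default.WindowSize.IsEmpty)
                Size = AppSettings.Default.WindowSize;
        };
        FormClosing += (s, e) =>
        {
            AppSettings.Default.WindowSize = Size;
            AppSettings.Default.Save();
        };
    }

    [STAThread]
    static void Main() => Application.Run(new MainForm());
}

    In a real project the Settings designer generates the equivalent of AppSettings for you, so only the MainForm half is code you would actually write.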

    Read the article

  • LINQ to Twitter Maintenance Feedback

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/06/16/linq-to-twitter-maintenance-feedback.aspxIt’s always fun to receive positive feedback on your work. If you receive a sufficient amount of positive feedback, you know you’re doing something right. Sometimes, people provide negative feedback too. There are a couple ways to handle it: come back fighting or engage for clarification. The way you handle the negative feedback depends on what your goals are. Feedback Approaches If you know the feedback is incorrect and you need to promote your idea or product, you might want to come back fighting. The feedback might just be comments by a troll or competitor wanting to spread FUD. However, this could be the totally wrong approach if you misjudge the source and intentions of the feedback. In a lot of cases, feedback is a golden opportunity. Sometimes, a problem exists that you either don’t know about or don’t realize the true impact of the problem. If you decide to come back fighting, you might loose the opportunity to learn something new. However, if you engage the person providing the feedback, looking for clarification, you might learn something very important. Negative feedback and it’s clarification can lead to the collection of useful and actionable data. In my case, something that prompted this blog post, I noticed someone who tweeted a negative comment about LINQ to Twitter. Normally, any less than stellar comments are usually from folks that need help – so I help if I can. This was different. I was like “Don’t use LINQ to Twitter”. This is an open source project, the comment didn’t come from a competing project, and  sounded more like an expression of frustration. So I engaged. Not only did the person respond, but I got some decent quality feedback. What’s also interesting is a couple other side conversations sprouted on the subject, which gave me more useful data. LINQ to Twitter Thread Actions Essentially, this particular issue centered around maintenance. There are actually several sub-issues at play here: dependencies, error handling, debugging, and visibility. I’ll describe each one and my interpretation. Dependencies Dependencies are where a library has references to other libraries. This means that when you build your application, you need DLLs for the entire dependency graph for your application. There are several potential problems with this that include more libraries for configuration management, potential versioning mismatches, and lack of cross-platform support. In the early days of LINQ to Twitter, I allowed developers to contribute and add dependencies, but it became very problematic (for reasons stated). It was like a ball and chain that kept me from moving forward. So, I refactored and pulled other open-source into my project to eliminate external dependencies. This lets me fix the code in my project without relying on someone else to upgrade or fix their DLL. The motivation for this was from early negative feedback that translated as important data and acted on it. Today, LINQ to Twitter has zero dependencies. Note: Rejecting good code from community members who worked hard to make your project better is a painful experience in itself. I have to point out that any contribution was not in vain because they had a positive influence on my subsequent refactoring that resulted in a better developer experience. Error Handling Error handling has been a problem in the past. 
I have this combination of supporting both synchronous and asynchronous (APM) processing that can be complex at times. Within the last 6 months, I did a fair amount of refactoring to detect errors and process them properly. I also refactored TwitterQueryException so it includes important data from Twitter. During this refactoring, I’ve made breaking changes that I felt would improve the development experience (small things like renaming a callback property to Exception, rather than Error). I think the async error handling is much better than it was a year ago. For all the work I’ve done, there is more to do. I think that a combination of more error handling support, e.g. improving semantics, and education through documentation and samples will improve the error handling story. Because of what I’ve done so far, it isn’t bad, but I see opportunities for improvement. Debugging Debugging can be painful. Here’s why: you have multiple layers of technology to navigate and figure out where the real problem is – Twitter API, Security, HTTP, LINQ to Twitter, and application. You can probably add your own nuances to that list, but the point is that debugging in this environment can be complex. I think that my plans for error handling will contribute to making the debugging process easier. However, there’s more I can do in the way of documentation and guidance. Some of the questions to be answered revolve around when something goes wrong, how does the developer figure out that there is a problem, what the problem is, and what to do about it. One example that has gone a long way to helping LINQ to Twitter developers is the 401 FAQ. A 401 Unauthorized is the error that the Twitter API returns when a use isn’t able to authenticate and is one of the most difficult problems faced by LINQ to Twitter developers. What I did was read guidance from Twitter and collect techniques from my own development and actions helping other developers to compile an extensive list of reasons for the 401 and ways to fix the problem. At one time, over half of the questions I answered in the forums were to help solve 401 issues. After publishing the 401 FAQ, I rarely get a 401 question and it’s because the person didn’t know about the FAQ. If the person is too lazy to read the FAQ, that’s not my issue, but the results in support issues have been dramatic. I think debugging can benefit from the education and documentation approach, but I’m always open to suggestions on whatever else I can do. Visibility Visibility is a nuance of the error handling/debugging discussion but is deeply rooted in comfort and control. The questions to ask in this area are what is happening as my code runs and how testable is the code. In support of these areas, LINQ to Twitter does have logging and TwitterContext properties that help see what’s happening on requests. The logging functionality allows any developer to connect a TextWriter to the Log property of TwitterContext to see what’s happening. Further, TwitterContext has a Headers property to see the headers Twitter returns and a RawResults property to show the Json string Twitter returns. From a testing perspective, I’ve been able to write hundreds of unit tests, over 600 when this post is published, and growing. If you write your own library, you have full control over all of these aspects. 
The tradeoff here is that while you have access to the LINQ to Twitter source code and can modify it for all the visibility you want, LINQ to Twitter *will* change (which is good) and you will have to figure out how to merge that with your changes (which is hard). The fact is that this is a limitation of any 3rd party library, not just LINQ to Twitter. So, it's a design decision where the tradeoff is between control and productivity. That said, there are things I can do with LINQ to Twitter to make the visibility story more compelling. I think there are opportunities to improve diagnostics. This would be a ton of work because it would need to provide multi-level logging that can be tuned for production and support any logging provider you want to attach. I've considered approaches such as how the new Semantic Logging application block connects to Windows Error Reporting as a potential target. Whatever I do would need to be extensible without creating native external dependencies - think of how many 3rd party libraries force a dependency on a logging framework that you don't use. So, this won't be an easy feat, but I believe it can be part of the roadmap. I think that a lot of developers are unaware of existing visibility features, so the first step would be to provide more documentation and guidance. My thoughts are that this would lead to more feedback that will help improve this area. Summary Recent feedback highlights some of the items that are important to LINQ to Twitter developers, such as dependencies, error handling, debugging, and visibility. I know that there are maintenance issues that have been problems for LINQ to Twitter developers in the past. I've done a lot of work in this area, such as improving error handling, adding visibility features, and providing extensive API documentation. That said, there is more to be done to make LINQ to Twitter the best Twitter API experience available for .NET developers and I welcome anyone's thoughts on what I've written here or new improvements. @JoeMayo
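
    For readers unfamiliar with the visibility hooks mentioned in the post (a Log property that accepts a TextWriter, plus Headers and RawResults properties on TwitterContext), here is a heavily hedged sketch of how they might be used. The property names come from the post itself; the surrounding method is illustrative, and exact names and signatures may differ between LINQ to Twitter versions:

using System.IO;
using LinqToTwitter; // NuGet package that provides TwitterContext

static class TwitterDiagnostics
{
    // Assumes the caller passes an already-authorized TwitterContext; constructing
    // and authorizing one is version-specific and deliberately not shown here.
    public static void EnableVisibility(TwitterContext twitterCtx, TextWriter log)
    {
        // Described in the post: route LINQ to Twitter's own request logging to any TextWriter.
        twitterCtx.Log = log;

        // ...run queries against twitterCtx as usual...

        // Also described in the post: inspect the raw JSON Twitter returned.
        // (Property name as given in the post; exact naming may vary by library version.)
        log.WriteLine(twitterCtx.RawResults);
    }
}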

    Read the article

  • What's the fastest lookup algorithm for a pair data structure (i.e., a map)?

    - by truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 – 26 for value. The time taken (on my system) to lookup the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with O3 option turned on for g++ 4.4) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance critical aspects of C++ programming. UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps map) to store the variables names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names. #include <ctime> #include <map> #include <vector> #include <iostream> struct mystruct { char key; int value; mystruct(char k = 0, int v = 0) : key(k), value(v) { } }; int find(const std::vector<mystruct>& ref, char key) { for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i) if (i->key == key) return i->value; return -1; } int main() { std::map<char, int> mymap; std::vector<mystruct> myvec; for (int i = 'a'; i < 'a' + 26; ++i) { mymap[i] = i - 'a'; myvec.push_back(mystruct(i, i - 'a')); } int pre = clock(); for (int i = 0; i < 10000000; ++i) { find(myvec, 'z'); } std::cout << "linear scan: milli " << clock() - pre << "\n"; pre = clock(); for (int i = 0; i < 10000000; ++i) { mymap['z']; } std::cout << "map scan: milli " << clock() - pre << "\n"; return 0; }

    Read the article

  • JavaScript auto-calculating

    - by Josh
    I have a page that automatically calculates a Total by entering digits into the fields or pressing the Plus or Minus buttons. I need to add a second input after the Total that automatically divides the Total by 25. Here is the working code, with no JavaScript value yet for the division part: <html> <head> <script type="text/javascript"> function Calc(className){ var elements = document.getElementsByClassName(className); var total = 0; for(var i = 0; i < elements.length; ++i){ total += parseFloat(elements[i].value); } document.form0.total.value = total; } function addone(field) { field.value = Number(field.value) + 1; Calc('add'); } function subtractone(field) { field.value = Number(field.value) - 1; Calc('add'); } </script> </head> <body> <form name="form0" id="form0"> 1: <input type="text" name="box1" id="box1" class="add" value="0" onKeyUp="Calc('add')" onChange="updatesum()" onClick="this.focus();this.select();" /> <input type="button" value=" + " onclick="addone(box1);"> <input type="button" value=" - " onclick="subtractone(box1);"> <br /> 2: <input type="text" name="box2" id="box2" class="add" value="0" onKeyUp="Calc('add')" onClick="this.focus();this.select();" /> <input type="button" value=" + " onclick="addone(box2);"> <input type="button" value=" - " onclick="subtractone(box2);"> <br /> 3: <input type="text" name="box3" id="box3" class="add" value="0" onKeyUp="Calc('add')" onClick="this.focus();this.select();" /> <input type="button" value=" + " onclick="addone(box3);"> <input type="button" value=" - " onclick="subtractone(box3);"> <br /> <br /> Total: <input readonly style="border:0px; font-size:14; color:red;" id="total" name="total"> <br /> Total Divided by 25: <input readonly style="border:0px; font-size:14; color:red;" id="divided" name="divided"> </form> </body></html> I have the right details, but the formulas I have tried completely break other aspects of the code. I can't figure out how to make the auto-adding and auto-dividing work at the same time.

    Read the article

  • Getting Started Building Windows 8 Store Apps with XAML/C#

    - by dwahlin
    Technology is fun isn’t it? As soon as you think you’ve figured out where things are heading a new technology comes onto the scene, changes things up, and offers new opportunities. One of the new technologies I’ve been spending quite a bit of time with lately is Windows 8 store applications. I posted my thoughts about Windows 8 during the BUILD conference in 2011 and still feel excited about the opportunity there. Time will tell how well it ends up being accepted by consumers but I’m hopeful that it’ll take off. I currently have two Windows 8 store application concepts I’m working on with one being built in XAML/C# and another in HTML/JavaScript. I really like that Microsoft supports both options since it caters to a variety of developers and makes it easy to get started regardless if you’re a desktop developer or Web developer. Here’s a quick look at how the technologies are organized in Windows 8: In this post I’ll focus on the basics of Windows 8 store XAML/C# apps by looking at features, files, and code provided by Visual Studio projects. To get started building these types of apps you’ll definitely need to have some knowledge of XAML and C#. Let’s get started by looking at the Windows 8 store project types available in Visual Studio 2012.   Windows 8 Store XAML/C# Project Types When you open Visual Studio 2012 you’ll see a new entry under C# named Windows Store. It includes 6 different project types as shown next.   The Blank App project provides initial starter code and a single page whereas the Grid App and Split App templates provide quite a bit more code as well as multiple pages for your application. The other projects available can be be used to create a class library project that runs in Windows 8 store apps, a WinRT component such as a custom control, and a unit test library project respectively. If you’re building an application that displays data in groups using the “tile” concept then the Grid App or Split App project templates are a good place to start. An example of the initial screens generated by each project is shown next: Grid App Split View App   When a user clicks a tile in a Grid App they can view details about the tile data. With a Split View app groups/categories are shown and when the user clicks on a group they can see a list of all the different items and then drill-down into them:   For the remainder of this post I’ll focus on functionality provided by the Blank App project since it provides a simple way to get started learning the fundamentals of building Windows 8 store apps.   Blank App Project Walkthrough The Blank App project is a great place to start since it’s simple and lets you focus on the basics. In this post I’ll focus on what it provides you out of the box and cover additional details in future posts. Once you have the basics down you can move to the other project types if you need the functionality they provide. The Blank App project template does exactly what it says – you get an empty project with a few starter files added to help get you going. This is a good option if you’ll be building an app that doesn’t fit into the grid layout view that you see a lot of Windows 8 store apps following (such as on the Windows 8 start screen). I ended up starting with the Blank App project template for the app I’m currently working on since I’m not displaying data/image tiles (something the Grid App project does well) or drilling down into lists of data (functionality that the Split App project provides). 
The Blank App project provides images for the tiles and splash screen (you’ll definitely want to change these), a StandardStyles.xaml resource dictionary that includes a lot of helpful styles such as buttons for the AppBar (a special type of menu in Windows 8 store apps), an App.xaml file, and the app’s main page which is named MainPage.xaml. It also adds a Package.appxmanifest that is used to define functionality that your app requires, app information used in the store, plus more. The App.xaml, App.xaml.cs and StandardStyles.xaml Files The App.xaml file handles loading a resource dictionary named StandardStyles.xaml which has several key styles used throughout the application: <Application x:Class="BlankApp.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="using:BlankApp"> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <!-- Styles that define common aspects of the platform look and feel Required by Visual Studio project and item templates --> <ResourceDictionary Source="Common/StandardStyles.xaml"/> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application>   StandardStyles.xaml has style definitions for different text styles and AppBar buttons. If you scroll down toward the middle of the file you’ll see that many AppBar button styles are included such as one for an edit icon. Button styles like this can be used to quickly and easily add icons/buttons into your application without having to be an expert in design. <Style x:Key="EditAppBarButtonStyle" TargetType="ButtonBase" BasedOn="{StaticResource AppBarButtonStyle}"> <Setter Property="AutomationProperties.AutomationId" Value="EditAppBarButton"/> <Setter Property="AutomationProperties.Name" Value="Edit"/> <Setter Property="Content" Value="&#xE104;"/> </Style> Switching over to App.xaml.cs, it includes some code to help get you started. An OnLaunched() method is added to handle creating a Frame that child pages such as MainPage.xaml can be loaded into. The Frame has the same overall purpose as the one found in WPF and Silverlight applications - it’s used to navigate between pages in an application. /// <summary> /// Invoked when the application is launched normally by the end user. Other entry points /// will be used when the application is launched to open a specific file, to display /// search results, and so forth. 
/// </summary> /// <param name="args">Details about the launch request and process.</param> protected override void OnLaunched(LaunchActivatedEventArgs args) { Frame rootFrame = Window.Current.Content as Frame; // Do not repeat app initialization when the Window already has content, // just ensure that the window is active if (rootFrame == null) { // Create a Frame to act as the navigation context and navigate to the first page rootFrame = new Frame(); if (args.PreviousExecutionState == ApplicationExecutionState.Terminated) { //TODO: Load state from previously suspended application } // Place the frame in the current Window Window.Current.Content = rootFrame; } if (rootFrame.Content == null) { // When the navigation stack isn't restored navigate to the first page, // configuring the new page by passing required information as a navigation // parameter if (!rootFrame.Navigate(typeof(MainPage), args.Arguments)) { throw new Exception("Failed to create initial page"); } } // Ensure the current window is active Window.Current.Activate(); }   Notice that in addition to creating a Frame the code also checks to see if the app was previously terminated so that you can load any state/data that the user may need when the app is launched again. If you’re new to the lifecycle of Windows 8 store apps the following image shows how an app can be running, suspended, and terminated.   If the user switches from an app they’re running the app will be suspended in memory. The app may stay suspended or may be terminated depending on how much memory the OS thinks it needs so it’s important to save state in case the application is ultimately terminated and has to be started fresh. Although I won’t cover saving application state here, additional information can be found at http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh465099.aspx. Another method in App.xaml.cs named OnSuspending() is also included in App.xaml.cs that can be used to store state as the user switches to another application:   /// <summary> /// Invoked when application execution is being suspended. Application state is saved /// without knowing whether the application will be terminated or resumed with the contents /// of memory still intact. /// </summary> /// <param name="sender">The source of the suspend request.</param> /// <param name="e">Details about the suspend request.</param> private void OnSuspending(object sender, SuspendingEventArgs e) { var deferral = e.SuspendingOperation.GetDeferral(); //TODO: Save application state and stop any background activity deferral.Complete(); } The MainPage.xaml and MainPage.xaml.cs Files The Blank App project adds a file named MainPage.xaml that acts as the initial screen for the application. It doesn’t include anything aside from an empty <Grid> XAML element in it. The code-behind class named MainPage.xaml.cs includes a constructor as well as a method named OnNavigatedTo() that is called once the page is displayed in the frame.   /// <summary> /// An empty page that can be used on its own or navigated to within a Frame. /// </summary> public sealed partial class MainPage : Page { public MainPage() { this.InitializeComponent(); } /// <summary> /// Invoked when this page is about to be displayed in a Frame. /// </summary> /// <param name="e">Event data that describes how this page was reached. 
The Parameter /// property is typically used to configure the page.</param> protected override void OnNavigatedTo(NavigationEventArgs e) { } }   If you’re experienced with XAML you can switch to Design mode and start dragging and dropping XAML controls from the ToolBox in Visual Studio. If you prefer to type XAML you can do that as well in the XAML editor or while in split mode. Many of the controls available in WPF and Silverlight are included such as Canvas, Grid, StackPanel, and Border for layout. Standard input controls are also included such as TextBox, CheckBox, PasswordBox, RadioButton, ComboBox, ListBox, and more. MediaElement is available for rendering video or playing audio files. Some of the “common” XAML controls included out of the box are shown next:   Although XAML/C# Windows 8 store apps don’t include all of the functionality available in Silverlight 5, the core functionality required to build store apps is there with additional functionality available in open source projects such as Callisto (started by Microsoft’s Tim Heuer), Q42.WinRT, and others. Standard XAML data binding can be used to bind C# objects to controls, converters can be used to manipulate data during the data binding process, and custom styles and templates can be applied to controls to modify them. Although Visual Studio 2012 doesn’t support visually creating styles or templates, Expression Blend 5 handles that very well. To get started building the initial screen of a Windows 8 app you can start adding controls as mentioned earlier. Simply place them inside of the <Grid> element that’s included. You can arrange controls in a stacked manner using the StackPanel control, add a border around controls using the Border control, arrange controls in columns and rows using the Grid control, or absolutely position controls using the Canvas control. One of the controls that may be new to you is the AppBar. It can be used to add menu/toolbar functionality into a store app and keep the app clean and focused. You can place an AppBar at the top or bottom of the screen. A user on a touch device can swipe up to display the bottom AppBar or right-click when using a mouse. An example of defining an AppBar that contains an Edit button is shown next. The EditAppBarButtonStyle is available in the StandardStyles.xaml file mentioned earlier. <Page.BottomAppBar> <AppBar x:Name="ApplicationAppBar" Padding="10,0,10,0" AutomationProperties.Name="Bottom App Bar"> <Grid> <StackPanel x:Name="RightPanel" Orientation="Horizontal" Grid.Column="1" HorizontalAlignment="Right"> <Button x:Name="Edit" Style="{StaticResource EditAppBarButtonStyle}" Tag="Edit" /> </StackPanel> </Grid> </AppBar> </Page.BottomAppBar> Like standard XAML controls, the <Button> control in the AppBar can be wired to an event handler method in the MainPage.Xaml.cs file or even bound to a ViewModel object using “commanding” if your app follows the Model-View-ViewModel (MVVM) pattern (check out the MVVM Light package available through NuGet if you’re using MVVM with Windows 8 store apps). The AppBar can be used to navigate to different screens, show and hide controls, display dialogs, show settings screens, and more.   The Package.appxmanifest File The Package.appxmanifest file contains configuration details about your Windows 8 store app. By double-clicking it in Visual Studio you can define the splash screen image, small and wide logo images used for tiles on the start screen, orientation information, and more. 
You can also define what capabilities the app has such as if it uses the Internet, supports geolocation functionality, requires a microphone or webcam, etc. App declarations such as background processes, file picker functionality, and sharing can also be defined Finally, information about how the app is packaged for deployment to the store can also be defined. Summary If you already have some experience working with XAML technologies you’ll find that getting started building Windows 8 applications is pretty straightforward. Many of the controls available in Silverlight and WPF are available making it easy to get started without having to relearn a lot of new technologies. In the next post in this series I’ll discuss additional features that can be used in your Windows 8 store apps.
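
    As a small illustration of the statement above that the AppBar's <Button> "can be wired to an event handler method in the MainPage.xaml.cs file", here is a hypothetical sketch of that wiring. The Click="Edit_Click" attribute, the Edit_Click handler, and the EditPage navigation target are assumptions added for illustration and are not part of the Blank App template:

// A sketch of MainPage.xaml.cs in the Blank App template, assuming the AppBar
// button from the walkthrough has been given a Click handler:
//   <Button x:Name="Edit" Style="{StaticResource EditAppBarButtonStyle}"
//           Tag="Edit" Click="Edit_Click" />
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

namespace BlankApp
{
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
        }

        // Hypothetical handler: navigate to an edit screen when the AppBar button is tapped.
        private void Edit_Click(object sender, RoutedEventArgs e)
        {
            this.Frame.Navigate(typeof(EditPage), ((Button)sender).Tag);
        }
    }

    // Placeholder navigation target for the sketch; a real page would have its own XAML.
    public sealed partial class EditPage : Page { }
}

    The same handler could just as easily show or hide controls or open a settings flyout; navigation is used here only because the Frame created in App.OnLaunched() is already available on every Page.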

    Read the article

  • Ops Center 12c - Provisioning Solaris Using a Card-Based NIC

    - by scottdickson
    It's been a long time since last I added something here, but having some conversations this last week, I got inspired to update things. I've been spending a lot of time with Ops Center for managing and installing systems these days.  So, I suspect a number of my upcoming posts will be in that area. Today, I want to look at how to provision Solaris using Ops Center when your network is not connected to one of the built-in NICs.  We'll talk about how this can work for both Solaris 10 and Solaris 11, since they are pretty similar.  In both cases, WANboot is a key piece of the story. Here's what I want to do:  I have a Sun Fire T2000 server with a Quad-GbE nxge card installed.  The only network is connected to port 2 on that card rather than the built-in network interfaces.  I want to install Solaris on it across the network, either Solaris 10 or Solaris 11.  I have met with a lot of customers lately who have a similar architecture.  Usually, they have T4-4 servers with the network connected via 10GbE connections. Add to this mix the fact that I use Ops Center to manage the systems in my lab, so I really would like to add this to Ops Center.  If possible, I would like this to be completely hands free.  I can't quite do that yet. Close, but not quite. WANBoot or Old-Style NetBoot? When a system is installed from the network, it needs some help getting the process rolling.  It has to figure out what its network configuration (IP address, gateway, etc.) ought to be.  It needs to figure out what server is going to help it boot and install, and it needs the instructions for the installation.  There are two different ways to bootstrap an installation of Solaris on SPARC across the network.   The old way uses a broadcast of RARP or more recently DHCP to obtain the IP configuration and the rest of the information needed.  The second is to explicitly configure this information in the OBP and use WANBoot for installation WANBoot has a number of benefits over broadcast-based installation: it is not restricted to a single subnet; it does not require special DHCP configuration or DHCP helpers; it uses standard HTTP and HTTPS protocols which traverse firewalls much more easily than NFS-based package installation.  But, WANBoot is not available on really old hardware and WANBoot requires the use o Flash Archives in Solaris 10.  Still, for many people, this is a great approach. As it turns out, WANBoot is necessary if you plan to install using a NIC on a card rather than a built-in NIC. Identifying Which Network Interface to Use One of the trickiest aspects to this process, and the one that actually requires manual intervention to set up, is identifying how the OBP and Solaris refer to the NIC that we want to use to boot.  The OBP already has device aliases configured for the built-in NICs called net, net0, net1, net2, net3.  The device alias net typically points to net0 so that when you issue the command  "boot net -v install", it uses net0 for the boot.  Our task is to figure out the network instance for the NIC we want to use.  We will need to get to the OBP console of the system we want to install in order to figure out what the network should be called.  I will presume you know how to get to the ok prompt.  Once there, we have to see what networks the OBP sees and identify which one is associated with our NIC using the OBP command show-nets. SunOS Release 5.11 Version 11.0 64-bit Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved. 
{4} ok banner Sun Fire T200, No Keyboard Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved. OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548. Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c. {4} ok show-nets a) /pci@7c0/pci@0/pci@2/network@0,1 b) /pci@7c0/pci@0/pci@2/network@0 c) /pci@780/pci@0/pci@8/network@0,3 d) /pci@780/pci@0/pci@8/network@0,2 e) /pci@780/pci@0/pci@8/network@0,1 f) /pci@780/pci@0/pci@8/network@0 g) /pci@780/pci@0/pci@1/network@0,1 h) /pci@780/pci@0/pci@1/network@0 q) NO SELECTION Enter Selection, q to quit: d /pci@780/pci@0/pci@8/network@0,2 has been selected. Type ^Y ( Control-Y ) to insert it in the command line. e.g. ok nvalias mydev ^Y for creating devalias mydev for /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias ... net3 /pci@7c0/pci@0/pci@2/network@0,1 net2 /pci@7c0/pci@0/pci@2/network@0 net1 /pci@780/pci@0/pci@1/network@0,1 net0 /pci@780/pci@0/pci@1/network@0 net /pci@780/pci@0/pci@1/network@0 ... name aliases By looking at the devalias and the show-nets output, we can see that our Quad-GbE card must be the device nodes starting with  /pci@780/pci@0/pci@8/network@0.  The cable for our network is plugged into the 3rd slot, so the device address for our network must be /pci@780/pci@0/pci@8/network@0,2. With that, we can create a device alias for our network interface.  Naming the device alias may take a little bit of trial and error, especially in Solaris 11 where the device alias seems to matter more with the new virtualized network stack. So far in my testing, since this is the "next" network interface to be used, I have found success in naming it net4, even though it's a NIC in the middle of a card that might, by rights, be called net6 (assuming the 0th interface on the card is the next interface identified by Solaris and this is the 3rd interface on the card).  So, we will call it net4.  We need to assign a device alias to it: {4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias net4 /pci@780/pci@0/pci@8/network@0,2 ... We also may need to have the MAC for this particular interface, so let's get it, too.  To do this, we go to the device and interrogate its properties. {4} ok cd /pci@780/pci@0/pci@8/network@0,2 {4} ok .properties assigned-addresses 82060210 00000000 03000000 00000000 01000000 82060218 00000000 00320000 00000000 00008000 82060220 00000000 00328000 00000000 00008000 82060230 00000000 00600000 00000000 00100000 local-mac-address 00 21 28 20 42 92 phy-type mif ... From this, we can see that the MAC for this interface is  00:21:28:20:42:92.  We will need this later. This is all we need to do at the OBP.  Now, we can configure Ops Center to use this interface. Network Boot in Solaris 10 Solaris 10 turns out to be a little simpler than Solaris 11 for this sort of a network boot.  Since WANBoot in Solaris 10 fetches a specified In order to install the system using Ops Center, it is necessary to create a OS Provisioning profile and its corresponding plan.  I am going to presume that you already know how to do this within Ops Center 12c and I will just cover the differences between a regular profile and a profile that can use an alternate interface. Create a OS Provisioning profile for Solaris 10 as usual.  However, when you specify the network resources for the primary network, click on the name of the NIC, probably GB_0, and rename it to GB_N/netN, where N is the instance number you used previously in creating the device alias.  This is where the trial and error may come into play.  
You may need to try a few instance numbers before you, the OBP, and Solaris all agree on the instance number.  Mark this as the boot network. For Solaris 10, you ought to be able to then apply the OS Provisioning profile to the server and it should install using that interface.  And if you put your cards in the same slots and plug the networks into the same NICs, this profile is reusable across multiple servers. Why This Works If you watch the console as Solaris boots during the OSP process, Ops Center is going to look for the device alias netN.  Since WANBoot requires a device alias called just net, Ops Center uses the value of your netN device alias and assigns that device to the net alias.  That means that boot net will automatically use this device.  Very cool!  Here's a trace from the console as Ops Center provisions a server: Sun Sun Fire T200, No KeyboardCopyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.auto-boot? =            false{0} ok  {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2 See what happened?  Ops Center looked for the network device alias called net4 that we specified in the profile, took the value from it, and made it the net device alias for the boot.  Pretty cool! WANBoot and Solaris 11 Solaris 11 requires an additional step since the Automated Installer in Solaris 11 uses the MAC address of the network to figure out which manifest to use for system installation.  In order to make sure this is available, we have to take an extra step to associate the MAC of the NIC on the card with the host.  So, in addition to creating the device alias like we did above, we also have to declare to Ops Center that the host has this new MAC. Declaring the NIC Start out by discovering the hardware as usual.  Once you have discovered it, take a look under the Connectivity tab to see what networks it has discovered.  In the case of this system, it shows the 4 built-in networks, but not the networks on the additional cards.  These are not directly visible to the system controller.  In order to add the additional network interface to the hardware asset, it is necessary to Declare it.  We will declare that we have a server with this additional NIC, but we will also  specify the existing GB_0 network so that Ops Center can associate the right resources together.  
The GB_0 acts as sort of a key to tie our new declaration to the old system already discovered.  Go to the Assets tab, select All Assets, and then in the Actions tab, select Add Asset.  Rather than going through a discovery this time, we will manually declare a new asset. When we declare it, we will give the hostname, IP address, system model that match those that have already been discovered.  Then, we will declare both GB_0 with its existing MAC and the new GB_4 with its MAC.  Remember that we collected the MAC for GB_4 when we created its device alias. After you declare the asset, you will see the new NIC in the connectivity tab for the asset.  You will notice that only the NICs you listed when you declared it are seen now.  If you want Ops Center to see all of the existing NICs as well as the additional one, declare them as well.  Add the other GB_1, GB_2, GB_3 links and their MACs just as you did GB_0 and GB_4.  Installing the OS  Once you have declared the asset, you can create an OS Provisioning profile for Solaris 11 in the same way that you did for Solaris 10.  The only difference from any other provisioning profile you might have created already is the network to use for installation.  Again, use GB_N/netN where N is the interface number you used for your device alias and in your declaration.  And away you go.  When the system boots from the network, the automated installer (AI) is able to see which system manifest to use, based on the new MAC that was associated, and the system gets installed. {0} ok {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2...SunOS Release 5.11 Version 11.0 64-bitCopyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.Remounting root read/writeProbing for device nodes ...Preparing network image for useDownloading solaris.zlib--2012-02-17 15:10:17--  http://10.140.204.22:5555/var/js/AI/sparc//solaris.zlibConnecting to 10.140.204.22:5555... connected.HTTP request sent, awaiting response... 200 OKLength: 126752256 (121M) [text/plain]Saving to: `/tmp/solaris.zlib'100%[======================================>] 126,752,256 28.6M/s   in 4.4s    2012-02-17 15:10:21 (27.3 MB/s) - `/tmp/solaris.zlib' saved [126752256/126752256] Conclusion So, why go to all of this trouble?  More and more, I find that customers are wiring their data center to only use higher speed networks - 10GbE only to the hosts.  
Some customers are moving aggressively toward consolidated networks combining storage and network on CNA NICs.  All of this means that network-based provisioning cannot rely exclusively on the built-in network interfaces.  So, it's important to be able to provision a system using other than the built-in networks.  Turns out, that this is pretty straight-forward for both Solaris 10 and Solaris 11 and fits into the Ops Center deployment process quite nicely. Hopefully, you will be able to use this as you build out your own private cloud solutions with Ops Center.

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in the spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around it is a major part of SQL Server performance tuning.   When a memory allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks assuming even distribution of rows. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union.   In reality, how often are column values evenly distributed, think about an example; are employees working for your company distributed evenly across all the Zip codes or mainly concentrated in the headquarters? What happens when you sort result set based on Zip codes? Do all products in the catalog sell equally or are few products hot selling items?   One of my customers tested the below example on a 24 core server with various MAXDOP settings and here are the results:MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 msMAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 msMAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 msMAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 msMAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 msMAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 msMAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms - all 24 cores.In the above test, when the data was evenly distributed, the elapsed time of parallel query was always lower than serial query.   Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].   Well you get the point, let’s see an example.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.   Let’s update the Employees table with 49 out of 50 employees located in Zip code 2001. update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go   Let’s create the temporary table #FireDrill with all possible Zip codes. drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip from Employees update statistics #FireDrill with fullscan go  Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --First serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) goThe query took 1011 ms to complete.   The execution plan shows the 77816 KB of memory was granted while the estimated rows were 799624.  No Sort Warnings in SQL Server Profiler.  Now let’s execute the query in parallel with MAXDOP 0. 
--Example provided by www.sqlworkshops.com
--Execute query with uneven Zip code distribution
--In parallel with MAXDOP 0
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
      option (maxdop 0)
go
The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The sort properties show the rows are unevenly distributed over the 4 threads. Sort Warnings in SQL Server Profiler.

Intermediate Summary: The reason for the higher duration with the parallel plan was sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

Now let's update the Employees table and distribute employees evenly across all Zip codes.
update Employees set Zip = EmployeeID / 400 + 1
go
update statistics Employees with fullscan
go

Let's execute the query serially with MAXDOP 1.
--Example provided by www.sqlworkshops.com
--Execute query with even Zip code distribution
--Serially with MAXDOP 1
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
      option (maxdop 1)
go
The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

Now let's execute the query in parallel with MAXDOP 0.
--Example provided by www.sqlworkshops.com
--Execute query with even Zip code distribution
--In parallel with MAXDOP 0
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
      option (maxdop 0)
go
The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

Intermediate Summary: When employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows that uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be faster since it can use Bitmap Filtering.

Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
update Employees set Zip = 2001 where EmployeeID % 50 != 1
go
update statistics Employees with fullscan
go

Let's create the temporary table #FireDrill with a limited set of Zip codes.
drop table #FireDrill
go
create table #FireDrill (Zip int primary key)
insert into #FireDrill select distinct Zip
      from Employees where Zip between 1800 and 2001
update statistics #FireDrill with fullscan
go

Let's execute the query serially with MAXDOP 1.
--Example provided by www.sqlworkshops.com
--Execute query with uneven Zip code distribution
--Serially with MAXDOP 1
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
      option (maxdop 1)
go
The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

Now let's execute the query in parallel with MAXDOP 0.
--Example provided by www.sqlworkshops.com
--Execute query with uneven Zip code distribution
--In parallel with MAXDOP 0
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
      option (maxdop 0)
go
The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings in SQL Server Profiler. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

Intermediate Summary: The reason for the higher duration with the parallel plan, even with a limited set of Zip codes, was sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

Now let's update the Employees table and distribute employees evenly across all Zip codes.
update Employees set Zip = EmployeeID / 400 + 1
go
update statistics Employees with fullscan
go

Let's execute the query serially with MAXDOP 1.
--Example provided by www.sqlworkshops.com
--Execute query with even Zip code distribution
--Serially with MAXDOP 1
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
      option (maxdop 1)
go
The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

Now let's execute the query in parallel with MAXDOP 0.
--Example provided by www.sqlworkshops.com
--Execute query with even Zip code distribution
--In parallel with MAXDOP 0
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
      option (maxdop 0)
go
The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.
Here you see that the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

Summary: One has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.

I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
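If you want to experiment before the table creation script arrives from the mailing list, a rough stand-in such as the one below may be enough; note that the Employees schema, the 800,000-row count and the generated names are assumptions, not the author's original script, while sys.dm_exec_query_memory_grants is the standard SQL Server DMV for watching memory grants and the degree of parallelism of a query running in another session.

-- Hypothetical stand-in for the table creation script (assumed schema, not the original)
create table Employees (
      EmployeeID int not null primary key,
      EmployeeName varchar(48) not null,
      Zip int not null)
go
-- Roughly 800,000 rows to approximate the estimated row counts shown above (an assumption)
;with Numbers as (
      select top (800000) row_number() over (order by (select null)) as i
      from sys.all_objects a cross join sys.all_objects b)
insert into Employees (EmployeeID, EmployeeName, Zip)
select i, 'Employee ' + cast(i as varchar(10)), i / 400 + 1 from Numbers
go
update statistics Employees with fullscan
go
-- While one of the test queries above runs in another session,
-- watch its memory grant and degree of parallelism:
select session_id, dop, requested_memory_kb, granted_memory_kb,
       used_memory_kb, max_used_memory_kb
from sys.dm_exec_query_memory_grants
go

Comparing requested_memory_kb, granted_memory_kb and dop across different OPTION (MAXDOP n) settings is a quick way to see how the grant and the degree of parallelism change for the same query; the sort spills themselves still show up as Sort Warnings in SQL Server Profiler, as in the walkthrough above.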

    Read the article

  • CodePlex Daily Summary for Tuesday, February 08, 2011

    CodePlex Daily Summary for Tuesday, February 08, 2011Popular ReleasesyoutubeFisher: youtubeFisher 3.0 [beta]: What's new: Supports YouTube's new layout Complete internal refactoringNearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3. All views migrated to Razor for cleaner markup. Alternate template (Layout file) for mobile devices 4 Bug Fixes since Version 4.1 Visit the project Roadmap for more details.fuv: 1.0 release, codename Chopper Joe: features: search/replace :o to open file :s to save file :q to quitASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager html generation optimized new features for the lookup (add additional search data ) live demo went aeroEnhSim: EnhSim 2.3.6 BETA: 2.3.6 BETAThis release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 ...TestApi - a library of Test APIs: TestApi v0.6: TestApi v0.6 comes with the following changes: TestApi code development has been moved to Codeplex: Moved TestApi soluton to VS 2010; Moved all source code to Codeplex. All development work is done there now. Fault Injection API: Integrated the unmanaged FaultInjectionEngine.dll COM component in the build; Cleaned up FaultInjectionEngine.dll to build at warning level 4; Implemented “FaultScope” which allows for in-process fault injection; Added automation scripts & sample program; ...AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! News page url's are now opened in the default browser Added a context menu to the system tray icon (thanks to Alex Banagos) AutoChat now allows configuring the Chat Keys and the Modifier Key The recent files list now supports compact and full mode Fix: Swapped mouse buttons are now properly detected Fix: Sometimes the Play button was pressed while still greyed out Champion: Karma Note: You can also run the u...mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. 
If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras...Pyxis 2: Production Release: Pyxis 2.0.0.13 - Full Production Release This release of Pyxis 2 offers you a wide range of features: Launch Applications in their own threads & domains Render alpha-blended icons on the desktop Support for SD & USB drives Online App Store Dynamic & Static IP support Menus & Modals Over a dozen GUI controls File selection dialogs Folder selection dialog Application, Bootloader, and Firmware Updating Update Release Notes Much More!Microsoft All-In-One Code Framework: Sample Browser v2 (CTP Release): Sample Browser v2 (CTP Release) http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=205917MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries.Finestra Virtual Desktops: 1.0: Finally the version 1.0 release! Sorry for the long delay since the last release, but I think that you'll find this release to be really smooth, really stable, and a really great enhancement to Windows. New features include: Windows 7 taskbar integration Major performance and usability improvements Redesigned look and feel New name: Finestra Better automatic updating Much faster full-screen switcher Fixes Windows 7 hotkey collisions by default Updated installerFacebook C# SDK: 5.0.2 (BETA): PLEASE TAKE A FEW MINUTES TO GIVE US SOME FEEDBACK: Facebook C# SDK Survey This is third BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. This release contains some breaking changes. Particularly with authentication. After spending time reviewing the trouble areas that people are having using th...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 3.0.0 for MVC3: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. ChangelogTargeting ASP.NET MVC 3 and .NET 4.0 Additional UpdatePriority options for generating XML sitemaps Allow to specify target on SiteMapTitleAttribute One action with multiple routes and breadcrumbs Medium Trust optimizations Create SiteMapTitleAttribute for setting parent title IntelliSense for your sitemap with MvcSiteMapSchem...patterns & practices SharePoint Guidance: SharePoint Guidance 2010 Hands On Lab: SharePoint Guidance 2010 Hands On Lab consists of six labs: one for logging, one for service location, and four for application setting manager. Each lab takes about 20 minutes to walk through. Each lab consists of a PDF document. 
You can go through the steps in the doc to create solution and then build/deploy the solution and run the lab. For those of you who wants to save the time, we included the final solution so you can just build/deploy the solution and run the lab.Value Injecter - object(s) to -> object mapper: 2.3: it lets you define your own convention-based matching algorithms (ValueInjections) in order to match up (inject) source values to destination values. inject from multiple sources in one InjectFrom added ConventionInjectionMobile Device Detection and Redirection: 0.1.11.11: Improvements to Beta Release The following changes have been made in version 0.1.11.11: BlackBerry Version 6 devices (such as the 9800 Torch) are now correctly identified with a dedicated handler. Android powered devices are now correctly identified. Minor change to Provider.cs to improve performance and optimise data sent to 51Degrees.mobi if the option is enabled. GC.collect is no longer called at any point. All garbage collection now happens automatically IMPORTANT CHANGES This rele...TweetSharp: TweetSharp v2.0.0.0 - Preview 10: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 9 ChangesAdded support for trends Added support for Silverlight 4 Elevated WP7 fixes Third Party Library VersionsHammock v1.1.7: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comNew Projectsbisolu_sendus: bisolu smsBlack & Scholes: Black & Scholes OptionsCosLabs: CosLabs makes it easy to see the full capabilities of the Cosmos operating system toolkit. CosLabs has a number of experiments to find new and unique uses for Cosmos. It's developed in C#.DeployToAzure: DeployToAzure allows automating deployment of Windows Azure project and making it a part of TFS 2010 build process without using PowerShell and Azure Management CmdLets. Digital: Educational purposesDiscogsNet: DiscogsNet is a .NET library to query the Discogs.com API and parse the Discogs XML data dump files. It's developed in C# and usable from any .NET language. The API of the library is focused on ease of use and intuitiveness.DJME - The jQuery extensions for ASP.NET MVC: DJME - The jQuery extensions for ASP.NET MVC is a lightweight framework which helps you build rich user interfaces for ASP.NET MVC while enjoying great developer productivity. DokuDB: Visio AddIn to document different versions of a databaseDtsConfig Explorer: This project is a windows froms application that helps explore a SSIS dtsconfig file with an easy wayEveryDNS Service: Windows service to automatically send your current IP address to everydns.net for dynamic domains. Automatically monitors and sends IP address changes without having to be logged in. The project is built in C# targeting the .NET 4.0 framework.EvoSim - Evolution Simulation: EvoSim is a pet project to learn aspects of MVVM, WPF/Silverlight, and Parallel Processing features of .NET 4. The goal is to simulate a world filled with creatures that move, eat, reproduce, and die according to Darwinian evolutionary principles. 
It is tile and turn based.EZStorage: A storage library for XNA based on EasyStorage but abstracting in an XML based file list for each folder, allowing file listings on all folders and details like file creation dates which are no longer possible in XNA 4.0ForeverBell's Snake: A snake game written in VB.NET.Glauca Browser: This is a browser with a webkit core inside.Grauers SharePoint 2010 Custom MasterPage Feature: This is a SharePoint 2010 Custom MastePage feature. Custom Master and custom CSS Blog post about the project: http://www.grauers.net/archive/2011/02/07/build-a-sharepoint-2010-custom-mastepage-with-visual-studio-2010.aspxHamming Code Sample: HamCode demonstrate how the hamming code work, result of coded and decoded data. Shows the benefits of hamming method/codeHydroDesktop Mercurial Test: This is a trial version of the HydroDesktop project (hydrodesktop.codeplex.com) running on Mercurial source code repository. We first want to test whether the data update/download is happening fine before switching the official HydroDesktop website to the new system.LateBindingApi: Creates .Net Proxy Components from COM Type Libraries.Middlesex County College Library Wireless Auto Login: The Middlesex County College Library Wireless Auto Login automatically logs a computer in to the Middlesex County College (Edison, NJ) Library's WifiMobility: Mobility is a small program that reads from a text file. The text file includes everything about the program, right down to that cute little bunny on your desktop! Try Mobility today!OpenMFC: OpenMFC is opensource version of MFC for using with C/C++ compiler without MFC.Orchard Content Sharing: This Orchard module adds content sharing functionality via integration with AddThis.com sharing service.Orchard Wunder Weather Widget: This project is used to maintain the source code for the Wunder Weather widget in the Orchard gallery.Professional Audio Recorder: Professional Audio Recorder is a Audio Recorder for Windows Phone 7Python Node Info for Umbraco: Designed to assist Umbraco macro authors who are writing their macros in Python. Provides a helper macro that, when inserted into a page, will enable the display of an info panel showing the page node properties as represented in Python.Softina.Graphs: Softina.Graphs is a graph managment and algorithm utility. It includes a set of libraries to work with graph processing. It is written using .net framework with c# language and MS Visual Studio 2008. Client applications is created using devexpress components.spangesharp: Application to help people learn c#Visual Studio Private Extension Gallery: Add a new tab in the Visual Studio extension manager to manage private extensions.WCF Home Framework: Home Framework is a service-oriented framework able to facilitate deploy and management of wcf services in a mid-size scenario project.Windows Phone 7 Video Player: This project contains all the source of an application for Windows Phone 7 to consume an RSS Media exposed by our Smooth Streaming Video Player plugin for WordPress (http://smooth.codeplex.com).WOL Shopping List: This is a shoppingList version of WOL's NearMeWP7RSSReader: Updated RSSReader project for Windows Phone 7 using the RTM tools with the October 2010 update.Xen (XNA Extended) Framework: The Xen Framework is a set of libraries that provides and extends XNA 4.0 functionality to make game development easier and faster while producing clean, maintainable code. Xen frees you up to spend more time building your game.

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today’s market has moved from network centric to consumer centric and focuses primarily on the customer experience. It has resulted in several product management challenges such as an increased complexity and volume of offerings, creating product variants, accelerating time-to-market, ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence to BSS layer, product co-creation and increasing audit and security concerns for service providers. The document discusses how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry.

1.0. Introduction

Figure 1: Business Scenario

Modern business demands the launch of complex products in a very short timeframe and effecting changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is in the area of product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards.

The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

2.0. An Overview of Product Management in Telecom

Product data in telecom is multi-dimensional and difficult to manage. It increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization.
Figure 2: Complexity in Product Management   Network Technology’ is the new dimension in telecom product management where the same products are realized through different networks i.e., Soiled network to Converged network. Consequently, the product solution is different.     Figure 3: Current Scenario - Pain Points in Product Management   The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.   3.0. Transformation of Next Generation Product Management   Companies must focus on their Product Management Transformation Journey in the areas of:   ·       Management of single truth of product information across the organization/geographies which is currently managed in heterogeneous systems   ·       Management of the Intellectual Property (IP) on the product concept and partnership in the design of discrete components to integrate into the system   ·       Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation   ·       Management of effective operational separation to comply with regulatory bodies   ·       Reuse of existing designs and add relevant features such as value-added services to enable effective product bundling     Figure 4: Next generation needs   PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler towards product management transformation and rapid product launch.   4.0. PLM-based Enterprise Product Management     Figure 5: PLM-based Enterprise Product Mastering   Enterprise Product Management (EPM) enables the business to manage complex product attributes of data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.   4.1 The Business Case for Telco PLM-based solutions for Enterprise Product Management   ·       Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules   ·       PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)   ·       Maintains relationships/links between different elements of the entire product definition   ·       Telco PLM packages are specialized in next generation lifecycle management requirements of products such as revision and state management, test and release management, role management and impact analysis)   ·       Takes into consideration all aspects of OSS product requirements compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional     ·       New breed of Telco PLM packages are designed with 'open' standards such as SID and eTOM. They are interoperable, support integration frameworks such as subscription and notification.   ·       Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain   4.2 Various Architectures/Approaches for Product Mastering using Telco PLM systems   4. 2.a Single Central Product Management (Mastering) Approach   Figure 6: Single Central Product Management (Master) Approach       This approach is implemented across verticals such as aerospace and automotive. 
It focuses on a physically centralized product master to which other sources are dependent on. The product definition data (Product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in CRM product catalogue, billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.   Architectural changes must be made in the existing business landscape of applications to create and revise data because the applications have to refer to the central repository for approvals and validation of product configurations. It is achieved by modifying how the applications write data or how the applications can be adapted to use the rules to be managed and published.   Complete product configuration validation will be done in enterprise / central product catalogue and final configuration will be sent to the B/OSS system through the SOA compliant product distribution architecture. The approach/architecture enables greater control in terms of product data management and product data governance.   4.2.b Federated Product Management (Mastering) Architecture     Figure 7: Federated Product Management (Mastering) Architecture   In the federated product mastering approach, the basic unique product definition data (product id, description product hierarchy, basic price plans and simple product design rules) will be centrally created and will be maintained. And, the advanced product definition (Product bundling, promotions, offers & discount plans) will be created in respective down stream OSS systems. The advanced product definition (Product bundling, promotions, offers and discount plans) will be created in respective downstream OSS systems.   For example, basic product definitions such as attributes, product hierarchy and basic price plans will be created and maintained in Enterprise/Central product reference catalogue and distributed to downstream OSS systems. Respective downstream OSS systems build product bundles, promotions, advanced price plans over the basic product definition and master the advanced product definition. Central reference database accesses the respective other source product master data and assembles a point-in-time consolidated view of the product. The approach is typically adapted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization application landscape. However, this approach will not result in better product data management and data governance.   5.0 Customer Scenario – Before EPC deployment   A leading global telecommunications service provider wanted to launch a quad play and triple play service offering in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse a majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play. 
The challenges in launching the new service offerings were:       Figure 8: Triple Play Plan   ·       Broadband product data was stored in multiple product catalogues (CRM catalogue, Billing catalogue, spread sheets)   ·       Product managers spent a lot of time performing tasks involving duplication or re-keying of data. Manual effort caused errors, cost and time over-runs.   ·       No effective product and price data governance mechanism. Price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue.   ·       Product data had re-usability issues and was not in a structured format. It resulted in uncontrolled product portfolio creation and product management issues.   ·       Lack of enterprise product model resulted into product distribution challenges and thus delays in product launch.   ·       Designers are constrained by existing legacy product management solutions to model product/service requirements and product configuration rules such as upgrading, downgrading and cross selling.    5.1 Customer Scenario - After EPC deployment     Figure 9: SOA-based end-to-end EPC Solution   The company deployed PLM-based Enterprise Product Catalogue solutions to launch quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for new offerings were created using the entities defined in the Enterprise Product Model. Supplier product catalogue data such as routers and set up boxes were loaded onto the new solution through SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in a SID format and were distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS system were handled using the transformation layer as part of the solution.   6.0 How PLM enabled Product Management Transformation         Figure 10: Product Management Transformation     PLM-based Product Catalogue Solution helped the customer reduce the product launch cycle time by 30% and enable transformation of Product Management for next generation services.   7.0 Conclusion   On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and increased complexity of products. On the other hand, the ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introduce new value-added services, leverage Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve their ARPU supporting network and business transformations are a business imperative. Product Management has become a focus area. Companies are investing in best-in- class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.   References:   1.     Preston G.Smith, Donald G.Reineristsem, Van Nostrand Reinhold “Developing Products in Half the time”.   2.     John G. Innes, "Achieving Successful Product Change", Pitman Publishing.   3.     
D T Pham and R M Setchi (16th Jan, 2001) "Authoring environment for documentation development" University of Wales Cardiff, U.K., Proceedings on Institution of Mechanical Engineers, Vol. 215, Part B.   4.     Oracle Product Hub for Communications:   http://www.oracle.com/us/products/applications/master-data-management/product-hub-082059.html  

    Read the article

  • EM12c Release 4: New Compliance features including DB STIG Standard

    - by DaveWolf
Enterprise Manager’s compliance framework is a powerful and robust feature that provides users the ability to continuously validate their target configurations against a specified standard. Enterprise Manager’s compliance library is filled with a wide variety of standards based on Oracle’s recommendations, best practices and security guidelines. These standards can be easily associated to a target to generate a report showing its degree of conformance to that standard. (To get an overview of Database compliance management in Enterprise Manager, see this screenwatch.)

Starting with release 12.1.0.4 of Enterprise Manager, the compliance library contains a new standard based on the US Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Oracle Database 11g. According to the DISA website, “The STIGs contain technical guidance to ‘lock down’ information systems/software that might otherwise be vulnerable to a malicious computer attack.” In essence, a STIG is a technical checklist an administrator can follow to secure a system or software. Many US government entities are required to follow these standards; however, many non-US government entities and commercial companies base their standards directly or partially on these STIGs. You can find more information about the Oracle Database and other STIGs on the DISA website.

The Oracle Database 11g STIG consists of two categories of checks, installation and instance. Installation checks focus primarily on the security of the Oracle Home, while the instance checks focus on the configuration of the running database instance itself. If you view the STIG compliance standard in Enterprise Manager, you will see the rules organized into folders corresponding to these categories.

The rule names contain a rule ID (DG0020, for example) which directly maps to the check name in the STIG checklist, along with a helpful brief description. The actual description field contains the text from the STIG documentation to aid in understanding the purpose of the check. All of the rules have also been documented in the Oracle Database Compliance Standards reference documentation.

In order to use this standard both the OMS and agent must be at version 12.1.0.4, as it takes advantage of several features new in this release, including:
Agent-Side Compliance Rules
Manual Compliance Rules
Violation Suppression
Additional BI Publisher Compliance Reports

Agent-Side Compliance Rules

Agent-side compliance rules are essentially the result of a tighter integration between Configuration Extensions and Compliance Rules. If you ever created custom compliance content in past versions of Enterprise Manager, you likely used Configuration Extensions to collect additional information into the EM repository so it could be used in a Repository compliance rule. This process, although powerful, could make it confusing to correctly model the SQL in the rule creation wizard. With agent-side rules, the user only needs to choose the Configuration Extension/Alias combination and that’s it. Enterprise Manager will do the rest for you. This tighter integration also means their lifecycle is managed together. When you associate an agent-side compliance standard to a target, the required Configuration Extensions will be deployed automatically for you. The opposite is also true: when you unassociate the compliance standard, the Configuration Extensions will be undeployed as well.

The Oracle Database STIG compliance standard is implemented as an agent-side standard, which is why you simply need to associate the standard to your database targets without previously deploying the associated Configuration Extensions. You can learn more about using agent-side compliance rules in the screenwatch Using Agent-Side Compliance Rules on Enterprise Manager's Lifecycle Management page on OTN.

Manual Compliance Rules

There are many checks in the Oracle Database STIG, as well as in other common standards, which simply cannot be automated. This could be something as simple as “Ensure the datacenter entrance is secured.” or as complex as Oracle Database STIG Rule DG0186 – “The database should not be directly accessible from public or unauthorized networks”. These checks require a human to perform them and attest to their successful completion. Enterprise Manager now supports these types of checks as Manual rules. When first associated to a target, each manual rule will generate a single violation. These violations must be manually cleared by a user who is, in essence, attesting to their successful completion. The user is able to permanently clear the violation or give a future date on which the violation will be regenerated. Setting a future date is useful when policy dictates a periodic re-validation of conformance wherein the user will have to reperform the check. The optional reason field gives the user an opportunity to provide details of the check results.

Violation Suppression

There are situations that require the need to permanently or temporarily suppress a legitimate violation or finding. These include approved exceptions and grace periods. Enterprise Manager now supports the ability to temporarily or permanently suppress a violation. Unlike clearing a manual rule violation, suppression simply removes the violation from the compliance results UI and, in turn, its negative impact on the score. The violation still remains in the EM repository and can be accounted for in compliance reports. Temporarily suppressing a violation can give users a grace period in which to address an issue. If the issue is not addressed within the specified period, the violation will reappear in the results automatically. Again, the user may enter a reason for the suppression, which will be permanently saved with the event along with the suppressing user ID.

Additional BI Publisher compliance reports

As I am sure you have learned by now, BI Publisher now ships with and is integrated into Enterprise Manager 12.1.0.4. This means users can take full advantage of the powerful reporting engine by using the Oracle-provided reports or building their own. There are many new compliance-related reports available in 12.1.0.4 covering all aspects, including association status, the library, as well as summary and detailed results reports.

10 New Compliance Reports
Compliance Summary Report Example showing STIG results

Conclusion

Together with the Oracle Database 11g STIG compliance standard, these features provide a complete solution for easily auditing and reporting the security posture of your Oracle Databases against this well-known benchmark. You can view an overview presentation and demo in the screenwatch Using the STIG Compliance Standard on Enterprise Manager's Lifecycle Management page on OTN.

Additional EM12c Compliance Management Information
Compliance Management - Overview ( Presentation )
Compliance Management - Custom Compliance on Default Data (How To)
Compliance Management - Custom Compliance using SQL Configuration Extension (How To)
Compliance Management - Customer Compliance using Command Configuration Extension (How To)

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others some of the advantages of code reviews are, Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia No where does the definition mention whether its better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request for a code review both before check in and also request for a review after check in. Let’s weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in Developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate if the code abides to the recommended best practices, will not result in any defects due to common coding mistakes and whether any optimizations can be made to improve the code quality.                                             Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs, remember, the earlier you find them, the lesser the pain in fixing them. Cons Development Code Freeze – Since the changes aren’t in the source control yet. Further development can only be done off-line. The changes have not been through a CI build, hard to say whether the code abides to all build quality standards. Inconsistent! Cumbersome to track the actual code review process.  Not every change to the code base is worth reviewing, a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in Developer checks in, random code reviews are performed on the checked in code.                                                      Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug ins you may have running on the build server. Instruct the developer to ensure ZERO fx cop, style cop and static code analysis before check in. Code is cleaner and smell free even before the code review. No Offline development, developers can continue to develop against the source control. Cons Bad code can easily make its way into the code base. Since the review take place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and human accountability quotient.                                                               Image 3 – Hybrid Approach 1. Code review high impact check ins. 
It is not possible to review everything, by setting up code review check in policies you can end up slowing your team. More over, the code that you are reviewing before check in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, fx cop, style cop and other plug ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often. 3. During Merge. I would prefer eliminating some of the other code issues during merge from Main branch to the release branch. In a scrum project this is still easier because cheery picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you, if some one breaks the CI build often, put them on a gated check in build course until you see improvement. If some one appears on the top 10 list of shame generated via the build then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few expects what they thought of “Code Review quality gate before Checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release.  But that would all be on checked-in code.  Branching is absolutely one way to ease the pain.   Another way we are using is automatic quality builds, running metrics, coverage, static code analysis.  Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff,  top 10 lists of suspicious code, which is then subjected to reviews.  If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps.   None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics)   - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests. 
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level. Again, this depends on the team dynamics in play….
Robert MacLean | Microsoft ALM MVP
I do not think there is a single right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. An example: those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.
Abhimanyu Singhal | Visual Studio ALM Ranger
It depends on the scenario. We recommend post-check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or it can be rejected. Pre-check-in reviews are used when: 1. There is a high risk factor associated. 2. Reviewers generally (most of the time) have immediate availability. 3. The team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.
Thomas Schissler | Visual Studio ALM Ranger
This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out. 1.) If you do reviews per check-in, this is not very practical as a hard rule because it will disturb the flow of the team very often, or it will lead the devs to reduce their check-in frequency, which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in and the dev may have already fixed the issue. Or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes belonging to other PBIs.
Jakob Leander | Sr. Director, Avanade
In my experience, manual code review: 1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project). 2. When a project actually does it, they often do not do it right away = errors pile up. 3. Requires a lot of time discussing/defining the standard and for the team to learn it. However, code review is very important since e.g. even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:
- Architects up front do “at least one best practice example” of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
- The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes.
- Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUUGE benefits:
- You can easily tweak it to reach the level you desire together with the customer.
- It is easy to measure for both developers and management.
- It is 100% consistent across the code base.
- It gets validated all the time, so you never end up getting hammered by a customer review in the end.
- It is easy to tell the developer that you do not want code back unless it has zero errors = minimized communication.
You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run I require code analysis to have been run on the code before check-in (check-in rule). This means:
- You have to have a clean compile (or CA won't run), so as an extra benefit you get very few broken builds.
- You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues, which you REALLY do not want in your app.
Tip: place your custom CA rules files as part of the solution. That way it keeps working when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the three issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.
Tirthankar Dutta | Director, Avanade
I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase… Also, especially on multi-site projects, one should strive to architect in a way that lets the senior developers manage the framework while the less experienced developers write the repetitive code… it scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.
Bruno Capuano | Microsoft ALM MVP
For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx
David Jobling | Global Sr. Director, Avanade
Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention accordingly.
Mikkel Toudal Kristiansen | Manager, Avanade
If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.
Jesse Houwing | Visual Studio ALM Ranger
I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
I'd like to add a few more points:
- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks, sits together, or pair programs, there's no need to enforce a before-check-in policy. The pair programming and regular feedback during development can take care of most of the review requirements as long as the team isn't under stress.
- Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues get created or pile up.
- Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and pair programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in/post-check-in doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.
--- I would love to hear what you think!
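As a footnote to the tooling points above: several of the contributors lean on static code analysis to catch the mechanical issues so that human reviewers can focus on design. The C# fragment below is a small sketch only, with made-up class and member names, and whether a given rule fires can vary with the analyzer version. It shows the kind of "missing dispose" issue that Code Analysis rule CA2000 flags, the idiomatic fix with a using block, and an explicit suppression with a justification for a case the team has agreed to waive; that justification text is then something a reviewer, or the nightly "top 10" report, can scan quickly.

    using System.Diagnostics.CodeAnalysis;
    using System.IO;

    public class ReportExporter
    {
        // Before: CA2000 ("Dispose objects before losing scope") complains because
        // the StreamWriter leaks if WriteLine throws before Close() runs.
        public void ExportNaive(string path, string content)
        {
            var writer = new StreamWriter(path);
            writer.WriteLine(content);
            writer.Close();
        }

        // After: the using block guarantees disposal on every path, so the
        // warning disappears without any suppression.
        public void Export(string path, string content)
        {
            using (var writer = new StreamWriter(path))
            {
                writer.WriteLine(content);
            }
        }

        // Known noisy pattern: CA2000 may flag the inner FileStream because the
        // StreamReader constructor could throw before taking ownership of it.
        // If the team decides not to restructure, the suppression records why.
        [SuppressMessage("Microsoft.Reliability", "CA2000:DisposeObjectsBeforeLosingScope",
            Justification = "The FileStream is owned and disposed by the returned StreamReader.")]
        public TextReader OpenReport(string path)
        {
            return new StreamReader(new FileStream(path, FileMode.Open));
        }
    }

The point is not the specific rule but the workflow: the build flags the issue, the fix or the justified suppression is what ends up in front of a human reviewer.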

    Read the article

  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
    Smart Grid Background
What Are Smart Grids? Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to: Link customers to information that helps them manage consumption and use electricity wisely. Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages. Provide utilities with information that helps them improve performance and control costs.
What Is Driving Smart Grid Development?
Environmental Impact
Smart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers’ power needs with fewer generating plants, fewer transmission and distribution assets, and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change. Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions.
Costs
The ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management. Large T&D infrastructure expansion costs are not passed on to customers. Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures. Additionally, in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume—but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes. Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy efficient appliances. Efficiency may or may not lower customer bills because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it.
Utility Operations
Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long “wish lists” of projects and applications they would like to fund in order to improve customer service or ease staff’s burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods. Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result—for instance, increased productivity and fuel savings from better routing.
Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits.
What Is Smart Grid Business Software?
Smart Grid business software gathers data from a Smart Grid and uses it to improve a utility’s business processes. Smart Grid business software also helps utilities provide relevant information to customers who can then use it to reduce their own consumption and improve their environmental profiles.
Smart Grid Business Software Minimizes the Impact of Peak Demand
Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand: The more assets a utility must build that are used only for brief periods—an inefficient use of capital. The higher the utility’s risk profile rises given the uncertainties surrounding the time needed for permitting, building, and recouping costs. The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods. Smart Grids enable a variety of programs that reduce peak demand, including: Time-of-use pricing and critical peak pricing—programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets. Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs. Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts. Smart Grids also help utilities manage peaks with existing assets by enabling: Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint the need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or “over-build”. Better peak demand analysis. As a result: Distribution planners can better size equipment (e.g. transformers) to avoid over-building. Operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks. As above, the result is a reduction in the tendency to over-build. Supply managers can more closely match procurement with delivery. As a result, they can fine-tune supply portfolios, reducing the tendency to over-contract for peak supply and reducing the need to resort to spot market purchases during high peaks. Smart Grids can help lower the cost of remaining peaks by: Standardizing interconnections for new distributed resources (such as electricity storage devices). Placing the interconnections where needed to support anticipated grid congestion.
Smart Grid Business Software Lowers the Cost of Field Services
By processing Smart Grid data through their business software, utilities can reduce such field costs as: Vegetation management.
Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming. Service vehicle fuel. Many utility service calls are “false alarms.” Checking meter status before dispatching crews prevents many unnecessary “truck rolls.” Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it.
Smart Grid Business Software Ensures Regulatory Compliance
Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by: Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible (“non-core”) customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators. Monitoring regulations imposed on customers, such as maximum use during specific time periods. Using accurate time-stamped event history derived from intelligent devices distributed throughout the smart grid to monitor and report reliability statistics and risk compliance. Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9).
Smart Grid Business Software Strengthens Utilities’ Connection to Customers While Reducing Customer Service Costs
During outages, Smart Grid business software can: Identify outages more quickly. Software uses sensors to pinpoint outages and nested outage locations. They also permit utilities to ensure outage resolution at every meter location. Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers. Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitate the display of outage maps for customer and public-service use. Smart Grids can significantly reduce the cost to: Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property or disconnect customers for nonpayment. Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain. Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption. Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can: Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off “high bill” complaints to the contact center. Note that such “high usage” or “additional charges apply because you are out of range” notices—frequently via text messaging—are already common among mobile phone providers. Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.).
Help customers find and reduce causes of over-consumption. There’s no waiting for bills in the mail before they even understand there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them. Where permitted, Smart Grids can open the doors to such new utility service offerings as: Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents. Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement. Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation. Prepayment plans that do not need special meters.
Smart Grid Business Software Helps Customers Control Energy Costs
There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance: Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites. Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers. Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be discovery of electrical problems that can be resolved through rewiring or maintenance—before more serious fires or accidents happen.
Smart Grid Business Software Facilitates Use of Renewables
Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into grids that were designed to handle only fully dispatchable generation. Smart Grid business software helps solve many of these issues by: Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance response to compensate as needed. Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion. Facilitating modeling and monitoring of locally generated supply from renewables and thus helping to maximize their use. Increasing the efficiency of “net metering” (through which utilities can use electricity generated by customers) by: Providing data for analysis. Integrating the production and consumption aspects of customer accounts. During non-peak periods, such techniques enable utilities to increase the percent of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity.
Conclusion
Utility missions are changing. Yesterday, they focused on delivery of reasonably priced energy and water.
Tomorrow, their missions will expand to encompass sustainable use and environmental improvement. Smart Grids are key to helping utilities achieve this expanded mission. But they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could ultimately prove an even more formidable task. Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings—benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience. This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.
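As a footnote, the time-of-use and critical peak pricing programs described above boil down to a simple idea: with interval meter data, the same kilowatt-hour can be priced differently depending on when it was consumed. The sketch below is purely illustrative; the rates, the peak window, and the reading format are invented for the example and are not taken from the white paper. It only shows the kind of calculation that becomes possible once a Smart Grid delivers interval readings instead of a single monthly total.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // One interval reading from a smart meter: when it was taken and how many
    // kilowatt-hours were consumed in that interval. (Invented for illustration.)
    public class IntervalReading
    {
        public DateTime Timestamp { get; set; }
        public double KilowattHours { get; set; }
    }

    public static class TimeOfUseBilling
    {
        // Illustrative rates: a higher price during a 14:00-19:00 peak window.
        private const double PeakRate = 0.30;    // currency units per kWh
        private const double OffPeakRate = 0.10;

        private static bool IsPeak(DateTime t)
        {
            return t.Hour >= 14 && t.Hour < 19;
        }

        // Total charge for a billing period: each interval is priced by when the
        // energy was actually used, which a single monthly total cannot express.
        public static double CalculateCharge(IEnumerable<IntervalReading> readings)
        {
            return readings.Sum(r =>
                r.KilowattHours * (IsPeak(r.Timestamp) ? PeakRate : OffPeakRate));
        }
    }

Without interval data there is only one number per month, so the 15:00 kilowatt-hour cannot be priced differently from the 03:00 one; that capability is what the peak-flattening programs above depend on.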

    Read the article

  • How to Run Low-Cost Minecraft on a Raspberry Pi for Block Building on the Cheap

    - by Jason Fitzpatrick
    We’ve shown you how to run your own blocktastic personal Minecraft server on a Windows/OSX box, but what if you crave something lighter weight, more energy efficient, and always ready for your friends? Read on as we turn a tiny Raspberry Pi machine into a low-cost Minecraft server you can leave on 24/7 for around a penny a day. Why Do I Want to Do This? There are two aspects to this tutorial: running your own Minecraft server, and specifically running that Minecraft server on a Raspberry Pi. Why would you want to run your own Minecraft server? It’s a really great way to extend and build upon the Minecraft play experience. You can leave the server running when you’re not playing so friends and family can join and continue building your world. You can mess around with game variables and introduce mods in a way that isn’t possible when you’re playing the stand-alone game. It also gives you the kind of control over your multiplayer experience that using public servers doesn’t, without incurring the cost of hosting a private server on a remote host. While running a Minecraft server on its own is appealing enough to a dedicated Minecraft fan, running it on the Raspberry Pi is even more appealing. The tiny little Pi uses so few resources that you can leave your Minecraft server running 24/7 for a couple bucks a year. Aside from the initial cost outlay of the Pi, an SD card, and a little bit of time setting it up, you’ll have an always-on Minecraft server at a monthly cost of around one gumball. What Do I Need? For this tutorial you’ll need a mix of hardware and software tools; aside from the actual Raspberry Pi and SD card, everything is free. 1 Raspberry Pi (preferably a 512MB model) 1 4GB+ SD card This tutorial assumes that you have already familiarized yourself with the Raspberry Pi and have installed a copy of the Debian-derivative Raspbian on the device. If you have not got your Pi up and running yet, don’t worry! Check out our guide, The HTG Guide to Getting Started with Raspberry Pi, to get up to speed. Optimizing Raspbian for the Minecraft Server Unlike other builds we’ve shared where you can layer multiple projects over one another (e.g. the Pi is more than powerful enough to serve as a weather/email indicator and a Google Cloud Print server at the same time), running a Minecraft server is a pretty intense operation for the little Pi and we’d strongly recommend dedicating the entire Pi to the process. Minecraft seems like a simple game, with all its blocky-ness and what not, but it’s actually a pretty complex game beneath the simple skin and requires a lot of processing power. As such, we’re going to tweak the configuration file and other settings to optimize Raspbian for the job. The first thing you’ll need to do is dig into the Raspi-Config application to make a few minor changes. If you’re installing Raspbian fresh, wait for the last step (which is Raspi-Config); if you already installed it, head to the terminal and type in “sudo raspi-config” to launch it again. One of the first and most important things we need to attend to is cranking up the overclock setting. We need all the power we can get to make our Minecraft experience enjoyable. In Raspi-Config, select option number 7 “Overclock”. Be prepared for some stern warnings about overclocking, but rest easy knowing that overclocking is directly supported by the Raspberry Pi foundation and has been included in the configuration options since late 2012. Once you’re in the actual selection screen, select “Turbo 1000MHz”.
Again, you’ll be warned that the degree of overclocking you’ve selected carries risks (specifically, potential corruption of the SD card, but no risk of actual hardware damage). Click OK and wait for the device to reset. Next, make sure you’re set to boot to the command prompt, not the desktop. Select number 3 “Enable Boot to Desktop/Scratch” and make sure “Console Text console” is selected. Back at the Raspi-Config menu, select number 8 “Advanced Options”. There are two critical changes we need to make in here and one optional change. First, the critical changes. Select A3 “Memory Split”: change the amount of memory available to the GPU to 16MB (down from the default 64MB). Our Minecraft server is going to run in a GUI-less environment; there’s no reason to allocate any more than the bare minimum to the GPU. After selecting the GPU memory, you’ll be returned to the main menu. Select “Advanced Options” again and then select A4 “SSH”. Within the sub-menu, enable SSH. There is very little reason to keep this Pi connected to a monitor and keyboard; by enabling SSH we can remotely access the machine from anywhere on the network. Finally (and optionally) return again to the “Advanced Options” menu and select A2 “Hostname”. Here you can change your hostname from “raspberrypi” to a more fitting Minecraft name. We opted for the highly creative hostname “minecraft”, but feel free to spice it up a bit with whatever you feel like: creepertown, minecraft4life, or miner-box are all great minecraft server names. That’s it for the Raspbian configuration. Tab down to the bottom of the main screen and select “Finish” to reboot. After rebooting you can now SSH into your Pi, or continue working from the keyboard hooked up to your Pi (we strongly recommend switching over to SSH as it allows you to easily cut and paste the commands). If you’ve never used SSH before, check out how to use PuTTY with your Pi here. Installing Java on the Pi The Minecraft server runs on Java, so the first thing we need to do on our freshly configured Pi is install it. Log into your Pi via SSH and then, at the command prompt, enter the following command to make a directory for the installation: sudo mkdir /java/ Now we need to download the newest version of Java. At the time of this publication the newest release is the OCT 2013 update and the link/filename we use will reflect that. Please check for a more current version of the Linux ARMv6/7 Java release on the Java download page and update the link/filename accordingly when following our instructions. At the command prompt, enter the following command: sudo wget --no-check-certificate http://www.java.net/download/jdk8/archive/b111/binaries/jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz Once the download has finished successfully, enter the following command: sudo tar zxvf jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz -C /opt/ Fun fact: the /opt/ directory name scheme is a remnant of early Unix design wherein the /opt/ directory was for “optional” software installed after the main operating system; it was the /Program Files/ of the Unix world.
After the file has finished extracting, enter: sudo /opt/jdk1.8.0/bin/java -version This command will return the version number of your new Java installation like so: java version "1.8.0-ea" Java(TM) SE Runtime Environment (build 1.8.0-ea-b111) Java HotSpot(TM) Client VM (build 25.0-b53, mixed mode) If you don’t see the above printout (or a variation thereof if you’re using a newer version of Java), try to extract the archive again. If you do see the readout, enter the following command to tidy up after yourself: sudo rm jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz At this point Java is installed and we’re ready to move on to installing our Minecraft server! Installing and Configuring the Minecraft Server Now that we have a foundation for our Minecraft server, it’s time to install the parts that matter. We’ll be using SpigotMC, a lightweight and stable Minecraft server build that works wonderfully on the Pi. First, grab a copy of the code with the following command: sudo wget http://ci.md-5.net/job/Spigot/lastSuccessfulBuild/artifact/Spigot-Server/target/spigot.jar This link should remain stable over time, as it points directly to the most current stable release of Spigot, but if you have any issues you can always reference the SpigotMC download page here. After the download finishes successfully, enter the following command: sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui Note: if you’re running the command on a 256MB Pi change the 256 and 496 in the above command to 128 and 256, respectively. Your server will launch and a flurry of on-screen activity will follow. Be prepared to wait around 3-6 minutes or so for the process of setting up the server and generating the map to finish. Future startups will take much less time, around 20-30 seconds. Note: If at any point during the configuration or play process things get really weird (e.g. your new Minecraft server freaks out and starts spawning you in the Nether and killing you instantly), use the “stop” command at the command prompt to gracefully shut down the server so you can restart and troubleshoot it. After the process has finished, head over to the computer you normally play Minecraft on, fire it up, and click on Multiplayer. You should see your server: If your world doesn’t pop up immediately during the network scan, hit the Add button and manually enter the address of your Pi. Once you connect to the server, you’ll see the status change in the server status window: According to the server, we’re in game. According to the actual Minecraft app, we’re also in game but it’s the middle of the night in survival mode: Boo! Spawning in the dead of night, weaponless and without shelter is no way to start things. No worries though, we need to do some more configuration; no time to sit around and get shot at by skeletons. Besides, if you try and play it without some configuration tweaks first, you’ll likely find it quite unstable. We’re just here to confirm the server is up, running, and accepting incoming connections. Once we’ve confirmed the server is running and connectable (albeit not very playable yet), it’s time to shut down the server. Via the server console, enter the command “stop” to shut everything down.
When you’re returned to the command prompt, enter the following command: sudo nano server.properties When the configuration file opens up, make the following changes (or just cut and paste our config file minus the first two lines with the name and date stamp):
#Minecraft server properties
#Thu Oct 17 22:53:51 UTC 2013
generator-settings=
#Default is true, toggle to false
allow-nether=false
level-name=world
enable-query=false
allow-flight=false
server-port=25565
level-type=DEFAULT
enable-rcon=false
force-gamemode=false
level-seed=
server-ip=
max-build-height=256
spawn-npcs=true
white-list=false
spawn-animals=true
texture-pack=
snooper-enabled=true
hardcore=false
online-mode=true
pvp=true
difficulty=1
player-idle-timeout=0
gamemode=0
#Default 20; you only need to lower this if you're running
#a public server and worried about loads.
max-players=20
spawn-monsters=true
#Default is 10, 3-5 ideal for Pi
view-distance=5
generate-structures=true
spawn-protection=16
motd=A Minecraft Server
In the server status window, seen through your SSH connection to the Pi, enter the following command to give yourself operator status on your Minecraft server (so that you can use more powerful commands in game, without always returning to the server status window): op [your minecraft nickname] At this point things are looking better but we still have a little tweaking to do before the server is really enjoyable. To that end, let’s install some plugins. The first plugin, and the one you should install above all others, is NoSpawnChunks. To install the plugin, first visit the NoSpawnChunks webpage and grab the download link for the most current version. As of this writing the current release is v0.3. Back at the command prompt (the command prompt of your Pi, not the server console–if your server is still active shut it down) enter the following commands: cd /home/pi/plugins sudo wget http://dev.bukkit.org/media/files/586/974/NoSpawnChunks.jar Next, visit the ClearLag plugin page, and grab the latest link (as of this tutorial, it’s v2.6.0). Enter the following at the command prompt: sudo wget http://dev.bukkit.org/media/files/743/213/Clearlag.jar Because the files aren’t compressed in a .ZIP or similar container, that’s all there is to it: the plugins are parked in the plugin directory. (Remember this for future plugin downloads: the file needs to be whateverplugin.jar, so if it’s compressed you need to uncompress it in the plugin directory.) Restart the server: sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui Be prepared for a slightly longer startup time (closer to the 3-6 minutes and much longer than the 30 seconds you just experienced) as the plugins affect the world map and need a minute to massage everything. After the spawn process finishes, type the following at the server console: plugins This lists all the plugins currently active on the server. You should see something like this: If the plugins aren’t loaded, you may need to stop and restart the server. After confirming your plugins are loaded, go ahead and join the game. You should notice significantly snappier play. In addition, you’ll get occasional messages from the plugins indicating they are active, as seen below: At this point Java is installed, the server is installed, and we’ve tweaked our settings for the Pi. It’s time to start building with friends!

    Read the article

  • CodePlex Daily Summary for Monday, April 09, 2012

    CodePlex Daily Summary for Monday, April 09, 2012Popular ReleasesStyleCop+: StyleCop+ 1.8: Built over StyleCop 4.7.17.0 According to http://stylecop.codeplex.com/workitem/7156, it should be the last version which is released without new features and only for compatibility reasons. Do not forget to Unblock the file after downloading (more details) Stay tuned!Path Copy Copy: 10.1: This release addresses the following work items: 11357 11358 11359 This release is a recommended upgrade, especially for users who didn't install the 10.0.1 version.ExtAspNet: ExtAspNet v3.1.3: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-04-08 v3.1.3 -??Language="zh_TW"?JS???BUG(??)。 +?D...Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.5.5: New Controls ChatBubble ChatBubbleTextBox OpacityToggleButton New Stuff TimeSpan languages added: RU, SK, CS Expose the physics math from TimeSpanPicker Image Stretch now on buttons Bug Fixes Layout fix so RoundToggleButton and RoundButton are exactly the same Fix for ColorPicker when set via code behind ToastPrompt bug fix with OnNavigatedTo Toast now adjusts its layout if the SIP is up Fixed some issues with Expression Blend supportHarness - Internet Explorer Automation: Harness 2.0.3: support the operation fo frameset, frame and iframe Add commands SwitchFrame GetUrl GoBack GoForward Refresh SetTimeout GetTimeout Rename commands GetActiveWindow to GetActiveBrowser SetActiveWindow to SetActiveBrowser FindWindowAll to FindBrowser NewWindow to NewBrowser GetMajorVersion to GetVersionBetter Explorer: Better Explorer 2.0.0.861 Alpha: - fixed new folder button operation not work well in some situations - removed some unnecessary code like subclassing that is not needed anymore - Added option to make Better Exlorer default (at least for WIN+E operations) - Added option to enable file operation replacements (like Terracopy) to work with Better Explorer - Added some basic usability to "Share" button - Other fixesText Designer Outline Text: Version 2 Preview 2: Added Fake 3D demos for C++ MFC, C# Winform and C# WPFLightFarsiDictionary - ??????? ??? ?????/???????: LightFarsiDictionary - v1: LightFarsiDictionary - v1WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.3: Version: 2.5.0.3 (Milestone 3): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete [O] WAF: Mark the StringBuilderExtensions class as obsolete because the AppendInNewLine method can be replaced with string.Jo...GeoMedia PostGIS data server: PostGIS GDO 1.0.1.2: This is a new version of GeoMeda PostGIS data server which supports user rights. It means that only those feature classes, which the current user has rights to select, are visible in GeoMedia. Issues fixed in this release Fixed problem with renaming and deleting feature classes - IMPORTANT! - the gfeatures view must be recreated so that this issue is completely fixed. 
The attached script "GFeaturesView2.sql" can be used to accomplish this task. Another way is to drop and recreate the metadat...SkyDrive Connector for SharePoint: SkyDrive Connector for SharePoint: Fixed a few bugs pertaining to live authentication Removed dependency on Shared Documents Removed CallBack web part propertyClosedXML - The easy way to OpenXML: ClosedXML 0.65.2: Aside from many bug fixes we now have Conditional Formatting The conditional formatting was sponsored by http://www.bewing.nl (big thanks) New on v0.65.1 Fixed issue when loading conditional formatting with default values for icon sets New on v0.65.2 Fixed issue loading conditional formatting Improved inserts performanceLiberty: v3.2.0.0 Release 4th April 2012: Change Log-Added -Halo 3 support (invincibility, ammo editing) -Halo 3: ODST support (invincibility, ammo editing) -The file transfer page now shows its progress in the Windows 7 taskbar -"About this build" settings page -Reach Change what an object is carrying -Reach Change which node a carried object is attached to -Reach Object node viewer and exporter -Reach Change which weapons you are carrying from the object editor -Reach Edit the weapon controller of vehicles and turrets -An error dia...MSBuild Extension Pack: April 2012: Release Blog Post The MSBuild Extension Pack April 2012 release provides a collection of over 435 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUID’...DotNetNuke® Community Edition CMS: 06.01.05: Major Highlights Fixed issue that stopped users from creating vocabularies when the portal ID was not zero Fixed issue that caused modules configured to be displayed on all pages to be added to the wrong container in new pages Fixed page quota restriction issue in the Ribbon Bar Removed restriction that would not allow users to use a dash in page names. Now users can create pages with names like "site-map" Fixed issue that was causing the wrong container to be loaded in modules wh...51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.3.1: One Click Install from NuGet Changes to Version 2.1.3.11. [assembly: AllowPartiallyTrustedCallers] has been added back into the AssemblyInfo.cs file to prevent failures with other assemblies in Medium trust environments. 2. The Lite data embedded into the assembly has been updated to include devices from December 2011. The 42 new RingMark properties will return Unknown if RingMark data is not available. Changes to Version 2.1.2.11Code Changes 1. The project is now licenced under the Mozilla...MVC Controls Toolkit: Mvc Controls Toolkit 2.0.0: Added Support for Mvc4 beta and WebApi The SafeqQuery and HttpSafeQuery IQueryable implementations that works as wrappers aroung any IQueryable to protect it from unwished queries. "Client Side" pager specialized in paging javascript data coming either from a remote data source, or from local data. LinQ like fluent javascript api to build queries either against remote data sources, or against local javascript data, with exactly the same interface. There are 3 different query objects exp...nopCommerce. 
Open source shopping cart (ASP.NET MVC): nopcommerce 2.50: Highlight features & improvements: • Significant performance optimization. • Allow store owners to create several shipments per order. Added a new shipping status: “Partially shipped”. • Pre-order support added. Enables your customers to place a Pre-Order and pay for the item in advance. Displays “Pre-order” button instead of “Buy Now” on the appropriate pages. Makes it possible for customer to buy available goods and Pre-Order items during one session. It can be managed on a product variant ...WiX Toolset: WiX v3.6 RC0: WiX v3.6 RC0 (3.6.2803.0) provides support for VS11 and a more stable Burn engine. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/4/3/WiX-v3.6-Release-Candidate-Zero-availableSageFrame: SageFrame 2.0: Sageframe is an open source ASP.NET web development framework developed using ASP.NET 3.5 with service pack 1 (sp1) technology. It is designed specifically to help developers build dynamic website by providing core functionality common to most web applications.New ProjectsAprilSpring: Common Framework by pansq and huangghASP.Net MVC Dynamic JS/CSS Script Compression Framework: ASP.Net MVC JS and CSS dynamic script compression and composite script library. Also resolves virtual content urls in CSS files.BANews: ???????????????????,??SQL+Server2008,C#,asp.net????,??????Jquery?CSS+DIV??。??????,?????????????,????????,??????,??。Blazonisation: If have emblem of some state, but don't know anything about it, you can use this tool to resolve your problem. Controle de Campenato: Outro Projeto com alunos da Infnet onde desenvolveremos um controle de campeonato...Controle Financeiro Pessoal Web: Projeto desenvolvido com os alunos do curso Oficial da Microsoft na INFNET. Consiste em controlar as despesas e receitas durante o mes e realizar uma projeção sobre os próximos meses. Bons Estudos, Prof. Carlos PedroCoPro - The .NET Content Provisioning Framework: CoPro makes content management, localization and globalization for ASP.NET Developers easier. It provides a framework which takes care of all management and provisioning activities and the developer only has to create the content and place it on the site. It is developed in C#.cppERF: Class ERF function. Test on VC++ 2008 express, and cygwin.Dev Studio 17 Web-Based FTP Client: Dev Studio 17 Web-Based FTP Client is a web based ftp application built using asp.net, c#.EasyCRMNet: EasyCRMNetEnterpriseLibrary Azure Backing Store: Most developers face difficulties in migrating the application, which is using caching block of Enterprise library, to azure due to unavailability of app fabric cache backing store for Enterprise Library. This library provides an app fabric backing store for enterprise library.Event Aggregator: One of the key aspects in application design is managing dependencies between modules. Good architecture might begin to suffer from strong coupling as long as the project grows. All this affects further development by making it harder to change modules (a change in one module usually forces a ripple effect of changes in other modules), also strongly coupled modules are harder to reuse and test. Message coupling is a good choice in developing loosely coupled architecture. This is the looses...FarhadYazdan-Panah Personal Repsitory: A place for backupFluent Assertions MVC: MVC Extensions for Fluent Assertions library. 
Source Code is on GitHub: https://github.com/CaseyBurns/FluentAssertions.MVCGboot: GtalkBotGetPicture: GetPicture is a project that get some pictures at a website.It is not good,just use to study.gmfbridge: clone for gmfbridge http://www.gdcl.co.uk/gmfbridge/GoodStore: ??????????,??B/S??,asp.net?sqlserver???,???????,?????????。。。JSLint for Resharper: Adds highlighting of JSLint validation errors to Resharper. kiemtien: Alpha state website, very unstable.nuIDE Kinect Prototype (Experimental): Prototype for Kinect-integrated Visual Studio extension concept. (Not yet alpha - throw away code)PB-LOG-Quick and easy XML logger for .NET: PB-LOG is a quick and easy XML logger for all kink of .NET application. You can insert log associated to a user. There're 4 kind of log: Error, Info, Warning, Event. It's developed in C# 4. There's also a PB-LOG for WP7.PB-LOG-WP7-Quick and easy XML logger for Windows Phone 7: PB-LOG is a quick and easy XML logger for all Windows Phone 7. You can insert log associated to a user. There're 4 kind of log: Error, Info, Warning, Event. It's developed in C# 4.Persian_Calendar_for_Microsoft_Office: "Persian_Calendar_for_Microsoft_Office" makes it easier for Office user group to do their tasks according to time-sheets in IRAN. You'll no longer have to depend on other forms of calendar. It would be developed in any programming language esp C#.Projeto Agenda FPU: Projeto AgendaPS Framework: PS.Framework is an Application Framework to simplify the developement of .NET Applications. It's containing many helpfull classes and functions e.g. an RemotingInterface to simplify the use of .NET Remoting.Quizzer123: Awesome application for quizzes and tests!SharePoint 2010 Automatic Content Database Selection: This solution for SharePoint 2010 automatically selects the smallest content database when you create a new site collection in a web application.SIToFb2: samizdat to bf2 converterSmart Logger: SmartLogger is a web service that can consume exceptions thrown from client applications. The exceptions can be categorized based on severity, applications and exception time stamp. It also includes a search feature to drill down the exceptions log based on search criteria. The release 1.0 will include logging exceptions with search feature. Adding new applications and user right management will follow.Sync Sync: [image:365572] A SharePoint ETL and migration toolTestCuttingStockUsingSolverFoundation: using solver foundation from msft to solve civ e 606 group assignment 2, the cutting stock problem.Vote: ?,???????????????!??,????,?????!WebFurniture: Web-?????????? ??? ?????????????? ????????? ???????yammyy2: yammyy2 zhongjh: ?????????,???????。

    Read the article

  • DevConnections Session Slides, Samples and Links

    - by Rick Strahl
    Finally coming up for air this week, after catching up with being on the road for the better part of three weeks. Here are my slides, samples and links for my four DevConnections sessions two weeks ago in Vegas. I ended up doing one extra, unprepared-for session on Web API and AJAX, as some of the speakers were either delayed or unable to make it at all to Vegas due to Sandy's mayhem. It was pretty hectic in the speaker room as Erik (our event coordinator extraordinaire) was scrambling to fill session slots with speakers :-). Surprisingly it didn't feel like the storm affected attendance drastically though, but I guess it's hard to tell without actual numbers. The conference was a lot of fun - it's been a while since I've been speaking at one of these larger conferences. I'd been taking a hiatus, and I forgot how much I enjoy actually giving talks. Preparing - well not quite so much, especially since I ended up essentially preparing or completely rewriting all three of these talks, and I was stressing out a bit as I was sick the week before the conference and didn't get as much time to prepare as I wanted to. But - as always seems to be the case - it all worked out, but I guess those that attended have to be the judge of that… It was great to catch up with my speaker friends as well - man I feel out of touch. I got to spend a bunch of time with Dan Wahlin, Ward Bell, Julie Lerman and for about 10 minutes even got to catch up with the ever so busy Michele Bustamante. Lots of great technical discussions including a fun and heated REST controversy with Ward and Howard Dierking. There were also a number of great discussions with attendees, describing how they're using the technologies touched on in my talks in live applications. I got some great ideas from some of these and I wish there would have been more opportunities for these kinds of discussions. One thing I miss at these Vegas events though is some sort of coherent event where attendees and speakers get to mingle. These Vegas conferences are just like "go to sessions, then go out and PARTY on the town" - it's Vegas after all! But I think that it's always nice to have at least one evening event where everybody gets to hang out together and trade stories and geek talk. Overall there didn't seem to be much opportunity for that beyond lunch or the small and short exhibit hall events which it seemed not many people actually went to. Anyways, a good time was had. I hope those of you that came to my sessions learned something useful. There were lots of great questions and discussions after the sessions - I always appreciate hearing the real-life scenarios that people deal with in relation to the abstracted scenarios in sessions. Here are the session abstracts, a few comments and the links for downloading slides and samples. It's not quite like being there, but I hope this stuff turns out to be useful to some of you. I'll be following up a couple of these sessions with white papers in the following weeks. Enjoy.
ASP.NET Architecture: How ASP.NET Works at the Low Level
Abstract: Interested in how ASP.NET works at a low level? ASP.NET is an extremely powerful and flexible technology, but it's easy to forget about the core framework that underlies the higher level technologies like ASP.NET MVC, WebForms, WebPages, and Web Services that we deal with on a day to day basis.
The ASP.NET core drives all the higher level handlers and frameworks layered on top of it, and with the core power comes some complexity in the form of a very rich object model that controls the flow of a request through the ASP.NET pipeline from Windows HTTP services down to the application level. To take full advantage of it, it helps to understand the underlying architecture and model. This session discusses the architecture of ASP.NET along with a number of useful tidbits that you can use for building and debugging your ASP.NET applications more efficiently. We look at overall architecture, how requests flow from the IIS (7 and later) Web Server to the ASP.NET runtime into HTTP handlers, modules and filters, and finally into high-level handlers like MVC, Web Forms or Web API. The focus of this session is on the low-level aspects of the ASP.NET runtime, with examples that demonstrate the bootstrapping of ASP.NET, threading models, how Application Domains are used, startup bootstrapping, how configuration files are applied, and how all of this relates to the applications you write, either using low-level tools like HTTP handlers and modules or high-level pages or services sitting at the top of the ASP.NET runtime processing chain.
Comments: I was surprised to see so many people show up for this session - especially since it was the last session on the last day and a short 1 hour session to boot. The room was packed and it was great to see so many people interested in the architecture of ASP.NET beyond the immediate high-level application needs. Lots of great questions in this talk as well - I only wish this session would have been the full hour 15 minutes, as we came up just a little short of getting through the main material (didn't make it to Filters and Error handling). I haven't done this session in a long time and I had to pretty much re-figure all the system internals having to do with the ASP.NET bootstrapping in light of the changes that came with IIS 7 and later. The last time I did this talk was with IIS 6, so I guess it's been a while. I love doing this session, mainly because in my mind the core of ASP.NET is so cleanly designed to provide maximum flexibility without compromising performance, and it has clearly stood the test of time in the 10 years or so that .NET has been around. While there are a lot of moving parts, the technology is easy to manage once you understand the core components, and the core model hasn't changed much even while the underlying architecture that drives it has been almost completely revamped, especially with the introduction of IIS 7 and later. Download Samples and Slides
Introduction to using jQuery with ASP.NET
Abstract: In this session you'll learn how to take advantage of jQuery in your ASP.NET applications. Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors for document element selection, manipulating these elements with jQuery's wrapped set methods in a browser independent way, how to hook up and handle events easily, and generally apply concepts of unobtrusive JavaScript principles to client scripting. The second half of the session then delves into jQuery's AJAX features and several different ways you can interact with ASP.NET on the server. You'll see examples of using ASP.NET MVC for serving HTML and JSON AJAX content, as well as using the new ASP.NET Web API to serve JSON and hypermedia content.
Hosting the Razor Scripting Engine in Your Own Applications

Abstract:
The Razor engine used in ASP.NET MVC and ASP.NET Web Pages is a free-standing scripting engine that can be disassociated from these Web-specific implementations and used in your own applications. Razor allows for a powerful mix of code and text rendering that makes it a wonderful tool for any sort of text generation, from creating HTML output in non-Web applications, to rendering mail merge-like functionality, to code generation for developer tools, and even as a plug-in scripting engine. In this session we'll look at the components that make up the Razor engine and how you can bootstrap it in your own applications to hook up templating. You'll find out how to create custom templates and manage Razor requests that can be pre-compiled, detect page changes, and act in ways similar to a full runtime. We look at ways you can pass data into the engine and retrieve both the rendered output as well as result values, in a package that makes it easy to plug Razor into your own applications.

Comments:
That this session was picked was a bit of a surprise to me, since it's a bit of a niche topic. Even more of a surprise was that quite a few people who attended had actually used Razor externally and were there to find out more about how the process works and how to extend it. In the session I talked a bit about a custom Razor hosting implementation (Westwind.RazorHosting) and drilled into the various components required to build a custom Razor hosting engine and a runtime around it. This session was a bit of a chore to prepare for, as there are lots of technical implementation details that need to be dealt with - details that aren't addressed even by some of the wrapper libraries that exist - and squeezing that into an hour and 15 minutes is a bit tight. I found out, though, that there's quite a bit of interest in using a templating engine outside of web applications, or often side by side with the HTML output generated by frameworks like MVC or WebForms.

An extra fun part of this session was that this was my first session, and when I went to set up I realized I had forgotten my mini-DVI to VGA adapter cable to plug into the projector in my room - 6 minutes before the session was about to start. So I ended up sprinting the half mile+ back to my room - and back - at a full sprint. I managed to be back only a couple of minutes late, but when I started I was out of breath for the first 10 minutes or so while trying to talk. Musta sounded a bit funny as I was trying to not gasp too much :-)

Resources:
Slides and Code Samples
Westwind.RazorHosting GitHub Project
Original RazorHosting Blog Post
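To give a feel for what "bootstrapping Razor yourself" involves, here is a rough sketch of the stock System.Web.Razor hosting pattern. This is my own simplified illustration, not the Westwind.RazorHosting API or the session code: it leaves out caching, AppDomain isolation and proper error handling, and assumes a tiny TemplateBase class that the generated template derives from.

```csharp
using System;
using System.CodeDom.Compiler;
using System.IO;
using System.Text;
using System.Web.Razor;

// Base class the generated template class inherits from.
public abstract class TemplateBase
{
    protected StringBuilder Buffer = new StringBuilder();
    public dynamic Model { get; set; }

    public abstract void Execute();                                   // body generated by Razor
    public virtual void Write(object value) { Buffer.Append(value); }
    public virtual void WriteLiteral(object value) { Buffer.Append(value); }
    public string Render() { Buffer.Clear(); Execute(); return Buffer.ToString(); }
}

public static class RazorRunner
{
    public static string Render(string template, dynamic model)
    {
        // 1. Tell Razor what the generated class should look like.
        var host = new RazorEngineHost(new CSharpRazorCodeLanguage())
        {
            DefaultBaseClass = "TemplateBase",
            DefaultNamespace = "Generated",
            DefaultClassName = "Template"
        };
        host.NamespaceImports.Add("System");

        // 2. Turn the template text into a CodeDOM tree.
        var engine = new RazorTemplateEngine(host);
        GeneratorResults results = engine.GenerateCode(new StringReader(template));

        // 3. Compile the generated code in memory.
        var parms = new CompilerParameters { GenerateInMemory = true };
        parms.ReferencedAssemblies.Add("System.dll");
        parms.ReferencedAssemblies.Add("System.Core.dll");
        parms.ReferencedAssemblies.Add("Microsoft.CSharp.dll");
        parms.ReferencedAssemblies.Add(typeof(TemplateBase).Assembly.Location);

        CompilerResults compiled = CodeDomProvider.CreateProvider("CSharp")
            .CompileAssemblyFromDom(parms, results.GeneratedCode);
        if (compiled.Errors.HasErrors)
            throw new InvalidOperationException(compiled.Errors[0].ToString());

        // 4. Instantiate the generated template, hand it a model and run it.
        var instance = (TemplateBase)compiled.CompiledAssembly.CreateInstance("Generated.Template");
        instance.Model = model;
        return instance.Render();
    }
}
```

Fed a template like "Hello @Model.Name!" and an ExpandoObject model (anonymous types won't bind across the assembly boundary), Render returns the merged text; a real host such as Westwind.RazorHosting adds template caching, change detection and AppDomain unloading on top of this.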
Introduction to ASP.NET Web API for AJAX Applications

Abstract:
Web API provides a new framework for creating REST-based APIs, but it can also act as a backend for typical AJAX operations. This session covers the core features of Web API as they relate to typical AJAX application development. We'll cover content negotiation, routing and a variety of output generation options, as well as managing data updates from the client, in the context of a small Single Page Application style Web app. Finally we'll look at some of the extensibility features in Web API to customize and extend it in a number of useful ways.

Comments:
This session was a fill-in for session slots left open by speakers stranded by Sandy. I had samples from my previous Web API article, so I decided to go ahead and put together a session from them. Given that I spent only a couple of hours preparing and putting slides together, I was glad it turned out as it did - it kind of just ran itself by way of the examples, along with nice audience interactions and questions. Lots of interest - and also some confusion about when Web API makes sense. Both this session and the jQuery session ended up getting a ton of questions about when to use Web API vs. MVC, whether it would make sense to switch to Web API for all AJAX backend work, etc. In my opinion there's no need to jump to Web API for existing applications that already have a good AJAX foundation. Web API is awesome for real, externally consumed APIs and clearly defined application AJAX APIs. For typical application-level AJAX calls it's still a good idea, but ASP.NET MVC can serve most if not all of that functionality just as well. There's no need to abandon MVC (or even ASP.NET AJAX or third-party AJAX backends) just to move to Web API. For new projects Web API probably makes good sense for isolating AJAX calls, but it really depends on how the application is set up. In some cases sharing business logic between the HTML and AJAX interfaces with a single MVC API can be cleaner than creating two completely separate code paths to serve essentially the same business logic.

Resources:
Slides and Code Samples
Sample Code on GitHub
Introduction to ASP.NET Web API White Paper

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Conferences  ASP.NET

    Read the article

  • CodePlex Daily Summary for Friday, March 18, 2011

    CodePlex Daily Summary for Friday, March 18, 2011Popular ReleasesCraig's Utility Library: Craig's Utility Library 2.1: This update contains the following functionality additions: Added Min and Max functions to MathHelper Added code for handling comments to BlogML code. Added WMI based file search ability (at present only searches based on file extension). Added CellularTexture class Added FaultFormation class Added PerlinNoise class Added Akismet helper classes Added Gravatar image link generator Added PipeDelimited class Added FaultFormation class Added FluvialErosion functions to Image c...DirectQ: Release 1.8.7 (RC3): Release Candidate 3 of 1.8.7 fixing many bugs and adding some new functionality.Catel - WPF, Silverlight and WP7 MVVM library: 1.3: Catel history ============= (+) Added (*) Changed (-) Removed (x) Error / bug (fix) For more information about issues or new feature requests, please visit: http://catel.codeplex.com =========== Version 1.3 =========== Release date: ============= 2011/03/18 Added/fixed: ============ (+) Added BindingHelper class to evaluate binding values manually (+) UserControl<TViewModel> now supports master-detail views where the detail view would have a nested UserControl<TViewModel> and no parent ...CBM-Command: Version 2.0 - 2011-03-17 - Final Release: This is the final release of CBM-Command Version 2.0. This version is intended to replace all prior versions you may have downloaded. Please see release notes for prior versions to get comprehensive list of changes. New features since RC1: - (C64) Added double-buffering when displaying the directory in a panel. This eliminates the flickering that users were experiencing when scrolling through long directories. Changes since RC1: - (All Machines) Changed the algorithm for displaying the d...Phalanger - The PHP Language Compiler for the .NET Framework: 2.1 (March 2011) for .NET 4.0: Introducing release of Phalanger 2.1 for .NET 4.0. This release brings big performance boost about 20% in most of the operations. This improvement can be expected also in an overall performance in many PHP applications, e.g. Wordpress. It is the first release that targets .NET Framework 4.0 which allows developers to move the project forward. To migrate your old Phalanger applications from Phalanger 2.0 to 2.1 please follow Migration to 2.1. Installation package also includes basic version o...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.164: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds new layout options for groups, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file in Wi...Leage of Legends Masteries Tool: LoLMasterSave_v1.6.1.274: -Addresses resent LoL update that interfered with the way MasterSave sets / reads masteries - Removed Shift windows since some people experiencing issues If your interested in this function i can provide it as small separate tool.ASP.NET Comet Ajax Library (Reverse Ajax - Server Push): ASP.NET Server Push Samples: This package contains 14 sample projects.LogExpert: 1.4 build 4092: TabControl: Tooltip on dropdown list shows full path names now New menu item "Lock instance" in Options menu. 
Only available when "Allow only one instance" is disabled in the settings. "Lock instance" will temporary enable the single instance mode. The locked instance will receive all new launched files Some NullPtrExceptions fixed (e.g. in the settings dialog) Note: The debug build is identical to the release build. But the debug version writes a log file. It also contains line numbers ...Facebook C# SDK: 5.0.6 (BETA): This is seventh BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. New in this release: Version 5.0.6 is almost completely backward compatible with 4.2.1 and 5.0.3 (BETA) Bug fixes and helpers to simplify many common scenarios For more information about this release see the following blog posts: F...DotNetNuke® Community Edition: 06.00.00 CTP: CTP 1 (Build 155) is firmly focused around our conversion to C#. As many people have noted, this is a significant change to the platform and affects all areas of the product. This is one of the driving factors in why we felt it was important to get this release into your hands as soon as possible. We have already done quite a bit of testing on this feature internally and have fixed a number of issues in this area. We also recognize that there are probably still some more bugs to be found ...Kooboo CMS: Kooboo 3.0 RC: Bug fixes Inline editing toolbar positioning for websites with complicate CSS. Inline editing is turned on by default now for the samplesite template. MongoDB version content query for multiple filters. . Add a new 404 page to guide users to login and create first website. Naming validation for page name and datarule name. Files in this download kooboo_CMS.zip: The Kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB ...SQL Monitor - tracking sql server activities: SQL Monitor 3.2: 1. introduce sql color syntax highlighting with http://www.codeproject.com/KB/edit/FastColoredTextBox_.aspxUmbraco CMS: Umbraco 4.7.0: Service release fixing 50+ issues! Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're available from: Introduction for webmasters: http://umbraco.tv/help-and-support/video-tutorials/getting-started Understand the Umbraco concepts: http://umbraco.tv/help-and-support...ProDinner - ASP.NET MVC EF4 Code First DDD jQuery Sample App: first release: ProDinner is an ASP.NET MVC sample application, it uses DDD, EF4 Code First for Data Access, jQuery and MvcProjectAwesome for Web UI, it has Multi-language User Interface Features: CRUD and search operations for entities Multi-Language User Interface upload and crop Images (make thumbnail) for meals pagination using "more results" button very rich and responsive UI (using Mvc Project Awesome) Multiple UI themes (using jQuery UI themes)BEPUphysics: BEPUphysics v0.15.1: Latest binary release. 
Version HistoryLiveChat Starter Kit: LCSK v1.1: This release contains couple of new features and bug fixes including: Features: Send chat transcript via email Operator can now invite visitor to chat (pro-active chat request) Bug Fixes: Operator management (Save and Delete) bug fixes Operator Console chat small fixesIronRuby: 1.1.3: IronRuby 1.1.3 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. The main purpose of this release is to sync with IronPython 2.7 release, i.e. to keep the Dynamic Language Runtime that both these languages build on top shareable. This release also fixes a few bugs: 5763 Use...SQL Server PowerShell Extensions: 2.3.2.1 Production: Release 2.3.2.1 implements SQLPSX as PowersShell version 2.0 modules. SQLPSX consists of 13 modules with 163 advanced functions, 2 cmdlets and 7 scripts for working with ADO.NET, SMO, Agent, RMO, SSIS, SQL script files, PBM, Performance Counters, SQLProfiler, Oracle and MySQL and using Powershell ISE as a SQL and Oracle query tool. In addition optional backend databases and SQL Server Reporting Services 2008 reports are provided with SQLServer and PBM modules. See readme file for details.IronPython: 2.7: On behalf of the IronPython team, I'm very pleased to announce the release of IronPython 2.7. This release contains all of the language features of Python 2.7, as well as several previously missing modules and numerous bug fixes. IronPython 2.7 also includes built-in Visual Studio support through IronPython Tools for Visual Studio. IronPython 2.7 requires .NET 4.0 or Silverlight 4. To download IronPython 2.7, visit http://ironpython.codeplex.com/releases/view/54498. Any bugs should be report...New Projects4711 Widget: Time Widget For 4711Baidu map api: Baidu map apiBookmooch.NET: A .NET wrapper for the Bookmooch.com API.Crash Cite - Electronic Crash Reporting Software: While we were in the electronic citation business - we wrote Indiana's eCWS - we decided to write our own crash reporting software. We exited from that biz because it's too politically charged. Here for you to use is an almost complete electronic crash reporting solution. Enjoy.Curso apb: Proyecto para gestionar un curso academico basado en apb. Ejemplo para aprendizaje personalEle: simple database changes log web applicationEmptor by Intrigue Deviation: An idea for collaboration and relations between customers and businesses.FixEd: FixEd is a text editor for working with text files that require enforcement of column widths and line lengths. Fixed-Width files...Typical from mainframe datasets or csv files. It provides a plugin parser system, so that files may be exported to and imported from any format.Free inventory: InventoryFurcadia Heimdall Tester: An application that helps Furcadia technicians test the integrity of the game server. It checks for availability of each heimdall, its connectivity to the rest of the system (horton/tribble) and how often it receives a user compared to the rest of them.GestOre: Software per gestire le ore di lavoro. Creazione di report mensili.Grauers Google Chart WebPart: SharePoint Foundation Google WebPart Chart. Create a custom list and connect Grauers Google Chart with the list. The Chart is interactive. Created by Ola GrauershOcr2Pdf.NET: hOcr2Pdf.NET is program/library to convert .hOcr files into a searchable pdf. 
It is currently being written and tested again the .hocr files products by the Tesseract ocr engine.Hyper-V Guest Console: This application provides a console for manage VMs use within a Hyper-V guest. With HVGC an Hyper-V guest can: - Start VMs - Shutdow VMs - Stop VMs - Monitoring VMs HVGC is develop in Visual Studio 2008 using VB.NET.JUpload.Net: JUpload.Net is an Asp Net control which encapsulates the popular Jupload java applet (http://jupload.sourceforge.net). myMvcBlog: my mvc blogNGI.Framework: NGI.FrameworkOnlineLicense: OnlineLicense is a PHP, mySQL and C# project. PHP and mySQL runing at Web Server side to store the online user information, and return the proper license for desktop applications which write by C#. SharePoint 2010 Managed Metadata Import Tool: Command-line SharePoint 2010 Managed Metadata import tool. Uses (almost) same CSV format as the Microsoft importer and supports static term GUIDs. Unlike the MS tool, this importer does not cause data corruption issues when taxonomy terms are used with Content Organizer rules.Sphere Layout Library: ItemsControl library targetting WPF, Silverlight for desktop and Silverlight for WP7, allowing to layout things on a 3d Sphere.TestProject for C#: Test project to test a Team Foundation Server.TfsUtils: Simple Project to store some TFS API utility code.This Is My New Project: FasdfTidy UI: Tidy UI is a "write less - do more" framework for .NET focusing on both client components and code for easy data access. The main goal of the framework is to enable ASP.NET developers to focus more on functionality and less on the technical aspects of the development process.TönnenKlapps: XNA game where you try to smash a 3D spinning barrel using the correct coloured buttons and the right timing.yihuaisha_Window: Glest research

    Read the article

  • CodePlex Daily Summary for Saturday, January 29, 2011

    CodePlex Daily Summary for Saturday, January 29, 2011Popular ReleasesRawr: Rawr 4.0.17 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 and on the Version Notes page: http://rawr.codeplex.com/wikipage?title=VersionNotes As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you...VivoSocial: VivoSocial 7.4.2: Version 7.4.2 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.4.2 release and see if they persist. If you have any questions about this release, please post them in our Support forums. If you are experiencing a bug or would like to request a new feature, please submit it to our issue tracker. Web Controls * Updated Business Objects and added a new SQL Data Provider File. Groups * Fixed a security issue whe...PHP Manager for IIS: PHP Manager 1.1.1 for IIS 7: This is a minor release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus several bug fixes (see change list for more details). Also, this release includes Russian language support. SHA1 codes for the downloads are: PHPManagerForIIS-1.1.0-x86.msi - 6570B4A8AC8B5B776171C2BA0572C190F0900DE2 PHPManagerForIIS-1.1.0-x64.msi - 12EDE004EFEE57282EF11A8BAD1DC1ADFD66A654BloodSim: WControls.dll update: Priority Update It's just come to my attention that the latest version of WControls.dll was not included in the 1.4 release and as a result, BloodSim has been unuseable. Please download WControls.dll from here, and this will rectify the issue.VFPX: VFP2C32 2.0.0.8 Release Candidate: This release includes several bugfixes, new functions and finally a CHM help file for the complete library.DB>doc for Microsoft SQL Server: 1.0.0.0: Initial release Supported output HTML WikiPlex markup Raw XML Supported objects Tables Primary Keys Foreign Keys ViewsmojoPortal: 2.3.6.1: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2361-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Office Web.UI: Alpha preview: This is the first alpha release. Very exciting moment... This download is just the demo application : "Contoso backoffice Web App". No source included for the moment, just the app, for testing purposes and for feedbacks !! This package includes a set of official MS Office icons (143 png 16x16 and 132 png 32x32) to really make a great app ! Please rate and give feedback ThanksParallel Programming with Microsoft Visual C++: Drop 6 - Chapters 4 and 5: This is Drop 6. It includes: Drafts of the Preface, Introduction, Chapters 2-7, Appendix B & C and the glossary Sample code for chapters 2-7 and Appendix A & B. The new material we'd like feedback on is: Chapter 4 - Parallel Aggregation Chapter 5 - Futures The source code requires Visual Studio 2010 in order to run. There is a known bug in the A-Dash sample when the user attempts to cancel a parallel calculation. 
We are working to fix this.Catel - WPF and Silverlight MVVM library: 1.1: (+) Styles can now be changed dynamically, see Examples application for a how-to (+) ViewModelBase class now have a constructor that allows services injection (+) ViewModelBase services can now be configured by IoC (via Microsoft.Unity) (+) All ViewModelBase services now have a unit test implementation (+) Added IProcessService to run processes from a viewmodel with directly using the process class (which makes it easier to unit test view models) (*) If the HasErrors property of DataObjec...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.160: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release improves NodeXL's Twitter and Pajek features. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file in Windows Explorer and select "Extract All." Close Ex...Kooboo CMS: Kooboo CMS 3.0 CTP: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL, RavenDB and SQLCE. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder to enable...UOB & ME: UOB ME 2.6: UOB ME 2.6????: ???? V1.0: ???? V1.0 ??Password Generator: 2.2: Parallel password generation Password strength calculation ( Same method used by Microsoft here : https://www.microsoft.com/protect/fraud/passwords/checker.aspx ) Minor code refactoringVisual Studio 2010 Architecture Tooling Guidance: Spanish - Architecture Guidance: Francisco Fagas http://geeks.ms/blogs/ffagas, Microsoft Most Valuable Professional (MVP), localized the Visual Studio 2010 Quick Reference Guidance for the Spanish communities, based on http://vsarchitectureguide.codeplex.com/releases/view/47828. Release Notes The guidance is available in a xps-only (default) or complete package. The complete package contains the files in xps, pdf and Office 2007 formats. 2011-01-24 Publish version 1.0 of the Spanish localized bits.ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. 
These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager the html generation has been optimized, the html page size is much smaller nowFacebook Graph Toolkit: Facebook Graph Toolkit 0.6: new Facebook Graph objects: Application, Page, Post, Comment Improved Intellisense documentation new Graph Api connections: albums, photos, posts, feed, home, friends JSON Toolkit upgraded to version 0.9 (beta release) with bug fixes and new features bug fixed: error when handling empty JSON arrays bug fixed: error when handling JSON array with square or large brackets in the message bug fixed: error when handling JSON obejcts with double quotation in the message bug fixed: erro...Microsoft All-In-One Code Framework: Visual Studio 2008 Code Samples 2011-01-23: Code samples for Visual Studio 2008MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (3): Instructions for installation: http://www.galasoft.ch/mvvm/installing/manually/ Includes the hotfix templates for Windows Phone 7 development. This is only relevant if you didn't already install the hotfix described at http://blog.galasoft.ch/archive/2010/07/22/mvvm-light-hotfix-for-windows-phone-7-developer-tools-beta.aspx.New ProjectsAsp.Net MvcRoute Debugger Visualizer: Asp.Net MvcRoute Debugger Visualizer is an extension for Visual Studio 2010.It displays the matched route data and datatokens of all defined routes in your mvc applicationAutoLaunch for Windows Embedded Compact (CE): AutoLaunch component for Windows Embedded CE 6.0 and Windows Embedded Compact 7BicubicResizeSilverlight: A client-side Silverlight library for performing a bicubic resize. Cloud Monitoring Studio: Cloud Monitoring Studio provides real-time performance monitoring mechanism for Azure deployed applications/Azure roles with the help of easy-to-understand graphs. Users can configure required performance counters for individual Azure roles through easy-to-use & intuitive GUI.CoreCon for Windows Embedded Compact (CE): CoreCon component for Windows Embedded CE 6.0 and Windows Embedded Compact 7, to simplify the tasks needed to include CoreCon files to the OS image.DokanDiscUtils Bridge: This a a bridge for the Dokan library and the DiscUtils library that allows Any partition/image that discutils can open to be mounted using dokanEverything About My Share: Everything About My ShareExtraordinary Ideas: This project contains some Extra oradinary solutions for difficult software problems.. Article's are in Turkish Language tahircakmak.blogspot.comIE9 Helper: The IE9 Helper is an ASP.NET Helper to makes adding IE9 features simple.ITSolutions: .NET Sandbox SolutionsKitty - Async Server: Kitty is an Async Server with C# and Java versions. It builds on Parallelism to deliver a Internet Scale Server StarterKit.mediainfocollector: Retrieves info from mediafilesMini - Async WebSocket Server: Mini is an Async WebSocket Server with versions in C# and Java. It builds on Parallelism and Kitty to deliver an Internet Scale WebSocket Server StarterKit. MVC N-tier EMR Sample Application: This sample uses EMR to showcase a MVC N-Tier application. 
It uses WCF to separate the presentation from the DB/Backend.MyPhotoGallery: MyPhotoGallery is a photo gallery developed in ASP.Net MVC.NuGet: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development.OneCode CodeBox: OneCode CodeBoxonecode toolkit: toolsOrchard SSL: Orchard Secured Sockets Layer (SSL) adds new features to force the usage of SSL in sections like authentication pages or the dashboard.OrchardCMS - Lokalizacija: OrchardCMS - Lokalizacija je projekat lokalizacije Orchard CMS-a na srpski, hrvatski, srpskohrvatski, hrvatskosrpski i sve ostale jezike sa podrucija ex-YU. Pokušacemo da iskorisitmo slicnosti u jezicima kako bi jedni drugima pomogli i olakšali lokalizaciju Orchard-a. Pyxis Install Maker: This utility creates single file installs for Pyxis 2 applications.SaleCard: TestSharePoint Social Networking: A Collection of solutions aimed at adding social network to your SharePoint 2010 site. The collection will include web parts, ribbon extensions and other type of solutions that will add or enhance the social networking capabilities of SharePoint. The solution is written in c#.SmartScreenTerminator: Pissed off by the stupid "smart screen" asking you to "remember to protect your password" every time you click someone else's blog link in your Windows Live homepage? Use this three line appication to allow your IE to automatically navigate to the destination page.StreetScene: StreetScene is a sample project showing how to use Microsoft technologies such as ASP.NET, Azure and Silverlight to build rich applications.svmnetx: testtest_project: test projectTweet-Links: Tweet-Links is designed for ultra-fast sending a twitter selected text and files and images.WCMF - Windows Content Management Framework: WCMF is a framework that allows one to build CMS appllications that target the Windows platform (whereas a typical CMS system is usually web based). Used technologies are MEF, WCF, WPF (including Silverlight) and Caliburn.Micro.WebNoteAOP: A sample project for aspect oriented programming with PostSharp. The main purpose is a short introduction. All aspects are made in a very simple way. They should be useful for simple applications. Please keep in mind, that complex problems often require a complex solution, too.

    Read the article

  • Migration and deployement problems JBoss 4.2.2.GA to JBoss 6.0.0.M2

    - by krzyamaneko
    Hi, I'm trying to migrate an application running on JBoss 4.2.2.GA to JBoss 6.0.0.M2 I give you some log to explain my problem : boot.log : 2010-03-16 09:59:29,406 ERROR [org.jboss.system.server.profileservice.ProfileServiceBootstrap] (Thread-2) Failed to load profile: Summary of incomplete deployments (SEE PREVIOUS ERRORS FOR DETAILS): DEPLOYMENTS IN ERROR: Deployment "vfszip:/G:/jboss-6.0.0.M2/server/default/deploy/serveur.jar/" is in error due to the following reason(s): java.lang.IllegalStateException: Factory$org.jboss.aspects.remoting.InvokeRemoteInterceptor is already installed. server.log : 11:58:33,156 ERROR [AbstractKernelController] Error installing to PostClassLoader: name=vfszip:/G:/jboss-6.0.0.M2/server/default/deploy/serveur.jar/ state=ClassLoader mode=Manual requiredState=PostClassLoader: org.jboss.deployers.spi.DeploymentException: Cannot process metadata at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49) at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:196) at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:95) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:179) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1660) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1378) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1431) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1319) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:378) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2029) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1050) at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1289) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1213) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1107) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:918) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:633) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:898) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:677) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:403) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:378) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2029) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1050) at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1289) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1213) at 
org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1107) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:873) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:620) at org.jboss.system.server.profileservice.repository.AbstractProfileService.registerProfile(AbstractProfileService.java:308) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:259) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:100) at org.jboss.bootstrap.impl.base.server.AbstractServer.startBootstraps(AbstractServer.java:860) at org.jboss.bootstrap.impl.base.server.AbstractServer$StartServerTask.run(AbstractServer.java:441) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.ClassNotFoundException: common.Main from BaseClassLoader@e1c3a7{vfszip:/G:/jboss-6.0.0.M2/server/default/deploy/serveur.jar/} at org.jboss.classloader.spi.base.BaseClassLoader.loadClass(BaseClassLoader.java:498) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) at org.jboss.deployment.OptAnnotationMetaDataDeployer.processJBossClientMetaData(OptAnnotationMetaDataDeployer.java:105) at org.jboss.deployment.OptAnnotationMetaDataDeployer.processMetaData(OptAnnotationMetaDataDeployer.java:73) at org.jboss.deployment.AnnotationMetaDataDeployer.deploy(AnnotationMetaDataDeployer.java:192) ... 34 more 11:58:40,828 INFO [JMXConnectorServerService] JMX Connector server: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:10900/jmxconnector 11:58:46,500 INFO [MailService] Mail Service bound to java:/Mail 11:58:46,593 ERROR [NamingProviderURLWriter] Cannot create a naming service URL file at file:/G:/jboss-6.0.0.M2/server/default/data/jnp-service.url: java.io.IOException: Accès refusé at java.io.WinNTFileSystem.createFileExclusively(Native Method) at java.io.File.createNewFile(File.java:883) at org.jboss.naming.NamingProviderURLWriter.start(NamingProviderURLWriter.java:151) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.reflect.plugins.introspection.ReflectionUtils.invoke(ReflectionUtils.java:59) at org.jboss.reflect.plugins.introspection.ReflectMethodInfoImpl.invoke(ReflectMethodInfoImpl.java:151) at org.jboss.joinpoint.plugins.BasicMethodJoinPoint.dispatch(BasicMethodJoinPoint.java:66) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction$JoinpointDispatchWrapper.execute(KernelControllerContextAction.java:257) at org.jboss.kernel.plugins.dependency.ExecutionWrapper.execute(ExecutionWrapper.java:47) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction.dispatchExecutionWrapper(KernelControllerContextAction.java:125) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction.dispatchJoinPoint(KernelControllerContextAction.java:72) at org.jboss.kernel.plugins.dependency.LifecycleAction.installActionInternal(LifecycleAction.java:202) at org.jboss.kernel.plugins.dependency.InstallsAwareAction.installAction(InstallsAwareAction.java:54) at org.jboss.kernel.plugins.dependency.InstallsAwareAction.installAction(InstallsAwareAction.java:42) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at 
org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:378) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2029) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1050) at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1289) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1213) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1107) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:873) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:620) at org.jboss.deployers.vfs.deployer.kernel.BeanMetaDataDeployer.deploy(BeanMetaDataDeployer.java:180) at org.jboss.deployers.vfs.deployer.kernel.BeanMetaDataDeployer.deploy(BeanMetaDataDeployer.java:58) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:55) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:179) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1660) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1378) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1399) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1319) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:378) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2029) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1050) at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1289) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1213) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1107) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:918) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:633) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:898) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:677) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:403) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:378) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2029) at 
org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1050) at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1289) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1213) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1107) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:873) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:620) at org.jboss.system.server.profileservice.repository.AbstractProfileService.registerProfile(AbstractProfileService.java:308) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:259) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:100) at org.jboss.bootstrap.impl.base.server.AbstractServer.startBootstraps(AbstractServer.java:860) at org.jboss.bootstrap.impl.base.server.AbstractServer$StartServerTask.run(AbstractServer.java:441) at java.lang.Thread.run(Thread.java:619) this application works fine on JBoss 4.2.2.GA, if someone have any idea, I need some help.

    Read the article

  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
Hi Guys, Versions I am running (basically the latest of everything):

PHP: 5.3.1
MySQL: 5.1.41
Apache: 2.2.14
OS: CentOS (latest)

Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers including Windows 32 bit, CentOS and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets.

Now because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search and locate the correct document(s) effectively. For this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing.

As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes Customer Profile management, Order and Job Tracking tools, Job/Sale creation and management modules, etc, and at the moment any file that is needed at a customer profile level (drivers licence, credit authority, etc) or at a job/sale level (contracts, voice signatures, etc) can be uploaded to the server and sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file management model. The structure appears as such:

drivers_license
|- DL_123.jpg
voice_signatures
|- VS_123.wav
|- VS_4567.wav
contracts

So the files are uploaded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is:

TABLE: FileUploads
FileID
CustomerID (the customer id that the file belongs to, they all have this.)
JobID/SaleID (the id of the job/sale associated, if any.)
FileSize
FileType
UploadedDateTime
UploadedBy
FilePath (the directory path the file is stored in.)
FileName (current file name of the uploaded file, a combination of CustomerID and JobID/SaleID if applicable.)
FileDescription
OriginalFileName (original name of the source file when uploaded, including extension.)

So as you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is "SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;" and this will output all the file details I require, and with the FilePath and FileName I can provide the link for download. http... server / FilePath / FileName

There are a number of problems with this method:

- Storing files in this "database unconscious" environment means data integrity is not kept. If a record is deleted, the file may not be deleted also, or vice versa.
- Files are strewn all over the place: different servers, computers, etc.
- The file name is the ONLY thing matching the binary to the database and customer profile and customer records.
- etc, etc.

There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . Also there is an interesting article here too: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .

SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this.
I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely.

The article provided at this link: dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64kb chunks and storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea since it alleviates pressure on the server's memory; instead of loading an entire 100mb file into RAM and then sending it to the client, it is done 64kb at a time. I have tried this (and updated his scripts) and it is totally successful, in a very small frame of testing.

So if you are in agreement that this method is a viable, stable and robust long-term option to store moderately large files (1kb to a couple hundred megs), and large quantities of these files, let me know what other considerations or ideas you have.

Also, I am considering getting a current "File Management" PHP script that gives an interface for managing files stored in the File System, and converting it to manage files stored in the database. If there is already any software out there that does this, please let me know.

I guess there are many questions I could ask, and all the information is up there ^^ so please, discuss all aspects of this and we can pass ideas back and forth and teach each other. Cheers, Quantico773
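The chunked-streaming idea referred to above is language-agnostic. As a purely illustrative sketch - in C# rather than the PHP the asker uses, with a hypothetical file_chunks table of file_id, chunk_no and chunk_data columns rather than the linked article's actual schema - the key point is that each 64kb chunk is read and written out individually, so the whole file never has to sit in memory:

```csharp
using System.IO;
using MySql.Data.MySqlClient; // MySQL Connector/NET

// Illustrative sketch only: streams a file stored as 64 KB chunks in a
// hypothetical `file_chunks` table to any output stream, one chunk at a time.
public static class ChunkedFileStreamer
{
    public static void StreamFile(string connectionString, int fileId, Stream output)
    {
        const string sql =
            "SELECT chunk_data FROM file_chunks " +
            "WHERE file_id = @fileId ORDER BY chunk_no";

        using (var conn = new MySqlConnection(connectionString))
        using (var cmd = new MySqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@fileId", fileId);
            conn.Open();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    var chunk = (byte[])reader["chunk_data"];
                    output.Write(chunk, 0, chunk.Length); // ~64 KB per write
                }
            }
        }
    }
}
```

In a web front end you would emit the Content-Type and Content-Disposition headers recorded in the FileUploads row first, then hand this method the response output stream.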

    Read the article

  • javascript robot

    - by sarah
    hey guys! I need help making this robot game in javascript (notepad++) please HELP! I'm really confused by the functions <html> <head><title>Robot Invasion 2199</title></head> <body style="text-align:center" onload="newGame();"> <h2>Robot Invasion 2199</h2> <div style="text-align:center; background:white; margin-right: auto; margin-left:auto;"> <div style=""> <div style="width: auto; border:solid thin red; text-align:center; margin:10px auto 10px auto; padding:1ex 0ex;font-family: monospace" id="scene"></pre> </div> <div><span id="status"></span></div> <form style="text-align:center"> PUT THE CONTROL PANEL HERE!!! </form> </div> <script type="text/javascript"> // GENERAL SUGGESTIONS ABOUT WRITING THIS PROGRAM: // You should test your program before you've finished writing all of the // functions. The newGame, startLevel, and update functions should be your // first priority since they're all involved in displaying the initial state // of the game board. // // Next, work on putting together the control panel for the game so that you // can begin to interact with it. Your next goal should be to get the move // function working so that everything else can be testable. Note that all nine // of the movement buttons (including the pass button) should call the move // function when they are clicked, just with different parameters. // // All the remaining functions can be completed in pretty much any order, and // you'll see the game gradually improve as you write the functions. // // Just remember to keep your cool when writing this program. There are a // bunch of functions to write, but as long as you stay focused on the function // you're writing, each individual part is not that hard. // These variables specify the number of rows and columns in the game board. // Use these variables instead of hard coding the number of rows and columns // in your loops, etc. // i.e. Write: // for(i = 0; i < NUM_ROWS; i++) ... // not: // for(i = 0; i < 15; i++) ... var NUM_ROWS = 15; var NUM_COLS = 25; // Scene is arguably the most important variable in this whole program. It // should be set up as a two-dimensional array (with NUM_ROWS rows and // NUM_COLS columns). This represents the game board, with the scene[i][j] // representing what's in row i, column j. In particular, the entries should // be: // // "." for empty space // "R" for a robot // "S" for a scrap pile // "H" for the hero var scene; // These variables represent the row and column of the hero's location, // respectively. These are more of a conveniece so you don't have to search // for the "H" in the scene array when you need to know where the hero is. var heroRow; var heroCol; // These variables keep track of various aspects of the gameplay. // score is just the number of robots destroyed. // screwdrivers is the number of sonic screwdriver charges left. // fastTeleports is the number of fast teleports remaining. // level is the current level number. // Be sure to reset all of these when a new game starts, and update them at the // appropriate times. var score; var screwdrivers; var fastTeleports; var level; // This function should use a sonic screwdriver if there are still charges // left. The sonic screwdriver turns any robot that is in one of the eight // squares immediately adjacent to the hero into scrap. If there are no charges // left, then this function should instead pop up a dialog box with the message // "Out of sonic screwdrivers!". 
As with any function that alters the game's // state, this function should call the update function when it has finished. // // Your "Sonic Screwdriver" button should call this function directly. function screwdriver() { // WRITE THIS FUNCTION } // This function should move the hero to a randomly selected location if there // are still fast teleports left. This function MUST NOT move the hero on to // a square that is already occupied by a robot or a scrap pile, although it // can move the hero next to a robot. The number of fast teleports should also // be decreased by one. If there are no fast teleports left, this function // should just pop up a message box saying so. As with any function that alters // the game's state, this function should call the update function when it has // finished. // // HINT: Have a loop that keeps trying random spots until a valid one is found. // HINT: Use the validPosition function to tell if a spot is valid // // Your "Fast Teleport" button s

    Read the article

  • Find the set of largest contiguous rectangles to cover multiple areas

    - by joelpt
I'm working on a tool called Quickfort for the game Dwarf Fortress. Quickfort turns spreadsheets in csv/xls format into a series of commands for Dwarf Fortress to carry out in order to plot a "blueprint" within the game. I am currently trying to optimally solve an area-plotting problem for the 2.0 release of this tool.

Consider the following "blueprint" which defines plotting commands for a 2-dimensional grid. Each cell in the grid should either be dug out ("d"), channeled ("c"), or left unplotted ("."). Any number of distinct plotting commands might be present in actual usage.

. d . d c c
d d d d c c
. d d d . c
d d d d d c
. d . d d c

To minimize the number of instructions that need to be sent to Dwarf Fortress, I would like to find the set of largest contiguous rectangles that can be formed to completely cover, or "plot", all of the plottable cells. To be valid, all of a given rectangle's cells must contain the same command. This is a faster approach than Quickfort 1.0 took: plotting every cell individually as a 1x1 rectangle. This video shows the performance difference between the two versions.

For the above blueprint, the solution looks like this:

. 9 . 0 3 2
8 1 1 1 3 2
. 1 1 1 . 2
7 1 1 1 4 2
. 6 . 5 4 2

Each same-numbered rectangle above denotes a contiguous rectangle. The largest rectangles take precedence over smaller rectangles that could also be formed in their areas. The order of the numbering/rectangles is unimportant.

My current approach is iterative. In each iteration, I build a list of the largest rectangles that could be formed from each of the grid's plottable cells by extending in all 4 directions from the cell. After sorting the list largest first, I begin with the largest rectangle found, mark its underlying cells as "plotted", and record the rectangle in a list. Before plotting each rectangle, its underlying cells are checked to ensure they are not yet plotted (overlapping a previous plot). We then start again, finding the largest remaining rectangles that can be formed and plotting them until all cells have been plotted as part of some rectangle.

I consider this approach slightly more optimized than a dumb brute-force search, but I am wasting a lot of cycles (re)calculating cells' largest rectangles and checking underlying cells' states. Currently, this rectangle-discovery routine takes the lion's share of the total runtime of the tool, especially for large blueprints. I have sacrificed some accuracy for the sake of speed by only considering rectangles from cells which appear to form a rectangle's corner (determined using some neighboring-cell heuristics which aren't always correct). As a result of this 'optimization', my current code doesn't actually generate the above solution correctly, but it's close enough.

More broadly, I consider the goal of largest-rectangles-first to be a "good enough" approach for this application. However, I observe that if the goal is instead to find the minimum set (fewest number) of rectangles to completely cover multiple areas, the solution would look like this instead:

. 3 . 5 6 8
1 3 4 5 6 8
. 3 4 5 . 8
2 3 4 5 7 8
. 3 . 5 7 8

This second goal actually represents a more optimal solution to the problem, as fewer rectangles usually means fewer commands sent to Dwarf Fortress. However, this approach strikes me as closer to NP-Hard, based on my limited math knowledge.
Watch the video if you'd like to better understand the overall strategy; I have not addressed other aspects of Quickfort's process, such as finding the shortest cursor-path that plots all rectangles. Possibly there is a solution to this problem that coherently combines these multiple strategies. Help of any form would be appreciated.
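Not an answer to the minimum-cover (NP-hard-looking) variant, but for the greedy largest-rectangle-first pass described above, a corner-anchored scan avoids the corner-detection heuristics. The C# below is a rough illustrative sketch, not Quickfort's actual code; grid values ('.' for unplottable) and names are assumptions.

```csharp
using System;
using System.Collections.Generic;

// Greedy "largest contiguous same-command rectangle first" cover.
public static class RectCover
{
    // Returns (top, left, height, width) rectangles in the order they were plotted.
    public static List<(int Top, int Left, int H, int W)> Cover(char[][] grid)
    {
        int rows = grid.Length, cols = grid[0].Length;
        var covered = new bool[rows, cols];
        var result = new List<(int, int, int, int)>();

        while (true)
        {
            int bestArea = 0, bTop = 0, bLeft = 0, bH = 0, bW = 0;

            // Try every uncovered, plottable cell as a potential top-left corner.
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                {
                    if (grid[r][c] == '.' || covered[r, c]) continue;
                    var (h, w) = LargestRectAt(grid, covered, r, c);
                    if (h * w > bestArea) { bestArea = h * w; bTop = r; bLeft = c; bH = h; bW = w; }
                }

            if (bestArea == 0) break; // everything plottable is covered

            for (int r = bTop; r < bTop + bH; r++)
                for (int c = bLeft; c < bLeft + bW; c++)
                    covered[r, c] = true;

            result.Add((bTop, bLeft, bH, bW));
        }
        return result;
    }

    // Largest same-command rectangle whose top-left corner is (top, left),
    // using only cells that are not yet covered.
    static (int H, int W) LargestRectAt(char[][] grid, bool[,] covered, int top, int left)
    {
        char cmd = grid[top][left];
        int bestH = 0, bestW = 0, maxW = int.MaxValue;

        for (int r = top; r < grid.Length; r++)
        {
            int w = 0;
            while (left + w < grid[r].Length && w < maxW &&
                   grid[r][left + w] == cmd && !covered[r, left + w]) w++;
            if (w == 0) break;

            maxW = Math.Min(maxW, w);   // width is limited by the narrowest row so far
            int h = r - top + 1;
            if (h * maxW > bestH * bestW) { bestH = h; bestW = maxW; }
        }
        return (bestH, bestW);
    }
}
```

Restricting the search to top-left corners is enough because the overall largest rectangle always has some top-left corner, so scanning all cells and taking the global maximum each iteration reproduces the largest-first behavior without the unreliable corner heuristics.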

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
Over the last few weeks I've been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services.

To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content!

For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request - URL, headers and any content posted by the client. The result of what I ended up creating is a semi-generic capture form (screenshot in the original post). In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo provided here:

FiddlerCore Download
FiddlerCore on NuGet
Show me the Code (WebSurge Integration code from GitHub)
Download the WinForms Sample Form
West Wind Web Surge (example implementation in live app)

Note that FiddlerCore is bound by a license for commercial usage - see license.txt in the FiddlerCore distribution for details.

Integrating FiddlerCore

FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

PM> Install-Package FiddlerCore

The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post. But first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

Capturing HTTP Content

Once the library is installed it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on a Form-level object from the user inputs shown on the form, by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

    void Start()
    {
        if (tbIgnoreResources.Checked)
            CaptureConfiguration.IgnoreResources = true;
        else
            CaptureConfiguration.IgnoreResources = false;

        string strProcId = txtProcessId.Text;
        if (strProcId.Contains('-'))
            strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
        strProcId = strProcId.Trim();

        int procId = 0;
        if (!string.IsNullOrEmpty(strProcId))
        {
            if (!int.TryParse(strProcId, out procId))
                procId = 0;
        }
        CaptureConfiguration.ProcessId = procId;
        CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

        FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
        FiddlerApplication.Startup(8888, true, true, true);
    }

The key lines for FiddlerCore are just the last two lines of code: the event hookup and the Startup() method call. Here I only hook up the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle you can also tie into – BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data, and I actually have several options for where to capture it. AfterSessionComplete is the last event that fires in the request sequence, and it’s the most common choice for capturing all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and the response data, so it will be the most common place to hook into if you’re capturing content.

The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers, and it looks something like this:

    private void FiddlerApplication_AfterSessionComplete(Session sess)
    {
        // Ignore HTTPS connect requests
        if (sess.RequestMethod == "CONNECT")
            return;

        if (CaptureConfiguration.ProcessId > 0)
        {
            if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
                return;
        }

        if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
        {
            if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
                return;
        }

        if (CaptureConfiguration.IgnoreResources)
        {
            string url = sess.fullUrl.ToLower();

            var extensions = CaptureConfiguration.ExtensionFilterExclusions;
            foreach (var ext in extensions)
            {
                if (url.Contains(ext))
                    return;
            }

            var filters = CaptureConfiguration.UrlFilterExclusions;
            foreach (var urlFilter in filters)
            {
                if (url.Contains(urlFilter))
                    return;
            }
        }

        if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
            return;

        string headers = sess.oRequest.headers.ToString();
        var reqBody = sess.GetRequestBodyAsString();

        // if you wanted to capture the response
        //string respHeaders = sess.oResponse.headers.ToString();
        //var respBody = sess.GetResponseBodyAsString();

        // replace the HTTP line to inject the full URL
        string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
        int at = headers.IndexOf("\r\n");
        if (at < 0)
            return;
        headers = firstLine + "\r\n" + headers.Substring(at + 1);

        string output = headers + "\r\n" +
                        (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                        Separator + "\r\n\r\n";

        BeginInvoke(new Action<string>((text) =>
        {
            txtCapture.AppendText(text);
            UpdateButtonStatus();
        }), output);
    }

The code starts by filtering out some requests based on the capture options I set before the capture is started. These options/filters are applied as requests come in. This is very useful for narrowing down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature.

AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI it has to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form.

As each request is processed, its headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports: raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done, the user can either copy the raw HTTP session to the clipboard or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data.

While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

Stopping the FiddlerCore Proxy

Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

    void Stop()
    {
        FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

        if (FiddlerApplication.IsStarted())
            FiddlerApplication.Shutdown();
    }

As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
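To see the moving parts without the WinForms plumbing, here is a minimal, self-contained console sketch of my own (not code from the post) that uses only the FiddlerApplication calls shown above – hook the event, start the proxy, and shut it down on exit:

    using System;
    using Fiddler;   // FiddlerCore NuGet package

    class MinimalCapture
    {
        static void Main()
        {
            // Print one line per completed request
            FiddlerApplication.AfterSessionComplete += sess =>
            {
                if (sess.RequestMethod == "CONNECT")   // skip HTTPS tunnel setup requests
                    return;
                Console.WriteLine("{0} {1}", sess.RequestMethod, sess.fullUrl);
            };

            // Same Startup() arguments as the post's Start() method
            FiddlerApplication.Startup(8888, true, true, true);

            Console.WriteLine("Capturing on port 8888 - press Enter to stop");
            Console.ReadLine();

            if (FiddlerApplication.IsStarted())
                FiddlerApplication.Shutdown();
        }
    }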
Adding Fiddler Certificates with FiddlerCore

One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler installed and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately, FiddlerCore includes the tools to register the Fiddler certificate directly from your own application.

Why does Fiddler need a Certificate in the first Place?

Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data to the original client. Fiddler registers itself as the system proxy using the WinInet Windows settings – the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, which is why most clients automatically work once Fiddler – or FiddlerCore/WebSurge – is running.

For plain HTTP requests this just works: Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (typically 80 for HTTP and 443 for SSL, but it could be any port). For SSL, however, this is not quite as simple. Fiddler can easily act as an HTTPS/SSL client when talking to the remote server, but when it passes the traffic back to the original client it also has to act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If no custom certificate is configured for Fiddler, the SSL request fails with a certificate validation error.

The key to making this work is that a custom certificate the HTTPS client trusts has to be installed on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler, you can install a local certificate into the Windows certificate store from the Options menu. This operation does several things:

- It installs the Fiddler Root Certificate
- It sets trust to this Root Certificate
- A new client certificate is generated for each HTTPS site monitored

Certificate Installation with FiddlerCore

You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
CertMaker is straightforward to use and makes it easy to create a couple of simple helpers that install and uninstall a Fiddler root certificate:

    public static bool InstallCertificate()
    {
        if (!CertMaker.rootCertExists())
        {
            if (!CertMaker.createRootCert())
                return false;

            if (!CertMaker.trustRootCert())
                return false;
        }

        return true;
    }

    public static bool UninstallCertificate()
    {
        if (CertMaker.rootCertExists())
        {
            if (!CertMaker.removeFiddlerGeneratedCerts(true))
                return false;
        }

        return true;
    }

InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn’t, goes ahead and creates a new one. Creating the certificate is a two-step process: first the actual certificate is created, and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up, since a cert created without trust isn’t going to be of much value, but there are two distinct steps.

When you trigger the trustRootCert() method, a message box pops up on the desktop that lets you know you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It’s quite safe to use this generated root certificate because it has been generated specifically for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. In other words, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details).

Once the root certificate has been installed, FiddlerCore/Fiddler creates new certificates for each site that is connected to with HTTPS, so you can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the Remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well.

When to check for an installed Certificate

Note that the check to see whether the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that takes a few seconds even on a fast machine. Further, the trust operation pops up a message box, so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed.

Personally I like to make certificate installation explicit – just like Fiddler does – so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. This code calls the InstallCertificate() and UninstallCertificate() functions respectively; the experience is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
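If you prefer the automatic route mentioned above rather than an explicit menu option, a small sketch of my own (not code from the post, and reusing the InstallCertificate() helper and Start() pieces shown earlier) could guard the proxy startup like this:

    void StartCapture()
    {
        // InstallCertificate() (shown above) is a no-op when the root certificate
        // already exists, so the trust prompt only appears on the very first HTTPS capture.
        if (!InstallCertificate())
            throw new InvalidOperationException("Unable to install the Fiddler root certificate.");

        FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
        FiddlerApplication.Startup(8888, true, true, true);
    }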
Once the cert is installed you can then capture SSL requests. There’s a gotcha, however…

Gotcha: FiddlerCore Certificates don’t stick by Default

When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately afterwards could capture HTTPS requests. But when I exited the application, came back in, and tried the same HTTPS capture again, it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate, a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

CertMaker and BCMakeCert create non-sticky Certificates

It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. These assemblies provide support for running Fiddler on non-Windows platforms under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption.

The bottom line is that certificates created with the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, because they are not cached the way they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache them in Fiddler’s internal preferences. A cache-aware version of InstallCertificate() looks something like this:

    public static bool InstallCertificate()
    {
        if (!CertMaker.rootCertExists())
        {
            if (!CertMaker.createRootCert())
                return false;

            if (!CertMaker.trustRootCert())
                return false;

            App.Configuration.UrlCapture.Cert =
                FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
            App.Configuration.UrlCapture.Key =
                FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
        }

        return true;
    }

    public static bool UninstallCertificate()
    {
        if (CertMaker.rootCertExists())
        {
            if (!CertMaker.removeFiddlerGeneratedCerts(true))
                return false;
        }

        App.Configuration.UrlCapture.Cert = null;
        App.Configuration.UrlCapture.Key = null;

        return true;
    }

In this code I store the Fiddler certificate and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store, which is set after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used, you also have to load them into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form’s constructor:

    public FiddlerCapture(StressTestForm form)
    {
        InitializeComponent();

        CaptureConfiguration = App.Configuration.UrlCapture;
        MainForm = form;

        if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
        {
            FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key",
                App.Configuration.UrlCapture.Key);
            FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert",
                App.Configuration.UrlCapture.Cert);
        }
    }

This is kind of a drag to do and it isn’t documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

MakeCert provides sticky Certificates and the same functionality as Fiddler

There’s actually an easier way. If you want to skip the Fiddler preference configuration code above, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It’s easier to integrate, and as long as you run on Windows and don’t need to support iOS or Android devices it is simply easier to deal with.

To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead, copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None and Copy to Output Directory to “Copy if newer”. With this setup the CertMaker.dll reference is gone from the project, even though the CertMaker.dll and BCMakeCert.dll files may still sit on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it will be used for certificate creation (at least on Windows).

Summary

FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily.

The only downside is FiddlerCore’s documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to glean just by using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol.
The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference, as it’s hard to find things in the book, and because it isn’t available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that has gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

Resources

- FiddlerCore Download
- FiddlerCore NuGet
- Fiddler Capture Sample Form
- Fiddler Capture Form in West Wind WebSurge (GitHub)
- Eric Lawrence’s Fiddler Book

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in .NET  HTTP

    Read the article

< Previous Page | 39 40 41 42 43 44  | Next Page >