Search Results

Search found 2135 results on 86 pages for 'fire hazard'.


  • Back to Basics: When does a .NET Assembly Dependency get loaded

    - by Rick Strahl
When we work on typical day to day applications, it's easy to forget some of the core features of the .NET framework. For me personally it's been a long time since I've learned about some of the underlying CLR system level services, even though I rely on them on a daily basis. I often think only about high level application constructs and/or high level framework functionality, but the low level stuff is often just taken for granted. Over the last week at DevConnections I had all sorts of low level discussions with other developers about the inner workings of this or that technology (especially in light of my Low Level ASP.NET Architecture talk and the Razor Hosting talk). One topic that came up a couple of times and ended up a point of confusion even amongst some seasoned developers (including some folks from Microsoft <snicker>) is when assemblies actually load into a .NET process.

There are a number of different ways that assemblies are loaded in .NET. When you create a typical project, assemblies usually come from:

  • The assembly reference list of the top level 'executable' project
  • The assembly references of referenced projects
  • Dynamic loading at runtime via AppDomain/Reflection loading

In addition, .NET automatically loads mscorlib (most of the System namespace) as part of the boot process that hosts the .NET runtime in EXE apps, or as part of some other kind of runtime hosting environment (runtime hosting in servers like IIS, SQL Server or COM Interop). In hosting environments the runtime host may also pre-load a bunch of assemblies on its own (for example the ASP.NET host requires all sorts of assemblies just to run itself, before ever routing into your user specific code).

Assembly Loading

The most obvious source of loaded assemblies is the top level application's assembly reference list. You can add assembly references to a top level application and those assembly references are then available to the application. In a nutshell, referenced assemblies are not immediately loaded - they are loaded on the fly as needed. So regardless of whether an assembly reference sits in a top level project or in a dependent assembly, assemblies typically load on an as-needed basis, unless explicitly loaded by user code. The same is true of dependent assemblies.

To check this out I ran a simple test: I have a utility assembly Westwind.Utilities which is a general purpose library that can work in any type of project. Due to a couple of small requirements for encoding and a logging piece that allows logging Web content (a dependency on HttpContext.Current), this utility library has a dependency on System.Web. Now System.Web is a pretty large assembly and generally you'd want to avoid adding it to a non-Web project if it can be helped. So I created a Console Application that loads my utility library. The top level Console app has a reference to Westwind.Utilities and System.Data (beyond the core .NET libs). The Westwind.Utilities project on the other hand has quite a few dependencies including System.Web. I then add a main program that accesses only a simple utility method in the Westwind.Utilities library that doesn't require any of the classes that access System.Web:

    static void Main(string[] args)
    {
        Console.WriteLine(StringUtils.NewStringId());
        Console.ReadLine();
    }

StringUtils.NewStringId() calls into Westwind.Utilities, but it doesn't rely on System.Web. Any guesses what the assembly list looks like when I stop the code on the ReadLine() command?
I'll wait here while you think about it… … …

So, when I stop on ReadLine() and then fire up Process Explorer and check the assembly list, I see that .NET has not actually loaded any of the dependencies of the Westwind.Utilities assembly. Also not loaded is the top level System.Data reference, even though it's in the dependent assembly list of the top level project. Since this particular function I called only uses core System functionality (contained in mscorlib), there's in fact nothing else loaded beyond the main application and my Westwind.Utilities assembly that contains the method accessed. None of the dependencies of Westwind.Utilities were loaded.

If you were to open the assembly in a disassembler like Reflector or ILSpy, you would however see all the compiled in dependencies. The referenced assemblies are in the dependency list and they are loadable, but they are not immediately loaded by the application. In other words, the C# compiler and the .NET loader are smart enough to figure out the dependencies based on the code that is actually referenced from your application, including dependencies cascading down from your top level application into the referenced assemblies. In the example above the usage requirement is pretty obvious since I'm only calling a single static method and then exiting the app, but in more complex applications these dependency relationships become very complicated - however it's all taken care of by the runtime figuring out what types and members are actually referenced and loading only those assemblies that are in fact referenced in your code or required by any of your dependencies. The good news here is that if you are referencing an assembly that has a dependency on something like System.Web in a few places that are not actually accessed by any of your code, or by any dependent assembly code that you are calling, that assembly is never loaded into memory!

Some Hosting Environments pre-load Assemblies

The load behavior can vary however. In Console and desktop applications we have full control over assembly loading, so we see the core CLR behavior. Other environments like ASP.NET, however, will pre-load referenced assemblies explicitly as part of the startup process - primarily to minimize load conflicts. Specifically, ASP.NET pre-loads all assemblies referenced in the assembly list and the /bin folder. So in Web applications it definitely pays to minimize your top level assemblies if they are not used.

Understanding when Assemblies Load

To clarify what I described in the first example and actually see it happen, let's look at a couple of other scenarios. To see assemblies loading at runtime in real time, let's create a utility function to print out loaded assemblies to the console:

    public static void PrintAssemblies()
    {
        var assemblies = AppDomain.CurrentDomain.GetAssemblies();
        foreach (var assembly in assemblies)
        {
            Console.WriteLine(assembly.GetName());
        }
    }

Now let's look at the first scenario, where I have a class method that internally uses System.Web.
In the first scenario let's add a method to my main program like this:

    static void Main(string[] args)
    {
        Console.WriteLine(StringUtils.NewStringId());
        Console.ReadLine();
        PrintAssemblies();
    }

    public static void WebLogEntry()
    {
        var entry = new WebLogEntry();
        entry.UpdateFromRequest();
        Console.WriteLine(entry.QueryString);
    }

UpdateFromRequest() internally accesses HttpContext.Current to read some information off the ASP.NET Request object, so it clearly needs a reference to System.Web to work. In this first example, the method that holds the calling code is never called, but exists as a static method that can potentially be called externally at some point. What do you think will happen here with the assembly loading? Will System.Web load in this example?

No - it doesn't. Because the WebLogEntry() method is never called by the mainline application (or anywhere else), System.Web is not loaded. .NET dynamically loads assemblies as the code that needs them is called. No code references the WebLogEntry() method and so System.Web is never loaded.

Next, let's add the call to this method, which should trigger System.Web to be loaded because a dependency now exists. Let's change the code to:

    static void Main(string[] args)
    {
        Console.WriteLine(StringUtils.NewStringId());

        Console.WriteLine("--- Before:");
        PrintAssemblies();

        WebLogEntry();

        Console.WriteLine("--- After:");
        PrintAssemblies();

        Console.ReadLine();
    }

    public static void WebLogEntry()
    {
        var entry = new WebLogEntry();
        entry.UpdateFromRequest();
        Console.WriteLine(entry.QueryString);
    }

Looking at the code now, when do you think System.Web will be loaded? Will the before list include it? Yup, System.Web gets loaded, but only after it's actually referenced. In fact, right up until the call to UpdateFromRequest(), System.Web is not loaded - it only loads when the method is actually called and the executing code requires the reference.

Moral of the Story

So what have we learned - or maybe remembered again?

  • Dependent assembly references are not pre-loaded when an application starts (by default)
  • Dependent assemblies that are not referenced by executing code are never loaded
  • Dependent assemblies are just-in-time loaded when first referenced in code

All of this is nothing new - .NET has always worked like this. But it's good to have a refresher now and then and go through the exercise of seeing it work in action. It's not one of those things we think about every day, and as I found out last week, I couldn't remember exactly how it worked since it's been so long since I learned about it. And apparently I'm not the only one, as several other people I had discussions with in relation to loaded assemblies also didn't recall exactly what should happen, or assumed incorrectly that just having a reference automatically loads the assembly.

The moral of the story for me is: trying at all costs to eliminate an assembly reference from a component is not quite as important as it's often made out to be. For example, the Westwind.Utilities module described above has a logging component, including a Web specific logging entry that supports pulling information from the active HTTP Context. Adding that feature requires a reference to System.Web. Should I worry about this in the scope of this library? Probably not, because if I don't use that one class of nearly a hundred, System.Web never gets pulled into the parent process.
IOW, System.Web only loads when I use that specific feature, and if I am, well, I clearly have to be running in a Web environment anyway for it to work. The alternative would be considerably uglier: pulling the WebLogEntry class out into another assembly and breaking up the logging code. In this case - definitely not worth it.

So, .NET definitely goes through some pretty nifty optimizations to ensure that it loads only what it needs, and in most cases you can just rely on .NET to do the right thing. Sometimes though assembly loading can go wrong (especially when signed and versioned local assemblies are involved), but that's a subject for a whole other post…

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, CSharp
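If you want to watch this just-in-time loading happen live without Process Explorer, one option is to hook the AppDomain.AssemblyLoad event. The following is a minimal sketch (not from the post), assuming the Westwind.Utilities reference from the example above:

    // Sketch: observe assemblies as the runtime loads them just in time.
    // Assumes a project reference to Westwind.Utilities, as in the post.
    using System;

    class AssemblyLoadWatcher
    {
        static void Main()
        {
            // Fires once for every assembly the runtime loads from here on.
            AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
                Console.WriteLine("Loaded: " + args.LoadedAssembly.FullName);

            Console.WriteLine("Before the utility call...");

            // The load notification for Westwind.Utilities appears only
            // when this statement actually executes.
            Console.WriteLine(Westwind.Utilities.StringUtils.NewStringId());

            Console.ReadLine();
        }
    }

Run under a debugger or from the console, this prints a "Loaded:" line at the exact moment each dependency is first needed, which makes the just-in-time behavior described above easy to see.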

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations, so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases.

There is a reason it’s called Default

The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms: this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first.

Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later.

Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved, since most targets support bulk processing of the events: they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full.

I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start, and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases…

What event just fired?! How about now?! Now?!

If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule.
This option only makes sense for the asynchronous targets, since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant.

Avoid forgotten events by increasing your memory

Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, the engine will drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.

Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query, it won’t make a huge difference to miss a couple of executions. Use your best judgment.

If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected.

Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss, and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section.

Some events are too big to fail

A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size field. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
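Putting the DMV checks from the last two sections together, a sketch of the query (mine, not from the post; the session name is a placeholder) might look like this:

    -- Sketch: check an event session for dropped events.
    -- Column names per the sys.dm_xe_sessions documentation.
    SELECT s.name,
           s.dropped_event_count,          -- events lost because buffers were full
           s.largest_event_dropped_size,   -- compare to max_memory / 3
           s.total_regular_buffers,
           s.regular_buffer_size
    FROM sys.dm_xe_sessions AS s
    WHERE s.name = N'MySession';           -- placeholder session name

Note that sys.dm_xe_sessions only shows sessions that are currently started; a stopped session won't appear in the results.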
Partitioning: moving your events to a sub-division

Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…

You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

  • None: 3 buffers (fixed)
  • Node: 3 * number_of_nodes
  • CPU: 2.5 * number_of_cpus

Here are some examples of what this means for different Node/CPU counts:

  Configuration       None        Node        CPU
  2 CPUs, 1 Node      3 buffers   3 buffers   5 buffers
  6 CPUs, 2 Nodes     3 buffers   6 buffers   15 buffers
  40 CPUs, 5 Nodes    3 buffers   15 buffers  100 buffers

Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while, since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:

  • Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
  • Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
  • Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff.

Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category.

Hey buddy – I think you forgot something

OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis.

- Mike
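To pull the options from this post together, here is a sketch of a CREATE EVENT SESSION statement that overrides several of the defaults discussed above. The event, predicate, target, and names are hypothetical choices of mine for illustration, not recommendations from the post:

    -- Sketch only: a hypothetical session exercising the options above.
    CREATE EVENT SESSION [MySession] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (WHERE (duration > 1000000))          -- predicate: collect less, drop less
    ADD TARGET package0.asynchronous_file_target
        (SET filename = N'C:\xe\MySession.xel')
    WITH (
        MAX_MEMORY = 8 MB,                    -- bigger event buffers (default 4 MB)
        MAX_DISPATCH_LATENCY = 5 SECONDS,     -- process buffers sooner (default 30)
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, -- keep the default
        MEMORY_PARTITION_MODE = NONE,         -- consider PER_NODE/PER_CPU if contended
        STARTUP_STATE = ON                    -- start automatically with the server
    );

Since the target here is asynchronous, MAX_DISPATCH_LATENCY applies; the remaining options follow the guidance above (keep single event loss, raise MAX_MEMORY before resorting to partitioning).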

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
Introduction

This will be the first of a number of posts on ASP.NET extensibility. At this moment I don’t know exactly how many there will be and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely known; although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won’t go into details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction.

Canonical

These are the most widely known and used providers, coming from ASP.NET 1; chances are, you have used them already. They have good support for being invoked client side, either from a .NET application or from JavaScript, and lots of server-side controls use them, such as the Login control for example.

Membership

The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses the Active Directory. The base class is MembershipProvider and new providers are registered on the membership section of the Web.config file, as well as parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine.

    <membership defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </membership>

Role

The Role provider assigns roles to authenticated users. The base class is RoleProvider and there are three out of the box implementations: XML-based, SQL Server and Windows-based. Also registered on Web.config, through the roleManager section, where you can also say if your roles should be cached on a cookie. If you want your roles to come from a different place, implement a custom provider.

    <roleManager defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </roleManager>

Profile

The Profile provider allows defining a set of properties that will be tied to and made available for authenticated users, or even anonymous ones, which must be tracked by using anonymous identification. The base class is ProfileProvider and the only included implementation stores these settings in a SQL Server database. Configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations.

    <profile defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </profile>

Basic

OK, I didn’t know what to call these, so Basic is probably as good a name as anything else. These are not supported client-side (it doesn’t even make sense).

Session

The Session provider allows storing data tied to the current “session”, which is normally created when a user first accesses the site, even before being authenticated, and remains throughout the visit.
The base class is SessionStateStoreProviderBase and the built-in implementation is capable of storing data in one of three locations:

  • In the process memory (the default; not suitable for web farms or increased reliability);
  • A SQL Server database (best for reliability and clustering);
  • The ASP.NET State Service, which is a Windows Service that is installed with the .NET Framework (OK for clustering).

The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations – think, for example, of a distributed cache.

    <sessionState customProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </sessionState>

Resource

A not so known provider, it allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from some place else, such as a database. The base class is ResourceProviderFactory and there’s only one internal implementation, which uses these RESX files. Configuration is through the globalization section.

    <globalization resourceProviderFactoryType="MyClass, MyAssembly" />

Health Monitoring

Health Monitoring is also probably not so well known, and actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires “events” at specific times and when specific things happen, such as when a user logs in or when an exception is raised. These are not user interface events; you can create your own and fire them, and while nothing visible will happen, the Health Monitoring provider will detect it. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out of the box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 Trace infrastructure or the debugger Trace. Its configuration is achieved through the healthMonitoring section and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time.

    <healthMonitoring>
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </healthMonitoring>

Sitemap

The Sitemap provider allows defining the site’s navigation structure and the associated required permissions for each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it, by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply your own source for the site’s structure, which may even be dynamic. Its configuration must be done through the siteMap section.

    <siteMap defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </siteMap>

Web Part Personalization

Web Parts are better known by SharePoint users, but since ASP.NET 2.0 they are included in the core Framework.
Web Parts are server-side controls that offer certain possibilities of configuration by the clients visiting the page where they are located. The infrastructure handles this configuration per user or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider and the only included implementation stores settings on SQL Server. Add new providers through the personalization section.

    <webParts>
      <personalization defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </personalization>
    </webParts>

Build

The Build provider is responsible for compiling whatever files are present on your web folder. There’s a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML Schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file, so that you have strong typing at development time. Configuration goes on the buildProviders section and it is per extension.

    <buildProviders>
      <add extension=".ext" type="MyClass, MyAssembly" />
    </buildProviders>

New in ASP.NET 4

Not exactly new, since these have existed since 2010, but in ASP.NET terms, still new.

Output Cache

The Output Cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only included implementation is private. Configuration goes on the outputCache section, and on each page and web user control you can choose the provider you want to use.

    <caching>
      <outputCache defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </outputCache>
    </caching>

Request Validation

A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is the introduction of extensible request validation, by means of a Request Validation provider. This means we are no longer limited to either enabling or disabling request validation for all pages or for a specific page; we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes on the httpRuntime section.

    <httpRuntime requestValidationType="MyClass, MyAssembly" />

Browser Capabilities

The Browser Capabilities provider is new in ASP.NET 4, although the concept exists since ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it on the browserCaps section.

    <browserCaps provider="MyClass, MyAssembly" />

Encoder

The Encoder provider is responsible for encoding every string that is sent to the browser on a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder.
Another implementation takes care of Anti Cross Site Scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes on the httpRuntime section.

    <httpRuntime encoderType="MyClass, MyAssembly" />

Conclusion

That’s about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!
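To make the extension pattern concrete, here is a minimal sketch of a custom Role provider (my illustration, not from the post; the hard-coded role data is hypothetical) that would plug into the roleManager section shown above:

    // Sketch of a custom Role provider (illustrative only). The role
    // lookup is hard-coded; a real provider would query its own store.
    using System;
    using System.Linq;
    using System.Web.Security;

    public class MyRoleProvider : RoleProvider
    {
        public override string ApplicationName { get; set; }

        public override string[] GetRolesForUser(string username)
        {
            // Hypothetical source - replace with a database/cache lookup.
            return username == "admin" ? new[] { "Administrators" } : new string[0];
        }

        public override bool IsUserInRole(string username, string roleName)
        {
            return GetRolesForUser(username)
                .Contains(roleName, StringComparer.OrdinalIgnoreCase);
        }

        // RoleProvider is abstract, so every remaining member must also be
        // overridden; throw for the operations this provider doesn't support.
        public override void CreateRole(string roleName) { throw new NotSupportedException(); }
        public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
        public override bool RoleExists(string roleName) { throw new NotSupportedException(); }
        public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
        public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
        public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
        public override string[] GetAllRoles() { throw new NotSupportedException(); }
        public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
    }

Registered via <add name="MyProvider" type="MyRoleProvider, MyAssembly" /> in the roleManager section, calls like Roles.IsUserInRole or the Login-related controls would then route through this class.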

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
Out of the box, ASP.NET Web API does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow browser-based JavaScript client applications to bypass cross-site scripting limitations and serve data from a server other than the one that served the current page. AJAX in Web applications uses the XmlHttp object, which by default doesn't allow access to remote domains. There are a number of ways around this limitation, and <script> tag loading with JSONP is one of the easiest and semi-official ways to do it.

JSONP works by combining JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQuery it's extremely easy to access JSONP content. Imagine that you have a URL like this:

    http://RemoteDomain/aspnetWebApi/albums

which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if the server can serve up JSONP, this data can be accessed cross domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP:

    function getAlbums() {
        $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?", null,
            function (albums) {
                alert(albums.length);
            });
    }

The resulting callback is the same as if the call was to a local server when the data is returned. jQuery deserializes the data and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP.

What does JSONP look like?

JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this:

    jQuery17103401925975181569_1333408916499(
        [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}]
    )

The way JSONP works is that the client (jQuery in this case) sends off the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens. JSONP works by executing the code that gets returned from the JSONP call.

JSONP and ASP.NET Web API

As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug in a custom formatter that provides this functionality. The following code is based on Christian Weyer's example, but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds.
Here's the code:

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Formatting;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;
    using System.Web;

    namespace Westwind.Web.WebApi
    {
        /// <summary>
        /// Handles JsonP requests when requests are fired with
        /// text/javascript or application/json and contain
        /// a callback= (configurable) query string parameter
        ///
        /// Based on Christian Weyer's implementation
        /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs
        /// </summary>
        public class JsonpFormatter : JsonMediaTypeFormatter
        {
            public JsonpFormatter()
            {
                SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json"));
                SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript"));
                //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json"));

                JsonpParameterName = "callback";
            }

            /// <summary>
            /// Name of the query string parameter to look for
            /// the jsonp function name
            /// </summary>
            public string JsonpParameterName { get; set; }

            /// <summary>
            /// Captured name of the Jsonp function that the JSON call
            /// is wrapped in. Set in GetPerRequestFormatterInstance
            /// </summary>
            private string JsonpCallbackFunction;

            public override bool CanWriteType(Type type)
            {
                return true;
            }

            /// <summary>
            /// Override this method to capture the Request object,
            /// look for the query string parameter, and
            /// create a new instance of this formatter.
            ///
            /// This is the only place in a formatter where the
            /// Request object is available.
            /// </summary>
            public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type,
                HttpRequestMessage request, MediaTypeHeaderValue mediaType)
            {
                var formatter = new JsonpFormatter()
                {
                    JsonpCallbackFunction = GetJsonCallbackFunction(request)
                };
                return formatter;
            }

            /// <summary>
            /// Override to wrap the existing JSON result with the
            /// JSONP function call
            /// </summary>
            public override Task WriteToStreamAsync(Type type, object value, Stream stream,
                HttpContentHeaders contentHeaders, TransportContext transportContext)
            {
                if (!string.IsNullOrEmpty(JsonpCallbackFunction))
                {
                    return Task.Factory.StartNew(() =>
                    {
                        var writer = new StreamWriter(stream);
                        writer.Write(JsonpCallbackFunction + "(");
                        writer.Flush();

                        base.WriteToStreamAsync(type, value, stream,
                                                contentHeaders, transportContext).Wait();

                        writer.Write(")");
                        writer.Flush();
                    });
                }
                else
                {
                    return base.WriteToStreamAsync(type, value, stream,
                                                   contentHeaders, transportContext);
                }
            }

            /// <summary>
            /// Retrieves the Jsonp callback function
            /// from the query string
            /// </summary>
            private string GetJsonCallbackFunction(HttpRequestMessage request)
            {
                if (request.Method != HttpMethod.Get)
                    return null;

                var query = HttpUtility.ParseQueryString(request.RequestUri.Query);
                var queryVal = query[this.JsonpParameterName];

                if (string.IsNullOrEmpty(queryVal))
                    return null;

                return queryVal;
            }
        }
    }

Note again that this code will not work with the Beta bits of Web API - it works only with post-beta bits from CodePlex, and hopefully this will continue to work until RTM :-) This code is a bit different from Christian's original code as the API has changed.
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects, as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use the request to pick up the values you need and store them on the formatter. Note that I have to create a new instance of the formatter since I'm storing request specific state on the instance (whether the callback= query string is present), so I return a new instance of this formatter. Other than that the code should be straightforward: it basically writes out the function pre- and post-amble and defers to the base formatter to produce the JSON that the function call wraps. The code uses the async APIs to write this data out (seeing those all over the place will take some getting used to for me).

Hooking up the JsonpFormatter

Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.

    protected void Application_Start(object sender, EventArgs e)
    {
        // Verb Routing
        RouteTable.Routes.MapHttpRoute(
            name: "AlbumsVerbs",
            routeTemplate: "albums/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumApi"
            }
        );

        GlobalConfiguration
            .Configuration
            .Formatters
            .Insert(0, new Westwind.Web.WebApi.JsonpFormatter());
    }

That's all it takes. Note that I added the formatter at the top of the list of formatters rather than at the end - this is required. The JSONP formatter needs to fire before any other JSON formatter, since it relies on the JSON formatter to encode the actual JSON data. If you reverse the order, the JSONP output never shows up. So, in general, when adding new formatters also try to be aware of the order of the formatters as they are added.

Resources

JsonpFormatter Code on GitHub

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api
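For context, the "albums/{title}" route above maps to an AlbumApi controller that the post doesn't show. A minimal sketch of what such a controller might look like (hypothetical; the Album type and its sample data are my stand-ins, with field names borrowed from the JSONP output shown earlier):

    // Hypothetical controller backing the "albums/{title}" route above.
    // The Album class and sample data are illustrative stand-ins.
    using System.Collections.Generic;
    using System.Web.Http;

    public class Album
    {
        public string Id { get; set; }
        public string AlbumName { get; set; }
    }

    public class AlbumApiController : ApiController
    {
        // GET /albums - returns JSON by default; returns JSONP when the
        // request carries a callback= query string parameter and the
        // JsonpFormatter is registered first in the formatter collection.
        public IEnumerable<Album> GetAlbums()
        {
            return new[]
            {
                new Album { Id = "34043957", AlbumName = "Dirty Deeds Done Dirt Cheap" },
                new Album { Id = "34043958", AlbumName = "Powerage" }
            };
        }
    }

With this in place, the getAlbums() jQuery snippet from earlier in the post exercises the whole pipeline end to end.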

    Read the article

  • September 2012 Release of the Ajax Control Toolkit

    - by Stephen.Walther
I’m excited to announce the September 2012 release of the Ajax Control Toolkit! This is the first release of the Ajax Control Toolkit that supports the .NET 4.5 framework. We also continue to support ASP.NET 3.5 and ASP.NET 4.0. With this release, we’ve made several important bug fixes. The Superexpert team focused on fixing the highest voted issues associated with the CascadingDropDown control. I’ve created a list of these bug fixes later in this blog post.

You can download the latest release of the Ajax Control Toolkit by visiting the following page at CodePlex: http://AjaxControlToolkit.CodePlex.com

Alternatively, you can install the latest version of the Ajax Control Toolkit using NuGet by firing off the following command from the Package Manager Console:

    Install-Package AjaxControlToolkit

Using the Ajax Control Toolkit with ASP.NET 4.5

Let me walk through the steps for using the Ajax Control Toolkit with ASP.NET 4.5. First, I’ll create a new ASP.NET 4.5 website with Visual Studio 2012. I’ll create the new website with the ASP.NET Web Forms Application template. When you create a new ASP.NET 4.5 site with the ASP.NET Web Forms Application template, you get a starter website. If you run the site, then you get a page with default content.

Let me show you how you can add the Ajax Control Toolkit Calendar control to the homepage of this starter site. The first step is to use NuGet to install the Ajax Control Toolkit. Right-click the References folder in the Solution Explorer window and select the menu option Manage NuGet Packages. In the Manage NuGet Packages dialog, use the search box to search for the Ajax Control Toolkit (enter “AjaxControlToolkit”). After you find it, click the Install button to add the Ajax Control Toolkit to your project. That’s all you have to do to install the Ajax Control Toolkit!

Now we are ready to start using the Ajax Control Toolkit controls. Open the Default.aspx page so we can modify the contents of the page. Erase everything contained in the Content control with the ID of BodyContent. After erasing the content, declare the following two controls:

    <asp:TextBox ID="vacationDate" runat="server" />
    <ajaxToolkit:CalendarExtender TargetControlID="vacationDate" runat="server" />

The first control is a standard ASP.NET TextBox control and the second control is an Ajax Control Toolkit Calendar control. You should get IntelliSense as you type out the Ajax Control Toolkit Calendar control. If you don’t, then close and re-open the Default.aspx page.

Now, let’s run our app. Hit the F5 button or select Debug, Start Debugging from the Visual Studio menu. You will get the error message “MsAjaxBundle is not a valid script name”. Don’t despair! We need to update the Master Page so it uses the ToolkitScriptManager instead of the default ScriptManager. Open the Site.Master file and find where the ScriptManager is declared.
The ScriptManager should look like this:

    <asp:ScriptManager runat="server">
        <Scripts>
            <%--Framework Scripts--%>
            <asp:ScriptReference Name="MsAjaxBundle" />
            <asp:ScriptReference Name="jquery" />
            <asp:ScriptReference Name="jquery.ui.combined" />
            <asp:ScriptReference Name="WebForms.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebForms.js" />
            <asp:ScriptReference Name="WebUIValidation.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebUIValidation.js" />
            <asp:ScriptReference Name="MenuStandards.js" Assembly="System.Web" Path="~/Scripts/WebForms/MenuStandards.js" />
            <asp:ScriptReference Name="GridView.js" Assembly="System.Web" Path="~/Scripts/WebForms/GridView.js" />
            <asp:ScriptReference Name="DetailsView.js" Assembly="System.Web" Path="~/Scripts/WebForms/DetailsView.js" />
            <asp:ScriptReference Name="TreeView.js" Assembly="System.Web" Path="~/Scripts/WebForms/TreeView.js" />
            <asp:ScriptReference Name="WebParts.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebParts.js" />
            <asp:ScriptReference Name="Focus.js" Assembly="System.Web" Path="~/Scripts/WebForms/Focus.js" />
            <asp:ScriptReference Name="WebFormsBundle" />
            <%--Site Scripts--%>
        </Scripts>
    </asp:ScriptManager>

We need to make three changes to the ScriptManager:

1) We need to replace the asp:ScriptManager with the ajaxToolkit:ToolkitScriptManager
2) We need to remove the MsAjaxBundle bundle from the ScriptReferences
3) We need to remove the Assembly="System.Web" attributes from the ScriptReferences

After you make these three changes, the ToolkitScriptManager should look like this:

    <ajaxToolkit:ToolkitScriptManager runat="server">
        <Scripts>
            <%--Framework Scripts--%>
            <asp:ScriptReference Name="jquery" />
            <asp:ScriptReference Name="jquery.ui.combined" />
            <asp:ScriptReference Name="WebForms.js" Path="~/Scripts/WebForms/WebForms.js" />
            <asp:ScriptReference Name="WebUIValidation.js" Path="~/Scripts/WebForms/WebUIValidation.js" />
            <asp:ScriptReference Name="MenuStandards.js" Path="~/Scripts/WebForms/MenuStandards.js" />
            <asp:ScriptReference Name="GridView.js" Path="~/Scripts/WebForms/GridView.js" />
            <asp:ScriptReference Name="DetailsView.js" Path="~/Scripts/WebForms/DetailsView.js" />
            <asp:ScriptReference Name="TreeView.js" Path="~/Scripts/WebForms/TreeView.js" />
            <asp:ScriptReference Name="WebParts.js" Path="~/Scripts/WebForms/WebParts.js" />
            <asp:ScriptReference Name="Focus.js" Path="~/Scripts/WebForms/Focus.js" />
            <asp:ScriptReference Name="WebFormsBundle" />
            <%--Site Scripts--%>
        </Scripts>
    </ajaxToolkit:ToolkitScriptManager>

After we make these changes, the app should run successfully. You’ll get a page which contains a text field. When you click inside the text field, a popup calendar is displayed.

Ajax Control Toolkit and jQuery

You might have noticed that the ScriptManager includes a reference to jQuery by default. We did not remove that reference when we converted the ScriptManager to a ToolkitScriptManager. You can use the Ajax Control Toolkit and jQuery side-by-side. Here’s how you can modify the Default.aspx page so that it contains two popup calendars. The first popup calendar is created with the Ajax Control Toolkit and the second popup calendar is created with jQuery:

    <asp:TextBox ID="vacationDate" runat="server" />
    <ajaxToolkit:CalendarExtender TargetControlID="vacationDate" runat="server" />

    <input id="birthDate" />
    <script>
        $("#birthDate").datepicker();
    </script>

Before you can start using jQuery UI plugins, you need to complete one more step.
You need to add the jQuery UI themes bundle to the HEAD of the Site.Master page like this:

    <head runat="server">
        <meta charset="utf-8" />
        <title><%: Page.Title %> - My ASP.NET Application</title>
        <asp:PlaceHolder runat="server">
            <%: Scripts.Render("~/bundles/modernizr") %>
        </asp:PlaceHolder>
        <webopt:BundleReference runat="server" Path="~/Content/css" />
        <webopt:BundleReference runat="server" Path="~/Content/themes/base/css" />
        <link href="~/favicon.ico" rel="shortcut icon" type="image/x-icon" />
        <meta name="viewport" content="width=device-width" />
        <asp:ContentPlaceHolder runat="server" ID="HeadContent" />
    </head>

The markup above includes a reference to the jQuery UI themes bundle:

    <webopt:BundleReference runat="server" Path="~/Content/themes/base/css" />

Now that we have made these changes, we can use the Ajax Control Toolkit and jQuery at the same time. When you run your app, you get two popup calendars. When you click in the first text field, the Ajax Control Toolkit calendar appears. When you click in the second text field, the jQuery UI popup calendar appears.

Bug Fixes in this Release

We made several important bug fixes with this release of the Ajax Control Toolkit and integrated several Pull Requests contributed by the community. Our primary focus during this sprint was fixing issues with the CascadingDropDown control. We fixed the following issues associated with the CascadingDropDown:

  • 9490 – Don’t disable dropdowns in CascadingDropDown
  • 14223 – CascadingDropDown Reset or Setting SelectedValue from WebMethod
  • 12189 – CascadingDropDown not obeying disabled state of DropDownList
  • 22942 – CascadingDropDown infinite loop (with solution)
  • 8671 – CascadingDropdown options is null or undefined
  • 14407 – CascadingDropDown: populated client event happens too often
  • 17148 – CascadingDropDown – Add “UseHttpGet” property
  • 10221 – No NotNull check in CascadingDropDown
  • 12228 – Provide property for case-insensitive DefaultValue lookup in CascadingDropdown

We also fixed the following two issues which are not directly related to the CascadingDropDown control:

  • 27108 – CalendarExtender: Bug when selecting December shifts to January.
  • 27041 – Input controls with HTML5 types do not post back in Firefox, Chrome, Safari

Finally, we integrated several Pull Requests submitted by the community (Thank you community!):

  • Added French localized resources for the AjaxFileUpload
  • Resolved an issue which prevented the AjaxFileUpload control from working with pages that require query string variables.
  • Extended the AjaxFileUploadEventArgs class to include the current file index in the queue and the total number of files in the queue.
  • Fixed an issue with TabContainer and TabPanel which caused the OnActiveTabChanged event to fire too often.

Summary

I’m happy to see the Ajax Control Toolkit move forward into the brave new world of ASP.NET 4.5! In this latest release, we focused on ensuring that the Ajax Control Toolkit works smoothly with ASP.NET 4.5 applications. We also fixed the highest voted bugs associated with the CascadingDropDown control and integrated several Pull Requests submitted by the community. Once again, I want to thank the Superexpert team for their hard work on this release!
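One assumption worth calling out: the ajaxToolkit: tag prefix used in the markup above has to be registered before the pages compile. The NuGet package normally handles this for you; if it doesn't, a minimal sketch of the web.config entry (using the toolkit's standard assembly and namespace names) looks like this:

    <!-- Sketch: register the ajaxToolkit tag prefix in web.config so the
         <ajaxToolkit:...> markup above compiles (NuGet usually adds this). -->
    <system.web>
      <pages>
        <controls>
          <add tagPrefix="ajaxToolkit"
               namespace="AjaxControlToolkit"
               assembly="AjaxControlToolkit" />
        </controls>
      </pages>
    </system.web>

Registering the prefix in web.config keeps you from repeating an @ Register directive at the top of every page that uses a toolkit control.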

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimizes for server performance and help ensure that the Extended Events doesn’t block the server if to drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events?Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine you session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low.You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time check the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.) 
Partitioning: moving your events to a sub-division
Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…

You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

None: 3 buffers (fixed)
Node: 3 * number_of_nodes
CPU: 2.5 * number_of_cpus

Here are some examples of what this means for different Node/CPU counts:

Configuration      | None      | Node       | CPU
2 CPUs, 1 Node     | 3 buffers | 3 buffers  | 5 buffers
6 CPUs, 2 Nodes    | 3 buffers | 6 buffers  | 15 buffers
40 CPUs, 5 Nodes   | 3 buffers | 15 buffers | 100 buffers

Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of each event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while, since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:

Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff.

Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category.
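If you do decide partitioning is the answer, it’s just one more session option. A minimal sketch, with an illustrative session name and event, and with MAX_MEMORY bumped up to account for the extra buffers per the aside above:

CREATE EVENT SESSION [partitioned_session] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 16 MB,                 -- more total memory, since it is split across more buffers
      MEMORY_PARTITION_MODE = PER_NODE);  -- one buffer set per NUMA node (PER_CPU is the other choice)

NONE, PER_NODE and PER_CPU are the three values the option accepts, matching the None/Node/CPU modes described above.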
Hey buddy – I think you forgot something
OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis.

- Mike

    Read the article

  • CodePlex Daily Summary for Tuesday, April 27, 2010

    CodePlex Daily Summary for Tuesday, April 27, 2010New ProjectsActive Directory User Properties Change: A complete application in VS 2005 and VB.NET, for Request Request in User Details in Active Directory, with flow to HR and then to IT for approval ...AVR Terminal: A Windows application for connecting to an AVR via RS232 serial or USB-to-COM FTDI ports. Works on Arduino, Bare Bones Board, and any custom board...Battle Droids: AVR-based Network Combat!: A Battle Droid is an AVR® microcontroller running the BattleDroid firmware. This firmware turns your AVR into a lean, mean, fighting machine, and ...Camp Foundation: Camp Foundationchakma: chakma is a question - answer based web application to make people get questions from anybody around the world and being able to answer them. c...Document.Editor: Document.Editor is a multitab text editor for Windows. It includes plain and rich text format support, multi tab interface so you can edit multiple...Dot Net Marche Music Store Demo Application: This is a demo application that the DotNetMarche user gorup (www.dotnetmarche.org) use to make experiments and prepare demos for our workshopselivators: a monitor which enables the user to view the movement of the elivators in a buildingExtended SSIS Package Execute: The SSIS package execute task is flawed as it does not support passing variables. Here we have a custom task that will pass items in a dataflow as...File tools: File toolsFileExplorer.NET: FileExplorer.NET is a .net usercontrol which tries to mimic the Windows FileExplorer treeview.Kazuku: ASP.NET MVC 2 Content Management SystemKSharp Ajax Control Toolkit Library: Built ontop of the Microsoft ASP.NET Ajax Control Toolkit, this library offers enhanced versions of the controls found in the Ajax Control Toolkit....Nitrous - An Aspx ViewEngine for ASP.NET MVC: Near drop-in replacement ASP.NET ViewEngine for MVC.Open Data Protocol - Client Libraries: This is an Open Source release of the .NET and Silverlight Client Libraries for the Open Data Protocol (OData). For more information on odata, see ...ORAYLIS BI.SmartDiff: BI.SmartDiff is a helper to connect the functionality of BIDS Helper – SmartDiff to TortoiseSVN. BIDS Helper – SmartDiff helps you to get more read...RicciWebSiteSystem: soon websiteSynapse:Silverlight A Simple Silverlight Framework: Synapse:Silverlight is a simplified framework for Silverlight. It's purpose is to help developers and designers produce basic LOB solutions that do...TestProjectMB: Testing Team Foundation ServerThoughtWorks Cruise Notification Interceptor: Cruise notification interceptorThreadSafeControls: ThreadSafeControls is a C# project that greatly simplifies the process of transitioning Windows Forms applications to a multithreaded environment b...Unscrambler: Unscrambler is a multitouch WPF word game built with MVVM Light in order to show how to use the touch maniupation and inertia features included in ...Web Utilities: web utilitiesNew Releases7zbackup - PowerShell Script to Backup Files with 7zip: 7zBackup v. 1.7.1 Stable: Bug Solved : Presence of junction.exe is wrongly referred to 7z.exeAVR Terminal: AVR Terminal v0.2: Here is an Alpha-almost-BETA release of the AVR Terminal. That being said, I use it almost daily and it shouldn't break anything on your system, b...Bistro FSharp Extensions: 0.9.7.0: This is the VS 2010 release of BistroFS extensions. 
This release focused on usability, adding key functionality such as resource aliasing and secur...Bojinx: Bojinx Dialog Management V1.0: Stable release of the Bojinx Dialog Management library.BOWIE: BOWIE 2010: This new version works on Outlook 2007/2010 and TFS 2008/2010 RTM. Details about all features in this version on the Home Page : http://bowie.code...Catharsis: Catharsis 2.5 on catarsa.com: The Catharsis framework has finally its own portal http://catarsa.com Example - documented steps to create Web-Application http://catarsa.com/Arti...Colorful Expression: Expression Blend 3: Alpha Version, Read Issues and Installing! Colorful Expression is an add-in for Expression Blend and Expression Design that brings you the Adobe K...Colorful Expression: Expression Blend 4: Read Issues and Installing! Colorful Expression is an add-in for Expression Blend and Expression Design that brings you the Adobe Kuler and ColorLo...Courier: Version 1.0: This release includes integration with the Reactive Framework for more elegant message handling and allowing more succinct client code. Full suite...CRM 4.0 Contract Utilities: Release 1.0: Project Description List of Contract Utilities (i.e. custom workflow actions) 1. Change contract status from Active to Draft 2. Copy Contract (with...Document.Editor: 0.9.0: Whats New?: New icon set Bug fix'sDotNetNuke® Blog: 04.00.00: Minimum Required DNN Version: 4.06.02General Code organization * Converted project to .NET 3.5 * Converted solution to Visual Stud...EPiAbstractions: EPiAbstractions 1.2: Updated for EPiServer CMS 6. Only features abstractions for EPiServer CMS. For abstractions for EPiServer.Common and EPiServer.Community use versio...Fluent ViewModel Configuration for WPF (MVVM): FluentViewModel Alpha2: Added support for view model validation using FluentValidation (http://fluentvalidation.codeplex.com/) Fixed exception from Blend while in design...GArphics: Beta v0.9: Beta v0.9. Practically all of the planned features have been implemented and are available to the users. For the version 1.0 mainly just some minor...HTML Ruby: 6.22.2.1: Fixed a bug where HTML Ruby's options window will generate entries in the error log when applying option changes (regression from 6.21.8)HTML Ruby: 6.22.3: Add/remove stop spacing event listener as needed for possible fix to 4620iTuner - The iTunes Companion: iTuner 1.2.3768 Beta 3b: Beta 3 requires iTunes 9.1.0.79 or later A Librarian status panel showing active and queued Librarian scanners. This will be hidden behind the "bi...LiveUpload to Facebook: LiveUpload to Facebook 3.2.3: Version 3.2.3Become a fan on Facebook! Features Quickly and easily upload your photos and videos to Facebook, including any people tags added in W...Maintainance Schedule: Maintenance Scheduler: The first Alpha release of the project.NetSockets: NetSockets (1.2): The NetSockets library (DLL)NSIS Autorun: NSIS Autorun 0.1.2: NSIS Autorun 0.1.1 This release includes source code, application binary, and example materials.OpenSceneGraph glsl samples: OsgGlslSamples Win32 binaries: Project binary release for Windows. The effects shown are: Ambient Occlusion, Depth of Field, DoF with alpha channel, Fire effects, HDR, Light Ma...ORAYLIS BI.SmartDiff: ORAYLIS BI.SmartDiff 0.6.1: First public versionpatterns & practices - Windows Azure Guidance: Code Drop 4 - Content Complete: This release includes documentation and all code samples intended for this first guide. 
As before, this code release builds on the previous one an...Pex Custom Arithmetic Solver: Custom Solver Package: This is the custom solvers packaged together. To use simply include the dll in your project and add [assembly: PexCustomArithmeticSolver] to your P...PokeIn Comet Ajax Library: PokeIn v08 x86: New FeatureFrom this version forward, PokeIn will define a way between the main page and client side automaticly based to security level. Add "pub...Proxi [Proxy Interface]: Proxi Release 1.0.0.426: Proxi Release 1.0.0.426QuestTracker: QuestTracker 0.3: This release includes recurring quests! Now you can set a quest to uncomplete itself every X minutes, hours, or days! And the quests still retain t...Rensea Image Viewer: RIV 0.4.5: RIV Fix Version. You would need .NET Framework 4.0 to make it run RIVU Improved Version. With separated RIV up-loader, to upload images to Renjian...SCC Switch Provider: Provides a GUI to Switch Source Code Control Provi: Transferred from GotDotNet Workplace. Initial public Release. Downloaded ~922 times from original post.sTASKedit: sTASKedit v0.7a (Alpha): + Fixed: XOR text encoding + Fixed: adding timed rewards missing values + Fixed: occupations in clone()Synapse:Silverlight A Simple Silverlight Framework: Synapse Silverlight Alpha Release: Initial Road-map is being defined.ThoughtWorks Cruise Notification Interceptor: 1.0.0: Initial release.UDC indexes parser: UDC indexex parser Beta 2: Добавлена возможность работать с распределением определителей как если бы генератор был бы LALR(2) То что осталось: Если текстовое дополнение начи...Unscrambler: Release 1.0: Here's the first release of Unscrambler.WinXound: WinXound 3.3.0 Beta 2 for Mac OsX: New: Code Repository (for UDO and personal code) New: Format Code - Added the ability to format only the selected text of the code New: Explore...WPF Inspirational Quote Management System: Release 1.2.2: - Fixed issue some users were having when the application is minimised.Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitSilverlight Toolkitpatterns & practices – Enterprise LibraryMicrosoft SQL Server Product Samples: DatabaseWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrGMap.NET - Great Maps for Windows Forms & PresentationParticle Plot PivotBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleFarseer Physics EngineIonics Isapi Rewrite FilterN2 CMSDotNetZip Library

    Read the article

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
A few weeks ago I was working on a small internal project that involves importing a CSV file to a SQL Server database, and I thought I'd share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file to a SQL Server database. As some may already know, importing a CSV file to SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly infers a data type based on the first few rows and drops any data that does not match that type. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and allow the provider to read that and recognize the exact data types of each column.

Now what is schema.ini? Taken from the documentation: The schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids the misinterpretation of data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

Points to remember before creating schema.ini:
1. The schema information file must always be named 'schema.ini'.
2. The schema.ini file must be kept in the same directory where the CSV file exists.
3. The schema.ini file must be created before reading the CSV file.
4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

Here's an example of what the schema looks like:

[Employee.csv]
ColNameHeader=False
Format=CSVDelimited
DateTimeFormat=dd-MMM-yyyy
Col1=EmployeeID Long
Col2=EmployeeFirstName Text Width 100
Col3=EmployeeLastName Text Width 50
Col4=EmployeeEmailAddress Text Width 50

To get started, let's go ahead and create a simple blank database. Just for the purpose of this demo I created a database called TestDB. After creating the database, let's go ahead and fire up Visual Studio and create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and then place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports the file. Now add a WebForm to the project, set up the HTML markup, and add one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can now proceed with the code for uploading and importing the CSV file to SQL Server database.
Here are the full code blocks below:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.OleDb;
using System.IO;
using System.Text;

namespace WebApplication1
{
    public partial class CSVToSQLImporting : System.Web.UI.Page
    {
        private string GetConnectionString()
        {
            return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
        }
        private void CreateDatabaseTable(DataTable dt, string tableName)
        {
            string sqlQuery = string.Empty;
            string sqlDBType = string.Empty;
            string dataType = string.Empty;
            int maxLength = 0;
            StringBuilder sb = new StringBuilder();

            sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

            for (int i = 0; i < dt.Columns.Count; i++)
            {
                dataType = dt.Columns[i].DataType.ToString();
                if (dataType == "System.Int32")
                {
                    sqlDBType = "INT";
                }
                else if (dataType == "System.String")
                {
                    sqlDBType = "NVARCHAR";
                    maxLength = dt.Columns[i].MaxLength;
                }

                if (maxLength > 0)
                {
                    sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                }
                else
                {
                    sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                }
            }

            sqlQuery = sb.ToString();
            sqlQuery = sqlQuery.Trim().TrimEnd(',');
            sqlQuery = sqlQuery + " )";

            using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
            {
                sqlConn.Open();
                SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                sqlCmd.ExecuteNonQuery();
                sqlConn.Close();
            }
        }
        private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
        {
            string sqlQuery = string.Empty;
            StringBuilder sb = new StringBuilder();

            sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
            sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
            sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

            sqlQuery = sb.ToString();

            using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
            {
                sqlConn.Open();
                SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                sqlCmd.ExecuteNonQuery();
                sqlConn.Close();
            }
        }
        protected void Page_Load(object sender, EventArgs e)
        {
        }
        protected void BTNImport_Click(object sender, EventArgs e)
        {
            if (FileUpload1.HasFile)
            {
                FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                if (fileInfo.Name.Contains(".csv"))
                {
                    string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                    string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                    // Save the CSV file on the server inside 'UploadedCSVFiles'
                    FileUpload1.SaveAs(csvFilePath);

                    // Fetch the location of the CSV file
                    string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                    string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                    string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                    // load the data from CSV to DataTable
                    OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                    DataTable dtCSV = new DataTable();
                    DataTable dtSchema = new DataTable();

                    adapter.FillSchema(dtCSV, SchemaType.Mapped);
                    adapter.Fill(dtCSV);

                    if (dtCSV.Rows.Count > 0)
                    {
                        CreateDatabaseTable(dtCSV, fileName);
                        Label2.Text = string.Format("The table ({0}) has been successfully created in the database.", fileName);

                        string fileFullPath = filePath + fileInfo.Name;
                        LoadDataToDatabase(fileName, fileFullPath, ",");

                        Label1.Text = string.Format("({0}) records have been loaded to the table {1}.", dtCSV.Rows.Count, fileName);
                    }
                    else
                    {
                        LBLError.Text = "File is empty.";
                    }
                }
                else
                {
                    LBLError.Text = "Unable to recognize file.";
                }
            }
        }
    }
}

The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() is a method that returns a string; it basically gets the connection string that is configured in the web.config file. CreateDatabaseTable() is a method that accepts two (2) parameters, the DataTable and the filename. As the method name suggests, it automatically creates a table in the database based on the source DataTable and the filename of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters: the tableName, fileFullPath and delimeter values. This method is where the actual saving or importing of data from CSV to SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location, and at the same time this is where CreateDatabaseTable() and LoadDataToDatabase() are called. If you notice, I also added some basic error trapping and validation within that event.
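To make what LoadDataToDatabase() builds concrete, the statement it generates for the Employee.csv sample used later in this post would look roughly like the following – the physical path is hypothetical and depends on where the site is deployed:

BULK INSERT Employee
FROM 'C:\inetpub\wwwroot\WebApplication1\UploadedCSVFiles\Employee.csv'  -- hypothetical path
WITH ( FIELDTERMINATOR = ',' , ROWTERMINATOR = '\n' )

Note that BULK INSERT runs on the SQL Server machine, so the file path must be reachable from the database server itself – something to keep in mind if your web and database servers are separate boxes.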

    Read the article

  • CodePlex Daily Summary for Thursday, April 22, 2010

    CodePlex Daily Summary for Thursday, April 22, 2010New ProjectsAllegiance Modulus: - Display a list of all mods (installed/not installed/downloadable) - Allow user to install a mod - Allow a user to uninstall a mod - Keep backups ...Chatterbot JBot: PL: Program sieciowy JBot jest chatterbotem. EN: Network program JBot is chatterbot.Composite WPF Extensions: Extensions to Composite Client Application Library for WPFDwarrowdelf: Game concept in progressEntourage FrameG: Entourage lets users quickly and easily edit and create text . You will no longer have to download and install huge files and Entourage is 0% overw...FMon: file monGeckoBrowser: GeckoBrowser is a plugin for the great HTPC software MediaPortal. GeckoBrowser is a integrated WebBrowser for MediaPortal. It uses the Firefox (Gec...General Watcher: Watches things from the config file.GeoUtility Library: GeoUtility is an easy to use coordinate conversion library. It can be used for desktop/web development in CLI implementations like .NET, MONO. Supp...HidLib C++/CLR: HibLib is a USB Hid Communications Library written in C++/CLR IJW for the closest library you can get to a native USB Hid library. The project curr...HTML Shot: Server side component that generates a png/jpg image from arbitrary html code sent from the browser. Provides a quick way to enable printing arbit...MetaTagger: Core: MetaTagger: Core is a core set of meta data tagging libraries for use with .Net applicationsMUD--: MUD-- is a remake of MUD++ which never left devolution. MUD-- is a OOP oriented C++ MUD library, it is still in the planning stage. OpenLigaDB-Databrowser: A Silverlight 4-based Databrowser for the Community-Sportsdata-Webservice www.OpenLigaDB.deReduce Image to Specified Black Pixel Count: Takes in a picure path and an int n, and saves a white bitmap with the darkest n pixels from the image black. Posting this so that I can referen...Salient.MachineKeyUtils: Wraps encryption and password related functions from MachineKey, CookieProtectionHelper and MembershipProvider for use in other scenarios.Silverlight WebRequestHelper: WebRequestHelper is a very simple helper project, created for using HttWebRequest in Silverlight in a very simple and easiest way.Splinger FrameXi: Splinger FrameXi makes spelling a doddle with a huge and getting larger directory with LOADS of people contributing to make it the best in its field!SQL Server and SQL Azure Performance Testing: Enzo SQL Baseline: Run SQL statements in SQL Azure (or SQL Server) and capture performance metrics of your SQL statements (reads, writes, CPU...) against multiple dat...Sql Utils: A series of Java-based SQL import/export utilsSuggested Resources for .NET Developers: Suggested Resources is a proof of concept in aggregation of online content inside Visual Studio and analysis of a developers work, in order to sugg...SupportRoot: SupportRoot is a minimal helpdesk ticketing system focusing on speed and efficiency.Translate !t: Translate !t translates Image/Text containing English, German, French & Spanish to many different languages using Bing Translator. WCF Lab: To demonstrate different connectivity scenarios using WCF servicesWebAPP - Automated Perl Portal: Web portal system written in Perl. Full featured and multilingual.WIM4LAB: Laboratory Information Management System. 
ASP.NET C# MSSQL2005New Releases3D TagCloud for SharePoint 2010: 3D TagCloud v1.0: This realease contains the webpart itself.Bluetooth Radar: Version 2.1: Fix - "Right Click Crashes the application" bug Change OBX to push send Add current bluetooth device information + change device radiomode Ad...DirectQ: Release 1.8.3b: Contains updates and improvements to 1.8.3a. This should really be 1.8.4 given the extent of the changes, but I don't want to confuse a version nu...DotNetNuke® Store: 02.01.32 RC: What's New in this release? New Features: - A new setting 'Secure Cookie' in the Store Admin module allow to encrypt cookie values. Only the curren...Entourage FrameG: entourage frameg 1.0: Complete starter for entourageframeg.html enclosed as startentouragehtml.html and here you can find test CSS files and more! EXTRACT ALL FILES FROM...Entourage FrameG: FIMYID PRO: fimyid- find-my-id. YOU MUST download hidden.js and have the web server .pl file ready!Event Scavenger: Viewer version 3.0.1: Fixed an issue with the Viewer with highlighting not working properly (due to code rearranging from old version to new one for CodePlex). Viewer ve...FMon: First Edition: First EditionFolder Bookmarks: Folder Bookmarks 1.5.6: This is the latest version of Folder Bookmarks (1.5.6), with the new Quick Add feature and bug fixes. It has an installer that will create a direct...GameStore League Manager: League Manager 1.0.6: Bug fixes for bugs found in 1.0.5 and earlier. Fixes the crashing bug when a membership number of more than 6 digits is entered. Changes the dat...GeckoBrowser: GeckoBrowser v0.1 - RAR Package: GeckoBrowser Release v0.1.0.2 Please read Plugin installation for installation instructions.HTML Shot: Initial Source Code Release: Zip file includes a VS 2008 solution with two projects HTMLShot DLL project HTMLShot sample websiteMapWindow6: MapWindow 6.0 msi April 21: This version includes the latest bug fixes. This also includes the beginnings of some fixes that update the projection library to return NaN value...METAR.NET Decoder: Release 0.5.x (replacement of 0.4.x): Release 0.4.x was upgraded to 0.5.x due to major error issue included in previous one. Release Notes First public release. Main of the application...MongoMvc - A NoSQL Demo App with ASP.NET MVC: MongoMVC: A NoSql demo app using MongoDB, NoRM and ASP.NET MVCPanBrowser: 1.2.1: updated to FSharp 2.0.0.0PokeIn Comet Ajax Library: Chat Sample of PokeIn CS2010: C# Sample of PokeIn. You need to download the latest PokeIn Library and add to project as a reference to run this samplePokeIn Comet Ajax Library: Chat Sample VB 2010: VB.NET Sample of PokeIn. You need to download the latest PokeIn Library and add to project as a reference to run this sampleProject Santa: Project Santa v1.1: fixed some errors created from last minute adjustments. progress bar is now completely functional. added much more error checking. cleaned up some...RoTwee: RoTwee (11.0.0.1): Version for update to .NET Framework 4.0Sem.GenericTools.ProjectSettings: 2010-04-21 ProjectSettings Ex-Importer: Exports and imports project settings from/to CSV files. This release does fix the issue of missing/misordered build types in project files. Also th...Sharepoint Permissions Manager: Version 0.2: Added support of both WSS3.0 and MOSS2007Silverlight WebRequestHelper: WebRequestHelper 1.0: The usage of this project is so simple. 
all you need to do is following: WebRequest webRequest = WebRequest.Create("http://api.twitte...SilverlightFTP: SilverlightFTP Beta ENG: English version, fixed copy-paste.sNPCedit: sNPCedit v0.8b (Alpha): + Added: support for version 5 (will be imported and saved as version 10) + Fixed: order of chinese week days in event section (week starts with su...Splinger FrameXi: Splinger FrameXi 1.0: DOWNLOAD ALLLL FILES and start in Splinger FrameXi.html . EXTRACT ALL FILES FROM .ZIP ARCHIVE!!!!SQL Server and SQL Azure Performance Testing: Enzo SQL Baseline: Enzo SQL Baseline 1.0: Use this download to install and deploy the sample application and its source code. The installer gives you the option to install the Windows appli...StoreManagement: v1: First (and probably final) version. Should be stable.TFS WitAdminUI: some bug fixed: When project name includes empty space, error fire. So i fix. Download zip file and unzip to TFS2010 or TFS2008. And Excute WitAdminUI.exe. Becaus...TFTP Server: TFTP Server 1.1 Installer: Release 1.1 of the Managed TFTP Server. New Features: Runs as windows service. Supports multiple TFTP servers on different endpoints, each servi...Thinktecture.IdentityModel: Thinktecture.IdentityModel v0.8: Updated version - includes the plumbing for REST/OData Services and UI authorization.Translate !t: Translate !t[Setup]: Translate !tSetupTranslate !t: Translate !t[Source Code]: Translate !tSource CodeWatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.10.02: BETA small fixesWeb Service Software Factory: Web Service Software Factory 2010 Beta: To use the Web Service Software Factory 2010, you need the following software installed on your computer: • Microsoft Visual Studio 2010 (Ultima...Windows Workflow Foundation on Codeplex: WF ADO.NET Activity Pack CTP 1: WF ADO.NET Activity Pack CTP 1 The Microsoft WF ADO.NET Activity Pack CTP 1 is the first community technology preview (CTP) release of ADO.NET acti...Windows Workflow Foundation on Codeplex: WF State Machine Activity Pack CTP 1: WF State Machine Activity Pack CTP 1The Microsoft WF State Machine Activity Pack CTP 1 is the first community technology preview (CTP) release of a...WPF Inspirational Quote Management System: Release 1.2.0: - Fixed non-working delete quote button. - Changed layout of quote edit page, font and colour. - Lightened background colour of settings page.WPF Inspirational Quote Management System: Release 1.2.1: - Only display an underline under Author links if a Reference URL is present.xlVBADevTools: 0.1 Two mostly inadequate tools...: Since I was blogging about it (), it seemed appropriate to upload LOCutus, in all its 2002 "glory" (hah!) and make it available for download.XP-More: 0.9.5: Added support for saved state files (.vsv) Added version updates check Added a check to make sure the assumed VM folder exists (this will be do...Most Popular ProjectsRawrWBFS ManagerSilverlight ToolkitAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active Projectspatterns & practices – Enterprise LibraryRawrBlogEngine.NETFarseer Physics EngineDotNetZip LibraryNB_Store - Free DotNetNuke Ecommerce Catalog ModulePHPExcelGMap.NET - Great Maps for Windows Forms & PresentationIonics Isapi Rewrite FilterEsferatec.Text.RegularExpressions

    Read the article

  • Dotfuscator Deep Dive with WP7

    - by Bil Simser
I thought I would share some experience with code obfuscation (specifically the Dotfuscator product) and Windows Phone 7 apps. These days Twitter is abuzz with black hat and white hat operations coming out about how the marketplace is insecure and Microsoft failed, blah, blah, blah. So it’s that much more important to protect your intellectual property. You should protect it no matter what when releasing apps into the wild, but more so when someone is paying for them. You want to protect the time and effort that went into your code and have some comfort that the casual hacker isn’t going to usurp your next best thing. Enter code obfuscation.

Code obfuscation is one tool that can help protect your IP. Basically it goes into your compiled assemblies, rewrites things at an IL level (like renaming methods and classes and hiding logic flow) and writes them back so that the assembly or executable is still fully functional, but prying eyes using a tool like ILDASM or Reflector can’t see what’s going on. You can read more about code obfuscation here on Wikipedia.

A word to the wise: code obfuscation isn’t 100% secure. More so on the WP7 platform, where the OS expects certain things to be as they were meant to be. So don’t expect 100% obfuscation of every class and every method and every property. It’s just not going to happen. What this does do is give you some level of protection, but don’t put all your eggs in one basket and call it done. Like I said, this is just one step in the process.

There are a few tools out there that provide code obfuscation and support the Windows Phone 7 platform (see links to other tools at the end of this post). One such tool is Dotfuscator from PreEmptive Solutions. The thing about Dotfuscator is that they’ve struck a deal with Microsoft to provide a *free* copy of their commercial product for Windows Phone 7. The only drawback is that it only runs until March 31, 2011. However it’s a good place to start and the focus of this article.

Getting Started
When you fire up Dotfuscator you’re presented with a dialog to start a new project or load a previous one. We’ll start with a new project. You’re then looking at a somewhat blank screen that shows an Input tab (among others), and you’re probably wondering what to do. Click on the folder icon (first one) and browse to where your xap file is. At this point you can save the project and click on the arrow to start the process. Bam! You’re done. Right? Think again. The program did indeed run and create a new version of your xap (doing its thing and writing back your *obfuscated* assemblies), but let’s take a look at the assembly in Reflector to see the end result. Remember, a xap file is really just a glorified zip file (or cab file if you prefer). When you ran Dotfuscator for the first time with the default settings you’ll see it created a new version of your xap in a folder under “My Documents” called “Dotfuscated” (you can configure the output directory in settings). Here’s the new xap file. Since a xap is just a zip, rename it to .cab or .zip or something and open it with your favorite unarchive program (I use WinRar but it doesn’t matter as long as it can unzip files). If you already have the xap file associated with your unarchive tool the rename isn’t needed.
Once renamed, extract the contents of the xap to your hard drive: Now you’ll have a folder with the contents of the xap file extracted: Double click or load up your assembly (WindowsPhoneDataBoundApplication1.dll in the example) in Reflector and let’s see the results: Hmm. That doesn’t look right. I can see all the methods, and the code is all there for my LoadData method I wanted to protect. Product failure. Let’s return it for a refund. Hold your horses. We need to check out the settings in the program first. Remember when we loaded up our xap file? It started us on the Input tab, but there was a settings tab before that. Wonder what it does? Here are the default settings:

Renaming
Taking a closer look, all of the settings in Feature are disabled. WTF? Yeah, it leaves me scratching my head why an obfuscator by default doesn’t obfuscate. However it’s a simple fix to change these settings. Let’s enable Renaming as it sounds like a good start. Renaming obscures code by renaming methods and fields to names that are not understandable. Great. Run the tool again, go through the process of unzipping the updated xap, and let’s take a look in Reflector again at our project. This looks a lot better. Lots of methods named a, b, c, d, etc. That’ll help slow hackers down a bit. What about our logic that we spent days and weeks on? Let’s take a look at the LoadData method: What gives? We have renaming enabled but all of our code is still there. If you look through all your methods you’ll find it’s still sitting there out in the open.

Control Flow
Back to the settings page again. Let’s enable Control Flow now. Control Flow obfuscation synthesizes branching, conditional, and iterative constructs (such as if, for, and while) that produce valid executable logic, but yield non-deterministic semantic results when decompilation is attempted. In other words, the code runs as before, but decompilers cannot reproduce the original code. Do the dance again and let’s see the results in Reflector. Ahh, that’s better. Methods renamed *and* nobody can look at our LoadData method now. Life is good.

More than Minimum
This is the bare minimum to obfuscate your xap to at least a somewhat comfortable level. However I did find that while this worked in my Hello World demo, it didn’t work on one of my real world apps. I had to do some extra tweaking with that. Below are the screens that I used on one app that worked. I’m not sure what it was about the app that the approach above didn’t work with (maybe the extra assembly?) but it works and I’m happy with it. YMMV. Remember to test your obfuscated app on your device first before submitting to ensure you haven’t obfuscated the obfuscator. settings tab: rename tab: string encryption tab: premark tab:

A few final notes
Play with the settings and keep bumping up the bar to try to get as much obfuscation as you can. The more the better, but remember you can overdo it. Always (always, always, always) deploy your obfuscated xap to your device and test it before submitting to the marketplace. I didn’t, and got rejected because I had gone overboard with the obfuscation so the app wouldn’t launch at all. Not everything is going to be obfuscated. Specifically, I don’t see a way to obfuscate auto properties and a few other language features. Again, if you crank the settings up you might hide these, but I haven’t spent a lot of time optimizing the process. Some people might say to obfuscate your xaml using string encryption but again, test, test, test.
Xaml is picky, so too much obfuscation (or any) might disable your app or produce odd rendering effects. Remember, obfuscation is not 100% secure! Don’t rely on it as the sole way of protecting your assets.

Other Tools
Dotfuscator is just one product and isn’t the end-all be-all of obfuscation, so check out the others below. For example, Crypto can make it so Reflector doesn’t even recognize the app as a .NET one and won’t open it. Others can encrypt resources and Xaml markup files. Here are some other obfuscators that support the Windows Phone 7 platform. Feel free to give them a try and let people know your experience with them!

Dotfuscator Windows Phone Edition
Crypto Obfuscator for .NET
DeepSea Obfuscation

    Read the article

  • C#/.NET Little Wonders: The EventHandler and EventHandler<TEventArgs> delegates

    - by James Michael Hare
Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the last two weeks, we examined the Action family of delegates (and delegates in general), and the Func family of delegates and how they can be used to support generic, reusable algorithms and classes. So this week, we are going to look at a handy pair of delegates that can be used to eliminate the need for defining custom delegates when creating events: the EventHandler and EventHandler<TEventArgs> delegates.

Events and delegates
Before we begin, let’s quickly consider events in .NET. According to the MSDN: An event in C# is a way for a class to provide notifications to clients of that class when some interesting thing happens to an object. So, basically, you can create an event in a type so that users of that type can subscribe to notifications of things of interest. How is this different than some of the delegate programming that we talked about in the last two weeks? Well, you can think of an event as a special access modifier on a delegate. Some differences between the two are:

Events are a special access case of delegates: they behave much like delegate instances inside the type they are declared in, but outside of that type they can only be (un)subscribed to.
Events can specify add/remove behavior explicitly: if you want to do additional work when someone subscribes or unsubscribes to an event, you can specify the add and remove actions explicitly.
Events have access modifiers, but these only specify the access level of those who can (un)subscribe: a public event, for example, means anyone can (un)subscribe, but it does not mean that anyone can raise (invoke) the event directly.
Events can only be raised by the type that contains them: in contrast, if a delegate is visible, it can be invoked outside of the object (not even in a sub-class!).
Events tend to be for notifications only, and should be treated as optional: semantically speaking, events typically don’t perform work on the class directly, but tend to just notify subscribers when something of note occurs.

My basic rule-of-thumb is that if you just want to notify any listeners (who may or may not care) that something has happened, use an event. However, if you want the caller to provide some function to perform to direct the class about how it should perform work, make it a delegate.

Declaring events using custom delegates
To declare an event in a type, we simply use the event keyword and specify its delegate type. For example, let’s say you wanted to create a new TimeOfDayTimer that triggers at a given time of the day (as opposed to on an interval). We could write something like this:

public delegate void TimeOfDayHandler(object source, ElapsedEventArgs e);

// A timer that will fire at time of day each day.
public class TimeOfDayTimer : IDisposable
{
    // Event that is triggered at time of day.
    public event TimeOfDayHandler Elapsed;

    // ...
}

The first thing to note is that the event is a delegate type, which tells us what types of methods may subscribe to it.
The second thing to note is the signature of the event handler delegate. According to the MSDN: The standard signature of an event handler delegate defines a method that does not return a value, whose first parameter is of type Object and refers to the instance that raises the event, and whose second parameter is derived from type EventArgs and holds the event data. If the event does not generate event data, the second parameter is simply an instance of EventArgs. Otherwise, the second parameter is a custom type derived from EventArgs and supplies any fields or properties needed to hold the event data. So, in a nutshell, event handler delegates should return void and take two parameters:

An object reference to the object that raised the event.
An EventArgs (or a subclass of EventArgs) reference to event specific information.

Even if your event has no additional information to provide, you are still expected to provide an EventArgs instance. In this case, feel free to pass the EventArgs.Empty singleton instead of creating new instances of EventArgs (to avoid generating unneeded memory garbage).

The EventHandler delegate
Because many events have no additional information to pass, and thus do not require custom EventArgs, the signature of the delegates for subscribing to these events is typically:

// always takes an object and an EventArgs reference
public delegate void EventHandler(object sender, EventArgs e)

It would be insane to recreate this delegate for every class that had a basic event with no additional event data, so there already exists a delegate for you called EventHandler that has this very definition! Feel free to use it to define any events which supply no additional event information:

public class Cache
{
    // event that is raised whenever the cache performs a cleanup
    public event EventHandler OnCleanup;

    // ...
}

This will handle any event with the standard EventArgs (no additional information). But what of events that do need to supply additional information? Does that mean we’re out of luck for subclasses of EventArgs? That’s where the generic form of EventHandler comes into play…

The generic EventHandler<TEventArgs> delegate
Starting with the introduction of generics in .NET 2.0, we have a generic delegate called EventHandler<TEventArgs>. Its signature is as follows:

public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e)
    where TEventArgs : EventArgs

This is similar to EventHandler except it has been made generic to support the more general case. Thus, it will work for any delegate where the first argument is an object (the sender) and the second argument is a class derived from EventArgs (the event data). For example, let’s say we wanted to create a message receiver, and we wanted it to have a few events such as OnConnected that will tell us when a connection is established (probably with no additional information) and OnMessageReceived that will tell us when a new message arrives (probably with a string for the new message text). So for OnMessageReceived, our MessageReceivedEventArgs might look like this:

public sealed class MessageReceivedEventArgs : EventArgs
{
    public string Message { get; set; }
}

And since OnConnected needs no event argument type defined, our class might look something like this:

public class MessageReceiver
{
    // event that is called when the receiver connects with sender
    public event EventHandler OnConnected;

    // event that is called when a new message is received.
    public event EventHandler<MessageReceivedEventArgs> OnMessageReceived;

    // ...
}

Notice, nowhere did we have to define a delegate to fit our event definition; the EventHandler and generic EventHandler<TEventArgs> delegates fit almost anything we’d need to do with events.
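To round this out, here is a short consumer-side sketch (not from the original post) showing how client code could subscribe to both events of the MessageReceiver above; the lambda handlers and Console output are purely illustrative:

var receiver = new MessageReceiver();

// subscribe to the plain EventHandler event
receiver.OnConnected += (sender, e) => Console.WriteLine("Connected.");

// subscribe to the generic EventHandler<TEventArgs> event and use its typed data
receiver.OnMessageReceived += (sender, e) => Console.WriteLine("Received: " + e.Message);

Because both events use the standard (sender, e) shape, any method matching that signature – a lambda, an anonymous delegate, or a named method – can subscribe without any custom delegate type being declared.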
Sidebar: Thread-safety and raising an event
When the time comes to raise an event, we should always check to make sure there are subscribers, and then only raise the event if anyone is subscribed. This is important because if no one is subscribed to the event, then the instance will be null and we will get a NullReferenceException if we attempt to raise the event.

// This protects against NullReferenceException... or does it?
if (OnMessageReceived != null)
{
    OnMessageReceived(this, new MessageReceivedEventArgs(aMessage));
}

The above code seems to handle the null reference if no one is subscribed, but there’s a problem if this is being used in multi-threaded environments. For example, assume we have thread A which is about to raise the event, and it checks and clears the null check and is about to raise the event. However, before it can do that, thread B unsubscribes from the event, which sets the delegate to null. Now, when thread A attempts to raise the event, this causes the NullReferenceException that we were hoping to avoid! To counter this, the simplest best-practice method is to copy the event (just a multicast delegate) to a temporary local variable just before we raise it. Since we are inside the class where this event is being raised, we can copy it to a local variable like this, and it will protect us from multi-threading since multicast delegates are immutable and assignments are atomic:

// always make a copy of the event multi-cast delegate before checking
// for null to avoid a race-condition between the null-check and raising it.
var handler = OnMessageReceived;

if (handler != null)
{
    handler(this, new MessageReceivedEventArgs(aMessage));
}

The very slight trade-off is that it’s possible a class may get an event after it unsubscribes in a multi-threaded environment, but this is a small risk and classes should be prepared for this possibility anyway. For a more detailed discussion on this, check out this excellent Eric Lippert blog post on Events and Races.

Summary
Generic delegates give us a lot of power to make generic algorithms and classes, and the EventHandler delegate family gives us the flexibility to create events easily, without needing to redefine delegates over and over. Use them whenever you need to define events with or without specialized EventArgs.

Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Delegates, EventHandler

    Read the article

  • How to mount a blu-ray drive?

    - by Stephan Schielke
Maybe it is for the best to close this question. This has nothing to do with a Blu-ray drive in general anymore. Probably a hardware defect. I will try to test it with a Windows system and different cables again... Thx so far.

I have a Blu-ray/DVD/CD-ROM drive with SATA. Ubuntu won't find it under /dev/sd:

wodim --devices
wodim: Overview of accessible drives (1 found) :
-------------------------------------------------------------------------
0 dev='/dev/sg2' rwrw-- : 'HL-DT-ST' 'BDDVDRW CH08LS10'
-------------------------------------------------------------------------

cdrecord -scanbus
scsibus2: 2,0,0 200) 'HL-DT-ST' 'BDDVDRW CH08LS10' '2.00' Removable CD-ROM

fdisk doesn't even list it. Ubuntu only automounts blank DVDs, but neither CD-ROMs nor Blu-rays. I also changed the SATA slot, SATA cable and the power cable. The drive works with a Windows system. This happens when I try to mount:

sudo mount -t auto /dev/scd0 /media/bluray
mount: you must specify the filesystem type

I tried all filesystems there are. I also installed makemkv. It finds the drive but not the disc. Here is my /dev:

ls -al /dev
total 12 drwxr-xr-x 17 root root 4420 2011-11-25 19:36 . drwxr-xr-x 28 root root 4096 2011-11-25 07:12 .. crw------- 1 root root 10, 235 2011-11-25 19:28 autofs -rw-r--r-- 1 root root 630 2011-11-25 19:28 .blkid.tab -rw-r--r-- 1 root root 630 2011-11-25 19:28 .blkid.tab.old drwxr-xr-x 2 root root 700 2011-11-25 19:27 block drwxr-xr-x 2 root root 100 2011-11-25 19:27 bsg crw------- 1 root root 10, 234 2011-11-25 19:28 btrfs-control drwxr-xr-x 3 root root 60 2011-11-25 19:27 bus drwxr-xr-x 2 root root 3820 2011-11-25 19:28 char crw------- 1 root root 5, 1 2011-11-25 19:28 console lrwxrwxrwx 1 root root 11 2011-11-25 19:28 core -> /proc/kcore drwxr-xr-x 2 root root 60 2011-11-25 19:28 cpu crw------- 1 root root 10, 60 2011-11-25 19:28 cpu_dma_latency drwxr-xr-x 7 root root 140 2011-11-25 19:27 disk crw------- 1 root root 10, 61 2011-11-25 19:28 ecryptfs crw-rw---- 1 root video 29, 0 2011-11-25 19:28 fb0 lrwxrwxrwx 1 root root 13 2011-11-25 19:28 fd -> /proc/self/fd crw-rw-rw- 1 root root 1, 7 2011-11-25 19:28 full crw-rw-rw- 1 root fuse 10, 229 2011-11-25 19:28 fuse crw------- 1 root root 251, 0 2011-11-25 19:28 hidraw0 crw------- 1 root root 251, 1 2011-11-25 19:28 hidraw1 crw------- 1 root root 10, 228 2011-11-25 19:28 hpet lrwxrwxrwx 1 root root 14 2011-11-25 19:27 .initramfs -> /run/initramfs drwxr-xr-x 4 root root 220 2011-11-25 19:28 input crw------- 1 root root 1, 11 2011-11-25 19:28 kmsg srw-rw-rw- 1 root root 0 2011-11-25 19:28 log brw-rw---- 1 root disk 7, 0 2011-11-25 19:28 loop0 brw-rw---- 1 root disk 7, 1 2011-11-25 19:28 loop1 brw-rw---- 1 root disk 7, 2 2011-11-25 19:28 loop2 brw-rw---- 1 root disk 7, 3 2011-11-25 19:28 loop3 brw-rw---- 1 root disk 7, 4 2011-11-25 19:28 loop4 brw-rw---- 1 root disk 7, 5 2011-11-25 19:28 loop5 brw-rw---- 1 root disk 7, 6 2011-11-25 19:28 loop6 brw-rw---- 1 root disk 7, 7 2011-11-25 19:28 loop7 drwxr-xr-x 2 root root 60 2011-11-25 19:27 mapper crw------- 1 root root 10, 227 2011-11-25 19:28 mcelog crw-r----- 1 root kmem 1, 1 2011-11-25 19:28 mem drwxr-xr-x 2 root root 60 2011-11-25 19:27 net crw------- 1 root root 10, 59 2011-11-25 19:28 network_latency crw------- 1 root root 10, 58 2011-11-25 19:28 network_throughput crw-rw-rw- 1 root root 1, 3 2011-11-25 19:28 null crw-rw-rw- 1 root root 195, 0 2011-11-25 19:28 nvidia0 crw-rw-rw- 1 root root 195, 255 2011-11-25 19:28 nvidiactl crw------- 1 root root 1, 12 2011-11-25 19:28 oldmem crw-r----- 1 root kmem 1, 4
2011-11-25 19:28 port crw------- 1 root root 108, 0 2011-11-25 19:28 ppp crw------- 1 root root 10, 1 2011-11-25 19:28 psaux crw-rw-rw- 1 root tty 5, 2 2011-11-25 20:00 ptmx drwxr-xr-x 2 root root 0 2011-11-25 19:27 pts brw-rw---- 1 root disk 1, 0 2011-11-25 19:28 ram0 brw-rw---- 1 root disk 1, 1 2011-11-25 19:28 ram1 brw-rw---- 1 root disk 1, 10 2011-11-25 19:28 ram10 brw-rw---- 1 root disk 1, 11 2011-11-25 19:28 ram11 brw-rw---- 1 root disk 1, 12 2011-11-25 19:28 ram12 brw-rw---- 1 root disk 1, 13 2011-11-25 19:28 ram13 brw-rw---- 1 root disk 1, 14 2011-11-25 19:28 ram14 brw-rw---- 1 root disk 1, 15 2011-11-25 19:28 ram15 brw-rw---- 1 root disk 1, 2 2011-11-25 19:28 ram2 brw-rw---- 1 root disk 1, 3 2011-11-25 19:28 ram3 brw-rw---- 1 root disk 1, 4 2011-11-25 19:28 ram4 brw-rw---- 1 root disk 1, 5 2011-11-25 19:28 ram5 brw-rw---- 1 root disk 1, 6 2011-11-25 19:28 ram6 brw-rw---- 1 root disk 1, 7 2011-11-25 19:28 ram7 brw-rw---- 1 root disk 1, 8 2011-11-25 19:28 ram8 brw-rw---- 1 root disk 1, 9 2011-11-25 19:28 ram9 crw-rw-rw- 1 root root 1, 8 2011-11-25 19:28 random crw-rw-r--+ 1 root root 10, 62 2011-11-25 19:28 rfkill lrwxrwxrwx 1 root root 4 2011-11-25 19:28 rtc -> rtc0 crw------- 1 root root 254, 0 2011-11-25 19:28 rtc0 lrwxrwxrwx 1 root root 3 2011-11-25 19:38 scd0 -> sr0 brw-rw---- 1 root disk 8, 0 2011-11-25 19:28 sda brw-rw---- 1 root disk 8, 1 2011-11-25 19:28 sda1 brw-rw---- 1 root disk 8, 2 2011-11-25 19:28 sda2 brw-rw---- 1 root disk 8, 3 2011-11-25 19:28 sda3 brw-rw---- 1 root disk 8, 5 2011-11-25 19:28 sda5 brw-rw---- 1 root disk 8, 6 2011-11-25 19:28 sda6 brw-rw---- 1 root disk 8, 16 2011-11-25 19:28 sdb brw-rw---- 1 root disk 8, 17 2011-11-25 19:28 sdb1 crw-rw---- 1 root disk 21, 0 2011-11-25 19:28 sg0 crw-rw---- 1 root disk 21, 1 2011-11-25 19:28 sg1 crw-rw----+ 1 root cdrom 21, 2 2011-11-25 19:28 sg2 lrwxrwxrwx 1 root root 8 2011-11-25 19:28 shm -> /run/shm crw------- 1 root root 10, 231 2011-11-25 19:28 snapshot drwxr-xr-x 4 root root 280 2011-11-25 19:28 snd brw-rw----+ 1 root cdrom 11, 0 2011-11-25 19:38 sr0 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stderr -> /proc/self/fd/2 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stdin -> /proc/self/fd/0 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stdout -> /proc/self/fd/1 crw-rw-rw- 1 root tty 5, 0 2011-11-25 19:35 tty crw--w---- 1 root tty 4, 0 2011-11-25 19:28 tty0 crw------- 1 root root 4, 1 2011-11-25 19:28 tty1 crw--w---- 1 root tty 4, 10 2011-11-25 19:28 tty10 crw--w---- 1 root tty 4, 11 2011-11-25 19:28 tty11 crw--w---- 1 root tty 4, 12 2011-11-25 19:28 tty12 crw--w---- 1 root tty 4, 13 2011-11-25 19:28 tty13 crw--w---- 1 root tty 4, 14 2011-11-25 19:28 tty14 crw--w---- 1 root tty 4, 15 2011-11-25 19:28 tty15 crw--w---- 1 root tty 4, 16 2011-11-25 19:28 tty16 crw--w---- 1 root tty 4, 17 2011-11-25 19:28 tty17 crw--w---- 1 root tty 4, 18 2011-11-25 19:28 tty18 crw--w---- 1 root tty 4, 19 2011-11-25 19:28 tty19 crw------- 1 root root 4, 2 2011-11-25 19:28 tty2 crw--w---- 1 root tty 4, 20 2011-11-25 19:28 tty20 crw--w---- 1 root tty 4, 21 2011-11-25 19:28 tty21 crw--w---- 1 root tty 4, 22 2011-11-25 19:28 tty22 crw--w---- 1 root tty 4, 23 2011-11-25 19:28 tty23 crw--w---- 1 root tty 4, 24 2011-11-25 19:28 tty24 crw--w---- 1 root tty 4, 25 2011-11-25 19:28 tty25 crw--w---- 1 root tty 4, 26 2011-11-25 19:28 tty26 crw--w---- 1 root tty 4, 27 2011-11-25 19:28 tty27 crw--w---- 1 root tty 4, 28 2011-11-25 19:28 tty28 crw--w---- 1 root tty 4, 29 2011-11-25 19:28 tty29 crw------- 1 root root 4, 3 2011-11-25 19:28 tty3 crw--w---- 1 
root tty 4, 30 2011-11-25 19:28 tty30 crw--w---- 1 root tty 4, 31 2011-11-25 19:28 tty31 crw--w---- 1 root tty 4, 32 2011-11-25 19:28 tty32 crw--w---- 1 root tty 4, 33 2011-11-25 19:28 tty33 crw--w---- 1 root tty 4, 34 2011-11-25 19:28 tty34 crw--w---- 1 root tty 4, 35 2011-11-25 19:28 tty35 crw--w---- 1 root tty 4, 36 2011-11-25 19:28 tty36 crw--w---- 1 root tty 4, 37 2011-11-25 19:28 tty37 crw--w---- 1 root tty 4, 38 2011-11-25 19:28 tty38 crw--w---- 1 root tty 4, 39 2011-11-25 19:28 tty39 crw------- 1 root root 4, 4 2011-11-25 19:28 tty4 crw--w---- 1 root tty 4, 40 2011-11-25 19:28 tty40 crw--w---- 1 root tty 4, 41 2011-11-25 19:28 tty41 crw--w---- 1 root tty 4, 42 2011-11-25 19:28 tty42 crw--w---- 1 root tty 4, 43 2011-11-25 19:28 tty43 crw--w---- 1 root tty 4, 44 2011-11-25 19:28 tty44 crw--w---- 1 root tty 4, 45 2011-11-25 19:28 tty45 crw--w---- 1 root tty 4, 46 2011-11-25 19:28 tty46 crw--w---- 1 root tty 4, 47 2011-11-25 19:28 tty47 crw--w---- 1 root tty 4, 48 2011-11-25 19:28 tty48 crw--w---- 1 root tty 4, 49 2011-11-25 19:28 tty49 crw------- 1 root root 4, 5 2011-11-25 19:28 tty5 crw--w---- 1 root tty 4, 50 2011-11-25 19:28 tty50 crw--w---- 1 root tty 4, 51 2011-11-25 19:28 tty51 crw--w---- 1 root tty 4, 52 2011-11-25 19:28 tty52 crw--w---- 1 root tty 4, 53 2011-11-25 19:28 tty53 crw--w---- 1 root tty 4, 54 2011-11-25 19:28 tty54 crw--w---- 1 root tty 4, 55 2011-11-25 19:28 tty55 crw--w---- 1 root tty 4, 56 2011-11-25 19:28 tty56 crw--w---- 1 root tty 4, 57 2011-11-25 19:28 tty57 crw--w---- 1 root tty 4, 58 2011-11-25 19:28 tty58 crw--w---- 1 root tty 4, 59 2011-11-25 19:28 tty59 crw------- 1 root root 4, 6 2011-11-25 19:28 tty6 crw--w---- 1 root tty 4, 60 2011-11-25 19:28 tty60 crw--w---- 1 root tty 4, 61 2011-11-25 19:28 tty61 crw--w---- 1 root tty 4, 62 2011-11-25 19:28 tty62 crw--w---- 1 root tty 4, 63 2011-11-25 19:28 tty63 crw--w---- 1 root tty 4, 7 2011-11-25 19:28 tty7 crw--w---- 1 root tty 4, 8 2011-11-25 19:28 tty8 crw--w---- 1 root tty 4, 9 2011-11-25 19:28 tty9 crw------- 1 root root 5, 3 2011-11-25 19:28 ttyprintk crw-rw---- 1 root dialout 4, 64 2011-11-25 19:28 ttyS0 crw-rw---- 1 root dialout 4, 65 2011-11-25 19:28 ttyS1 crw-rw---- 1 root dialout 4, 74 2011-11-25 19:28 ttyS10 crw-rw---- 1 root dialout 4, 75 2011-11-25 19:28 ttyS11 crw-rw---- 1 root dialout 4, 76 2011-11-25 19:28 ttyS12 crw-rw---- 1 root dialout 4, 77 2011-11-25 19:28 ttyS13 crw-rw---- 1 root dialout 4, 78 2011-11-25 19:28 ttyS14 crw-rw---- 1 root dialout 4, 79 2011-11-25 19:28 ttyS15 crw-rw---- 1 root dialout 4, 80 2011-11-25 19:28 ttyS16 crw-rw---- 1 root dialout 4, 81 2011-11-25 19:28 ttyS17 crw-rw---- 1 root dialout 4, 82 2011-11-25 19:28 ttyS18 crw-rw---- 1 root dialout 4, 83 2011-11-25 19:28 ttyS19 crw-rw---- 1 root dialout 4, 66 2011-11-25 19:28 ttyS2 crw-rw---- 1 root dialout 4, 84 2011-11-25 19:28 ttyS20 crw-rw---- 1 root dialout 4, 85 2011-11-25 19:28 ttyS21 crw-rw---- 1 root dialout 4, 86 2011-11-25 19:28 ttyS22 crw-rw---- 1 root dialout 4, 87 2011-11-25 19:28 ttyS23 crw-rw---- 1 root dialout 4, 88 2011-11-25 19:28 ttyS24 crw-rw---- 1 root dialout 4, 89 2011-11-25 19:28 ttyS25 crw-rw---- 1 root dialout 4, 90 2011-11-25 19:28 ttyS26 crw-rw---- 1 root dialout 4, 91 2011-11-25 19:28 ttyS27 crw-rw---- 1 root dialout 4, 92 2011-11-25 19:28 ttyS28 crw-rw---- 1 root dialout 4, 93 2011-11-25 19:28 ttyS29 crw-rw---- 1 root dialout 4, 67 2011-11-25 19:28 ttyS3 crw-rw---- 1 root dialout 4, 94 2011-11-25 19:28 ttyS30 crw-rw---- 1 root dialout 4, 95 2011-11-25 19:28 ttyS31 crw-rw---- 1 root dialout 4, 
68 2011-11-25 19:28 ttyS4 crw-rw---- 1 root dialout 4, 69 2011-11-25 19:28 ttyS5 crw-rw---- 1 root dialout 4, 70 2011-11-25 19:28 ttyS6 crw-rw---- 1 root dialout 4, 71 2011-11-25 19:28 ttyS7 crw-rw---- 1 root dialout 4, 72 2011-11-25 19:28 ttyS8 crw-rw---- 1 root dialout 4, 73 2011-11-25 19:28 ttyS9 d rwxr-xr-x 3 root root 60 2011-11-25 19:28 .udev crw-r----- 1 root root 10, 223 2011-11-25 19:28 uinput crw-rw-rw- 1 root root 1, 9 2011-11-25 19:28 urandom drwxr-xr-x 2 root root 60 2011-11-25 19:27 usb crw------- 1 root root 252, 0 2011-11-25 19:28 usbmon0 crw------- 1 root root 252, 1 2011-11-25 19:28 usbmon1 crw------- 1 root root 252, 2 2011-11-25 19:28 usbmon2 crw------- 1 root root 252, 3 2011-11-25 19:28 usbmon3 crw------- 1 root root 252, 4 2011-11-25 19:28 usbmon4 crw------- 1 root root 252, 5 2011-11-25 19:28 usbmon5 crw------- 1 root root 252, 6 2011-11-25 19:28 usbmon6 crw------- 1 root root 252, 7 2011-11-25 19:28 usbmon7 crw------- 1 root root 252, 8 2011-11-25 19:28 usbmon8 drwxr-xr-x 4 root root 80 2011-11-25 19:28 v4l crw------- 1 root root 10, 57 2011-11-25 19:28 vboxdrv crw------- 1 root root 10, 56 2011-11-25 19:28 vboxnetctl drwxr-x--- 4 root vboxusers 80 2011-11-25 19:28 vboxusb crw-rw---- 1 root tty 7, 0 2011-11-25 19:28 vcs crw-rw---- 1 root tty 7, 1 2011-11-25 19:28 vcs1 crw-rw---- 1 root tty 7, 2 2011-11-25 19:28 vcs2 crw-rw---- 1 root tty 7, 3 2011-11-25 19:28 vcs3 crw-rw---- 1 root tty 7, 4 2011-11-25 19:28 vcs4 crw-rw---- 1 root tty 7, 5 2011-11-25 19:28 vcs5 crw-rw---- 1 root tty 7, 6 2011-11-25 19:28 vcs6 crw-rw---- 1 root tty 7, 128 2011-11-25 19:28 vcsa crw-rw---- 1 root tty 7, 129 2011-11-25 19:28 vcsa1 crw-rw---- 1 root tty 7, 130 2011-11-25 19:28 vcsa2 crw-rw---- 1 root tty 7, 131 2011-11-25 19:28 vcsa3 crw-rw---- 1 root tty 7, 132 2011-11-25 19:28 vcsa4 crw-rw---- 1 root tty 7, 133 2011-11-25 19:28 vcsa5 crw-rw---- 1 root tty 7, 134 2011-11-25 19:28 vcsa6 crw------- 1 root root 10, 63 2011-11-25 19:28 vga_arbiter crw-rw----+ 1 root video 81, 0 2011-11-25 19:28 video0 crw-rw-rw- 1 root root 1, 5 2011-11-25 19:28 zero sg_scan -i gives me: sudo sg_scan -i /dev/sg0: scsi0 channel=0 id=0 lun=0 [em] ATA ST31000524NS SN12 [rmb=0 cmdq=0 pqual=0 pdev=0x0] /dev/sg1: scsi0 channel=0 id=1 lun=0 [em] ATA WDC WD15EADS-00S 01.0 [rmb=0 cmdq=0 pqual=0 pdev=0x0] /dev/sg2: scsi2 channel=0 id=0 lun=0 [em] HL-DT-ST BDDVDRW CH08LS10 2.00 [rmb=1 cmdq=0 pqual=0 pdev=0x5] sg_map gives me: /dev/sg0 /dev/sda /dev/sg1 /dev/sdb /dev/sg2 /dev/scd0 lsscsi -l gives me: [0:0:0:0] disk ATA ST31000524NS SN12 /dev/sda state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30 [0:0:1:0] disk ATA WDC WD15EADS-00S 01.0 /dev/sdb state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30 [2:0:0:0] cd/dvd HL-DT-ST BDDVDRW CH08LS10 2.00 /dev/sr0 state=running queue_depth=1 scsi_level=6 type=5 device_blocked=0 timeout=30 my udf mod is: filename: /lib/modules/3.0.0-14-generic/kernel/fs/udf/udf.ko license: GPL description: Universal Disk Format Filesystem author: Ben Fennema srcversion: 6ABDE012374D96B9685B8E5 depends: crc-itu-t vermagic: 3.0.0-14-generic SMP mod_unload modversions Do I need special drivers or mods enabled? Do I need to change some BIOS settings? edit: Somehow I am now able to fire the mount command without any filesystem errors, but now I get: mount: no medium found on /dev/sr0

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
    For upcoming posts it seemed to be a good idea to dedicate some time to cluster basics, concepts and theory. This post leaves out a lot of details that would explode the article's size; should you have questions, do not hesitate to ask them in the comments. The goal here is to get some concepts straight. I can't promise to give you complete definitions of cluster, cluster agent, quorum, voting, fencing and split-brain condition, so the following is more of an explanation. Here we go.

-------- Cluster, HA, failover, switchover, scalability --------

An attempted definition of a cluster: a cluster is a set (2+) of server nodes dedicated to keeping application services alive. The nodes communicate with each other through the cluster software/framework, test and probe the health status of server nodes and services, and keep the application services available through quorum-based decisions and switchover/failover techniques. That is, should a node that runs a service unexpectedly lose functionality or connectivity, the other nodes take over and run the services, so that availability is guaranteed. To provide availability while strictly sticking to a consistent cluster configuration is the main goal of a cluster.

At this point we have to add that this defines a HA cluster, a High-Availability cluster, where the cluster nodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single-instance database. Some applications can be run in a distributed or scalable fashion. In the latter case instances of the application run actively on separate cluster nodes, serving service requests simultaneously. An example of this version could be a webserver that forwards connection requests to many backend servers in a round-robin way, or a database running in an active-active RAC setup.

-------- Cluster architecture, interconnect, topologies --------

Now, what is a cluster made of? Servers, right. These servers (the cluster nodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the cluster nodes. These connections are called interconnects. How many cluster nodes are in a cluster? There are different cluster topologies. The most simple one is a clustered-pair topology, involving only two cluster nodes; there are several more, described in the relevant documentation. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster.

Where shall these cluster nodes be placed? A very important question. The right answer is: it depends on what you plan to achieve with the cluster. Do you plan to survive only a server outage? Then you can place them right next to each other in the datacenter. Do you need to survive a datacenter outage? In that case you should of course place them at least in different fire zones, or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, the cluster nodes being several kilometers away from each other. To cover really large distances, you probably need to move to a GeoCluster, which is a different kind of animal. What is a geocluster? A Geographic Cluster in Solaris Cluster terms is actually a metacluster between two separate (locally HA) clusters.

-------- Cluster resource types, agents, resources, resource groups --------

So how does the cluster manage my applications?
The cluster needs to start, stop and probe your applications. If your application runs, the cluster needs to check regularly whether the application's state is healthy: does it respond over the network, does it have all its processes running, etc. This is called probing. If the cluster deems the application to be in a faulty state, it can try to restart it locally or decide to switch the service (stop on node A, start on node B). Starting, stopping and probing are the three actions that a cluster agent performs. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, the Oracle DB HA agent that looks after the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle WebLogic, Zones, LDoms, NFS, DNS, etc.

We also need to clarify the difference between a cluster resource and a cluster resource group. A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris Cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resource groups and switch these groups together from one node to another. To stick to the example above: to move an Oracle DB service from one node to another, you switch the group between nodes, and the agents of the cluster resources in the group will do the following.

On node A:

- shut down the DB
- unconfigure the LogicalHost IP the DB listener listens on
- unmount the filesystem

Then, on node B:

- mount the filesystem
- configure the IP
- start up the DB

-------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia --------

How do the cluster nodes agree upon their actions? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing. Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem; cluster nodes think very much alike. Still, in a production system every action has to be agreed upon. Agreeing is easy as long as the cluster nodes all behave and talk to each other over the interconnect. But if the interconnect is down, this all gets tricky and confusing. Cluster nodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I'd better take control and run the services!" The problem is, as I have already mentioned, cluster nodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances - now that is trouble. Also, in a 2-node cluster each node holds only 50% of the votes, that is, neither alone is allowed to run a cluster.

This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons." The nodes need additional votes to run the cluster, so for this requirement a 2-node cluster needs a quorum device or a quorum server. If the interconnect is gone (this is what we call a split-brain condition), both nodes start to race and try to reserve the quorum device for themselves.
They do this because the quorum device bears an additional vote that can ensure majority (50% + 1; a short sketch of this arithmetic follows at the end of this post). The node that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI-reserves it) wins the right to build and run a cluster; the other one - realizing it was late - panics or reboots to ensure the cluster configuration stays consistent.

Losing the interconnect not only endangers the availability of services, it also endangers the consistency of the cluster configuration. Just imagine node A being down while the cluster configuration changes. Now node B goes down, and node A comes up. It isn't up to date about the changes to the cluster configuration, so it will refuse to start a cluster, since that would lead to cluster amnesia: the cluster had some changes, but would now run with an older state of the cluster configuration repository, as if it had forgotten about the changes.

Also, to ensure application data consistency, the cluster node that wins the race makes sure that a server that isn't part of, or can't currently join, the cluster cannot access the devices. This procedure is called fencing, and it usually happens to storage LUNs via SCSI reservation.

Now, another important question: where do I place the quorum disk? Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south of it. You run a stretched cluster in the clustered-pair topology. Where do you place the quorum disk or server? If you put it into the north DC and that gets hit by a meteor, you lose one cluster node, which isn't a problem, but you also lose your quorum, and the south cluster node can't keep the cluster running for lack of votes. This problem can't be solved with two sites and a campus cluster. You will need a third site, either to place the quorum server at or to host a third cluster node. Otherwise, lacking majority, if you lose the site that had your quorum, you lose the cluster.

Okay, we covered the very basics. We haven't talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools, upgrade procedures - should those be of interest to you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a follow-up post too. Now I really want to move on to the second part in the series: cluster installation. Oh, as for additional sources of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html
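To make the voting arithmetic concrete, here is a minimal sketch of the majority rule described above (my illustration, not Solaris Cluster code; all names are hypothetical):

    using System;

    class QuorumMath
    {
        // A partition may form a cluster only if it holds a strict
        // majority of all configured votes (50% + 1).
        static bool CanFormCluster(int votesHeld, int totalVotes)
        {
            return votesHeld > totalVotes / 2;
        }

        static void Main()
        {
            // 2-node cluster, one vote per node, no quorum device:
            // a lone node holds 1 of 2 votes and may not form a cluster.
            Console.WriteLine(CanFormCluster(1, 2));     // False

            // Same cluster with a quorum device adding a third vote:
            // the node that wins the race to reserve it holds 2 of 3.
            Console.WriteLine(CanFormCluster(1 + 1, 3)); // True
        }
    }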

    Read the article

  • A Closable jQuery Plug-in

    - by Rick Strahl
    In my client side development I deal a lot with content that pops over the main page, be it data entry "windows", dialogs or simple pop-up notes. In most cases this behavior goes with draggable windows, but sometimes it's also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out. Here's a small jQuery plug-in that provides .closable() behavior to most elements by using either an image that is provided or – more appropriately – a CSS class to define the picture box layout.

    /*
     * Closable
     *
     * Makes selected DOM elements closable by making them
     * invisible when the close icon is clicked
     *
     * Version 1.01
     * @requires jQuery v1.3 or later
     *
     * Copyright (c) 2007-2010 Rick Strahl
     * http://www.west-wind.com/
     *
     * Licensed under the MIT license:
     * http://www.opensource.org/licenses/mit-license.php
     *
     * Support CSS:
     *   .closebox { position: absolute; right: 4px; top: 4px;
     *               background-image: url(images/close.gif);
     *               background-repeat: no-repeat;
     *               width: 14px; height: 14px; cursor: pointer;
     *               opacity: 0.60; filter: alpha(opacity="80"); }
     *   .closebox:hover { opacity: 0.95; filter: alpha(opacity="100"); }
     *
     * Options:
     *   handle        Element to place the closebox into (like, say, a header).
     *                 Use if the main element and the closebox container are
     *                 two different elements.
     *   closeHandler  Function called when the close box is clicked. Return
     *                 true to close the box, false to keep it visible.
     *   cssClass      The CSS class to apply to the close box DIV or IMG tag.
     *   imageUrl      Allows you to specify an explicit IMG url that displays
     *                 the close icon. If used, bypasses CSS image styling.
     *   fadeOut       Optionally provide a fadeOut speed. Default: no fade out.
     */
    (function ($) {
        $.fn.closable = function (options) {
            var opt = { handle: null,
                        closeHandler: null,
                        cssClass: "closebox",
                        imageUrl: null,
                        fadeOut: null };
            $.extend(opt, options);

            return this.each(function (i) {
                var el = $(this);

                // the closebox is positioned relative to the target element
                var pos = el.css("position");
                if (!pos || pos == "static")
                    el.css("position", "relative");

                var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el;
                var div = opt.imageUrl ?
                          $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") :
                          $("<div>");
                div.addClass(opt.cssClass)
                   .click(function (e) {
                       if (opt.closeHandler)
                           if (!opt.closeHandler.call(this, e))
                               return;
                       if (opt.fadeOut)
                           $(el).fadeOut(opt.fadeOut);
                       else
                           $(el).hide();
                   });
                if (opt.imageUrl) div.css("background-image", "none");
                h.append(div);
            });
        }
    })(jQuery);

The plugin can be applied against any selector that is a container (typically a div tag). The close image or close box is typically provided by way of a CSS class – .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this:

    .closebox {
        position: absolute;
        right: 4px;
        top: 4px;
        background-image: url(images/close.gif);
        background-repeat: no-repeat;
        width: 14px;
        height: 14px;
        cursor: pointer;
        opacity: 0.60;
        filter: alpha(opacity="80");
    }
    .closebox:hover {
        opacity: 0.95;
        filter: alpha(opacity="100");
    }

Alternately you can also supply an image URL, which overrides the background image in the style sheet.
I use this plug-in mostly on pop-up windows that can be closed, but it's also quite handy for remove/delete behavior in list displays. You can find this sample here if you want to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx

For closable windows it's nice to have something reusable, because in my client framework there are lots of different kinds of windows that can be created - draggables, modal dialogs, hover panels etc. - and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated, so different components can pick and choose the behavior they want. The window here is a draggable that's closable and has shadow behavior, and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag:

    $().ready(function() {
        $('#ctl00_MainContent_panEditBook')
            .closable({ handle: $('#divEditBook_Header') })
            .draggable({ dragDelay: 100, handle: '#divEditBook_Header' })
            .shadow({ opacity: 0.25, offset: 6 });
    })

The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window just closes and goes away, so no event handler is applied. Actually I cheated – the actual page's .closable is a bit more ugly in the sample, as it uses an image from a resources file:

    .closable({
        imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint',
        handle: $('#divEditBook_Header')
    })

so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel.

Maybe more interesting is applying the .closable behavior to list scenarios. For example, each of the individual items in the list display is also .closable using this plug-in. Rather than defining each item with HTML for an image, an event handler and a link, the closable behavior is attached when the client template is rendered. Here I'm using client templating, and the code that does this looks like this:

    function loadBooks() {
        showProgress();

        // Clear the content
        $("#divBookListWrapper").empty();

        var filter = $("#" + scriptVars.lstFiltersId).val();

        Proxy.GetBooks(filter, function(books) {
            $(books).each(function(i) {
                updateBook(this);
                showProgress(true);
            });
        }, onPageError);
    }

    function updateBook(book, highlight) {
        // try to retrieve the single item in the list by tag attribute id
        var item = $(".bookitem[tag=" + book.Pk + "]");

        // grab and evaluate the template
        var html = parseTemplate(template, book);

        var newItem = $(html)
            .attr("tag", book.Pk.toString())
            .click(function() {
                var pk = $(this).attr("tag");
                editBook(this, parseInt(pk));
            })
            .closable({
                closeHandler: function(e) {
                    removeBook(this, e);
                },
                imageUrl: "../../images/remove.gif"
            });

        if (item.length > 0)
            item.after(newItem).remove();
        else
            newItem.appendTo($("#divBookListWrapper"));

        if (highlight) {
            newItem
                .addClass("pulse")
                .effect("bounce", { distance: 15, times: 3 }, 400);
            setTimeout(function() { newItem.removeClass("pulse"); }, 1200);
        }
    }

Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup.
Ideally though (and these posts make me realize this often a little late), I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image. This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. The handler code can optionally return false to indicate that the window should not be closed; returning true will close the window. You can find more information about the .closable class behavior and options here: .closable Documentation

Plug-ins make Server Control JavaScript much easier

I find this plug-in immensely useful, especially as part of server control code, because it tremendously simplifies the code that has to be generated server side. This is true of plug-ins in general, which make it so much easier to create simple server code that only generates plug-in options rather than full blocks of JavaScript code. For example, here's the relevant code from the DragPanel server control which generates the .closable() behavior:

    if (this.Closable && !string.IsNullOrEmpty(DragHandleID))
    {
        string imageUrl = this.CloseBoxImage;
        if (imageUrl == "WebResource")
            imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(),
                                                     ControlResources.CLOSE_ICON_RESOURCE);

        StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

        if (!string.IsNullOrEmpty(this.DragHandleID))
            closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

        if (!string.IsNullOrEmpty(this.ClientDialogHandler))
            closableOptions.Append(",handler: " + this.ClientDialogHandler);

        if (this.FadeOnClose)
            closableOptions.Append(",fadeOut: 'slow'");

        startupScript.Append(@" .closable({ " + closableOptions + "})");
    }

The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit this is a walk in the park. In those days there was a bunch of JS generation, which was ugly to say the least.

I know a lot of folks frown on using server controls, especially when the UI is client-centric as this example is. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached more easily, and with the help of IntelliSense. Often the markup is easier, especially if you are dealing with complex associations of multiple plug-ins, which express more easily as property values on a control. Regardless of whether server controls are your thing or not, this plug-in can be useful in many scenarios. Even in simple client-only scenarios, using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a small bit as useful as I have.

Related Links

Download jquery.closable
West Wind Web Toolkit
jQuery Plug-ins

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  ASP.NET  JavaScript

    Read the article

  • The future for Microsoft

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/16/the-future-for-microsoft.aspx

Microsoft is in the process of reinventing itself. While some may argue that it's "too little, too late" or that their growing consumer-focused strategy is wrong, the truth of the situation is that Microsoft is reinventing itself into a new company. While Microsoft is now calling themselves a "devices and services" company, that's not entirely accurate. Let's look at some facts. Microsoft will always (for the long-term foreseeable future) be financially split into the following divisions:

- Windows/Operating Systems, which for FY13 made up approximately 24% of overall revenue.
- Server and Tools, which for FY13 made up approximately 26% of overall revenue.
- Enterprise/Business Products, which for FY13 made up approximately 32% of overall revenue.
- Entertainment and Devices, which for FY13 made up approximately 13% of overall revenue.
- Online Services, which for FY13 made up approximately 4% of overall revenue.

It is important to realize that hardware products like the Surface fall under the Windows/Operating Systems division, while products like the Xbox 360 fall under the Entertainment and Devices division. (Presumably other hardware, such as mice, keyboards, and cameras, also falls under the Entertainment and Devices division.) It's also unclear where Microsoft's recent acquisition of Nokia's handset division will fall, but let's assume that it will be under Entertainment and Devices as well. Now, for the sake of argument, let's assume a slightly different structure that I think is more in line with how Microsoft presents itself and how the general public sees it:

- Consumer Products and Devices, which would probably make up approximately 9% of overall revenue.
- Developer Tools, which would probably make up approximately 13% of overall revenue.
- Enterprise Products and Devices, which would probably make up approximately 47% of overall revenue.
- Entertainment, which would probably make up approximately 13% of overall revenue.
- Online Services, which would probably make up approximately 17% of overall revenue.

(Just so we're clear, in this structure hardware products like the Surface, a portion of Windows sales, and other hardware fall under the Consumer Products and Devices division. I'm assuming that more of the income for the Windows division comes from enterprise/volume licenses, so 15% of that income went to the Enterprise Products and Devices division. Most of the enterprise services, like Azure, fall under the Online Services division, so half of the Server and Tools income went there as well.)

No matter how you look at it, the bulk of Microsoft's income still comes not just from the enterprise but also from software sales, and this really shouldn't surprise anyone. So, now that the stage is set… what's the future for Microsoft? The future I see for Microsoft (again, this is just my prediction based on my own instinct, gut feel and publicly available information) is this: Microsoft is becoming a consumer-focused enterprise company. Let's look at it a different way: Microsoft is an enterprise-focused company trying to create a larger consumer presence. To a large extent, this is the exact opposite of Apple, which is really a consumer-focused company trying to create a larger enterprise presence. The major reason consumer-focused companies (like Apple) have started making inroads into the enterprise is the "bring your own device" phenomenon.
Yes, Apple has created some "game-changing" products, but their enterprise influence is still relatively small. Unfortunately (for this blog post at least), Apple reports revenue by hardware product rather than by business division, so it's not possible to do a direct comparison. However, in the interest of transparency, from Apple's Quarterly Report (filed 24 July 2013), their revenue breakdown is:

- iPhone, which for the 3 months ending 29 June 2013 made up approximately 51% of revenue.
- iPad, which for the 3 months ending 29 June 2013 made up approximately 18% of revenue.
- Mac, which for the 3 months ending 29 June 2013 made up approximately 14% of revenue.
- iPod, which for the 3 months ending 29 June 2013 made up approximately 2% of revenue.
- iTunes, Software, and Services, which for the 3 months ending 29 June 2013 made up approximately 11% of revenue.
- Accessories, which for the 3 months ending 29 June 2013 made up approximately 3% of revenue.

From this, it's pretty clear that Apple is a consumer-and-hardware-focused company. At this point, you may be asking yourself "Where is all of this going?" The answer to that lies in Microsoft's shift in company focus. They are becoming more consumer focused, but what exactly does that mean? The biggest change (at least that's been in the news lately) is the pending purchase of Nokia's handset division. This, in combination with their Surface line of tablets and the Xbox, will put Microsoft squarely in the realm of a hardware-focused company in addition to being a software-focused company. That can (and most likely will) shift the revenue split toward revenue based on software sales (both consumer and enterprise) and also hardware sales (mostly on the consumer side).

If we look at things strictly from a Windows perspective, Microsoft clearly has a lot of irons in the fire at the moment. Discounting the various product SKUs available and painting the picture with broader strokes, there are currently 5 different Windows-based operating systems:

- Windows Phone
  - Windows Phone 7.x, which runs on top of the Windows CE kernel
  - Windows Phone 8.x+, which runs on top of the Windows 8 kernel
- Windows RT, the ARM-based version of Windows 8, which runs on top of the Windows 8 kernel
- Windows (Pro), the Intel-based version of Windows 8, which runs on top of the Windows 8 kernel
- Xbox
  - The Xbox 360, which runs its own proprietary OS
  - The Xbox One, which runs its own proprietary OS, a version of Windows running on top of the Windows 8 kernel, and a proprietary "manager" OS which manages the other two

Over time, Windows Phone 7.x devices will fade, so that really leaves 4 different versions. Looking at Windows RT and Windows Phone 8.x paints an interesting story. Right now all mobile phone devices run on some sort of ARM chip, and that doesn't look like it will change any time soon. That means Microsoft has two different Windows-based operating systems for the ARM platform. Long term, it doesn't make sense for Microsoft to continue supporting that arrangement. I have long suspected (since the Surface was first announced) that Microsoft will unify these two variants of Windows, and recent speculation from some of the leading Microsoft watchers lends credence to this suspicion. It is rumored that upcoming Windows Phone releases will include support for larger screen sizes, relax the requirement to have a hardware-based back button, and continue to improve API parity between Windows Phone and Windows RT.
At the same time, Windows RT will include support for smaller screen sizes. Since both of these operating systems are based on the same core Windows kernel, it makes sense (both from a financial and a development resource perspective) for Microsoft to unify them. The user interfaces are already very similar; so similar, in fact, that visually it's difficult to tell them apart. To illustrate this, here are two screen captures: other than a few variations (the Bing News app, the picture shown in the Pictures tile and the spacing between the tiles) they are identical. The one on the left is from my Windows 8.1 laptop (which looks the same as on my Surface RT) and the one on the right is from my Windows Phone 8 Lumia 925. This pretty clearly shows that from a consumer perspective there really is no practical difference between how these two operating systems look and how you interact with them.

For the consumer, your entertainment device (Xbox One), phone (Windows Phone), mobile computing device (Surface [or some other vendor's tablet], laptop, netbook or ultrabook) and desktop computing device will all look and feel the same. While many people will denounce this consistency of user experience, I think it will be a good thing in the long term, especially for the upcoming generations. For example, my 5-year-old son knows how to use my tablet, phone and Xbox because they all feature nearly identical user experiences.

When Windows 8 was released, Microsoft allowed a Windows Store app to be purchased once and installed on as many as 5 devices. With Windows 8.1, this limit has been increased to over 50. Why is that important? If you consider that your phone, computing devices and entertainment device will all be running the same operating system (with minor differences related to the physical hardware chipset), that means I could potentially purchase my son's favorite Angry Birds game once and install it on all of the devices I own. (And for those of you wondering, it's only 7 [at the moment].)

From an app developer perspective, the story becomes even more compelling. Right now there are differences between the operating systems, but those differences are shrinking. The user interface technology for both is XAML, but there are different controls available and different user experience concepts. Some of the APIs available are the same while some are not. You can't develop a Windows Phone app that can also run on Windows (either Windows Pro or RT). With each release of Windows Phone and Windows RT, those differences become smaller and smaller. Add to this mix the Xbox One, which will also feature a Windows-based operating system and the same "modern" (tile-based) user interface, and the visible distinctions between the operating systems become even smaller.

Unifying the operating systems means one set of APIs and one code base to maintain for an app that can run on multiple devices. One code base means it's easier to add features and fix bugs, and that those changes become available on all devices at the same time. It also means a single app store, which will increase the discoverability and reach of your app and consolidate revenue and app profile management. The choice of which devices an app is available on then becomes a simple checkbox decision rather than a technical limitation. Ultimately, this means more apps available to consumers, which is always good for the app ecosystem. Is all of this just rumor, speculation and conjecture?
Of course, but it's not unfounded. As I mentioned earlier, some of the prominent Microsoft watchers are also reporting similar rumors. However, Microsoft itself has even hinted at this future with their recent organizational changes and by telling developers "if you want to develop for Xbox One, start developing for Windows 8 now." I think this pretty clearly paints the following picture:

- Microsoft is committed to the "modern" user interface paradigm.
- Microsoft is changing their release cadence (for all products, not just operating systems) to be faster and more modular.
- Microsoft is going to continue to unify their OS platforms, both from a consumer perspective and a developer perspective.

While this direction will certainly concern some people, it will excite many others. Microsoft's biggest failing has always been its failure to follow through with a strong and sustained marketing strategy that presents a consistent viewpoint and highlights what this unified and connected experience looks like and how it benefits consumers and enterprises. We've started to see some of this over the last few years, but it needs to continue and become more aggressive and consistent. In the long run, I think Microsoft will be able to pull all of these technologies and devices together into one seamless ecosystem. It isn't going to happen overnight, but my prediction is that we will be there by the end of 2016. As both a consumer and a developer, I, for one, am excited about the future of Microsoft.

    Read the article

  • NHibernate Conventions

    - by Ricardo Peres
    Introduction

It seems that nowadays everyone loves conventions! Not the ones that you go to, but the ones that you use, that is! It just happens that NHibernate also supports conventions, and we'll see exactly how. Conventions in NHibernate are supported in two ways:

- naming of tables and columns, when not explicitly indicated in the mappings;
- full domain mapping.

Naming of Tables and Columns

NHibernate has always supported the concept of a naming strategy. A naming strategy in NHibernate converts class and property names to table and column names and vice versa, when a name is not explicitly supplied. Concretely, it must be an implementation of the NHibernate.Cfg.INamingStrategy interface, of which NHibernate includes two:

- DefaultNamingStrategy: the default implementation, where each column and table is named identically to its property or class; for example, "MyEntity" will translate to "MyEntity";
- ImprovedNamingStrategy: underscores (_) are used to separate Pascal-cased fragments; for example, entity "MyEntity" will be mapped to a "my_entity" table.

The naming strategy can be defined at configuration level (the Configuration instance) by calling the SetNamingStrategy method:

    cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance);

Both the DefaultNamingStrategy and the ImprovedNamingStrategy classes offer singleton instances in the form of Instance static fields. DefaultNamingStrategy is the one NHibernate uses if you don't specify one.

Domain Mapping

In mapping by code, we have the choice of relying on conventions to do the mapping automatically. This means a class will inspect our classes and decide how they will relate to the database objects. The class that handles conventions is NHibernate.Mapping.ByCode.ConventionModelMapper, a specialization of the base by-code mapper, NHibernate.Mapping.ByCode.ModelMapper. The ModelMapper relies on an internal SimpleModelInspector to help it decide what and how to map, but the mapper lets you override its decisions. You apply code conventions like this:

    //pick the types that you want to map
    IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes();

    //conventions based mapper
    ConventionModelMapper mapper = new ConventionModelMapper();

    HbmMapping mapping = mapper.CompileMappingFor(types);

    //the one and only configuration instance
    Configuration cfg = ...;
    cfg.AddMapping(mapping);

This is a very simple example; it lacks, at least, the id generation strategy, which you can add through an event handler like this:

    mapper.BeforeMapClass += (IModelInspector modelInspector, Type type, IClassAttributesMapper classCustomizer) =>
    {
        classCustomizer.Id(x =>
        {
            //set the hilo generator
            x.Generator(Generators.HighLow);
        });
    };

The mapper will fire events like this whenever it needs information about what to do. And basically this is all it takes to automatically map your domain! It will correctly configure many-to-one and one-to-many relations, choosing bags or sets depending on your collections, will get the table and column names from the naming strategy we saw earlier, and will apply the usual defaults to all properties, such as laziness and fetch mode. However, there is at least one thing missing: many-to-many relations. The conventions mapper doesn't know how to find and configure them, which is a pity, but, alas, not difficult to overcome.
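To have a concrete domain in mind for what follows, picture a pair of entities like these (an illustrative sketch of mine, not from the original post; the matching ISet<T> properties are what the rule below is about):

    using System.Collections.Generic;

    //hypothetical entities: the mutual ISet<T> properties
    //(Author.Books and Book.Authors) form a many-to-many relation
    public class Author
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual ISet<Book> Books { get; set; }
    }

    public class Book
    {
        public virtual int Id { get; set; }
        public virtual string Title { get; set; }
        public virtual ISet<Author> Authors { get; set; }
    }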
To start, for my projects I have this rule: each entity exposes a public property of type ISet<T>, where T is, of course, the type of the other endpoint entity. Extensible as it is, NHibernate lets me implement this very easily:

    mapper.IsOneToMany((MemberInfo member, Boolean isLikely) =>
    {
        Type sourceType = member.DeclaringType;
        Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType();

        //check if the property is of a generic collection type
        if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1))
        {
            Type destinationEntityType = destinationType.GetGenericArguments().Single();

            //check if the type of the generic collection property is an entity
            if (mapper.ModelInspector.IsEntity(destinationEntityType) == true)
            {
                //check if there is an equivalent property on the target type
                //that is also a generic collection and points to this entity
                PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault();

                if (collectionInDestinationType != null)
                {
                    return (false);
                }
            }
        }

        return (true);
    });

    mapper.IsManyToMany((MemberInfo member, Boolean isLikely) =>
    {
        //a relation is many-to-many if it isn't one-to-many
        Boolean isOneToMany = mapper.ModelInspector.IsOneToMany(member);
        return (!isOneToMany);
    });

    mapper.BeforeMapManyToMany += (IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) =>
    {
        Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
        //set the mapping table column names from each source entity name plus the _Id suffix
        collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id");
    };

    mapper.BeforeMapSet += (IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) =>
    {
        if (modelInspector.IsManyToMany(member.LocalMember) == true)
        {
            propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id"));

            Type sourceType = member.LocalMember.DeclaringType;
            Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
            IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x);

            //set inverse on the relation of the alphabetically first entity name
            propertyCustomizer.Inverse(sourceType.Name == names.First());
            //set mapping table name from the entity names in alphabetical order
            propertyCustomizer.Table(String.Join("_", names));
        }
    };

We have to understand how the conventions mapper thinks:

- For each collection of entities found, it will ask the mapper if it is a one-to-many; in our case, if the collection is a generic one that has an entity as its generic parameter, and the generic parameter type has a similar collection pointing back, then it is not a one-to-many.
- Next, the mapper will ask if the collection that it now knows is not a one-to-many is a many-to-many.
- Before a set is mapped, if it corresponds to a many-to-many, we set its mapping table.
Now, this is tricky: because we have no way to maintain state, we sort the names of the two endpoint entities and combine them with a "_". For the alphabetically first entity, we set its relation to inverse - remember, on a many-to-many relation only one endpoint must be marked as inverse - and finally we set the column name as the name of the entity with an "_Id" suffix. Before the many-to-many relation is processed, we set the column name as the name of the other endpoint entity with the "_Id" suffix, as we did for the set.

And that's it. With these rules, NHibernate will now happily find and configure many-to-many relations, as well as all the others. You can wrap this in a new conventions mapper class, so that it is more easily reusable:

    public class ManyToManyConventionModelMapper : ConventionModelMapper
    {
        public ManyToManyConventionModelMapper()
        {
            base.IsOneToMany((MemberInfo member, Boolean isLikely) =>
            {
                return (this.IsOneToMany(member, isLikely));
            });

            base.IsManyToMany((MemberInfo member, Boolean isLikely) =>
            {
                return (this.IsManyToMany(member, isLikely));
            });

            base.BeforeMapManyToMany += this.BeforeMapManyToMany;
            base.BeforeMapSet += this.BeforeMapSet;
        }

        protected virtual Boolean IsManyToMany(MemberInfo member, Boolean isLikely)
        {
            //a relation is many-to-many if it isn't one-to-many
            Boolean isOneToMany = this.ModelInspector.IsOneToMany(member);
            return (!isOneToMany);
        }

        protected virtual Boolean IsOneToMany(MemberInfo member, Boolean isLikely)
        {
            Type sourceType = member.DeclaringType;
            Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType();

            //check if the property is of a generic collection type
            if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1))
            {
                Type destinationEntityType = destinationType.GetGenericArguments().Single();

                //check if the type of the generic collection property is an entity
                if (this.ModelInspector.IsEntity(destinationEntityType) == true)
                {
                    //check if there is an equivalent property on the target type
                    //that is also a generic collection and points to this entity
                    PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault();

                    if (collectionInDestinationType != null)
                    {
                        return (false);
                    }
                }
            }

            return (true);
        }

        protected virtual new void BeforeMapManyToMany(IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer)
        {
            Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
            //set the mapping table column names from each source entity name plus the _Id suffix
            collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id");
        }

        protected virtual new void BeforeMapSet(IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer)
        {
            if (modelInspector.IsManyToMany(member.LocalMember) == true)
            {
                propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id"));

                Type sourceType = member.LocalMember.DeclaringType;
                Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First();
                IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x);

                //set inverse on the relation of the alphabetically first entity name
                propertyCustomizer.Inverse(sourceType.Name == names.First());
                //set mapping table name from the entity names in alphabetical order
                propertyCustomizer.Table(String.Join("_", names));
            }
        }
    }

Conclusion

Of course, there is much more to mapping than this; I suggest you look at all the events and functions offered by the ModelMapper to see where you can hook in to make it behave the way you want. If you need any help, just let me know!

    Read the article

  • C#: My World Clock

    - by Bruce Eitman
    [Placeholder: I will post the entire project soon]

I have been working on cleaning my office of 8 years of stuff from several engineers working on many projects. It turns out that we have a few extra single board computers with displays, so at the end of the day last Friday I thought: why not create a little application to display the time, you know, a clock. How difficult could that be? It turns out that it is quite simple – until I decided to gold plate the project by adding time displays for our offices around the world.

I decided to use C#, which actually made creating the main clock quite easy. The application was simply a text box and a timer. I set the timer to fire a couple of times a second, and when it fires I use a DateTime object to get the current time and retrieve a string to display. And I could have been done, but of course that gold plating came up. It seemed simple enough: offset the time from the local time to the location that I want the time for and display it. Sure enough, I had the time displayed for the UK, Italy, Kansas City, Japan and China in no time at all.

But it is October, and those of us still stuck with Daylight Savings Time know that the clocks are about to change. My first attempt was to simply check whether the local time was DST or standard time, then change the offset for China, which doesn't have Daylight Savings Time. If you know anything about the time changes around the world, you already know that my plan is flawed – in a big way. It turns out that the transitions in and out of DST take place at different times around the world. If you didn't know that, do a quick search for "Daylight Savings" and you will find many web sites dedicated to tracking the time change dates and times.

Now the real challenge of this application: how do I programmatically find out when the time changes occur and handle them correctly? After a considerable amount of research, it turns out that the solution is to read the data from the registry and parse it to figure out when the time changes occur.

Reading Time Change Information from the Registry

Reading the data from the registry is simple; using the data is a little more complicated. First, reading from the registry can be done like:

    byte[] binarydata = (byte[])Registry.GetValue(
        "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Time Zones\\Eastern Standard Time",
        "TZI", null);

Where I have hardcoded the registry key for example purposes; in the end I will use some variables. We now have a binary blob with the data, but it needs to be converted to use the real data. To start we will need a couple of structs to hold the data and make it usable: a SYSTEMTIME and a REG_TZI_FORMAT. You may have expected that we would need a TIME_ZONE_INFORMATION struct, but we don't. The data is stored in the registry as a REG_TZI_FORMAT, which excludes some of the values found in TIME_ZONE_INFORMATION.

    struct SYSTEMTIME
    {
        internal short wYear;
        internal short wMonth;
        internal short wDayOfWeek;
        internal short wDay;
        internal short wHour;
        internal short wMinute;
        internal short wSecond;
        internal short wMilliseconds;
    }

    struct REG_TZI_FORMAT
    {
        internal long Bias;
        internal long StdBias;
        internal long DSTBias;
        internal SYSTEMTIME StandardStart;
        internal SYSTEMTIME DSTStart;
    }

Now we need to convert the binary blob to a REG_TZI_FORMAT. To do that I created the following helper functions:

    private void BinaryToSystemTime(ref SYSTEMTIME ST, byte[] binary, int offset)
    {
        ST.wYear = (short)(binary[offset + 0] + (binary[offset + 1] << 8));
        ST.wMonth = (short)(binary[offset + 2] + (binary[offset + 3] << 8));
        ST.wDayOfWeek = (short)(binary[offset + 4] + (binary[offset + 5] << 8));
        ST.wDay = (short)(binary[offset + 6] + (binary[offset + 7] << 8));
        ST.wHour = (short)(binary[offset + 8] + (binary[offset + 9] << 8));
        ST.wMinute = (short)(binary[offset + 10] + (binary[offset + 11] << 8));
        ST.wSecond = (short)(binary[offset + 12] + (binary[offset + 13] << 8));
        ST.wMilliseconds = (short)(binary[offset + 14] + (binary[offset + 15] << 8));
    }

    private REG_TZI_FORMAT ConvertFromBinary(byte[] binarydata)
    {
        REG_TZI_FORMAT RTZ = new REG_TZI_FORMAT();

        RTZ.Bias = binarydata[0] + (binarydata[1] << 8) + (binarydata[2] << 16) + (binarydata[3] << 24);
        RTZ.StdBias = binarydata[4] + (binarydata[5] << 8) + (binarydata[6] << 16) + (binarydata[7] << 24);
        RTZ.DSTBias = binarydata[8] + (binarydata[9] << 8) + (binarydata[10] << 16) + (binarydata[11] << 24);
        BinaryToSystemTime(ref RTZ.StandardStart, binarydata, 4 + 4 + 4);
        BinaryToSystemTime(ref RTZ.DSTStart, binarydata, 4 + 16 + 4 + 4);

        return RTZ;
    }

I am the first to admit that there may be a better way to get the settings from the registry and into the REG_TZI_FORMAT, but I am not a great C# programmer, which I have said before on this blog. So sometimes I choose brute force over elegance.

Now that we have the bias information and the start date information, we can start to make sense of it. The bias is an offset, in minutes, from local time (if already in local time for the time zone in question) to get to UTC – or, as Microsoft defines it: UTC = local time + bias. Standard bias is an offset to adjust for standard time, which I think is usually zero. And DST bias is an offset to adjust for daylight savings time. Since we don't have the local time for a time zone other than the one that the computer is set to, what we first need to do is convert local time to UTC, which is simple enough using:

    DateTime.Now.ToUniversalTime();

Then, since we have UTC, we need to do a little math to alter the formula to: local time = UTC – bias. In other words, we need to subtract the bias minutes. (For example, Eastern Standard Time has a bias of 300, so its local time is UTC minus 300 minutes, or five hours behind UTC.)

I am ahead of myself though; the standard and DST start dates really aren't dates. Instead they indicate the month, day of week and week number of the time change. The wDay member of SYSTEMTIME will be set to the week number of the date change, indicating that the change happens on the first, second… day of week of the month. So we need to convert them to dates so that we can determine which bias to use, and when to change to a different bias. To do that, I wrote the following function:

    private DateTime SystemTimeToDateTimeStart(SYSTEMTIME Time, int Year)
    {
        DayOfWeek[] Days = { DayOfWeek.Sunday, DayOfWeek.Monday, DayOfWeek.Tuesday, DayOfWeek.Wednesday, DayOfWeek.Thursday, DayOfWeek.Friday, DayOfWeek.Saturday };

        DateTime InfoTime = new DateTime(Year, Time.wMonth,
            Time.wDay == 1 ? 1 : ((Time.wDay - 1) * 7) + 1,
            Time.wHour, Time.wMinute, Time.wSecond, DateTimeKind.Utc);

        DateTime BestGuess = InfoTime;
        while (BestGuess.DayOfWeek != Days[Time.wDayOfWeek])
        {
            BestGuess = BestGuess.AddDays(1);
        }

        return BestGuess;
    }

SystemTimeToDateTimeStart takes two parameters: a SYSTEMTIME and a year. The reason is that we will try this year and next year, because we are interested in start dates that are in the future, not the past. The function starts by getting a new DateTime with the first possible date and then looks for the correct date. Using the start dates, we can then determine the correct bias to use, and the next date that the time will change:

    NextTimeChange = StandardChange;
    CurrentBias = TimezoneSettings.Bias + TimezoneSettings.DSTBias;
    if (DSTChange.Year != 1 && StandardChange.Year != 1)
    {
        if (DSTChange.CompareTo(StandardChange) < 0)
        {
            NextTimeChange = DSTChange;
            CurrentBias = TimezoneSettings.StdBias + TimezoneSettings.Bias;
        }
    }
    else
    {
        // I don't like this, but it turns out that China Standard Time
        // has a DSTBias of -60 on every Windows system that I tested.
        // So, if there are no DST transitions, just use the Bias without
        // any offset
        CurrentBias = TimezoneSettings.Bias;
    }

Note that some time zones do not change time, in which case the years will remain set to 1. Further, I found that the registry settings are actually wrong in that the DST bias is set to -60 for China even though there is no DST in China, so I ignore the standard and DST bias for those time zones.

There is one thing that I have not solved, and don't plan to solve: if the time zone for this computer changes, this application will not update the clock using the new time zone. I tell you this because you may need to deal with it – I do not, because I won't let the user get to the control panel applet to change the time zone.

Copyright © 2012 – Bruce Eitman All Rights Reserved

    Read the article

  • Surface RT: To Be Or Not To Be (Part 1)

    - by smehaffie
    So the Surface RT has been out for 9 months and Microsoft just declared a $900 million write-down. So how did this happen, and what does it mean for Microsoft's efforts to break into the tablet market? I have been thinking a lot about most of the information below since the Surface product line was released. If you are looking for a "Microsoft Is Dead" story, then don't read any further. But if you want an honest look at what I think led Microsoft to this point and what I think can be done to make Surface RT devices better, then please continue reading.

    What Led Microsoft To The $900 Million Write-Down

    Surface Unveiling: Microsoft totally missed the boat when they unveiled the Surface product line on June 18th, 2012. Microsoft should've been ready to post the specifications of both devices that night. Microsoft should've had a site up and running right after the event so people could pre-order the devices. This would have given them a good idea what the interest was in each device. They could also have used this data to make a better estimate of the number of units to have available for the launch and beyond. They also lost out on taking advantage of the excitement generated by the Surface RT and Surface Pro announcement. They could have thrown in a free touch keyboard for anyone who pre-ordered. The advertising should have started right after the announcement and gotten bigger as launch day approached. Push for as many pre-orders as possible and build excitement for the launch.

    Actual Launch (Surface RT): By this time all the excitement from the initial announcement was gone, except among the Microsoft faithful. Microsoft should have been ready to sell the Surface in as many markets as possible at launch. The limited market release was a real letdown for a lot of people. A limited release right after the initial announcement is understandable, but not at the official launch of the product. Microsoft overpriced the device, and now they are lowering the price to what it should have been to start with. The $349 price is within the range I suggested before pricing was announced (Surface Tablets: The Price Must Be Right). Limited ordering options online were also a killer. Users should have been able to buy the base unit of each device and then add on whatever keyboard they wanted (this applies more to the Surface Pro). There should also have been a place where users could order any additional add-ons they wanted to buy (covers, extra power supplies, etc.). Marketing was better, and the dancing "Click In" commercial was cool, but the ads comparing the iPad with Siri should have been on the air from day one of the announcement (or at least the launch). Consumers want to know why your tablet is better, not just that it has a clickable keyboard and a built-in kickstand. They could also have compared it to some of the other mid-range tablets if they had not overpriced it to begin with.

    Stock Applications (Mail, People, Calendar, Music, Video, Reader and IE): This is where Microsoft really blew it. They had all the time in the world to make these applications best of breed, and instead we got applications that seemed thrown together. Some updates have made these applications better, but they are all still lacking features that should have been there from day one. This did not help to enhance a new user's experience any.
    ** I will admit that the applications that were data driven were first-class citizens, and that makes it even more perplexing why MS could knock it out of the park with those (Weather, Travel, Finance, Bing, etc.) and fail so miserably on the core applications users would use the most on a tablet.

    Desktop on Tablet: The desktop is just so out of place on a tablet. I understand it was needed for Office, but I think it would have been better to not have the desktop in Windows RT at all, and instead open up the Office applications in full screen mode, in a desktop shell (same goes for IE11). That way the user wouldn't realize they are leaving Metro and going to the desktop. The other option would have been to just not include Office on Windows RT devices. Instead they could have made awesome Windows Store apps for Word, Excel, OneNote and PowerPoint. In addition, they could have made the stock Mail, People, and Calendar applications contain all the functions that Outlook gives desktop users. Having some of the settings in desktop mode and others under "Change PC Settings" made Windows RT seem unfinished and rushed to market.

    What Can Be Done To Make Windows RT Based Tablets Better (At least in my opinion)

    - Either eliminate the desktop altogether from Windows RT, or at least make the user experience better by hiding the fact that the user is running Office/IE in the desktop. Personally I'd like them to totally get rid of it and just make awesome Windows Store application versions of Word, Excel, PowerPoint & OneNote. This might also make the OS smaller and give the user more available disk space. I doubt there will ever be Windows Store app versions of Office, but I still think it is a good idea.
    - Make it so users can easily direct their documents, pictures, videos and music to their extra storage and can access these files from the standard libraries. A user should not have to create a VHD on their microSD card or create symbolic links to get this to work properly. Most consumers would not be able to do this. Users then get frustrated when they run out of room on their main storage, because nothing is automatically saved to their microSD card when saved to libraries. This is a major bug that needs to be fixed; otherwise Microsoft's selling point of having a microSD slot is worthless.
    - Allow users to uninstall and re-install any of the Office products that come with the Surface. That way people can free up storage space by uninstalling the Office applications they do not need. Everyone's needs are different, so make the options flexible. Don't take up storage space for applications the user will not use.
    - Make the core applications the "cream of the crop" of Windows App Store applications. They should set the bar for all other Store applications.
    - Improve performance as much as possible; if it seems sluggish on a tablet, consumers will not buy it.
    - Price the next line of Surface products very aggressively, to undercut not only the iPad but also the low-end Android tablets (Nook, Kindle Fire, Nexus, etc.).
    - Give developers incentives to write quality applications for the devices. Don't reward developers for cranking out cookie-cutter, low-quality applications. I'd even suggest Microsoft consider implementing some new Store certification guidelines to stop these types of applications from being published.
    - Allow users to easily move the recovery disk partition between their microSD card and main storage.
    My Predictions for the Surface RT and Windows RT

    I honestly think that even with all the missteps MS has made since the announcement of the Surface product line, they are on the right path. I was excited about the Surface tablets when they were announced, and I still am. Truth be told, Windows 8 on a tablet (aka Windows RT) is better than both iOS and Android. My nephew, who is an Apple fan boy, told me after he saw and used Windows 8 (he got the beta running on his iPad) that Windows 8 kicked Apple's butt as a tablet OS. So there is hope for all Windows RT based tablets. I agree with my nephew, and that is why whenever anyone asks me about my Surface, I love showing it off and recommending it.

    The 6 keys to gaining market share in the tablet market are:
    1. Aggressive pricing by both Microsoft and their OEMs.
    2. Good quality devices put out by Microsoft and their OEMs (there are some out there, but not enough).
    3. Marketing, marketing, marketing from both Microsoft and their OEMs (need more ads showing why Windows based tablets are better than iPads and Android tablets).
    4. Getting Windows tablets in retail stores all over, and giving sales people incentives to sell them. Consumers like to try electronics out before they buy them, and most will listen to what the sales person suggests. Microsoft needs sales people in retail stores directing people to buy Windows based tablets over iPads and Android tablets. I think the Microsoft Stores within Best Buy are a good start, but they also need to get prominent displays in Walmart, Target, etc.
    5. Releasing a smaller form factor Surface. Hopefully the 8"-10" next generation Surface is not a rumor.
    6. Making "Surface" the brand name for all Microsoft tablets and hybrid devices that they come out with. They cannot change the name with each new release. Make Surface synonymous with quality, the same way that iPad is for Apple.

    Well, that is my 2 cents on the subject. Let me know your thoughts by leaving a comment below. Soon to follow will be my thoughts on the Surface Pro, so keep an eye out for it.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #049

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    Two Connections Related Global Variables Explained – @@CONNECTIONS and @@MAX_CONNECTIONS
    @@CONNECTIONS returns the number of attempted connections, either successful or unsuccessful, since SQL Server was last started. @@MAX_CONNECTIONS returns the maximum number of simultaneous user connections allowed on an instance of SQL Server. The number returned is not necessarily the number currently configured.

    Query Editor – Microsoft SQL Server Management Studio
    This post may be very simple for most of the users of SQL Server 2005. Earlier this year, I received one question many times – where is Query Analyzer in SQL Server 2005? I wrote a small post about it and pointed many users to that post – SQL SERVER – 2005 Query Analyzer – Microsoft SQL SERVER Management Studio. Recently I have been receiving similar questions.

    OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE
    SQL Server 2005 has a new OUTPUT clause, which is quite useful. The OUTPUT clause has access to the inserted and deleted tables (virtual tables) just like triggers. The OUTPUT clause can be used to return values to the client. It can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements, and it can populate a table variable, a permanent table, or a temporary table. Even though @@IDENTITY still works with SQL Server 2005, I find the OUTPUT clause very easy and powerful to use. Let us understand the OUTPUT clause using an example.

    Find Name of The SQL Server Instance
    Based on the database server, stored procedures have to run different logic. We came up with two different solutions. 1) When the database schema had changed very much, we wrote a completely new stored procedure and deprecated the older version once it was not needed. 2) When the logic depended on the server name, we used the global variable @@SERVERNAME. It was very convenient while writing a migration script which depended on the server name for the same database.

    Explanation of TRY…CATCH and ERROR Handling With RAISEERROR Function
    One of the developers at my company thought that we could not use the RAISERROR function in the new SQL Server 2005 TRY…CATCH feature. When asked for an explanation, he offered the SQL SERVER – 2005 Explanation of TRY…CATCH and ERROR Handling article as an excuse, suggesting that I did not give an example of RAISERROR with TRY…CATCH. We all thought it was funny. Just to keep the records straight: TRY…CATCH can certainly use the RAISERROR function.

    Different Types of Cache Objects
    Several kinds of objects can be stored in the procedure cache. Compiled plans: when the query optimizer finishes compiling a query plan, the principal output is the compiled plan. Execution contexts: while executing a compiled plan, SQL Server has to keep track of information about the state of execution. Cursors: cursors track the execution state of server-side cursors, including the cursor's current location within a resultset. Algebrizer trees: the Algebrizer's job is to produce an algebrizer tree, which represents the logical structure of a query.

    Open SSMS From Command Prompt – sqlwb.exe Example
    This article is written by request and suggestion of a Sr. Web Developer at my organization. Due to the nature of this article, most of the content is referenced from Books Online. sqlwb is a command prompt utility which opens SQL Server Management Studio; the sqlwb command does not run queries from the command prompt. The sqlcmd utility runs queries from the command prompt – read the article for more information.

    2008

    Puzzle – Solution – Computed Columns Datatype Explanation
    Just a day before, I wrote the article SQL SERVER – Puzzle – Computed Columns Datatype Explanation, which was inspired by SQL Server MVP Jacob Sebastian. I suggest that before continuing with this article you read the original puzzle question, SQL SERVER – Puzzle – Computed Columns Datatype Explanation. The question was: if the computed column was of datatype TINYINT, how do you create a computed column of datatype INT?

    2008 – Find If Index is Being Used in Database
    Very often I get asked how to find out whether an index is being used in the database or not. If a database has many indexes and not all of them are used, it can adversely affect performance. A higher number of indexes slows down INSERT / UPDATE / DELETE operations but can speed up SELECT operations. It is recommended to drop any unused indexes from the table to improve performance.

    2009

    Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries
    If you want to see what's going on here, I think you need to shift your point of view from an implementation-centric view to an ANSI point of view. ANSI does not guarantee processing order. Figure 2 is interesting, but it will be potentially misleading if you don't understand the ANSI rule-set SQL Server operates under in most cases. Implementation thinking can certainly be useful at times, when you really need that multi-million row query to finish before the backup fires off, but in this case it's counterproductive to understanding what is going on.

    SQL Server Management Studio and Client Statistics
    Client Statistics are very important. Many times, people relate a query's execution plan to its query cost. This is not a good comparison. The two parameters are different, and they are not always related. It is possible that the query cost of a statement is low, but the amount of data returned is considerably large, which causes the query to run slowly. How do we know if a query is retrieving a large amount of data or very little data?

    2010

    I encourage all of you to go through the complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post pointing to your blog post.
    SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1
    SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2
    SQL SERVER – Index Created on View not Used Often – Limitation of the View 3
    SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4
    SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5
    SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6
    SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7
    SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8
    SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9
    SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10
    SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11

    SQL SERVER – Get Query Running in Session
    I was recently looking for the syntax to get the query running in a particular session. I always remembered the syntax and had actually written it down before, but somehow it was not coming to mind quickly this time. I searched online and ended up on my own article written last year, SQL SERVER – Get Last Running Query Based on SPID. I felt that I was getting old because I forgot this really simple syntax.

    Find Total Number of Transactions on Interval
    In one of my recent performance tuning assignments I was asked how someone would know how many transactions are happening on a server during a certain interval. I had a handy script for exactly that. The following script displays the transactions that happened on the server at an interval of one minute. You can change the WAITFOR DELAY to any other interval and it should work.

    2011

    Here are two DMVs which were newly introduced in SQL Server 2012 and provide vital information about SQL Server.
    DMV – sys.dm_os_volume_stats – Information about operating system volumes
    DMV – sys.dm_os_windows_info – Information about the operating system

    SQL Backup and FTP – A Quick and Handy Tool
    I have used this tool extensively since 2009, on numerous occasions, and found it to be very impressive. What separates it from the crowd the most is its apparent simplicity and speed. When I install SQLBackupAndFTP and configure backups – all in 1 or 2 minutes – my clients are always impressed.

    Quick Note about JOIN – Common Questions and Simple Answers
    In this blog post we are going to talk about JOIN and lots of things related to it. I recently started office hours to answer questions and issues from the community. I receive so many questions that are related to JOIN. I will share a few of them over here. Most of them are basic, but note that the basics are of great importance.

    2012

    Importance of User Without Login
    Question: "In recent versions of SQL Server we can create a user without a login. What is the use of it?" Great question indeed. Let me first attempt to answer this question, but after reading my answer I need your help: I want you to help him as well by adding more value to it.

    Preserve Leading Zero While Copying to Excel from SSMS
    Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote that post there has been plenty of interest generated on this subject. There are a few questions I keep on getting about it. One of the questions is how to get the leading zero preserved while copying data from SSMS to Excel.
    Well, it is almost the same way as my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server.

    Solution – 2 T-SQL Puzzles – Display Star and Shortest Code to Display 1
    Earlier on this blog we had asked two puzzles. The response from all of you was nothing but amazing. I have received 350+ responses. Many are valid, and many were indeed something I had not thought about. I strongly suggest you read all the puzzles and their answers here – trust me, if you start reading the comments you will not stop till you read every single comment. Seriously, trust me on it. Personally I have learned a lot from it.

    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 – Video
    http://www.youtube.com/watch?v=TvlYy-TGaaA

    Importance of User Without Login – T-SQL Demo Script
    Earlier I wrote a blog post about SQL SERVER – Importance of User Without Login, and my friend and SQL expert Vinod Kumar has written an excellent follow-up blog post about Contained Databases inside SQL Server 2012. Now lots of people have asked me if I could also explain the same concept again, so here is a small demonstration of it. Let me show you how a user without login can help. Before we continue on this subject I strongly recommend that you read my earlier blog post here. In the following demo I am going to demonstrate the following situation:
    - Login using the System Admin account
    - Create a user without login
    - Checking access
    - Impersonate the user without login
    - Checking access
    - Revert impersonation
    - Give permission to the user without login
    - Impersonate the user without login
    - Checking access
    - Revert impersonation
    - Clean up

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
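    The OUTPUT clause entry in the 2007 section above ends with "Let us understand the OUTPUT clause using an example", but the example itself lives in the linked article. As a stand-in, here is a minimal sketch of the idea driven from C# via ADO.NET; the temp table, column names, and connection string are hypothetical and not taken from the original article.

        // Minimal sketch (not from the original article): the OUTPUT clause
        // returns the rows actually affected by an INSERT, which ADO.NET reads
        // back like a SELECT result. Table, columns, and connection string are
        // hypothetical.
        using System;
        using System.Data.SqlClient;

        class OutputClauseSketch
        {
            static void Main()
            {
                string connStr = "Server=.;Database=SampleDb;Integrated Security=true;";
                string sql =
                    "CREATE TABLE #Books (Id INT IDENTITY, Title NVARCHAR(100)); " +
                    "INSERT INTO #Books (Title) " +
                    "OUTPUT INSERTED.Id, INSERTED.Title " +
                    "VALUES (@title);";

                using (SqlConnection conn = new SqlConnection(connStr))
                using (SqlCommand cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@title", "SQL Server Interview Questions");
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("Inserted row {0}: {1}",
                                reader.GetInt32(0), reader.GetString(1));
                    }
                }
            }
        }

    The same OUTPUT list works with UPDATE and DELETE (via the DELETED virtual table), which is what makes it handy for identifying exactly which rows a statement touched.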

    Read the article

  • The Red Gate and .NET Reflector Debacle

    - by Rick Strahl
    About a month ago Red Gate – the company that owns the .NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid-for license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works. In case you've been living under a rock and you've never looked at Reflector, here's what it looks like drilled into an assembly from disk with some disassembled source code showing: Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view, so it's extremely easy to navigate and follow the code flow even in this static assembly-only view. For many years Lutz kept the tool up to date and added more features, gradually improving an already amazing tool and making it better. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with a prominent Red Gate promo on them, life with Reflector went on. The product didn't die and it didn't go commercial or to a charge model. When .NET 4.0 came out it still continued to work, mostly because the .NET feature set doesn't drastically change how types behave. Then a month back Red Gate started making noise about a new version – Version 7 – which would be commercial. No more free version - and a shit storm broke out in the community. Now normally I'm not one to be critical of companies trying to make money from a product, much less for a product that's as incredibly useful as Reflector. There isn't a day in .NET development that goes by for me where I don't fire up Reflector. Whether it's for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently I've been doing a lot of Interop work with a non-.NET application that needs to access .NET components, and Reflector has been immensely valuable to me (and my clients) in figuring out the exact type signatures required for calling .NET components in assemblies. In short, Reflector is an invaluable tool to me. Ok, so what's the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn't going to kill any developer. If there's any tool in .NET that's worth $39 it's Reflector, right? Right, but that's not the problem here. The problem is how Red Gate went about moving the product to commercial, which borders on the downright bizarre. It's almost as if somebody in management wrote a slogan: "How can we piss off the .NET community in the most painful way we can?" And at that it seems Red Gate has utterly succeeded. People are rabid, and for once I think that this outrage isn't exactly misplaced. Take a look at the message thread that Red Gate linked from the download page. Not only is Version 7 going to be a paid commercial tool, but the older versions of Reflector won't be available any longer. Not only that, but older versions that are already in use will also continually try to update themselves to the new paid version – which, when installed, will then expire unless registered properly. There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven't seen that myself, so I'm not sure if that's true). In other words, Red Gate is trying to make damn sure they're getting your money if you attempt to use Reflector. There's a lot of temptation there. Think about the millions of .NET developers out there and all of them possibly upgrading – that's a nice chunk of change that Red Gate's sitting on. Even with all the community backlash, these guys are probably making some bank right now just because people need to get on with life. Red Gate also put up a Feedback link on the download page – which, not surprisingly, is chock full of hate mail condemning the move. Oddly there's not a single response to any of those messages by the Red Gate folks, except when it concerns license questions for the full version. It puzzles me what purpose that link serves, other than as yet another complete example of failing to understand how to handle customer relations. There's no doubt that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly, without pissing off the entire community and causing so much ill-will. People are pissed off, and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful!

    Why do Companies do boneheaded stuff like this?
    Red Gate's original decision to buy Reflector was hotly debated, but at the time most of what would happen was speculation. I thought it was a smart move for any company that is in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis? Exploiting that marketing, with the goodwill of providing a free tool, breeds positive feedback that hopefully has a good effect on the company's visibility and the products it sells. Instead Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a community service for free while still getting valuable marketing. What's so puzzling about this boneheaded escapade is that the company doesn't need to resort to underhanded tactics like what they are trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, SQL Data Compare and ANTS Profiler on a regular basis, and all of these tools are essential in my toolbox. They certainly work much better than the tools that are in the box with Visual Studio. Chances are that if Reflector 7 added useful features, I would have been more than happy to shell out my $39 to upgrade when the time was right.

    It's Expensive to give away stuff for Free
    At the same time, this episode shows some of the big problems that come with 'free' tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive, and the payback is often very intangible, if there is any at all. Those that rely on donations or other voluntary compensation find that the amount contributed is so minuscule as to not matter at all. Yet at the same time, I bet most of those clamouring the loudest on that Red Gate feedback page about Reflector no longer being free have probably NEVER made a donation to any open source project or free tool, ever. The expectation of free these days is just too great – which is a shame, I think. There's a lot to be said for paid software and having somebody you can hold responsible because you gave them some money. There's an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working on and improving products. Reasons for giving away stuff are many, but often it's a naïve desire to share things when things are simple. At first it might be no problem to volunteer time and effort, but as products mature the fun goes out of it, and as the reality of product maintenance kicks in, developers want to get something back for the time and effort they're putting into doing non-glamorous work. That's when products die or languish, and this is painful for all to watch. For Red Gate however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: visibility and possible positioning of their products, although they seem to have mostly ignored that option. On the other hand, they started this off pretty badly even two and a half years back, when they acquired Reflector from Lutz with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what folks in management are thinking – the sad part is, from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this, and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades…

    Alternatives are coming
    For me personally the single license isn't a problem, but I actually have a tool that I sell (an interop Web Service proxy generation tool) to customers, and one of the things I recommend to use with it has been Reflector, to view assembly information and to find which Interop classes to instantiate from the non-.NET environment. It's been nice to use Reflector for this with its small footprint and zero-configuration installation. But now, with V7 becoming a paid tool, that option is not going to be available anymore. Luckily it looks like the .NET community is jumping to it and trying to fill the void. Amidst the Red Gate outrage a new tool called ILSpy has sprung up, providing at least some of the core functionality of Reflector as open source. It looks promising going forward, and I suspect there will be a lot more support and interest in this project now that Reflector has gone over to the 'dark side'… © Rick Strahl, West Wind Technologies, 2005-2011
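    As an aside for readers who have never opened Reflector: plain System.Reflection gives a rough flavor of the type and signature browsing the tool automates, though without the decompilation to source that makes Reflector (and now ILSpy) so valuable. A minimal sketch of my own follows; the assembly path is a hypothetical example.

        // Minimal sketch: enumerate public types and method signatures in an
        // assembly with System.Reflection. No decompilation of method bodies,
        // which is the part Reflector/ILSpy add. The path is hypothetical.
        using System;
        using System.Reflection;

        class AssemblyInspectorSketch
        {
            static void Main()
            {
                Assembly asm = Assembly.LoadFrom(@"C:\temp\SomeLibrary.dll");

                foreach (Type type in asm.GetExportedTypes())
                {
                    Console.WriteLine(type.FullName);

                    // Only methods declared on this type, not inherited ones.
                    foreach (MethodInfo method in type.GetMethods(
                        BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
                    {
                        Console.WriteLine("  {0} {1}({2})",
                            method.ReturnType.Name,
                            method.Name,
                            string.Join(", ", Array.ConvertAll(
                                method.GetParameters(),
                                p => p.ParameterType.Name + " " + p.Name)));
                    }
                }
            }
        }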

    Read the article

  • A WPF Image Button

    - by psheriff
    Instead of a normal button with words, sometimes you want a button that is just graphical. Yes, you can put an Image control in the Content of a normal Button control, but you still have the button outline, and trying to change the style can be rather difficult. Instead I like creating a user control that simulates a button, but just accepts an image. Figure 1 shows an example of three of these custom user controls representing minimize, maximize and close buttons for a borderless window. Notice the highlighted image button has a gray rectangle around it. You will learn how to highlight using the VisualStateManager in this blog post.

    Figure 1: Creating a custom user control for things like image buttons gives you complete control over the look and feel.

    I would suggest you read my previous blog post on creating a custom Button user control, as that is a good primer for what I am going to expand upon in this blog post. You can find that blog post at http://weblogs.asp.net/psheriff/archive/2012/08/10/create-your-own-wpf-button-user-controls.aspx.

    The User Control
    The XAML for this image button user control contains just a few controls, plus a Visual State Manager. The basic outline of the user control is shown below:

        <Border Grid.Row="0"
                Name="borMain"
                Style="{StaticResource pdsaButtonImageBorderStyle}"
                MouseEnter="borMain_MouseEnter"
                MouseLeave="borMain_MouseLeave"
                MouseLeftButtonDown="borMain_MouseLeftButtonDown">
          <VisualStateManager.VisualStateGroups>
          ... MORE XAML HERE ...
          </VisualStateManager.VisualStateGroups>
          <Image Style="{StaticResource pdsaButtonImageImageStyle}"
                 Visibility="{Binding Path=Visibility}"
                 Source="{Binding Path=ImageUri}"
                 ToolTip="{Binding Path=ToolTip}" />
        </Border>

    There is a Border control named borMain and a single Image control in this user control. That is all that is needed to display the buttons shown in Figure 1. The definition for this user control is in a DLL named PDSA.WPF. The Style definitions for both the Border and the Image controls are contained in a resource dictionary named PDSAButtonStyles.xaml. Using a resource dictionary allows you to create a few different resource dictionaries, each with a different theme for the buttons.

    The Visual State Manager
    To display the highlight around the button as your mouse moves over the control, you will need to add a Visual State Manager group. Two different states are needed: MouseEnter and MouseLeave. In the MouseEnter state you create a ColorAnimation to modify the BorderBrush color of the Border control. You specify the color to animate as "DarkGray". You set the duration to less than a second. The TargetName of this storyboard is the name of the Border control, "borMain", and since we are specifying a single color, you need to set the TargetProperty to "BorderBrush.Color". You do not need any storyboard for the MouseLeave state. Leaving this VisualState empty tells the Visual State Manager to put everything back the way it was before the MouseEnter event.

        <VisualStateManager.VisualStateGroups>
          <VisualStateGroup Name="MouseStates">
            <VisualState Name="MouseEnter">
              <Storyboard>
                <ColorAnimation
                    To="DarkGray"
                    Duration="0:0:00.1"
                    Storyboard.TargetName="borMain"
                    Storyboard.TargetProperty="BorderBrush.Color" />
              </Storyboard>
            </VisualState>
            <VisualState Name="MouseLeave" />
          </VisualStateGroup>
        </VisualStateManager.VisualStateGroups>

    Writing the Mouse Events
    To trigger the Visual State Manager to run its storyboard in response to the specified event, you need to respond to the MouseEnter event on the Border control. In the code behind for this event, call the GoToElementState() method of the VisualStateManager class. To this method you pass in the target element ("borMain") and the state ("MouseEnter"). The VisualStateManager will then run the storyboard contained within the defined state in the XAML.

        private void borMain_MouseEnter(object sender, MouseEventArgs e)
        {
          VisualStateManager.GoToElementState(borMain, "MouseEnter", true);
        }

    You also need to respond to the MouseLeave event. In this event you call the VisualStateManager as well, but specify "MouseLeave" as the state to go to.

        private void borMain_MouseLeave(object sender, MouseEventArgs e)
        {
          VisualStateManager.GoToElementState(borMain, "MouseLeave", true);
        }

    The Resource Dictionary
    Below is the definition of the PDSAButtonStyles.xaml resource dictionary file contained in the PDSA.WPF DLL. This dictionary can be used as the default look and feel for any image button control you add to a window.

        <ResourceDictionary ... >
          <!-- ************************* -->
          <!-- ** Image Button Styles ** -->
          <!-- ************************* -->

          <!-- Image/Text Button Border -->
          <Style TargetType="Border"
                 x:Key="pdsaButtonImageBorderStyle">
            <Setter Property="Margin" Value="4" />
            <Setter Property="Padding" Value="2" />
            <Setter Property="BorderBrush" Value="Transparent" />
            <Setter Property="BorderThickness" Value="1" />
            <Setter Property="VerticalAlignment" Value="Top" />
            <Setter Property="HorizontalAlignment" Value="Left" />
            <Setter Property="Background" Value="Transparent" />
          </Style>

          <!-- Image Button -->
          <Style TargetType="Image"
                 x:Key="pdsaButtonImageImageStyle">
            <Setter Property="Width" Value="40" />
            <Setter Property="Margin" Value="6" />
            <Setter Property="VerticalAlignment" Value="Top" />
            <Setter Property="HorizontalAlignment" Value="Left" />
          </Style>
        </ResourceDictionary>

    Using the Button Control
    Once you make a reference to the PDSA.WPF DLL from your WPF application, you will see the "PDSAucButtonImage" control appear in your Toolbox. Drag and drop the button onto a Window or User Control in your application. I have not referenced the PDSAButtonStyles.xaml file within the control itself, so you do need to add a reference to this resource dictionary somewhere in your application, such as in App.xaml:

        <Application.Resources>
          <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
              <ResourceDictionary
                 Source="/PDSA.WPF;component/PDSAButtonStyles.xaml" />
            </ResourceDictionary.MergedDictionaries>
          </ResourceDictionary>
        </Application.Resources>

    This will give your buttons a default look and feel, unless you override that dictionary on a specific Window or User Control or on an individual button. After you have given a global style to your application and you drag your image button onto a window, the following will appear in your XAML window:

        <my:PDSAucButtonImage ... />

    There will be some other attributes set on the above XAML, but you simply need to set the x:Name, ToolTip and ImageUri properties. You will also want to respond to the Click event in order to associate an action with clicking on this button. In the sample code you download for this blog post you will find the declaration of the Minimize button to be the following:

        <my:PDSAucButtonImage
              x:Name="btnMinimize"
              Click="btnMinimize_Click"
              ToolTip="Minimize Application"
              ImageUri="/PDSA.WPF;component/Images/Minus.png" />

    The ImageUri property is a dependency property in the PDSAucButtonImage user control. The x:Name and the ToolTip we get for free. You have to create the Click event procedure yourself. This is also created in the PDSAucButtonImage user control as follows:

        private void borMain_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
        {
          RaiseClick(e);
        }

        public delegate void ClickEventHandler(object sender, RoutedEventArgs e);
        public event ClickEventHandler Click;

        protected void RaiseClick(RoutedEventArgs e)
        {
          if (null != Click)
            Click(this, e);
        }

    Since a Border control does not have a Click event, you create one by using the MouseLeftButtonDown event on the border to fire an event you define called "Click".

    Summary
    Creating your own image button control can be done in a variety of ways. In this blog post I showed you how to create a custom user control and simulate a button using a Border and an Image control. With just a little bit of code to respond to the MouseLeftButtonDown event on the border, you can raise your own Click event. Dependency properties, such as ImageUri, allow you to set attributes on your custom user control. Feel free to expand on this button by adding additional dependency properties, changing the resource dictionary, and even the animation, to make this button look and act like you want.

    NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "A WPF Image Button" from the drop down list.
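    The post notes that ImageUri is a dependency property but does not show its declaration. Here is a minimal sketch of how such a property is typically registered in WPF; the property type (string) and the null default are my assumptions, not the actual PDSA source.

        // Minimal sketch (not the actual PDSA source): registering the ImageUri
        // dependency property on the user control. The string type and null
        // default are assumptions.
        using System.Windows;
        using System.Windows.Controls;

        public partial class PDSAucButtonImage : UserControl
        {
            public static readonly DependencyProperty ImageUriProperty =
                DependencyProperty.Register(
                    "ImageUri",                   // name used by {Binding Path=ImageUri}
                    typeof(string),               // assumed property type
                    typeof(PDSAucButtonImage),
                    new PropertyMetadata(null));  // assumed default value

            // CLR wrapper so the property is usable from code as well as XAML.
            public string ImageUri
            {
                get { return (string)GetValue(ImageUriProperty); }
                set { SetValue(ImageUriProperty, value); }
            }
        }

    Registering ImageUri as a dependency property is what lets the XAML in the outline above bind the Image control's Source to it.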

    Read the article

  • A Closable jQuery Plug-in

    - by Rick Strahl
    In my client side development I deal a lot with content that pops over the main page. Be it data entry 'windows' or dialogs or simple pop up notes. In most cases this behavior goes with draggable windows, but sometimes it's also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out. Here's a small jQuery plug-in that provides .closable() behavior to most elements by using either an image that is provided or – more appropriately – by using a CSS class to define the picture box layout.

        /*
         * Closable
         *
         * Makes selected DOM elements closable by making them
         * invisible when the close icon is clicked
         *
         * Version 1.01
         * @requires jQuery v1.3 or later
         *
         * Copyright (c) 2007-2010 Rick Strahl
         * http://www.west-wind.com/
         *
         * Licensed under the MIT license:
         * http://www.opensource.org/licenses/mit-license.php
         *
         * Support CSS:
         *
         *   .closebox
         *   {
         *       position: absolute;
         *       right: 4px;
         *       top: 4px;
         *       background-image: url(images/close.gif);
         *       background-repeat: no-repeat;
         *       width: 14px;
         *       height: 14px;
         *       cursor: pointer;
         *       opacity: 0.60;
         *       filter: alpha(opacity="80");
         *   }
         *   .closebox:hover
         *   {
         *       opacity: 0.95;
         *       filter: alpha(opacity="100");
         *   }
         *
         * Options:
         *
         * handle        Element to place closebox into (like say a header). Use if main
         *               element and closebox container are two different elements.
         * closeHandler  Function called when the close box is clicked. Return true to
         *               close the box, return false to keep it visible.
         * cssClass      The CSS class to apply to the close box DIV or IMG tag.
         * imageUrl      Allows you to specify an explicit IMG url that displays the
         *               close icon. If used, bypasses CSS image styling.
         * fadeOut       Optional fadeOut speed. By default no fade out occurs.
         */
        (function ($) {
            $.fn.closable = function (options) {
                var opt = {
                    handle: null,
                    closeHandler: null,
                    cssClass: "closebox",
                    imageUrl: null,
                    fadeOut: null
                };
                $.extend(opt, options);

                return this.each(function (i) {
                    var el = $(this);
                    var pos = el.css("position");
                    if (!pos || pos == "static")
                        el.css("position", "relative");

                    var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el;
                    var div = opt.imageUrl ?
                              $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") :
                              $("<div>");
                    div.addClass(opt.cssClass)
                       .click(function (e) {
                           if (opt.closeHandler)
                               if (!opt.closeHandler.call(this, e))
                                   return;
                           if (opt.fadeOut)
                               $(el).fadeOut(opt.fadeOut);
                           else
                               $(el).hide();
                       });
                    if (opt.imageUrl)
                        div.css("background-image", "none");
                    h.append(div);
                });
            }
        })(jQuery);

    The plug-in can be applied against any selector that is a container (typically a div tag). The close image or close box is typically provided by way of a CSS class – .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this:

        .closebox
        {
            position: absolute;
            right: 4px;
            top: 4px;
            background-image: url(images/close.gif);
            background-repeat: no-repeat;
            width: 14px;
            height: 14px;
            cursor: pointer;
            opacity: 0.60;
            filter: alpha(opacity="80");
        }
        .closebox:hover
        {
            opacity: 0.95;
            filter: alpha(opacity="100");
        }

    Alternately you can also supply an image URL which overrides the background image in the style sheet.
    I use this plug-in mostly on pop up windows that can be closed, but it's also quite handy for remove/delete behavior in list displays like this one. You can find the sample here if you want to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx For closable windows it's nice to have something reusable, because in my client framework there are lots of different kinds of windows that can be created: Draggables, Modal Dialogs, HoverPanels etc., and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated, so different components can pick and choose the behavior they want. The window here is a draggable that's closable and has shadow behavior, and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag:

        $().ready(function() {
            $('#ctl00_MainContent_panEditBook')
                .closable({ handle: $('#divEditBook_Header') })
                .draggable({ dragDelay: 100, handle: '#divEditBook_Header' })
                .shadow({ opacity: 0.25, offset: 6 });
        })

    The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window simply closes and goes away, so no event handler is applied. Actually I cheated – the actual page's .closable is a bit more ugly in the sample, as it uses an image from a resources file:

        .closable({
            imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint',
            handle: $('#divEditBook_Header')
        })

    so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel. More interesting maybe is to apply the .closable behavior to list scenarios. For example, each of the individual items in the list display are also .closable using this plug-in. Rather than having to define each item with HTML for an image, event handler and link, the closable behavior is attached to the list when the client template is rendered. Here I'm using client templating, and the code that does this looks like this:

        function loadBooks() {
            showProgress();

            // Clear the content
            $("#divBookListWrapper").empty();

            var filter = $("#" + scriptVars.lstFiltersId).val();

            Proxy.GetBooks(filter, function(books) {
                $(books).each(function(i) {
                    updateBook(this);
                    showProgress(true);
                });
            }, onPageError);
        }

        function updateBook(book, highlight) {
            // try to retrieve the single item in the list by tag attribute id
            var item = $(".bookitem[tag=" + book.Pk + "]");

            // grab and evaluate the template
            var html = parseTemplate(template, book);

            var newItem = $(html)
                .attr("tag", book.Pk.toString())
                .click(function() {
                    var pk = $(this).attr("tag");
                    editBook(this, parseInt(pk));
                })
                .closable({
                    closeHandler: function(e) {
                        removeBook(this, e);
                    },
                    imageUrl: "../../images/remove.gif"
                });

            if (item.length > 0)
                item.after(newItem).remove();
            else
                newItem.appendTo($("#divBookListWrapper"));

            if (highlight) {
                newItem
                    .addClass("pulse")
                    .effect("bounce", { distance: 15, times: 3 }, 400);
                setTimeout(function() { newItem.removeClass("pulse"); }, 1200);
            }
        }

    Here the closable behavior is applied to each of the items, along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup.
    Ideally though (and these posts make me realize this often a little late) I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image. This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. The handler code can optionally return false to indicate that the window should not be closed; returning true will close the window. You can find more information about the .closable class behavior and options here: .closable Documentation

    Plug-ins make Server Control JavaScript much easier
    I find this plug-in immensely useful, especially as part of server control code, because it tremendously simplifies the code that has to be generated server side. This is true of plug-ins in general, which make it so much easier to create simple server code that only generates plug-in options, rather than full blocks of JavaScript code. For example, here's the relevant code from the DragPanel server control which generates the .closable() behavior:

        if (this.Closable && !string.IsNullOrEmpty(DragHandleID))
        {
            string imageUrl = this.CloseBoxImage;
            if (imageUrl == "WebResource")
                imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(),
                                                         ControlResources.CLOSE_ICON_RESOURCE);

            StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

            if (!string.IsNullOrEmpty(this.DragHandleID))
                closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

            if (!string.IsNullOrEmpty(this.ClientDialogHandler))
                closableOptions.Append(",handler: " + this.ClientDialogHandler);

            if (this.FadeOnClose)
                closableOptions.Append(",fadeOut: 'slow'");

            startupScript.Append(@" .closable({ " + closableOptions + "})");
        }

    The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit, this is a walk in the park. In those days there was a bunch of JS generation, which was ugly to say the least. I know a lot of folks frown on using server controls, especially when the UI is client centric as this example is. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached, and with the help of IntelliSense. Often the script markup is easier this way, especially if you are dealing with complex, multiple plug-in associations that are often expressed more easily with property values on a control. Regardless of whether server controls are your thing or not, this plug-in can be useful in many scenarios. Even in simple client-only scenarios, using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this plug-in even a fraction as useful as I have.

    Related Links
    Download jquery.closable
    West Wind Web Toolkit
    jQuery Plug-ins

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET, JavaScript

    Read the article

  • CodePlex Daily Summary for Saturday, June 02, 2012

    CodePlex Daily Summary for Saturday, June 02, 2012

    Popular Releases

    ZXMAK2: Version 2.6.2.3: - add support for ZIP files created on UNIX systems; - improve WAV support (fixed PCM24, FLOAT32; added PCM32, FLOAT64); - fix drag-n-drop on modal dialogs; - tape AutoPlay feature (thanks to Woody for the algorithm)

    Librame Utility: Librame Utility 3.5.1: 2012-06-01 (release notes in Chinese, largely garbled; recoverable highlights: caching via System.Web.Caching and Librame.Settings; hashing with MD5, SHA1, SHA256, SHA384 and SHA512; encryption with BASE64, DES, triple DES and AES using a GUID-derived key; XML and JSON serialization; media support via MediaInfo.dll (32-bit) and ffmpeg...)

    TestProject_Git: asa: asdf.

    .Net Code Samples: Full WCF Duplex Service Example: Full WCF Duplex Service Example

    VivoSocial: VivoSocial 2012.06.01: Version 2012.06.01 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 2012.06.01 release and see if they persist. You can download the new releases from social.codeplex.com or our downloads page. If you have any questions about this release, please post them in our Support forums. If you are experiencing a bug or would like to request a new feature, please submit it to our issue tracker. This release has been tested on ...

    Kendo UI ASP.NET Sample Applications: Sample Applications (2012-06-01): Sample application(s) demonstrating the use of Kendo UI in ASP.NET applications.

    Better Explorer: Better Explorer Beta 1: Finally, the first Beta is here! There were a lot of changes, including: translations into 10 different languages (the translations are not complete and will be updated soon), Conditional Select, new tools for managing archives, a Folder Tools tab, a new search bar and Search Tab, new image editing tools, an update function, many bug fixes, stability fixes, and memory leak fixes, and other new features as well! Please check it out and if there are any problems, let us know. :) Also, do not forge...

    myManga: myManga v1.0.0.3: Will include MangaPanda as a default option. ChangeLog Updating from Previous Version: Extract contents of Release - myManga v1.0.0.3.zip to the previous version's folder. Replaces: myManga.exe BakaBox.dll CoreMangaClasses.dll Manga.dll Plugins/MangaReader.manga.dll Plugins/MangaFox.manga.dll Plugins/MangaHere.manga.dll Plugins/MangaPanda.manga.dll

    Player Framework by Microsoft: Player Framework for Windows 8 Metro (Preview 3): Player Framework for HTML/JavaScript and XAML/C# Metro Style Applications.
    Additional Downloads: IIS Smooth Streaming Client SDK for Windows 8, Microsoft PlayReady Client SDK for Metro Style Apps. Release notes: support for Windows 8 Release Preview (released 5/31/12), advertising support (VAST, MAST, VPAID, & clips), miscellaneous improvements and bug fixes

    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.54: Fix for issue #18161: pretty-printing a CSS @media rule throws an exception due to a mismatched Indent/Unindent pair.

    Silverlight Toolkit: Silverlight 5 Toolkit Source - May 2012: Source code for the December 2011 Silverlight 5 Toolkit release.

    Json.NET: Json.NET 4.5 Release 6: New feature - Added IgnoreDataMemberAttribute support New feature - Added GetResolvedPropertyName to DefaultContractResolver New feature - Added CheckAdditionalContent to JsonSerializer Change - Metro build now always uses late bound reflection Change - JsonTextReader no longer returns no content after consecutive underlying content read failures Fix - Fixed bad JSON in an array with error handling creating an infinite loop Fix - Fixed deserializing objects with a non-default cons...

    DotNetNuke® Community Edition CMS: 06.02.00: Major Highlights: Fixed issue in the Site Settings when single quotes were being treated as escape characters Fixed issue loading the Mobile Premium Data after upgrading from CE to PE Fixed errors logged when updating folder provider settings Fixed the order of the mobile device capabilities in the Site Redirection Management UI The User Profile page was completely rebuilt. We needed User Profiles to have multiple child pages. This would allow for the most flexibility by still f...

    WiX Toolset: WiX v3.6 RC: WiX v3.6 RC (3.6.2928.0) provides feature-complete Burn with VS11 support. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/5/28/WiX-v3.6-Release-Candidate-available

    Javascript .NET: Javascript .NET v0.7: SetParameter() reverts to its old behaviour of allowing JavaScript code to add new properties to wrapped C# objects. The behavior added briefly in 0.6 (throws an exception) can be had via the new SetParameterOptions.RejectUnknownProperties. TerminateExecution now uses its isolate to terminate the correct context automatically. Added support for converting all C# integral types, decimal and enums to JavaScript numbers. (Previously only the common types were handled properly.) Bug fixe...

    Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (May 2012): Fixes: unserialize() of negative float numbers fix pcre possessive quantifiers and character classes containing ()[] array deserialization when the array contains a reference to ISerializable parsing lambda functions fix round() reimplemented as it is in PHP to avoid .NET rounding errors filesize bypass for FileInfo.Length bug in Mono New features: time zones reimplemented, using the Windows/Linux database

    SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.1): New features: admin enable/disable match, hide/show the Euro 2012 SharePoint lists (3 lists). Installing SharePoint Euro 2012 Predictor: SharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365). Download the solution havivi.euro2012.wsp from the download page: Downloads. Upload this solution to your Site Collection via the solutions area. Click on Activate to make the web parts in the solution available for use in the Site C...

    Weibo SDK for .Net 4.0+ (OAuth2.0 + V2 API): V2 SDK release: the release notes are in Chinese and largely garbled; what survives indicates support for the new V2 API and OAuth2.0 authorization, and the notes also
    point to the latest source via the "SOURCE CODE" Changeset list at http://weibosdk.codeplex.com/SourceControl/list/changesets and note that the demo requires your own AppKey.

    LINQ_Koans: LinqKoans v.02: Cleaned up a bit

    New Projects

    AppleScript Slim: A super slimmed down library allowing you to execute AppleScript from your Mono project (from your non-MonoMac project).

    Ateneo Libri: Web project for buying and selling university textbooks.

    Azurehostedservicedashboard: Azure Hosted Service Dashboard

    campus: Test project for MVC3 and CodePlex.

    Dot Net Code Comment Analyzer: This Visual Studio 2010 plugin can count the comments in the C# code in the currently open solution in the VS IDE. It shows a summary of the comments across all C# files in the project. This is useful when we want to enforce code comments. Code comments help in maintaining the code base and in understanding code faster than reading through the lines of code, and they make the code less dependent on any one developer.

    Individual.firstteamproject: Learning TFS.

    FITClub: FITClub is a platform fighting arcade game for 2 to 4 players. Enemies are controlled by AI. The goal is to force enemies down into the water or lava and keep safe from their attacks. You collect items to temporarily change your abilities. Multiplayer between more phones is coming soon.

    Jumpstart Branding for SharePoint 2010: Basic Master Pages for SharePoint 2010, including a general, minified, heavily commented version of v4.master, a centered, fixed-width, minified, commented Master Page, and two Visual Studio 2010 solutions, one for farms and a second for sandboxes, to help you create a feature for deploying your Master Pages and other branding assets. Jumpstart Branding for SP 2010 has been designed to help you quickly and easily jumpstart your next SharePoint 2010 branding project.

    KelControl: An exe program for activity monitoring. 1 - Monitor web site responses: HttpWebRequest against a URL, analyze the response (HTTP headers); if OK, call a web service (similar to the status call but independent, to be done after level 4); on error, call a web service to increment the error count (for example, trigger an alert after 3 errors). At first we do not deal with the web service; we log the actions to a txt file, for example. Complete process: - 15 minute timer (param...

    KHTest: Visual Studio

    Librame Utility: Librame Utility 3.5.1

    Linux: this is the Linux project.

    Magic Morse

    Maps: this is the Maps project.

    Mark Tarefas: Task manager for Mark Assessoria.

    MaxxFolderSize: MaxxUtils.FolderSize

    MCTSTestCode: Project to hold code tried while studying for MCTS certification.

    MOBZHash: MOBZHash shows MD5 or SHA hash values for files, and reports files with identical hashes (which are most likely duplicates).

    NandleNF: Nandle NF

    Nginx: this is the Nginx project.

    Oficina_SIGA: Siga is a workshop management system.

    Plug-in: this is the Plug-in project.

    SharePoint Comments Anywhere: This is a very simple project which provides a commenting web part and a list template with an instance to store users' comments. SharePoint out of the box only provides commenting capability on Blog sites, where users add their posts and anyone can view the page and add comments.
Comments Anywhere can be configured on any list, pages library or any page of the SharePoint site with a web part zone enabling users to add their comments virtually anywhere you as an admin or a power user of your s...SharePoint Site Owners Webpart: SharePoint web part to display SharePoint site owners.Tarea AACQ: Proyecto para tarea FACCI 4A 01/06/12testprjct: test summaryTirailleur: Tirailleur code can be used to model an expanding wildfire (forest fire) perimeter. The code is implemented in VB.NET, and should be easy to translate to other languages. There are just a couple of classes handling the important work. These can be extracted and imported to another program. The code here includes some dummy client objects to represent the containing program. webdama: italian checkers game c#Webowo: wbowo projekt test obslugi tortoise i coldplex

    Read the article

  • CodePlex Daily Summary for Tuesday, December 21, 2010

    CodePlex Daily Summary for Tuesday, December 21, 2010

    Popular Releases

    SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 7:
    1. Added script save/load in the user query window
    2. Fixed a problem where the connection dialog still asked for a user name when Windows authentication was chosen
    3. Auto-open the user table when double-clicking a table node
    4. Improved the alert message and added a log-only method

    Open NFe: Open NFe 2.0 (Beta): The last version before the final release, due out in the next few days.

    EnhSim: EnhSim 2.2.6 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed (download: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84). To use the GUI, you must have the .NET 4.0 Framework installed (download: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992). - Fixing up some r...

    LINQ to Twitter: LINQ to Twitter Beta v2.0.18: Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.

    ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4.3: Helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup, and Pager. New in this release: improvements for Confirm, Popup, and Popup Form; a RenderView controller extension; substantially improved user experience for CRUD in the live demo; added search. All the features are shown in the live demo.

    GanttPlanner: GanttPlanner V1.0: GanttPlanner V1.0 includes GanttPlanner.dll and a demo application.

    N2 CMS: 2.1 release candidate 3: Web Platform Installer support is available. N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major changes: support for auto-implemented properties ({get;set;}, based on a contribution by And Poulsen); a bunch of bug fixes; file manager improvements (multiple file upload, resize images to fit); a new image gallery; infinite scroll paging on news; content templates. First time with N2? Try the demo site. Download one of the templ...

    TweetSharp: TweetSharp v2.0.0.0 - Preview 6: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: this code is currently preview quality. Preview 6 changes: maintenance release with user-reported fixes. Preview 5 changes: maintenance release with user-reported fixes. Preview 4 changes: reintroduced fluent interface support via a satellite assembly; added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development; numer...

    Team Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1 is the first version of the TFS Administration Tool built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010 or Team Foundation Server 2010.

    Hacker Passwords: HackerPasswords.zip: Source code, executable, and documentation.

    SubtitleTools: SubtitleTools 1.3:
    - Added .srt file association and the Windows 7 ShowRecentCategory feature.
    - Applied UnifiedYeKe to fix Persian search problems.
    - Reduced the file size of Persian subtitles for uploading to OSDB.

    Facebook C# SDK: 4.1.0:
    - Lots of bug fixes
    - Removed Dynamic Runtime Language dependencies from non-dynamic platforms
    - Samples included in the release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic sample
    - Changed internal serialization to use Json.net
    - BREAKING CHANGE: Canvas Session is no longer supported; use Signed Request instead. Canvas Session has been deprecated by Facebook.
    - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...

    NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1 and fixes a couple of issues relating to updating packages to newer versions. NuGet is a free, open-source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package dialog. This new build targets the newer feed (h...

    WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on CodePlex. Here is what is new in this release. WCF support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery: better support for using JsonValue as dynamic; support for JsonValue change-notification events for databinding and other purposes; support for going between JsonValue and CLR types. WCF HTTP - create HTTP/REST based web services. This is a minor release which contains fixe...

    LiveChat Starter Kit: LCSK v1.0: This is a working version of the LCSK for Visual Studio 2010 and ASP.NET MVC 3 (using the Razor view engine). It is still provider-based (with one provider, Sql) and still uses a web service and a Windows Forms operator console. The solution is cleaner, with an installer to create tables etc. You can also install it via NuGet (Install-Package lcsk). Let me know your feedback.

    Orchard Project: Orchard 0.9: Orchard release notes. Build: 0.9.253. Published: 12/16/2010. How to install Orchard: to install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready to run. Simply extract the contents of the Orch...

    DotSpatial: DotSpatial 12-15-2010: This release contains a few minor bug fixes, and hopefully the GDAL libraries for the 3.5 x86 build actually built to the correct directory this time.

    DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap for a full list of changes in this release.

    MSBuild Extension Pack: December 2010: Release blog post. The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. A high-level summary of what the tasks currently cover includes the following: System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...

    Silverlight Contrib: Silverlight Contrib 2010.1.0: 2010.1.0 new features: compatibility release for Silverlight 4 and Visual Studio 2010.

    New Projects

    - aqq2
    - Cameleon CMS: Caméléon, the easiest-to-use open-source CMS on the market.
    - Dail Emulator: Dail Emulator is software that emulates the server software of Aeonsoft's game FlyFF, specifically made for the now-defunct version 6 of the game.
    - Dmitry Lyalin's WP7 Sample Application: A project where I'm putting all my public samples in a single WP7 client test application.
    - Elixir: A DotNetNuke module test framework. Use this DLL to write concise, to-the-point tests for your modules that can target multiple DNN platforms and multiple browsers.
    - FlightTableEditor: Edits the Flight and Healing Tables of Pokémon Fire Red and Pokémon Leaf Green.
    - Genetico UNLaM: UNLaM genetic algorithm.
    - HelpCentral HelpDesk Ticket Application: An open-ended framework and client for a helpdesk ticket application and bug tracker. Built to be as extensible and available as possible; the central system is a WCF web service which can be accessed by any client application.
    - Import Media for Umbraco: This package allows you to import multiple or large files that have been uploaded to the server via FTP or other means. A new media item is created in the media tree and the media is moved from the upload location to the new directory for the media item.
    - LargeCMS full sample (How to call CryptMsg API in streaming mode): LargeCms implements SignedLargeCMS and EnvelopedLargeCMS classes, my own versions of the .NET SignedCms and EnvelopedCMS classes, to work with large data. It shows a way to call the CryptMsg API in streaming mode to overcome the limitations of the current .NET classes.
    - Linq.Autocompiler: Adds runtime, on-demand compilation of LINQ queries (in LINQ to SQL), which creates CompiledQuery instances and stores them in a cache.
    - Mathetris: It's a game.
    - MonoWiimote: Managed C# library for using the Wii Remote device under Mono/Linux. It requires libcwiimote-0.4 (sudo apt-get install libcwiimote-0.4).
    - Movie Captor: An entry in the Desafio .NET challenge.
    - NSpotifyLib: A .NET library containing wrapper classes for the Spotify Metadata API (http://developer.spotify.com/en/metadata-api/overview/). This library supports searching for artists, albums, and tracks using the Spotify Metadata API. In the future I'll add support for lookups as well.
    - Silverhawk: Developed in the C# programming language.
    - SimpleSpy: Detects the object handle under the cursor as it moves and retrieves its information. Basically a bare-bones program for detecting the object handle at the cursor; add your own logic to extract further information.
    - SunburnJitter: SunburnJitter contains helper classes and methods to make the Jitter physics engine easier to use in your Sunburn projects. Simple component-based shapes provide easy editing in the editor. Developed in C#.
    - Sunrod: Sunrod is a .NET wrapper for the Obsidian Portal API.
    - TouchStack - A MonoTouch ServiceStack.NET client: TouchStack makes it easier for MonoTouch developers to consume web services created and exposed by the brilliant ServiceStack.NET web service framework. TouchStack is written in C# and uses the MonoTouch library.
    - VNTViewer: Viewer for notes/memos (VNT files).
    - VpNet: Official .NET wrapper for Virtual Paradise.
    - xlGUI: Streamlet GUI Framework.

    Read the article

  • HTG Explains: Just How Bad Are Android Tablet Apps?

    - by Chris Hoffman
    Apple loves to criticize the state of Android tablet apps when pushing its own iPad tablets. But just how bad is the Android tablet app situation? Should you avoid Android tablets like the Nexus 7 because of the apps? It's clear that Apple's iPad is way ahead when it comes to the sheer quantity of tablet-optimized apps. It's also clear that some popular apps — particularly touch-optimized games — only show up on iPad. But that's not the whole story.

    The Basics

    First, let's get an idea of the basic stuff that will work well for you on Android:

    - An excellent web browser. Chrome has struggled with performance on Android, but hits its stride on the Nexus 7 (2013).
    - Great, tablet-optimized apps for all of Google's services, from YouTube to Gmail and Google Maps.
    - Everything you need for reading, from Amazon's Kindle app for ebooks, to Flipboard and Feedly for new articles from websites, to other services like the popular Pocket read-it-later service.
    - Apps for most popular media services, from Netflix, Hulu, and YouTube for videos to Pandora, Spotify, and Rdio for music. A few things aren't available — you won't find Apple's iTunes, and Amazon still doesn't offer an Amazon Instant Video app for Android, while they do for iPad and even their own Android-based Kindle Fire devices.

    Android has very good app coverage when it comes to consuming content, whether you're reading websites and ebooks or watching videos and listening to music. You can play almost any Android smartphone game, too. For content consumption, Android is better than something like Windows 8, which lacks apps for Google services like YouTube and still doesn't have apps for popular media services like Spotify and Rdio.

    How Android Scales Smartphone Apps

    Let's look at how Android scales smartphone apps. Now, bear with us here — we know "scaling" is a dirty word considering how poorly Apple's iPad scales iPhone apps, but it's not as bad on Android. When an iPad runs an iPhone app, it simply doubles the pixels and effectively zooms in. For example, if you had a Twitter app with five tweets visible at once on an iPhone and ran the same app on an iPad, the iPad would simply "zoom in" and enlarge the same screen — you'd still see five tweets, but each tweet would appear larger. This is why developers create optimized iPad apps with their own interfaces; it's especially important on Apple's iOS.

    Android devices come in all shapes and sizes, so Android apps have a smarter, more intelligent way to adapt to different screen sizes. Let's say you have a Twitter app designed for smartphones and it only shows five tweets at once when run on a phone. If you ran the same app on a tablet, you wouldn't see the same five tweets — you'd see ten or more. Rather than simply zooming in, the app can show more content at the same time on a tablet, even if it was never optimized for tablet-size screens. While apps designed for smartphones aren't generally ideal, they adapt much better on Android than they do on an iPad.

    This is particularly true when it comes to games. You can play almost any Android smartphone game on an Android tablet, and games generally adapt very well to the larger screen. This gives you access to a huge catalog of games. It's a great option to have, especially when you look at Microsoft's Windows 8 and consider how much better the touch-based app and game selection would be if Microsoft allowed its users to run Windows Phone games on Windows 8.
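    To make the difference concrete, here is a minimal C# sketch of the two strategies. The 96-unit row height and the screen sizes are made-up numbers, and neither platform exposes an API like this; the point is only the arithmetic: pixel doubling keeps the phone's row count and draws everything bigger, while density-independent reflow recomputes how many rows fit on the actual screen.

        using System;

        class ScalingSketch
        {
            // Assumed: one list row is 96 density-independent units tall
            // on every device in this toy model.
            const int RowHeight = 96;

            // iPad-style handling of a phone app: render the phone layout,
            // then zoom the whole canvas. The visible row count is fixed by
            // the phone screen, no matter how big the tablet is.
            static int RowsZoomed(int phoneHeightUnits) => phoneHeightUnits / RowHeight;

            // Android-style handling: lay the list out against the real
            // screen, so a taller screen simply fits more rows.
            static int RowsReflowed(int tabletHeightUnits) => tabletHeightUnits / RowHeight;

            static void Main()
            {
                int phoneHeight = 480;   // hypothetical phone height: 5 rows fit
                int tabletHeight = 960;  // hypothetical tablet: twice as tall

                Console.WriteLine($"Pixel-doubled (iPad-style): {RowsZoomed(phoneHeight)} tweets");   // 5
                Console.WriteLine($"Reflowed (Android-style): {RowsReflowed(tabletHeight)} tweets");  // 10
            }
        }

    On a 7-inch screen the reflowed row count lands close to the phone's, which is why smartphone apps feel acceptable there; the gap only becomes glaring on 10-inch tablets, as the next section shows.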
    7-inch vs 10-inch Tablets

    The Twitter example above wasn't just an example. The official Twitter app for Android still doesn't have a tablet-optimized interface, so this is the sort of situation you'd have to deal with on an Android tablet. On the popular Nexus 7, Twitter is an example of a smartphone app that actually works fairly well — in portrait mode, you can see many more tweets on screen at the same time and none of the space really feels all that wasted. This is important to consider — smartphone apps like Twitter often scale quite well to 7-inch screens because a 7-inch screen is much closer in form factor to a smartphone than a 10-inch screen is.

    When you begin to look at 10-inch Android tablets that are the same size as an iPad, the situation changes. While the Twitter app works well enough on a Nexus 7, it looks horrible on a Nexus 10 or another 10-inch tablet. Running many smartphone-designed apps — possibly with the exception of games — on a 10-inch tablet is a frustrating, poor experience. There's much more white, empty space in the interface. It feels like you're using a smartphone app on a large screen, and what's the point of that? A tablet-optimized Twitter app for Android is finally on its way, but this same situation will repeat with many other types of apps. For example, Facebook doesn't offer a tablet-optimized interface, but it's okay on a Nexus 7 anyway. On a 10-inch screen, it probably wouldn't be anywhere near as nice an experience. It goes without saying that Facebook and Twitter both offer iPad apps with interfaces designed for a tablet-size screen. Here's another problematic app — the official Yelp app for Android. Even using it on a 7-inch Nexus 7 is a poor experience, and it would be much worse on a larger 10-inch tablet. Now, it's true that many — maybe even most — of the popular apps you might want to run today are optimized for Android tablets. But when you look at the situation with popular apps like Twitter, Facebook, and Yelp, it's clear Android is still behind in a meaningful way.

    Price

    Let's be honest: the thing that really makes Android tablets compelling — and the only reason Android tablets started seeing real traction after years of almost complete dominance by Apple's iPads — is that Android tablets are available for so much less than iPads. Google's latest Nexus 7 (2013) is available for only $230. Apple's non-retina iPad Mini is available at $300, which is already $70 more. In spite of that, the iPad Mini has much older, slower internals and a much lower-resolution screen. It's not as nice to look at when it comes to reading or watching movies, and the iPad Mini reportedly struggles to run Apple's latest iOS 7. In contrast, the new Nexus 7 has a very high-resolution screen, speedy internals, and runs Android very well with little to no lag in real use. We haven't had any problems with it, unlike all the problems we unfortunately encountered with the first Nexus 7. For a really comparable experience to the current Nexus 7, you'd want one of Apple's new retina iPad Minis. That would cost you $400, another $170 over the Nexus 7. In fact, it's possible to regularly find sales on the Nexus 7, so if you waited you could get it for just $200 — half the price of the iPad Mini with a comparable screen and internals. (In fairness, the iPad certainly has better hardware — but you won't feel it if you're just using your tablet to browse the web, watch videos, and do other typical tablet things.)
    This makes a tablet like the popular Nexus 7 a very good option for budget-conscious users who just want a high-quality device they can use to browse the web, watch videos, play games, and generally do light computing. There's a reason we're focusing on the Nexus 7 here. The combination of price and size puts it in a very good place: it's awfully cheap for the high-quality experience you get, and the 7-inch screen means that even the non-tablet-optimized apps you may stumble across will often work fairly well. On the other hand, more expensive 10-inch Android tablets are still a tougher sell. For $400-$500, you're getting awfully close to Apple's full-size iPad price range, and Android tablets don't have as good an app ecosystem as an iPad. It's hard to recommend an expensive 10-inch Android tablet over a full-size iPad to average users.

    In summary, the Android tablet app situation is nowhere near as bad as it was a few years ago. The success of the Nexus 7 proves that Android tablets can be compelling experiences, and there are a wide variety of strong apps. That said, more expensive 10-inch Android tablets that compete directly with the full-size iPad on price still don't make much sense for most people. Unless you have a specific reason for preferring an Android tablet, it's tough not to recommend an iPad if you're looking at spending $400+ on a 10-inch tablet.

    Image Credit: Christian Ghanime on Flickr

    Read the article
