Search Results

Search found 12878 results on 516 pages for 'self organizing maps'.


  • Vocabulary: Should I call this apply or map?

    - by Carlos Vergara
    I'm tasked with organizing the code and building a library of all the common code among our products. One pattern that seems to come up all the time, and that I wanted to abstract, is posted below in pseudocode; I don't know what to call it (different products have different domain-specific implementations and names for it):

        list function idk_what_to_name_it (list list_of_callbacks, value common_parameter):
            list list_of_results = new list
            for_each (callback in list_of_callbacks)
                list_of_results.push(callback(common_parameter))
            end for_each
            return list_of_results
        end function

    Would you call this specific construct a ListOfCallbacks.Map(value value_to_map) method, or would it be better as Value.apply(list list_of_callbacks)? I'm really curious about this kind of thing. Is there a standard guide for this stuff?
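    [Editor's note: for comparison, here is a minimal C# rendering of the construct; MapCallbacks is an invented name. In functional terms this maps the "call with common_parameter" operation over the list of callbacks, not over a list of values:]

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class CallbackExtensions
        {
            // Applies every callback in the list to the same value and
            // collects the results, in order.
            public static List<TResult> MapCallbacks<T, TResult>(
                this IEnumerable<Func<T, TResult>> callbacks, T value)
            {
                return callbacks.Select(callback => callback(value)).ToList();
            }
        }

    Usage would be callbacks.MapCallbacks(42); framed this way, the operation reads as a map over the callback list rather than an apply on the value.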

    Read the article

  • Unlock the Value of Big Data

    - by Mike.Hallett(at)Oracle-BI&EPM
    Partners should read this comprehensive new e-book to get advice from Oracle and industry leaders on how you can use big data to generate new business insights and make better decisions for your customers. “Big data represents an opportunity averaging 14% of current revenue.” —From the Oracle big data e-book, Meeting the Challenge of Big Data. You’ll gain instant access to:

      - Straightforward approaches for acquiring, organizing, and analyzing data
      - Architectures and tools needed to integrate new data with your existing investments
      - Survey data revealing how leading companies are using big data, so you can benchmark your progress
      - Expert resources such as white papers, analyst videos, 3-D demos, and more

    If you want to be ready for the data deluge, Meeting the Challenge of Big Data is a must-read. Register today for the e-book and read it on your computer or Apple iPad.

    Read the article

  • What would one call this architecture?

    - by Chris
    I have developed a distributed test automation system which consists of two different entities. One entity is responsible for triggering test runs and monitoring/displaying their progress. The other is responsible for carrying out tests on its host. Both of these entities retrieve data from a central DB. Now, my first thought is that this is clearly a server-client architecture. After all, you have exactly one organizing entity and many entities that communicate with it. However, while the supposed clients communicate with the server via RPC, they are not actually requesting services or information; rather, they are simply reporting back test progress. In fact, once a test run has been triggered they can complete their tasks without a connection to the server. The request for a service is actually made by the supposed server, which triggers the clients to carry out tests. So would this still be considered a server-client architecture, or is this something different?

    Read the article

  • Are areas a good organizational feature, or just extra work?

    - by SOfanatic
    Do Areas in ASP.NET MVC end up being a help, or just a drag in the end (because of the URL construction)? Would it be better to have subdirectories inside the main Controllers folder, or are there other options for organizing a project?

    EDIT: For example, this is your average link without Areas:

        @Html.ActionLink("Home", "Index", "Home")

    and this is your average link with Areas:

        @Html.ActionLink("Home", "Index", new { Area = "", Controller = "Home" })

    Could the following work? (A main Controllers folder with subdirectories.) I'm just trying to find out if implementing Areas in a project is worthwhile, because I also read that it can be problematic when using Dependency Injection. And is there an alternative to Areas?
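    [Editor's note: for reference, not part of the original question: an Area in ASP.NET MVC is wired up through an AreaRegistration class that the framework discovers at startup. A minimal sketch, assuming a hypothetical "Admin" area:]

        using System.Web.Mvc;

        public class AdminAreaRegistration : AreaRegistration
        {
            public override string AreaName
            {
                get { return "Admin"; }
            }

            // Picked up by AreaRegistration.RegisterAllAreas() in Application_Start;
            // maps URLs like /Admin/Home/Index into this area's controllers.
            public override void RegisterArea(AreaRegistrationContext context)
            {
                context.MapRoute(
                    "Admin_default",
                    "Admin/{controller}/{action}/{id}",
                    new { action = "Index", id = UrlParameter.Optional });
            }
        }

    RegisterAllAreas() scans the loaded assemblies for such classes, which is part of what makes Areas feel like extra machinery compared to plain controller subdirectories.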

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in recent months, and it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects, when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance, on the other hand, are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services - or anything self-hosted, for that matter - seem convoluted.

    In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows Service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application, neither of which follows the standard service or Web app template.

    Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits we set out to move from the self-hosted service into an ASP.NET Web app instead.

    The Missing Link for ASP.NET as a Service: Auto-Loading

    I've had moments in the past where I wanted to run a 'service-like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET; it worked great and it's still running, doing its job today.

    The tricky part for running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) that I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

    Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, and Application_Start is fired, making sure your app stays up and running at all times.
    All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.

    Getting started with Application Initialization

    As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. In IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager, so make sure you explicitly select it.

    IIS Configuration for Application Initialization

    Initialization needs to be applied at the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.

    Start with the Application Pool. Here you need to set both Start Automatically, which is always set, and the StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set to true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

    Now on the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true.

    At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

    If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow; rather than displaying a blank screen while the user is twiddling their thumbs, you can display a static HTML page instead:

        <system.webServer>
          <applicationInitialization remapManagedRequestsTo="Startup.htm"
                                     skipManagedModules="true">
            <add initializationPage="ping.ashx" />
          </applicationInitialization>
        </system.webServer>

    This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page - in this case ping.ashx, which just returns a simple OK string, i.e. a fast pipeline request (a sketch of such a handler follows below). This request is run immediately after an Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

    Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/) and even if there's no matching request at the end of that request, it'll still fire the request through the IIS pipeline.
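    [Editor's note: the post references a ping.ashx endpoint without showing it. A minimal generic handler along these lines would do; the class name and content type here are assumptions:]

        using System.Web;

        // ping.ashx - a minimal warmup endpoint; it does just enough work to
        // force the ASP.NET pipeline (and Application_Start) to run.
        public class Ping : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/plain";
                context.Response.Write("OK");
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }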
    Ideally, though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

    What about AppDomain Restarts?

    In addition to full worker process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:

      - Files are updated in the BIN folder
      - Web Deploy to your site
      - web.config is changed
      - Hard application crash

    These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background. In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

        protected void Application_End()
        {
            var client = new WebClient();
            var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
            client.DownloadString(url);
            Trace.WriteLine("Application Shut Down Ping: " + url);
        }

    This fires any ASP.NET URL to the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

    Manual Configuration in ApplicationHost.config

    The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them. When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately, with Mr. Murphy in his best form, the module registration did not occur for me and I had to add it manually:

        <globalModules>
          <add name="ApplicationInitializationModule"
               image="%windir%\System32\inetsrv\warmup.dll" />
        </globalModules>

    Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered.

    Next you need to configure the Application Pool and the Web site. The following are the two relevant entries in ApplicationHost.config:

        <system.applicationHost>
          <applicationPools>
            <add name="West Wind West Wind Web Connection"
                 autoStart="true"
                 startMode="AlwaysRunning"
                 managedRuntimeVersion="v4.0"
                 managedPipelineMode="Integrated">
              <processModel identityType="LocalSystem" setProfileEnvironment="true" />
            </add>
          </applicationPools>
          <sites>
            <site name="Default Web Site" id="1">
              <application path="/MPress.Workflow.WebQueueMessageManager"
                           applicationPool="West Wind West Wind Web Connection"
                           preloadEnabled="true">
                <virtualDirectory path="/" physicalPath="C:\Clients\…" />
              </application>
            </site>
          </sites>
        </system.applicationHost>

    On the Application Pool, make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site, make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

    ASP.NET as a Service?

    In the particular application I'm working on currently, we have a queue manager that runs as a standalone service that polls a database queue, picks out jobs, and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
    In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

        public class Global : System.Web.HttpApplication
        {
            private static ApplicationScheduler scheduler;
            private static ServiceLauncher launcher;

            protected void Application_Start(object sender, EventArgs e)
            {
                // Pings the service and ensures it stays alive
                scheduler = new ApplicationScheduler()
                {
                    CheckFrequency = 600000
                };
                scheduler.Start();

                launcher = new ServiceLauncher();
                launcher.Start();

                // register so shutdown is controlled
                HostingEnvironment.RegisterObject(launcher);
            }
        }

    By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume etc.). Otherwise the behavior and operation are very similar.

    In this application ASP.NET serves two purposes: it acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.

    Registering a Background Object with ASP.NET

    Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:

        public void Stop(bool immediate = false)
        {
            LogManager.Current.LogInfo("QueueManager Controller Stopped.");
            Controller.StopProcessing();
            Controller.Dispose();
            Thread.Sleep(1500); // give background threads some time
            HostingEnvironment.UnregisterObject(this);
        }

    Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing to allow clean shutdowns when the AppDomain shuts down. A minimal skeleton of such a registered object follows below.
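    [Editor's note: the ServiceLauncher name below mirrors the post, but the body is illustrative only, not the author's actual implementation; a Timer stands in for the real queue controller:]

        using System.Threading;
        using System.Web.Hosting;

        // Minimal skeleton of a background-service launcher that ASP.NET
        // can notify on AppDomain shutdown via IRegisteredObject.
        public class ServiceLauncher : IRegisteredObject
        {
            private Timer timer;

            public void Start()
            {
                // Kick off background work outside the IIS thread pool.
                timer = new Timer(_ => DoWork(), null, 0, 5000);
            }

            private void DoWork()
            {
                // poll the queue, process jobs, etc.
            }

            // Called by ASP.NET when the AppDomain is shutting down.
            public void Stop(bool immediate)
            {
                if (timer != null)
                    timer.Dispose();
                HostingEnvironment.UnregisterObject(this);
            }
        }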
    Testing it out

    I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows Service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting: with the new OWIN-based hosting, no code changes at all were required, merely a couple of configuration file settings and an assembly directive to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host, and startup feels faster because of the preload.

    Starting and Stopping the 'Service'

    Because the application is running as a Web server, it's easy to have a Web interface for starting and stopping the services running inside of it. For our queue manager, the SignalR service and front monitoring app have a play and stop button for toggling the queue. If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which would be equivalent to stopping and restarting a service.

    To start and stop from the command line you can use the IIS appCmd tool. To stop:

        > %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

    and to start:

        > %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

    Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.

    What's not to like?

    There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable, the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires, be offloaded to its own Web server.

    Easier to work with

    But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing, I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut it down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want, and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

    Summary

    Using ASP.NET to run background service operations is probably not a super common scenario, but it should probably be considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.

    Whether it's an existing application that needs some background processing for scheduling-related tasks, or whether you create a separate site altogether just to host your service, it's easy to do and you can leverage the same tool chain you're already using for other Web projects.
    If you have lots of service projects it's worth considering… give it some thought…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in ASP.NET, SignalR, IIS

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, including some fantastic enhancements to ASP.NET and the Entity Framework. You can download and start using them now. Below are details on a few of the great ASP.NET, Web development, and Entity Framework improvements you can take advantage of with this release. Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials.

    One ASP.NET

    With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File->New Project with VS 2013 you'll now see a single ASP.NET Project option. Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it. For example, you could start with a Web Forms template and add Web API support to it, or create an MVC project and also enable Web Forms pages within it. This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span.

    Richer Authentication Support

    The new "One ASP.NET" project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications - making it much easier to build secure applications that enable SSO from a variety of identity providers. For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application:

      - No Authentication
      - Individual User Accounts (single sign-on support with Facebook, Twitter, Google, and Microsoft ID - or Forms Auth with ASP.NET Membership)
      - Organizational Accounts (single sign-on support with Windows Azure Active Directory)
      - Windows Authentication (Active Directory in an intranet application)

    The Windows Azure Active Directory support is particularly cool. Last month we updated Windows Azure Active Directory so that developers can now easily create any number of Directories using it (for free and deployed within seconds). It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories. Simply choose the "Organizational Accounts" radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory. This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it. Now when you run the app your users can easily and securely sign in using their Active Directory credentials - regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013.

    Responsive Project Templates with Bootstrap

    The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops.
    For example, in a browser window the home page created by the MVC template renders at full width. When you resize the browser to a narrow window to see how it would look on a phone, the contents gracefully wrap around and the horizontal top menu turns into an icon; when you click the menu icon it expands into a vertical menu, which enables a good navigation experience on small-screen devices. We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices, and will enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there. You can learn more about Bootstrap here.

    Visual Studio Web Tooling Improvements

    Visual Studio 2013 includes a new, much richer HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense grouping, and other great improvements. For example, typing "ng-" on an HTML element will show the IntelliSense for AngularJS. This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications. The HTML editor can also now inspect your page at design time to determine all of the CSS classes that are available - for instance, offering an auto-completion list of classes from Bootstrap's CSS file. No more guessing at which Bootstrap element names you need to use. Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing. The LESS editor comes with all the cool features from the CSS editor and has specific IntelliSense for variables and mixins across all the LESS documents in the @import chain.

    Browser Link - a SignalR channel between browser and Visual Studio

    The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox, and Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time. This makes it much easier to develop/test against multiple browsers in parallel. Browser Link also exposes an API to enable developers to write Browser Link extensions. By taking advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross boundaries between Visual Studio and any browser that's connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser's developer tools, remote-controlling mobile emulators, and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward.

    ASP.NET Scaffolding

    ASP.NET Scaffolding is a new code generation framework for ASP.NET Web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms.
    When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API controller, the required NuGet packages and references to enable Web API are added to your project automatically. To do this, just choose the Add->New Scaffold Item context menu. Support for scaffolding async controllers uses the new async features from Entity Framework 6.

    ASP.NET Identity

    ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports claims-based authentication, where the user's identity is represented as a set of claims from a trusted issuer. Users can log in by creating an account on the website using a username and password, or they can log in using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.

    ASP.NET Web API 2

    ASP.NET Web API 2 has a bunch of great improvements, including attribute routing: ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers, as in the sketch below.
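    [Editor's note: in the original post this appeared as a screenshot. A representative sketch of Web API 2 attribute routing; the BooksController, route prefix, and action below are invented for illustration:]

        using System.Web.Http;

        [RoutePrefix("api/books")]
        public class BooksController : ApiController
        {
            // GET api/books/5/authors
            [Route("{bookId:int}/authors")]
            public IHttpActionResult GetAuthors(int bookId)
            {
                // Look up the authors for the given book...
                return Ok(new[] { "Author A", "Author B" });
            }
        }

        // Attribute routes are enabled once in WebApiConfig.Register:
        //   config.MapHttpAttributeRoutes();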
    OAuth 2.0 support: the Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients, including browsers and mobile devices.

    OData improvements: ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Specific enhancements in ASP.NET Web API 2 OData include:

      - Support for $select, $expand, $batch, and $value
      - Improved extensibility
      - Type-less support
      - Reuse of an existing model

    OWIN integration: ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API.

    In addition to the features above there have been a host of other improvements in ASP.NET Web API, including:

      - CORS support
      - Authentication filters
      - Filter overrides
      - Improved unit testability
      - Portable ASP.NET Web API client

    To learn more go to http://www.asp.net/web-api/

    ASP.NET SignalR 2

    ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. We've added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR have also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We've also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr.

    ASP.NET MVC 5

    The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features, including:

      - Authentication filters: these filters allow you to specify authentication logic per-action, per-controller, or globally for all controllers.
      - Attribute routing: attribute routing allows you to define your routes on actions or controllers.

    To learn more go to http://www.asp.net/mvc

    Entity Framework 6 Improvements

    Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data access space.

    Async and Task<T> support: EF6's new async query and save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios. This allows you to free up threads that might otherwise be blocked on data access requests, and enables them to be used to process other requests while you wait for the database engine to process operations. When the database server responds, the thread will be re-queued within your ASP.NET application and execution will continue. This enables you to easily write significantly more scalable server code. A sketch of an ASP.NET Web API action that makes use of the new EF6 async query methods follows below.
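    [Editor's note: the example was an image in the original post; this is a representative reconstruction. StoreContext, Product, and ProductsController are invented names, not from the post:]

        using System.Data.Entity;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class StoreContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }

        public class ProductsController : ApiController
        {
            // The request thread is released back to the pool while the
            // database processes the query, then resumed on completion.
            public async Task<IHttpActionResult> GetProducts()
            {
                using (var db = new StoreContext())
                {
                    var products = await db.Products.ToListAsync();
                    return Ok(products);
                }
            }
        }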
    Interception and logging: interception and SQL logging allow you to view - or even change - every command that is sent to the database by Entity Framework. This includes a simple, human-readable log, which is great for debugging, as well as some lower-level building blocks that give you access to the command and results. The simple log can be wired up to Debug in the constructor of an MVC controller (in EF6 it is exposed via the DbContext.Database.Log property).

    Custom Code First conventions: the new custom Code First conventions enable bulk configuration of a Code First model, reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don't match the Code First conventions. For example, a custom convention can configure all properties that are called 'Key' to be the primary key of the entity they belong to. This is different from the default Code First convention, which expects Id or <type name>Id.

    Connection resiliency: the new connection resiliency feature in EF6 enables you to register an execution strategy to handle - and potentially retry - failed database operations. This is especially useful when deploying to cloud environments, where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible - but overridable - defaults for the number of retries and the time between retries when errors occur. Registering it is simple using the new code-based configuration support.

    These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features.

    Microsoft OWIN Components

    The Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET "Katana" project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana.

    Summary

    Today's Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from server framework to data access to tooling to client-side HTML development. They also integrate some great open-source technology and contributions from our developer community. Download and start using them today!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Cisco 881 losing NAT NVI translation config after reload

    - by MasterRoot24
    This is a weird one, so I'll try to explain in as much detail as I can to give the whole picture. As I've mentioned in my other questions, I'm in the process of setting up a new Cisco 881 as my WAN router and NAT firewall. I'm facing an issue where NAT NVI rules that I have configured are not enabled after a reload of the router, regardless of the fact that they are present in the startup-config. To clarify this a little, here's the relevant section of my current running-config:

        Router1#show running-config | include nat source
        ip nat source list 1 interface FastEthernet4 overload
        ip nat source list 2 interface FastEthernet4 overload
        ip nat source static tcp 192.168.1.x 1723 interface FastEthernet4 1723
        ip nat source static tcp 192.168.1.x 80 interface FastEthernet4 80
        ip nat source static tcp 192.168.1.x 443 interface FastEthernet4 443
        ip nat source static tcp 192.168.1.x 25 interface FastEthernet4 25
        ip nat source static tcp 192.168.1.x 587 interface FastEthernet4 587
        ip nat source static tcp 192.168.1.x 143 interface FastEthernet4 143
        ip nat source static tcp 192.168.1.x 993 interface FastEthernet4 993

    ...and here are the mappings 'in action':

        Router1#show ip nat nvi translations | include ---
        tcp <WAN IP>:25    192.168.1.x:25    --- ---
        tcp <WAN IP>:80    192.168.1.x:80    --- ---
        tcp <WAN IP>:143   192.168.1.x:143   --- ---
        tcp <WAN IP>:443   192.168.1.x:443   --- ---
        tcp <WAN IP>:587   192.168.1.x:587   --- ---
        tcp <WAN IP>:993   192.168.1.x:993   --- ---
        tcp <WAN IP>:1723  192.168.1.x:1723  --- ---

    ...and here's proof that the mappings are saved to startup-config:

        Router1#show startup-config | include nat source
        ip nat source list 1 interface FastEthernet4 overload
        ip nat source list 2 interface FastEthernet4 overload
        ip nat source static tcp 192.168.1.x 1723 interface FastEthernet4 1723
        ip nat source static tcp 192.168.1.x 80 interface FastEthernet4 80
        ip nat source static tcp 192.168.1.x 443 interface FastEthernet4 443
        ip nat source static tcp 192.168.1.x 25 interface FastEthernet4 25
        ip nat source static tcp 192.168.1.x 587 interface FastEthernet4 587
        ip nat source static tcp 192.168.1.x 143 interface FastEthernet4 143
        ip nat source static tcp 192.168.1.x 993 interface FastEthernet4 993

    However, look what happens after a reload of the router:

        Router1#reload
        Proceed with reload? [confirm]Connection to router closed by remote host.
        Connection to router closed.
        $ ssh joe@router
        Password:
        Authorized Access only
        Router1>en
        Password:
        Router1#show ip nat nvi translations | include ---
        Router1#
        Router1#show ip nat translations | include ---
        tcp 188.222.181.173:25    192.168.1.2:25    --- ---
        tcp 188.222.181.173:80    192.168.1.2:80    --- ---
        tcp 188.222.181.173:143   192.168.1.2:143   --- ---
        tcp 188.222.181.173:443   192.168.1.2:443   --- ---
        tcp 188.222.181.173:587   192.168.1.2:587   --- ---
        tcp 188.222.181.173:993   192.168.1.2:993   --- ---
        tcp 188.222.181.173:1723  192.168.1.2:1723  --- ---
        Router1#

    Here's proof that the running config should have the mappings set up as NVI:

        Router1#show running-config | include nat source
        ip nat source list 1 interface FastEthernet4 overload
        ip nat source list 2 interface FastEthernet4 overload
        ip nat source static tcp 192.168.1.2 1723 interface FastEthernet4 1723
        ip nat source static tcp 192.168.1.2 80 interface FastEthernet4 80
        ip nat source static tcp 192.168.1.2 443 interface FastEthernet4 443
        ip nat source static tcp 192.168.1.2 25 interface FastEthernet4 25
        ip nat source static tcp 192.168.1.2 587 interface FastEthernet4 587
        ip nat source static tcp 192.168.1.2 143 interface FastEthernet4 143
        ip nat source static tcp 192.168.1.2 993 interface FastEthernet4 993

    At this point, the mappings are not working (inbound connections from the WAN on HTTP/IMAP fail). I presume that this is because my interfaces are using ip nat enable for use with NVI mappings, instead of ip nat inside/outside. So, I re-apply the mappings:

        Router1#configure terminal
        Enter configuration commands, one per line.  End with CNTL/Z.
        Router1(config)#ip nat source static tcp 192.168.1.2 1723 interface FastEthernet4 1723
        Router1(config)#ip nat source static tcp 192.168.1.2 80 interface FastEthernet4 80
        Router1(config)#ip nat source static tcp 192.168.1.2 443 interface FastEthernet4 443
        Router1(config)#ip nat source static tcp 192.168.1.2 25 interface FastEthernet4 25
        Router1(config)#ip nat source static tcp 192.168.1.2 587 interface FastEthernet4 587
        Router1(config)#ip nat source static tcp 192.168.1.2 143 interface FastEthernet4 143
        Router1(config)#ip nat source static tcp 192.168.1.2 993 interface FastEthernet4 993
        Router1(config)#end

    ...then they show up correctly:

        Router1#show ip nat nvi translations | include ---
        tcp 188.222.181.173:25    192.168.1.2:25    --- ---
        tcp 188.222.181.173:80    192.168.1.2:80    --- ---
        tcp 188.222.181.173:143   192.168.1.2:143   --- ---
        tcp 188.222.181.173:443   192.168.1.2:443   --- ---
        tcp 188.222.181.173:587   192.168.1.2:587   --- ---
        tcp 188.222.181.173:993   192.168.1.2:993   --- ---
        tcp 188.222.181.173:1723  192.168.1.2:1723  --- ---
        Router1#
        Router1#show ip nat translations | include ---
        Router1#

    ...furthermore, now from both WAN and LAN, the services mapped above work until the next reload. All of the above is required every time I have to reload the router (which is all too often at the moment :-( ). Here's my full current config:

        !
        ! Last configuration change at 20:20:15 UTC Tue Dec 11 2012 by xxx
        version 15.2
        no service pad
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname xxx
        !
        boot-start-marker
        boot-end-marker
        !
        enable secret 4 xxxx
        !
        aaa new-model
        !
        aaa authentication login local_auth local
        !
        aaa session-id common
        !
        memory-size iomem 10
        !
        crypto pki trustpoint TP-self-signed-xxx
         enrollment selfsigned
         subject-name cn=IOS-Self-Signed-Certificate-xxx
         revocation-check none
         rsakeypair TP-self-signed-xxx
        !
        crypto pki certificate chain TP-self-signed-xxx
         certificate self-signed 01
          xxx
          quit
        ip gratuitous-arps
        ip auth-proxy max-login-attempts 5
        ip admission max-login-attempts 5
        !
        ip domain list dmz.xxx.local
        ip domain list xxx.local
        ip domain name dmz.xxx.local
        ip name-server 192.168.1.x
        ip cef
        login block-for 3 attempts 3 within 3
        no ipv6 cef
        !
        multilink bundle-name authenticated
        license udi pid CISCO881-SEC-K9 sn xxx
        !
        username admin privilege 15 secret 4 xxx
        username joe secret 4 xxx
        !
        ip ssh time-out 60
        !
        interface FastEthernet0
         no ip address
        !
        interface FastEthernet1
         no ip address
        !
        interface FastEthernet2
         no ip address
        !
        interface FastEthernet3
         switchport access vlan 2
         no ip address
        !
        interface FastEthernet4
         ip address dhcp
         ip access-group 101 in
         ip nat enable
         duplex auto
         speed auto
        !
        interface Vlan1
         ip address 192.168.1.x 255.255.255.0
         no ip redirects
         no ip unreachables
         no ip proxy-arp
         ip nat enable
        !
        interface Vlan2
         ip address 192.168.0.x 255.255.255.0
        !
        ip forward-protocol nd
        ip http server
        ip http access-class 1
        ip http authentication local
        ip http secure-server
        !
        ip nat source list 1 interface FastEthernet4 overload
        ip nat source list 2 interface FastEthernet4 overload
        ip nat source static tcp 192.168.1.x 1723 interface FastEthernet4 1723
        !
        access-list 1 permit 192.168.0.0 0.0.0.255
        access-list 2 permit 192.168.1.0 0.0.0.255
        access-list 101 permit udp 193.x.x.0 0.0.0.255 any eq 5060
        access-list 101 deny udp any any eq 5060
        access-list 101 permit ip any any
        !
        control-plane
        !
        banner motd Authorized Access only
        !
        line con 0
         exec-timeout 15 0
         login authentication local_auth
        line aux 0
         exec-timeout 15 0
         login authentication local_auth
        line vty 0 4
         access-class 2 in
         login authentication local_auth
         length 0
         transport input all
        !
        end

    I'd appreciate it greatly if anyone can help me find out why these mappings are not set up correctly from the saved config after a reload.

    Read the article

  • VS 2010 SP1 (Beta) and IIS Express

    - by ScottGu
    Last month we released the VS 2010 Service Pack 1 (SP1) Beta. You can learn more about the VS 2010 SP1 Beta from Jason Zander's two blog posts about it, and from Scott Hanselman's blog post that covers some of the new capabilities enabled with it. You can download and install the VS 2010 SP1 Beta here.

    IIS Express

    Earlier this summer I blogged about IIS Express. IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios. We think it combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built into VS today with the full power of IIS. Specifically:

      - It's lightweight and easy to install (less than 5MB download and a quick install)
      - It does not require an administrator account to run/debug applications from Visual Studio
      - It enables a full web-server feature set - including SSL, URL Rewrite, and other IIS 7.x modules
      - It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
      - It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
      - It works on Windows XP and higher operating systems - giving you a full IIS 7.x developer feature set on all Windows OS platforms

    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk. It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios. Visual Studio 2010 SP1 adds support for IIS Express - and you can start to take advantage of this with last month's VS 2010 SP1 Beta release.

    Downloading and Installing IIS Express

    IIS Express isn't included as part of the VS 2010 SP1 Beta. Instead it is a separate ~4MB download which you can download and install using this link (it uses WebPI to install it). Once IIS Express is installed, VS 2010 SP1 will enable some additional IIS Express commands and dialog options that allow you to easily use it.

    Enabling IIS Express for Existing Projects

    Visual Studio today defaults to using the built-in ASP.NET Development Server (aka Cassini) when running ASP.NET projects. Converting your existing projects to use IIS Express is really easy. You can do this by opening up the project properties dialog of an existing project, clicking the "Web" tab within it, and selecting the "Use IIS Express" checkbox. Or even simpler, just right-click on your existing project and select the "Use IIS Express…" menu command. Now when you run or debug your project you'll see that IIS Express starts up and runs automatically as your web server. You can optionally right-click on the IIS Express icon within your system tray to see/browse all of the sites and applications running on it. Note that if you ever want to revert back to using the ASP.NET Development Server, you can do this by right-clicking the project again and then selecting the "Use Visual Studio Development Server" option (or go into the project properties, click the Web tab, and uncheck IIS Express). This will revert back to the ASP.NET Development Server the next time you run the project.

    IIS Express Properties

    Visual Studio 2010 SP1 exposes several new IIS Express configuration options that you couldn't previously set with the ASP.NET Development Server.
    Some of these are exposed via the property grid of your project (select the project node in Solution Explorer and then change them via the Properties window). For example, enabling something like SSL support (which is not possible with the ASP.NET Development Server) can now be done simply by changing the "SSL Enabled" property to "True". Once this is done, IIS Express will expose both an HTTP and an HTTPS endpoint for the project that we can use.

    SSL Self-Signed Certs

    IIS Express ships with a self-signed SSL certificate that it installs as part of setup - which removes the need for you to install your own certificate to use SSL during development. Once you change the above property to enable SSL, you'll be able to browse to your site with the appropriate https:// URL prefix and it will connect via SSL. One caveat with self-signed certificates, though, is that browsers (like IE) will go out of their way to warn you that they aren't to be trusted. You can mark the certificate as trusted to avoid seeing dialogs like this - or just keep the certificate un-trusted and press the "continue" button when the browser warns you not to trust your local web server.

    Additional IIS Settings

    IIS Express uses its own per-user ApplicationHost.config file to configure default server behavior. Because it is per-user, it can be configured by developers who do not have admin credentials - unlike the full IIS. You can customize all IIS features and settings via it if you want ultimate server customization (for example: to use your own certificates for SSL instead of self-signed ones). We recommend storing all app-specific settings for IIS and ASP.NET within the web.config file which is part of your project, since that makes deploying apps easier (the settings can be copied with the application content). IIS (since IIS 7) no longer uses the metabase, and instead uses the same web.config configuration files that ASP.NET has always supported - which makes xcopy/FTP-based deployment much easier.

    Making IIS Express your Default Web Server

    Above we looked at how to convert existing sites that use the ASP.NET Development Server to instead use IIS Express. You can configure Visual Studio to use IIS Express as the default web server for all new projects by clicking the Tools->Options menu command and opening up the Projects and Solutions->Web Projects node within the Options dialog. Clicking the "Use IIS Express for new file-based web sites and projects" checkbox will cause Visual Studio to use it for all new web sites and projects.

    Summary

    We think IIS Express makes it even easier to build, run and test web applications. It works with all versions of ASP.NET and supports all ASP.NET application types (including obviously both ASP.NET Web Forms and ASP.NET MVC applications). Because IIS Express is based on the IIS 7.5 codebase, you have a full web-server feature set that you can use. This means you can build and run your applications just like they'll work on a real production web server. In addition to supporting ASP.NET, IIS Express also supports Classic ASP and other file types and extensions supported by IIS - which also makes it ideal for sites that combine a variety of different technologies. Best of all, you do not need to change any code to take advantage of it. As you can see above, updating existing Visual Studio web projects to use it is trivial. You can begin to take advantage of IIS Express today using the VS 2010 SP1 Beta.

    Hope this helps,

    Scott

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

      - Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services
      - ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format
      - SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types

    Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

      - US Census Bureau Website, http://www.census.gov/geo/www/tiger/
      - Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/
      - Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/

    In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations, to embed a map in a report.

    Preparing the spatial data

    First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure why, but I had to uncheck the option to create a spatial index to upload the data; otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.

    Step 1: Using the Map Wizard to Create a Map of France

    You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores.
    Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France:

      1. On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next.
      2. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data.
      3. On the Choose a connection to a SQL Server spatial data source page, select New.
      4. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial
      5. Click OK and then click Next.
      6. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1
      7. Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page. You have the option to add a Bing Maps layer, which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature.
      8. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later.
      9. Select Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next.
      10. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish.

    Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a Map Layer to an Existing Map

    The map data region allows you to add multiple layers. Each layer is associated with a different dataset. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add another layer for the store locations by following these steps:

      1. If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it.
      2. Click the New Layer Wizard button in the Map Layers window, and we start over again with the process by choosing a spatial data source. Select SQL Server spatial query, and click Next.
      3. Select Add a new dataset with SQL Server spatial data, and click Next.
      4. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next.
      5. Add a query with spatial data (like the one I included in the downloadable project), and click Next.
      6. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly.
      7. Select Embed map data in this report, and click Next.
    On the Choose map visualization page, select Basic Marker Map, and click Next.
    On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day.
    Click Finish and then click Preview to see the rendered report.

    Et voilà... c'est fini (and there you have it, it's finished). Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.
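    For reference, here is a rough sketch of the kind of store-location query described above in "Preparing the spatial data." The join path below is only an assumption based on the standard AdventureWorks2008R2 schema; the actual script ships with the downloadable project.

        -- Hypothetical sketch: store names plus their geography points for the marker layer.
        SELECT s.Name AS StoreName,
               a.SpatialLocation              -- geography point rendered as a map marker
        FROM Sales.Store AS s
        JOIN Person.BusinessEntityAddress AS bea
            ON bea.BusinessEntityID = s.BusinessEntityID
        JOIN Person.Address AS a
            ON a.AddressID = bea.AddressID
        JOIN Person.StateProvince AS sp
            ON sp.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS cr
            ON cr.CountryRegionCode = sp.CountryRegionCode
        WHERE cr.Name = N'France';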

    Read the article

  • URL Routing in ASP.NET 4.0

    In the .NET Framework 3.5 SP1, Microsoft introduced ASP.NET Routing, which decouples the URL of a resource from the physical file on the web server. With ASP.NET Routing you, the developer, define routing rules that map route patterns to a class that generates the content. For example, you might indicate that the URL Categories/CategoryName maps to a class that takes the CategoryName and generates HTML that lists that category's products in a grid. With such a mapping, users could view products for the Beverages category by visiting www.yoursite.com/Categories/Beverages. In .NET 3.5 SP1, ASP.NET Routing was primarily designed for ASP.NET MVC applications, although as discussed in Using ASP.NET Routing Without ASP.NET MVC it is possible to implement ASP.NET Routing in a Web Forms application, as well. However, implementing ASP.NET Routing in a Web Forms application involves a bit of seemingly excessive legwork. In a Web Forms scenario we typically want to map a routing pattern to an actual ASP.NET page. To do so we need to create a route handler class that is invoked when the routing URL is requested and, in a sense, dispatches the request to the appropriate ASP.NET page. For instance, mapping a route to a physical file - such as mapping Categories/CategoryName to ShowProductsByCategory.aspx - requires three steps: (1) Define the mapping in Global.asax, which maps a route pattern to a route handler class; (2) Create the route handler class, which is responsible for parsing the URL, storing any route parameters into some location that is accessible to the target page (such as HttpContext.Items), and returning an instance of the target page or HTTP Handler that handles the requested route; and (3) writing code in the target page to grab the route parameters and use them in rendering its content. Given how much effort it took to just read the preceding sentence (let alone write it) you can imagine that implementing ASP.NET Routing in a Web Forms application is not necessarily the most straightforward task. The good news is that ASP.NET 4.0 has greatly simplified ASP.NET Routing for Web Form applications by adding a number of classes and helper methods that can be used to encapsulate the aforementioned complexity. With ASP.NET 4.0 it's easier to define the routing rules and there's no need to create a custom route handling class. This article details these enhancements. Read on to learn more! Read More >
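    To give a flavor of the simplification, here is a minimal sketch of the ASP.NET 4.0 approach using the new MapPageRoute helper. The route name and target page are illustrative, reusing the Categories example above.

        // Global.asax - map the route pattern straight to a physical page; no route handler class needed.
        void Application_Start(object sender, EventArgs e)
        {
            System.Web.Routing.RouteTable.Routes.MapPageRoute(
                "ProductsByCategory",              // route name
                "Categories/{categoryName}",       // URL pattern
                "~/ShowProductsByCategory.aspx");  // physical file that handles the route
        }

        // ShowProductsByCategory.aspx.cs - read the route parameter directly from RouteData.
        protected void Page_Load(object sender, EventArgs e)
        {
            string categoryName = (string)Page.RouteData.Values["categoryName"];
            // ... use categoryName to bind the product grid ...
        }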

    Read the article

  • Adding Volcanos and Options - Earthquake Locator, part 2

    - by Bobby Diaz
    Since volcanos are often associated with earthquakes, and vice versa, I decided to show recent volcanic activity on the Earthquake Locator map. I am pulling the data from a website created for a joint project between the Smithsonian's Global Volcanism Program and the US Geological Survey's Volcano Hazards Program, found here. They provide a Weekly Volcanic Activity Report as an RSS feed. I started implementing this new functionality by creating a new Volcano entity in the domain model and adding the following to the EarthquakeService class (I also factored out the common reading/parsing helper methods to a separate FeedReader class that can be used by multiple domain service classes):

        private static readonly string VolcanoFeedUrl =
            ConfigurationManager.AppSettings["VolcanoFeedUrl"];

        /// <summary>
        /// Gets the volcano data for the previous week.
        /// </summary>
        /// <returns>A queryable collection of <see cref="Volcano"/> objects.</returns>
        public IQueryable<Volcano> GetVolcanos()
        {
            var feed = FeedReader.Load(VolcanoFeedUrl);
            var list = new List<Volcano>();

            if ( feed != null )
            {
                foreach ( var item in feed.Items )
                {
                    var volcano = CreateVolcano(item);
                    if ( volcano != null )
                    {
                        list.Add(volcano);
                    }
                }
            }

            return list.AsQueryable();
        }

        /// <summary>
        /// Creates a <see cref="Volcano"/> object for each item in the RSS feed.
        /// </summary>
        /// <param name="item">The RSS item.</param>
        /// <returns></returns>
        private Volcano CreateVolcano(SyndicationItem item)
        {
            Volcano volcano = null;
            string title = item.Title.Text;
            string desc = item.Summary.Text;
            double? latitude = null;
            double? longitude = null;

            FeedReader.GetGeoRssPoint(item, out latitude, out longitude);

            if ( !String.IsNullOrEmpty(title) )
            {
                title = title.Substring(0, title.IndexOf('-'));
            }
            if ( !String.IsNullOrEmpty(desc) )
            {
                desc = String.Join("\n\n", desc
                        .Replace("<p>", "")
                        .Split(
                            new string[] { "</p>" },
                            StringSplitOptions.RemoveEmptyEntries)
                        .Select(s => s.Trim())
                        .ToArray())
                        .Trim();
            }

            if ( latitude != null && longitude != null )
            {
                volcano = new Volcano()
                {
                    Id = item.Id,
                    Title = title,
                    Description = desc,
                    Url = item.Links.Select(l => l.Uri.OriginalString).FirstOrDefault(),
                    Latitude = latitude.GetValueOrDefault(),
                    Longitude = longitude.GetValueOrDefault()
                };
            }

            return volcano;
        }

    I then added the corresponding LoadVolcanos() method and Volcanos collection to the EarthquakeViewModel class in much the same way I did with the Earthquakes in my previous article in this series.
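    For readers without part 1 handy, the ViewModel additions might look roughly like this. This is a sketch only: the EarthquakeContext type name and the generated GetVolcanosQuery() method are assumptions based on the WCF RIA Services pattern the series uses.

        // Hypothetical sketch of the EarthquakeViewModel additions; names are guesses.
        public ObservableCollection<Volcano> Volcanos { get; private set; }

        private void LoadVolcanos()
        {
            var context = new EarthquakeContext();
            context.Load(context.GetVolcanosQuery(), loadOp =>
            {
                if (!loadOp.HasError)
                {
                    foreach (var volcano in loadOp.Entities)
                    {
                        Volcanos.Add(volcano);   // bound to the map's volcano layer
                    }
                }
            }, null);
        }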
    Now that I am starting to add more information to the map, I wanted to give the user some options as to what is displayed and allow them to choose what gets turned off. I have updated the MainPage.xaml to look like this:

    <UserControl x:Class="EarthquakeLocator.MainPage"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:basic="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"
        xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"
        xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"
        mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >
        <UserControl.Resources>
            <DataTemplate x:Key="EarthquakeTemplate">
                <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"
                         Width="{Binding Size}" Height="{Binding Size}"
                         bing:MapLayer.Position="{Binding Location}"
                         bing:MapLayer.PositionOrigin="Center">
                    <ToolTipService.ToolTip>
                        <StackPanel>
                            <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />
                            <TextBlock Text="{Binding UtcTime}" />
                            <TextBlock Text="{Binding LocalTime}" />
                            <TextBlock Text="{Binding DepthDesc}" />
                        </StackPanel>
                    </ToolTipService.ToolTip>
                </Ellipse>
            </DataTemplate>

            <DataTemplate x:Key="VolcanoTemplate">
                <Polygon Fill="Gold" Stroke="Black" StrokeThickness="1" Points="0,10 5,0 10,10"
                         bing:MapLayer.Position="{Binding Location}"
                         bing:MapLayer.PositionOrigin="Center"
                         MouseLeftButtonUp="Volcano_MouseLeftButtonUp">
                    <ToolTipService.ToolTip>
                        <StackPanel>
                            <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />
                            <TextBlock Text="Click icon for more information..." />
                        </StackPanel>
                    </ToolTipService.ToolTip>
                </Polygon>
            </DataTemplate>
        </UserControl.Resources>

        <UserControl.DataContext>
            <vm:EarthquakeViewModel AutoLoadData="True" />
        </UserControl.DataContext>

        <Grid x:Name="LayoutRoot">
            <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"
                      Center="{Binding MapCenter, Mode=TwoWay}"
                      ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">
                <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"
                                      ItemTemplate="{StaticResource EarthquakeTemplate}" />
                <bing:MapItemsControl ItemsSource="{Binding Volcanos}"
                                      ItemTemplate="{StaticResource VolcanoTemplate}" />
            </bing:Map>

            <basic:TabControl x:Name="tabs" VerticalAlignment="Bottom" MaxHeight="25" Opacity="0.7">
                <basic:TabItem Margin="90,0,-90,0" MouseLeftButtonUp="TabItem_MouseLeftButtonUp">
                    <basic:TabItem.Header>
                        <TextBlock x:Name="txtHeader" Text="Options"
                                   FontSize="13" FontWeight="Bold" />
                    </basic:TabItem.Header>

                    <StackPanel Orientation="Horizontal">
                        <TextBlock Text="Earthquakes:" FontWeight="Bold" Margin="3" />
                        <StackPanel Margin="3">
                            <CheckBox Content=" &lt; 4.0"
                                      IsChecked="{Binding ShowLt4, Mode=TwoWay}" />
                            <CheckBox Content="4.0 - 4.9"
                                      IsChecked="{Binding Show4s, Mode=TwoWay}" />
                            <CheckBox Content="5.0 - 5.9"
                                      IsChecked="{Binding Show5s, Mode=TwoWay}" />
                        </StackPanel>

                        <StackPanel Margin="10,3,3,3">
                            <CheckBox Content="6.0 - 6.9"
                                      IsChecked="{Binding Show6s, Mode=TwoWay}" />
                            <CheckBox Content="7.0 - 7.9"
                                      IsChecked="{Binding Show7s, Mode=TwoWay}" />
                            <CheckBox Content="8.0 +"
                                      IsChecked="{Binding ShowGe8, Mode=TwoWay}" />
                        </StackPanel>

                        <TextBlock Text="Other:" FontWeight="Bold" Margin="50,3,3,3" />
                        <StackPanel Margin="3">
                            <CheckBox Content="Volcanos"
                                      IsChecked="{Binding ShowVolcanos, Mode=TwoWay}" />
                        </StackPanel>
                    </StackPanel>

                </basic:TabItem>
            </basic:TabControl>

        </Grid>
    </UserControl>

    Notice that I added a VolcanoTemplate that uses a triangle-shaped Polygon to represent the Volcano locations, and I also added a second <bing:MapItemsControl /> tag to the map to bind to the Volcanos collection. The TabControl found below the map houses the options panel that will present the user with several checkboxes so they can filter the different points based on type and other properties (i.e. Magnitude). Initially, the TabItem is collapsed to reduce its footprint, but the screen shot below shows the options panel expanded to reveal the available settings:

    I have updated the Source Code and Live Demo to include these new features. Happy Mapping!

    Read the article

  • Silverlight Cream for March 08, 2010 -- #809

    - by Dave Campbell
    In this Issue: Michael Washington, Tim Greenfield, Bobby Diaz(-2-), Glenn Block(-2-), Nikhil Kothari, Jianqiang Bao(-2-), and Christopher Bennage. Shoutouts: Adam Kinney announced a Big update for the Project Rosetta site today Arpit Gupta has opened a new blog with a great logo: I think therefore I am dangerous :) From SilverlightCream.com: DotNetNuke Silverlight Traffic Module If it's DNN and Silverlight, it has to be my buddy Michael Washington :) ... Michael has combined those stunning gauges you've seen with website traffic... just too cool!... grab the code and display yours too! Cool demonstration of Silverlight VideoBrush This is a no-code post by Tim Greenfield, but I like the UX on this Jigsaw Puzzle page... and you can make your own. Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1 Bobby Diaz has an informative post up on combining earthquake data with Bing Maps in Silverlight 3... check it out, then grab the recently posted Live Demo and Source Code Adding Volcanos and Options - Earthquake Locator, part 2 Bobby Diaz also added volcanic activity to his earthquake Bing Maps app, and updated the downloadable code and live demo. Building Hello MEF – Part IV – DeploymentCatalog Glenn Block posted a pair of MEF posts yesterday... made me think I missed one :) .. the first one is about the DeploymentCatalog. Note he is going to be using the CodePlex bits in his posts. Building HelloMEF – Part V – Refactoring to ViewModel Glenn Block's part V is about MEF and MVVM -- no, really! ... he is refactoring MVVM into the app with a nod to Josh Smith and Laurent Bugnion... get your head around this... The Case for ViewModel Nikhil Kothari has a post up about the ViewModel, and how it facilitates designer/developer workflow, jumpstarts development, improves scaling, and makes asynchronous programming simpler MMORPG programming in Silverlight Tutorial (12) Map Instance (Part I) Jianqiang Bao has part 12 of his MMORPG game up... this one is showing how to deal with obstructions on maps. MMORPG programming in Silverlight Tutorial (13) Perfect moving mechanism Jianqiang Bao also has part 13 up, and this second one is about sprite movement around the obstructions. 1 Simple Step for Commanding in Silverlight Christopher Bennage blogged about Commanding in Silverlight, he begins with a blog post about commands in Silverlight 4 then goes on to demonstrate the Caliburn way of doing commanding. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • Silverlight Cream for March 10, 2010 -- #810

    - by Dave Campbell
    In this Issue: Andrea Boschin, Jeremy Likness(-2-), Andrew Veresov, Nokola, SilverLaw, Gill Cleeren, Jim Wightman and Jeremy Likness, Viktor Larsson(-2-), and Walter Ferrari. Shoutouts: Viktor Larsson has a post up about Silverlight Market Penetration ... hope to meet you at MIX10, Viktor! Gergely Orosz has posted the Slides and code for the presentation “An Introduction to Silverlight” It appears that if I miss a day, I can pretty much do an all-submittal post :) From SilverlightCream.com: Writing an AsyncLoader to enqueue long running operations Andrea Boschin has a tutorial on SilverlightShow where he's building up an asynch service to deal with a long-running app on the server. MVVM with MEF in Silverlight: Video Tutorial Jeremy Likness has a video tutorial up for helping beginners wire up MVVM and MEF to Silverlight. Source code for the app in the video is downloadable. MVVM with MEF in Silverlight Video Tutorial Part 2: Plugins and Metadata In part 2, Jeremy Likness redesigns the app using metadata to turn the shapes into objects, and then show how easy it is to add a new plugin... and the source for the app is downloadable. Binding a Converter Parameter Andrew Veresov has a nice code-filled solution up for those times that you need to bind a ConverterParameter value. EasyPainter: Lion Hair styling Nokola has not been idle with Easy Painter... now he's added "Lion Hair" to the list of stylings you can apply... guess if you want to change someone's 'mane' ... sorry! Twisting Navigation - Silverlight 3 SilverLaw has another control up - a "Twisting Navigation" control... very cool :) ... and since I'm behind the curve, he already has an update in the Expression Gallery as noted in his post, and a video tutorial on implementing it in an application... and if you understand German, turn up the sound :) Uploading and downloading images using a WCF service with Silverlight Gill Cleeren has a tutorial up at SilverlightShow on uploading and downloading images using WCF Services in Silverlight New Windows Phone 7 Community Developer Hub Jim Wightman and Jeremy Likness have a very cool Silverlight page up where you can paste the URL of your XAP in and have it display in a "Windows 7 Series Phone" ... and that's all I'm saying about that. XAML Transformation 101 Viktor Larsson is discussing Transforms in XAML and has a nice tutorial up that is easily the beginning of a carousel... you may also want to check out his other posts... I'm adding him to my list. Silverlight 4 Webcam Demo In this post, Viktor Larsson has a tutorial up for using the WebCam. This is from a beginner perspective, so if you haven't jumped in, now's a good time. How to extend Bing Maps Silverlight with an elevation profile graph - Part 1 Walter Ferrari has a post up at SilverlightShow discussing extensions to BingMaps such as creating routes using GeoCoding and Route Services plus drawing lines on the maps and getting coordinates of the points. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • Company Review: Google Products

    Google, Inc offers an array of products and services to all of its end-users. However, their search capabilities are the foundation for Google's current success and their primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Their product decisions have allowed user demands to be met while focusing on a free-of-charge model. This allows users to access Google data free of charge and, along with the accuracy of its search results, indirectly gives Google a strong competitive advantage over other competitors. According to Google, Inc, they offer the following types of searching capabilities:

    Alerts - Get email updates on the topics of your choice
    Blog Search - Find blogs on your favorite topics
    Books - Search the full text of books
    Custom Search - Create a customized search experience for your community
    Desktop - Search and personalize your computer
    Dictionary - Search for definitions of words and phrases
    Directory - Search the web, organized by topic or category
    Earth - Explore the world from your computer
    Finance - Business info, news and interactive charts
    GOOG-411 - Find and connect for free with businesses from your phone
    Images - Search for images on the web
    Maps - View maps and directions
    News - Search thousands of news stories
    Patent Search - Search the full text of US Patents
    Product Search - Search for stuff to buy
    Scholar - Search scholarly papers
    Toolbar - Add a search box to your browser
    Trends - Explore past and present search trends
    Videos - Search for videos on the web
    Web Search - Search billions of web pages
    Web Search Features - Find movies, music, stocks, books and more

    Google's free-of-charge business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. This method is particularly useful in accounting for needs that are not easily articulated or precisely defined, according to the U.S. Department of Transportation Federal Highway Administration. Because QFD is so customer driven, Google is in a constant state of change as it attempts to reengineer its search algorithms and other dependent systems so that end-user requirements are constantly being met. Value engineering is a key example of this: Google is constantly trying to improve all aspects of its products, improve system maintainability, and improve system interoperability. Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle cost options in design, materials and processes that achieves the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large scale systems like Google's search applications include modular design of a product and/or service and providing accurate value analysis.
    The Open System Joint Task Force defines modular design as a design approach that adheres to four fundamental tenets - cohesiveness, encapsulation, self-containment, and high binding - to design a system component as an independently operable unit subject to change. More specifically, M. S. Schmaltz describes modular software design this way: given a large collection of statements strung together in one partition of in-line code, we segment or divide the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. Value analysis involves assessing the quality as well as the cost of a product or service, as defined by the Healthcare Financial Management Association. "Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want." (MIT, 2010) Google, Inc encourages an open environment between all employees, also known as Googlers. This is reinforced by cross-functional teams, composed of members from multiple departments, assigned to every project so that every department - like marketing, finance, and quality assurance - has input on every project. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee's time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, Cross-Functional Team members come from different organizational units. Cross-Functional Teams may be permanent or ad hoc. Google's search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and allows results to be returned to the end-users quickly based on specific parameters and search settings. In addition, they also store the data that is returned in case others desire the same results based on the same customized settings. This allows Google to appear to render search results in virtually real-time to the user while allowing for complete customization of the searching criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google plays a key role in ensuring that the search application's product strategy is maintained, simply because the IT staff designs, develops, and maintains all of their proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users.
    References:
    http://www.google.com/intl/en/options/
    http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
    http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
    http://www.acq.osd.mil/osjtf/termsdef.html
    http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
    http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
    http://mitsloan.mit.edu/omg/om-definition.php
    http://www.humtech.com/opm/grtl/ols/ols3.cfm
    http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • SEO made easy with IIS URL Rewrite 2.0 SEO templates

    - by The Official Microsoft IIS Site
    A few weeks ago my team released version 2.0 of URL Rewrite for IIS. URL Rewrite is probably the most powerful rewrite engine for web applications. It gives you many features, including inbound rewriting (i.e., rewrite the URL, redirect to another URL, abort requests, use of maps, and more), and in version 2.0 it also includes outbound rewriting so that you can rewrite URLs or any markup as the content is being sent back, even if it's generated using PHP, ASP.NET or any other technology. It also...(read more)
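    To give a flavor of what an inbound rule looks like, here is a minimal sketch of a web.config rewrite rule. The rule name, pattern, and target page are made up for illustration.

        <!-- Hypothetical inbound rule: rewrites /Categories/Beverages to a physical page. -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="CategoryRewrite" stopProcessing="true">
                <match url="^Categories/([^/]+)$" />
                <action type="Rewrite" url="ShowProductsByCategory.aspx?category={R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>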

    Read the article

  • Daily tech links for .net and related technologies - May 26-29, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - May 26-29, 2010 Web Development Porting MVC Music Store to Raven: StoreController - Ayende Building a Store Locator ASP.NET Application Using Google Maps API - Scott Mitchell Anti-Forgery Request Recipes For ASP.NET MVC And AJAX - Dixin How to Localize an ASP.NET MVC Application - Michael Ceranski Tekpub ASP.NET MVC 2 Starter Site 0.5 Released - Rob Conery How to use Google Data API in ASP.NET MVC. Part 2 - Mahdi jQuery.validate and Html.ValidationSummary...(read more)

    Read the article

  • How to manage memory using classes in Objective-C?

    - by Flipper
    This is my first time creating an iPhone App and I am having difficulty with the memory management because I have never had to deal with it before. I have a UITableViewController and it all works fine until I try to scroll down in the simulator. It crashes saying that it cannot allocate that much memory. I have narrowed it down to where the crash is occurring: - (UITableViewCell *)tableView:(UITableView *)aTableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { // Dequeue or create a cell UITableViewCellStyle style = UITableViewCellStyleDefault; UITableViewCell *cell = [aTableView dequeueReusableCellWithIdentifier:@"BaseCell"]; if (!cell) cell = [[[UITableViewCell alloc] initWithStyle:style reuseIdentifier:@"BaseCell"] autorelease]; NSString* crayon; // Retrieve the crayon and its color if (aTableView == self.tableView) { crayon = [[[self.sectionArray objectAtIndex:indexPath.section] objectAtIndex:indexPath.row] getName]; } else { crayon = [FILTEREDKEYS objectAtIndex:indexPath.row]; } cell.textLabel.text = crayon; if (![crayon hasPrefix:@"White"]) cell.textLabel.textColor = [self.crayonColors objectForKey:crayon]; else cell.textLabel.textColor = [UIColor blackColor]; return cell; } Here is the getName method: - (NSString*)getName { return name; } name is defined as: @property (nonatomic, retain) NSString *name; Now sectionArray is an NSMutableArray with instances of a class that I created, Term, in it. Term has a method getName that returns an NSString*. The problem seems to be the part where crayon is being set and getName is being called. I have tried adding autorelease, release, and other stuff like that, but that just causes the entire app to crash before even launching. Also if I do: cell.textLabel.text = @"test"; //crayon; /*if (![crayon hasPrefix:@"White"]) cell.textLabel.textColor = [self.crayonColors objectForKey:crayon]; else cell.textLabel.textColor = [UIColor blackColor];*/ Then I get no error whatsoever and it all scrolls just fine. Thanks in advance for the help! Edit: Here is the full log of when I try to run the app and the error it gives when it crashes:
gdb-i386-apple-darwin(1430,0x778720) malloc: * mmap(size=1420296192) failed (error code=12) error: can't allocate region ** set a breakpoint in malloc_error_break to debug gdb stack crawl at point of internal error: [ 0 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (align_down+0x0) [0x1222d8] [ 1 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (xstrvprintf+0x0) [0x12336c] [ 2 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (xmalloc+0x28) [0x12358f] [ 3 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (dyld_info_read_raw_data+0x50) [0x1659af] [ 4 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (dyld_info_read+0x1bc) [0x168a58] [ 5 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (macosx_dyld_update+0xbf) [0x168c9c] [ 6 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (macosx_solib_add+0x36b) [0x169fcc] [ 7 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (macosx_child_attach+0x478) [0x17dd11] [ 8 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (attach_command+0x5d) [0x64ec5] [ 9 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (mi_cmd_target_attach+0x4c) [0x15dbd] [ 10 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (captured_mi_execute_command+0x16d) [0x17427] [ 11 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (catch_exception+0x41) [0x7a99a] [ 12 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (mi_execute_command+0xa9) [0x16f63] /SourceCache/gdb/gdb-967/src/gdb/utils.c:1144: internal-error: virtual memory exhausted: can't allocate 1420296192 bytes. A problem internal to GDB has been detected, further debugging may prove unreliable. The Debugger has exited with status 1.The Debugger has exited with status 1. Here is the backtrace that I get when I set the breakpoint for malloc_error_break: #0 0x0097a68c in objc_msgSend () #1 0x01785bef in -[UILabel setText:] () #2 0x000030e0 in -[TableViewController tableView:cellForRowAtIndexPath:] (self=0x421d760, _cmd=0x29cfad8, aTableView=0x4819600, indexPath=0x42190f0) at /Volumes/Main2/Enayet/TableViewController.m:99 #3 0x016cee0c in -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] () #4 0x016c6a43 in -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] () #5 0x016d954f in -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow] () #6 0x016d08ff in -[UITableView layoutSubviews] () #7 0x03e672b0 in -[CALayer layoutSublayers] () #8 0x03e6706f in CALayerLayoutIfNeeded () #9 0x03e668c6 in CA::Context::commit_transaction () #10 0x03e6653a in CA::Transaction::commit () #11 0x03e6e838 in CA::Transaction::observer_callback () #12 0x00b00252 in __CFRunLoopDoObservers () #13 0x00aff65f in CFRunLoopRunSpecific () #14 0x00afec48 in CFRunLoopRunInMode () #15 0x00156615 in GSEventRunModal () #16 0x001566da in GSEventRun () #17 0x01689faf in UIApplicationMain () #18 0x00002398 in main (argc=1, argv=0xbfffefb0) at /Volumes/Main2/Enayet/main.m:14

    Read the article

  • Google I/O 2011: JavaScript Programming in the Large with Closure Tools

    Google I/O 2011: JavaScript Programming in the Large with Closure Tools - presented by Michael Bolin. Most developers who have tinkered with JavaScript could not imagine writing 1,000 lines of code in such a language, let alone 100,000. Yet that is exactly what Google engineers have done using a suite of JavaScript tools named "Closure" to produce many of the most popular and sophisticated applications on the Web, such as Gmail and Google Maps. (Session video: 57:07)

    Read the article

  • Getting null value after adding objects to customClass

    - by Brian Stacks
    Ok here's my code first viewController.h @interface ViewController : UIViewController<UICollectionViewDataSource,UICollectionViewDelegate> { NSMutableArray *twitterObjects; } @property (strong, nonatomic) IBOutlet UICollectionView *myCollectionView; Here is my viewController.m // // ViewController.m // MDF2p2 // // Created by Brian Stacks on 6/5/14. // Copyright (c) 2014 Brian Stacks. All rights reserved. // #import "ViewController.h" // add accounts framework to code #import <Accounts/Accounts.h> // add social frameworks #import <Social/Social.h> #import "TwitterCustomObject.h" #import "CustomCell.h" #import "DetailViewController.h" @interface ViewController () @end @implementation ViewController -(void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender { //CustomCell * cell = (CustomCell*)sender; //NSIndexPath *indexPath = [_myCollectionView indexPathForCell:cell]; // setting an id for view controller DetailViewController *detailViewcontroller = segue.destinationViewController; //TwitterCustomObject *newCustomClass = [twitterObjects objectAtIndex:indexPath.row]; if (detailViewcontroller != nil) { // setting the custom customClass object //detailViewcontroller.myNewCurrentClass = newCustomClass; } } - (void)viewDidLoad { twitterObjects = [[NSMutableArray alloc]init]; [super viewDidLoad]; [self twitterAPIcall]; // Do any additional setup after loading the view, typically from a nib. } - (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section { return 100; } - (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath { //UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"myCell" forIndexPath:indexPath]; // initiate celli CustomCell * cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"myCell" forIndexPath:indexPath]; // add objects to cell if (cell != nil) { //TwitterCustomObject *newCustomClass = [twitterObjects objectAtIndex:indexPath.row]; //[cell refreshCell:newCustomClass.userName userImage:newCustomClass.userImage]; [cell refreshCell:@"Brian" userImage:[UIImage imageNamed:@"love.jpg"]]; } return cell; } -(void)twitterAPIcall { //create an instance of the account store from account frameworks ACAccountStore *accountStore = [[ACAccountStore alloc]init]; // make sure we have a valid object if (accountStore != nil) { // get the account type ex: Twitter, FAcebook info ACAccountType *accountType = [accountStore accountTypeWithAccountTypeIdentifier:ACAccountTypeIdentifierTwitter]; // make sure we have a valid object if (accountType != nil) { // give access to the account iformation [accountStore requestAccessToAccountsWithType:accountType options:nil completion:^(BOOL granted, NSError *error) { if (granted) { //^^^success user gave access to account information // get the info of accounts NSArray *twitterAccounts = [accountStore accountsWithAccountType:accountType]; // make sure we have a valid object if (twitterAccounts != nil) { //NSLog(@"Accounts: %@",twitterAccounts); // get the current account information ACAccount *currentAccount = [twitterAccounts objectAtIndex:0]; // make sure we have a valid object if (currentAccount != nil) { //string from twitter api NSString *requestString = @"https://api.twitter.com/1.1/friends/list.json"; // request the data from the request screen call SLRequest *myRequest = [SLRequest requestForServiceType:SLServiceTypeTwitter requestMethod:SLRequestMethodGET URL:[NSURL 
URLWithString:requestString] parameters:nil]; // must authenticate request [myRequest setAccount:currentAccount]; // perform the request named myRequest [myRequest performRequestWithHandler:^(NSData *responseData, NSHTTPURLResponse *urlResponse, NSError *error) { // check to make sure there are no errors and we have a good http:request of 200 if ((error == nil) && ([urlResponse statusCode] == 200)) { // make array of dictionaries from the twitter api data using NSJSONSerialization NSArray *twitterFeed = [NSJSONSerialization JSONObjectWithData:responseData options:0 error:nil]; NSMutableArray *nameArray = [twitterFeed valueForKeyPath:@"users"]; // for loop that loops through all the post for (NSInteger i =0; i<[twitterFeed count]; i++) { NSString *nameString = [nameArray valueForKeyPath:@"name"]; NSString *imageString = [nameArray valueForKeyPath:@"profile_image_url"]; NSLog(@"Name feed: %@",nameString); NSLog(@"Image feed: %@",imageString); // get data into my mutable array TwitterCustomObject *twitterInfo = [self createPostFromArray:[nameArray objectAtIndex:i]]; //NSLog(@"Image feed: %@",twitterInfo); if (twitterInfo != nil) { [twitterObjects addObject:twitterInfo]; } } } }]; } } } else { // the user didn't give access UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Warning" message:@"This app will only work with twitter accounts being allowed!." delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil, nil]; [alert performSelectorOnMainThread:@selector(show) withObject:nil waitUntilDone:FALSE]; } }]; } } } -(TwitterCustomObject*)createPostFromArray:(NSArray*)postArray { // create strings to catch the data in NSArray *userArray = [postArray valueForKeyPath:@"users"]; NSString *myUserName = [userArray valueForKeyPath:@"name"]; NSString *twitImageURL = [userArray valueForKeyPath:@"profile_image_url"]; UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:twitImageURL]]]; // initiate object to put the data in TwitterCustomObject *twitterData = [[TwitterCustomObject alloc]initWithPostInfo:myUserName myImage:image]; NSLog(@"Name: %@",myUserName); return twitterData; } -(IBAction)done:(UIStoryboardSegue*)segue { } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } @end Here is my customObject class TwitterCustomClass.h // // TwitterCustomObject.h // MDF2p2 // // Created by Brian Stacks on 6/5/14. // Copyright (c) 2014 Brian Stacks. All rights reserved. // #import <Foundation/Foundation.h> @interface TwitterCustomObject : NSObject { } @property (nonatomic, readonly) NSString *userName; @property (nonatomic, readonly) UIImage *userImage; -(id)initWithPostInfo:(NSString*)screenName myImage:(UIImage*)myImage; @end TwitterCustomClass.m // // TwitterCustomObject.m // MDF2p2 // // Created by Brian Stacks on 6/5/14. // Copyright (c) 2014 Brian Stacks. All rights reserved. // #import "TwitterCustomObject.h" @implementation TwitterCustomObject -(id)initWithPostInfo:(NSString*)screenName myImage:(UIImage*)myImage { // initialize as object if (self = [super init]) { // use the data to be passed back and forth to the tableview _userName = [screenName copy]; _userImage = [myImage copy]; } return self; } @end The problem is I get the values in the method twitterAPIcall, I can get the names and image values or strings from the values. 
    But in the (TwitterCustomObject*)createPostFromArray:(NSArray*)postArray method, all values are coming up as null. I thought they were added with this line of code in the twitterAPIcall method: [twitterObjects addObject:twitterInfo];?

    Read the article

  • Class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice? UPDATE: Here's another try at explaining it. I want the user to be able to fill out an order (manifest) for a custom optimizer, something like ordering off of a Chinese menu - one from column A, one from column B, etc.. Waiter, from column A (updaters), I'll have the BFGS update with Cholesky-decompositon sauce. From column B (line-searchers), I'll have the cubic interpolation line-search with an eta of 0.4 and a rho of 1e-4, please. Etc... UPDATE: Okay, okay. Here's the playing-around that I've done. I offer it reluctantly, because I suspect it's a completely wrong-headed approach. It runs okay under vc++ 2008. 
#include <boost/utility.hpp> #include <boost/type_traits/integral_constant.hpp> namespace dj { struct CBFGS { void bar() {printf("CBFGS::bar %d\n", data);} CBFGS(): data(1234){} int data; }; template<class T> struct is_CBFGS: boost::false_type{}; template<> struct is_CBFGS<CBFGS>: boost::true_type{}; struct LMQN {LMQN(): data(54.321){} void bar() {printf("LMQN::bar %lf\n", data);} double data; }; template<class T> struct is_LMQN: boost::false_type{}; template<> struct is_LMQN<LMQN> : boost::true_type{}; struct default_optimizer_traits { typedef CBFGS update_type; }; template<class traits> class Optimizer; template<class traits> void foo(typename boost::enable_if<is_LMQN<typename traits::update_type>, Optimizer<traits> >::type& self) { printf(" LMQN %lf\n", self.data); } template<class traits> void foo(typename boost::enable_if<is_CBFGS<typename traits::update_type>, Optimizer<traits> >::type& self) { printf("CBFGS %d\n", self.data); } template<class traits = default_optimizer_traits> class Optimizer{ friend typename traits::update_type; //friend void dj::foo<traits>(typename Optimizer<traits> & self); // How? public: //void foo(void); // How??? void foo() { dj::foo<traits>(*this); } void bar() { data.bar(); } //protected: // How? typedef typename traits::update_type update_type; update_type data; }; } // namespace dj int main_() { dj::Optimizer<> opt; opt.foo(); opt.bar(); std::getchar(); return 0; }

    Read the article

  • OpenGL - Cascaded shadow mapping - Texture lookup

    - by Silverlan
    I'm trying to implement cascaded shadow mapping in my engine, but I'm somewhat stuck at the last step. For testing purposes I've made sure all cascades encompass my entire scene. The result is currently this: The different intensity of the cascades is not on purpose, it's actually the problem. This is how I do the texture lookup for the shadow maps inside the fragment shader: layout(std140) uniform CSM { vec4 csmFard; // far distances for each cascade mat4 csmVP[4]; // View-Projection Matrix int numCascades; // Number of cascades to use. In this example it's 4. }; uniform sampler2DArrayShadow csmTextureArray; // The 4 shadow maps in vec4 csmPos[4]; // Vertex position in shadow MVP space float GetShadowCoefficient() { int index = numCascades -1; vec4 shadowCoord; for(int i=0;i<numCascades;i++) { if(gl_FragCoord.z < csmFard[i]) { shadowCoord = csmPos[i]; index = i; break; } } shadowCoord.w = shadowCoord.z; shadowCoord.z = float(index); shadowCoord.x = shadowCoord.x *0.5f +0.5f; shadowCoord.y = shadowCoord.y *0.5f +0.5f; return shadow2DArray(csmTextureArray,shadowCoord).x; } I then use the return value and simply multiply it with the diffuse color. That explains the different intensity of the cascades, since I'm grabbing the depth value directly from the texture. I've tried to do a depth comparison instead, but with limited success: [...] // Same code as above shadowCoord.w = shadowCoord.z; shadowCoord.z = float(index); shadowCoord.x = shadowCoord.x *0.5f +0.5f; shadowCoord.y = shadowCoord.y *0.5f +0.5f; float z = shadow2DArray(csmTextureArray,shadowCoord).x; if(z < shadowCoord.w) return 0.25f; return 1.f; } While this does give me the same shadow value everywhere, it only works for the first cascade, all others are blank: (I colored the cascades because otherwise the transitions wouldn't be visible in this case) What am I missing here?

    Read the article

  • Recap: Oracle Fusion Middleware Strategies Driving Business Innovation

    - by Harish Gaur
    Hasan Rizvi, Executive Vice President of Oracle Fusion Middleware & Java, took the stage on Tuesday to discuss how Oracle Fusion Middleware helps enable business innovation. Through a series of product demos and customer showcases, Hasan demonstrated how Oracle Fusion Middleware is a complete platform to harness the latest technological innovations (cloud, mobile, social and Fast Data) throughout the application lifecycle. Fig 1: Oracle Fusion Middleware is the foundation of business innovation This session included 4 demonstrations to illustrate these strategies:
    1. Build and deploy native mobile applications using Oracle ADF Mobile
    2. Empower business users to model processes, design user interfaces and have a rich mobile experience for process interaction using Oracle BPM Suite PS6.
    3. Create a collaborative user experience and integrate social sign-on using Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Social Network & Oracle Identity Management 11g R2
    4. Deploy and manage business applications on Oracle Exalogic
    Nike, LA Department of Water & Power and Nintendo joined Hasan on stage to share how their organizations are leveraging Oracle Fusion Middleware to enable business innovation. Managing Performance in the World of Social and Mobile How do you provide predictable scalability and performance for an application that monitors the active lifestyles of 8 million users on a daily basis? Nike's answer is Oracle Coherence, a component of Oracle Fusion Middleware, and Oracle Exadata. Fig 2: Oracle Coherence enabled data grid improves performance of Nike+ Digital Sports Platform Nicole Otto, Sr. Director of Consumer Digital Technology, discussed the vision of the Nike+ platform, a platform which represents a shift for NIKE from a "product" to a "product +" experience. There are currently nearly 8 million users in the Nike+ system who are using digitally-enabled Nike+ devices. Once data from the Nike+ device is transmitted to the Nike+ application, users access their metrics on the Nike+ website or via the Nike+ mobile application, seeing metrics around their daily active lifestyle and even engaging in socially compelling experiences to compare, compete, or collaborate on their data with friends. Nike expects the number of users to grow significantly this year, which will drive an explosion of data and potential new experiences. To deal with this challenge, Nike envisioned building a shared platform that would drive a consumer-centric model for the company. Nike built this new platform using Oracle Coherence and Oracle Exadata. Using Coherence, Nike built a data grid tier as a distributed cache, thereby providing low-latency access to the most recent and relevant data to consumers. Nicole discussed how the Nike+ Digital Sports Platform is unique in the way that it utilizes the Coherence Grid. Nike takes advantage of Coherence as a traditional cache using both cache-aside and cache-through patterns (a generic sketch of the cache-aside pattern appears at the end of this recap). This new tier has enabled Nike to create a horizontally scalable, distributed, event-driven processing architecture. Current data grid volume is approximately 150,000 requests per minute with about 40 million objects at any given time on the grid. Improving Customer Experience Across Multiple Channels Customer experience is on top of every CIO's mind, and it needs to be consistent and secure across the multiple devices consumers may use. This is the challenge Matt Lampe, CIO of Los Angeles Department of Water & Power (LADWP), was faced with.
    Despite being the largest utilities company in the country, LADWP had been relying on a 38-year-old customer information system for serving its customers. Their prior system had been unable to keep up with growing customer demands. Last year, LADWP embarked on a journey to improve customer experience for 1.6 million LADWP customers using the Oracle WebCenter platform. Figure 3: Multi channel & Multi lingual LADWP.com built using Oracle WebCenter & Oracle Identity Management platform Matt shed light on his efforts to drive customer self-service across 3 dimensions – new website, new IVR platform and new bill payment service. LADWP has built a new portal to increase customer self-service while reducing the transactions via IVR. LADWP's website is powered by Oracle WebCenter Portal and is accessible by desktop and mobile devices. By leveraging Oracle WebCenter, LADWP eliminated the need to build, format, and maintain individual mobile applications or websites for different devices. Their entire content is managed using Oracle WebCenter Content and secured using Oracle Identity Management. This new portal automated their paper-based processes into web-based workflows for customers. This includes automation of Self Service implemented through My Account - like Bill Pay, Payment History, Bill History and Usage Analysis. LADWP's solution went live in April 2012. Matt indicated that LADWP's Self-Service Portal has greatly improved customer satisfaction. In a JD Power Associates website satisfaction survey, results indicate rankings have climbed by 25+ points, marking a remarkable increase in user experience. Bolstering Performance and Simplifying Manageability of Business Applications Ingvar Petursson, Senior Vice President of IT at Nintendo America, joined Hasan on-stage to discuss their choice of Exalogic. Nintendo had significant new requirements coming their way for business systems, both internal and external, in the years to come, especially with new products like the WiiU on the horizon this holiday season. Nintendo needed a platform that could give them performance, availability and ease of management as they deploy business systems. Ingvar selected Engineered Systems for two reasons: 1. High performance 2. Ease of management Figure 4: Nintendo relies on Oracle Exalogic to run ATG eCommerce, Oracle e-Business Suite and several business applications Nintendo made a decision to run their business applications (ATG eCommerce, E-Business Suite) and several Fusion Middleware components on the Exalogic platform. What impressed Ingvar was the "stress" testing results during evaluation: Oracle Exalogic could handle their 3-year load estimates for many functions, which was better than Nintendo expected without any hardware expansion. Faster Processing of Big Data Middleware plays an increasingly important role in Big Data. Last year, we announced at OpenWorld the introduction of Oracle Data Integrator for Hadoop and Oracle Loader for Hadoop, which help move, transform, and load data between Big Data Appliance and Exadata. This year, we've added new capabilities to find, filter, and focus data using Oracle Event Processing. This product can natively integrate with Big Data Appliance or run standalone. Hasan briefly discussed how NTT Docomo, the largest mobile operator in Japan, leverages Oracle Event Processing & Oracle Coherence to process mobile data (from 13 million smartphone users) at a speed of 700K events per second before feeding it to Hadoop for distributed processing of big data.
Figure 5: Mobile traffic data processing at NTT Docomo with Oracle Event Processing & Oracle Coherence    
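Oracle Event Processing expresses this kind of logic declaratively in its continuous query language (CQL). As a rough illustration of the underlying windowing idea only — not the OEP API, and with all names invented — here is a plain-Java sketch of a tumbling one-second count per user; at Docomo's scale such counting would of course be partitioned across many threads and nodes:

    import java.util.HashMap;
    import java.util.Map;

    // Toy tumbling-window counter: counts events per user within each
    // fixed window, emitting and resetting when the window closes.
    // Assumes event timestamps arrive roughly in order.
    public class TumblingWindowCounter {

        private final long windowMillis;
        private long windowStart;
        private final Map<String, Integer> countsPerUser = new HashMap<>();

        public TumblingWindowCounter(long windowMillis, long firstEventTime) {
            this.windowMillis = windowMillis;
            this.windowStart = firstEventTime;
        }

        // Called once per incoming event (e.g. a handset reporting traffic).
        public void onEvent(String userId, long timestamp) {
            if (timestamp - windowStart >= windowMillis) {
                emit();                  // close the current window
                countsPerUser.clear();
                windowStart = timestamp; // open the next one
            }
            countsPerUser.merge(userId, 1, Integer::sum);
        }

        private void emit() {
            // In a real pipeline this batch would be written to the grid
            // or queued for Hadoop; here we just print it.
            System.out.println(countsPerUser);
        }
    }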

    Read the article

  • Microsoft Business Intelligence Seminar 2011

    - by DavidWimbush
    I was lucky enough to attend the maiden presentation of this at Microsoft Reading yesterday. It was pretty gripping stuff, not only because of what was said but also because of what could only be hinted at. Here's what I took away from the day.

(Disclaimer: I'm not a BI guru, just a reasonably experienced BI developer, so I may have misunderstood or misinterpreted a few things, particularly when so much of the talk was about the vision and subtle hints of what is coming. Please comment if you think I've got anything wrong. I'm also not going to even try to cover Master Data Services, as I struggled to imagine how you would actually use it.)

I was a bit worried when I learned that the whole day was going to be presented by one guy, but Rafal Lukawiecki is a very engaging speaker. He's going to be presenting this about 20 times around the world over the coming months. If you get a chance to hear him speak, I say go for it. No doubt some of the hints will become clearer as Denali gets closer to RTM.

Firstly, things are definitely happening in the SQL Server Reporting and BI world. Traditionally IT would build a data warehouse, then cubes on top of that, and then publish them in a structured and controlled way. But, just as with many IT projects in general, by the time it's finished the business has moved on and the system no longer meets their requirements. This is not sustainable: something more agile is needed, but there has to be some control. Apparently we're going to be hearing the catchphrase 'balancing agility with control' a lot.

More users want more access to more data. Can they define what they want? Of course not, but they'll recognise it when they see it. It's estimated that only 28% of potential BI users have meaningful access to the data they need, so there is real pent-up demand. The answer looks like: give them some self-service tools so they can experiment and see what works, and then IT can help to support the results. It's estimated that 32% of Excel users are comfortable with its analysis tools, such as pivot tables. It's the power user's preferred tool, so why fight it? That's why PowerPivot is an Excel add-in, and that's why they released a Data Mining add-in for it as well.

It does appear that the strategy is going to be to use Reporting Services (in SharePoint mode), PowerPivot, and possibly something new (smiles and hints but no details) to create reports and explore data. Everything will be published and managed in SharePoint, which gives users the ability to mash up, share, and socialise what they've found out. SharePoint also gives IT the tools to understand what people are looking at and where to concentrate effort. If PowerPivot report X becomes widely used, it's time to check that it shows what they think it does and perhaps get it a bit more under central control.

There was more SharePoint detail that went slightly over my head regarding where Excel Services and the Excel Web Application fit in, the differences between them, and the suggestion that they are likely to one day become one (but not in the immediate future).

That basic pattern is set to be expanded upon by further exploiting Vertipaq (the columnar indexing engine that enables PowerPivot to store and process a lot of data fast and in a small memory footprint) to provide scalability 'from the desktop to the data centre', and by some yet-to-be-detailed advances in 'frictionless deployment' (part of which is about making the difference between local and the cloud pretty much irrelevant).
Excel looks like becoming Microsoft's primary BI client. It already has:

- the ability to consume cubes
- strong visualisation tools
- slicers (which are part of Excel, not PowerPivot)
- a data mining add-in
- PowerPivot

A major hurdle for self-service BI is presenting the data in a consumable format. You can't just give users PowerPivot and a server with a copy of the OLTP database(s). Building cubes is labour-intensive and doesn't always give the user what they need. This is where the BI Semantic Model (BISM) comes in. I gather it's a layer of metadata you define that can combine multiple data sources (and types of data source) into a clear 'interface' that users can work with. It comes with a new query language called DAX. SSAS cubes are unlikely to go away overnight because, with their pre-calculated results, they are still the most efficient way to work with really big data sets.

A few other random titbits that came up:

- Reporting Services is going to get some good new stuff in Denali.
- Keep an eye on www.projectbotticelli.com for the slides. You can also view last year's seminar sessions, which covered a lot of the same ground as far as the overall strategy is concerned. They plan to add more material as Denali's features are publicly exposed.
- Check out the PASS keynote address for a showing of Yahoo's SQL BI servers. Apparently they wheeled the rack out on stage still plugged in and running!
- Check out the Excel 2010 Data Mining Add-Ins. 32-bit only at present, but 64-bit is on the way.
- There are lots of data sets, many of them free, at the Windows Azure Marketplace Data Market (where you can also get ESRI shape files).
- If you haven't already seen it, have a look at the Silverlight PivotViewer (http://weblogs.asp.net/scottgu/archive/2010/06/29/silverlight-pivotviewer-now-available.aspx).
- The Bing Maps Data Connector is worth a look if you're into spatial stuff (http://www.bing.com/community/site_blogs/b/maps/archive/2010/07/13/data-connector-sql-server-2008-spatial-amp-bing-maps.aspx).

    Read the article
