Search Results

Search found 105845 results on 4234 pages for 'asp net dynamic data'.

  • Building an HTML5 App with ASP.NET

    - by Stephen Walther
    I’m teaching several JavaScript and ASP.NET workshops over the next couple of months (thanks everyone!) and I thought it would be useful for my students to have a really easy-to-use JavaScript reference. I wanted a simple interactive JavaScript reference and I could not find one, so I decided to put together one of my own. I decided to use the latest features of JavaScript, HTML5, and jQuery, such as local storage, offline manifests, and jQuery templates. What could be more appropriate than building a JavaScript Reference with JavaScript? You can try out the application by visiting http://Superexpert.com/JavaScriptReference. Because the app takes advantage of several advanced features of HTML5, it won’t work with Internet Explorer 6 (but really, you should stop using that browser). I have tested it with IE 8, Chrome 8, Firefox 3.6, and Safari 5. You can download the source for the JavaScript Reference application at the end of this article.

    Superexpert JavaScript Reference
    Let me provide you with a brief walkthrough of the app. When you first open the application, you see a lookup screen. As you type the name of something from the JavaScript language, matching results are displayed. You can click the details link for any entry to view details for that entry in a modal dialog. Alternatively, you can click on any of the tabs -- Objects, Functions, Properties, Statements, Operators, Comments, or Directives -- to filter results by type of syntax. For example, you might want to see a list of all JavaScript built-in objects. You can log in to the application to make modifications to it. After you log in, you can add, update, or delete entries in the reference database.

    HTML5 Local Storage
    The application takes advantage of HTML5 local storage to store all of the reference entries on the local browser. IE 8, Chrome 8, Firefox 3.6, and Safari 5 all support local storage. When you open the application for the first time, all of the reference entries are transferred to the browser. The data is stored persistently. Even if you shut down your computer and return to the application many days later, the data does not need to be transferred again. Whenever you open the application, the app checks with the server to see if any of the entries have been updated on the server. If there have been updates, then only the updates are transferred to the browser and merged with the existing entries in local storage. After the reference database has been transferred to your browser once, only changes are transferred in the future. You get two benefits from using local storage. First, the application loads very fast and works very fast after the data has been loaded once. The application does not query the server whenever you filter or view entries; all of the data is persisted in the browser. Second, you can browse the JavaScript reference even when you are not connected to the Internet (when you are on the proverbial airplane). The JavaScript Reference works as an offline application for browsers that support offline applications (unfortunately, not IE). When using Google Chrome, you can easily view the contents of local storage by selecting Tools, Developer Tools (CTRL-SHIFT-I) and selecting Storage, Local Storage. The JavaScript Reference app stores two items in local storage: entriesLastUpdated and entries.
    HTML5 Offline App
    For browsers that support HTML5 offline applications – Chrome 8 and Firefox 3.6 but not Internet Explorer – you do not need to be connected to the Internet to use the JavaScript Reference. The JavaScript Reference can execute entirely on your machine just like any other desktop application. When you first open the application with Firefox, you are presented with a notification bar that asks whether you want to accept offline content. If you click the Allow button, then all of the files (generated ASPX, images, CSS, JavaScript) needed for the JavaScript Reference will be stored on your local computer.

    Automatic Script Minification and Combination
    All of the custom JavaScript files are combined and minified automatically whenever the application is built with Visual Studio. All of the custom scripts are contained in a folder named App_Scripts. When you perform a build, the combine.js and combine.debug.js files are generated. The Combine.config file contains the list of files that should be combined (importantly, it specifies the order in which the files should be combined). Here’s the contents of the Combine.config file:

        <?xml version="1.0"?>
        <combine>
          <scripts>
            <file path="compat.js" />
            <file path="storage.js" />
            <file path="serverData.js" />
            <file path="entriesHelper.js" />
            <file path="authentication.js" />
            <file path="default.js" />
          </scripts>
        </combine>

    jQuery and jQuery UI
    The JavaScript Reference application takes heavy advantage of jQuery and jQuery UI. In particular, the application uses jQuery templates to format and display the reference entries. Each template is stored in a separate ASP.NET user control in a folder named Templates. The contents of the user controls (and therefore the templates) are combined in the default.aspx page:

        <!-- Templates -->
        <user:EntryTemplate runat="server" />
        <user:EntryDetailsTemplate runat="server" />
        <user:BrowsersTemplate runat="server" />
        <user:EditEntryTemplate runat="server" />
        <user:EntryDetailsCloudTemplate runat="server" />

    When the default.aspx page is requested, all of the templates are retrieved in a single page.

    WCF Data Services
    The JavaScript Reference application uses WCF Data Services to retrieve and modify database data. The application exposes a server-side WCF Data Service named EntryService.svc that supports querying, adding, updating, and deleting entries. jQuery Ajax calls are made against the WCF Data Service to perform the database operations from the browser. The OData protocol makes this easy. Authentication is handled on the server with a ChangeInterceptor. Only authenticated users are allowed to update the JavaScript Reference entry database.

    JavaScript Unit Tests
    In order to build the JavaScript Reference application, I depended on JavaScript unit tests. I needed the unit tests, in particular, to write the JavaScript merge functions which merge entry change sets from the server with existing entries in browser local storage. In order for unit tests to be useful, they need to run fast. I ran my unit tests after each build. For this reason, I did not want to run the unit tests within the context of a browser. Instead, I ran the unit tests using server-side JavaScript (the Microsoft Script Control). The source code that you can download at the end of this blog entry includes a project named JavaScriptReference.UnitTests that contains all of the JavaScript unit tests.
    JavaScript Integration Tests
    Because not every feature of an application can be tested by unit tests, the JavaScript Reference application also includes integration tests. I wrote the integration tests using Selenium RC in combination with ASP.NET unit tests. The Selenium tests run against all of the target browsers for the JavaScript Reference application: IE 8, Chrome 8, Firefox 3.6, and Safari 5. For example, here is the Selenium test that checks whether authenticating with a valid user name and password correctly switches the application to Admin Mode:

        [TestMethod]
        [HostType("ASP.NET")]
        [UrlToTest("http://localhost:26303/JavaScriptReference")]
        [AspNetDevelopmentServerHost(@"C:\Users\Stephen\Documents\Repos\JavaScriptReference\JavaScriptReference\JavaScriptReference", "/JavaScriptReference")]
        public void TestValidLogin() {
            // Run test for each controller
            foreach (var controller in this.Controllers) {
                var selenium = controller.Value;
                var browserName = controller.Key;
                // Open reference page.
                selenium.Open("http://localhost:26303/JavaScriptReference/default.aspx");
                // Click login button displays login form
                selenium.Click("btnLogin");
                Assert.IsTrue(selenium.IsVisible("loginForm"), "Login form appears after clicking btnLogin");
                // Enter user name and password
                selenium.Type("userName", "Admin");
                selenium.Type("password", "secret");
                selenium.Click("btnDoLogin");
                // Should set adminMode == true
                selenium.WaitForCondition("selenium.browserbot.getCurrentWindow().adminMode==true", "30000");
            }
        }

    The results of running the Selenium tests appear in the Test Results window just like the unit tests. The Selenium tests take much longer to execute than the unit tests. However, they provide test coverage for actual browsers. Furthermore, if you are using Visual Studio ALM, you can run the tests automatically every night as part of your standard nightly build. You can view the Selenium tests by opening the JavaScriptReference.QATests project.

    Summary
    I plan to write more detailed blog entries about this application over the next week. I want to discuss each of the features – HTML5 local storage, HTML5 offline apps, jQuery templates, automatic script combining and minification, JavaScript unit tests, Selenium tests – in more detail. You can download the source code for the JavaScript Reference application by clicking the following link: Download. You need Visual Studio 2010 and ASP.NET 4 to build the application. Before running the JavaScript unit tests, install the Microsoft Script Control. Before running the Selenium tests, start the Selenium server by running the StartSeleniumServer.bat file located in the JavaScriptReference.QATests project.
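    As an aside on the WCF Data Services part above, here is a minimal sketch of what a ChangeInterceptor-based authorization check can look like. The Entries entity set name matches the article, but the EntriesModel context class, the Entry entity type, and the exact membership check are assumptions for illustration, not code from the article.

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Web;

        public class EntryService : DataService<EntriesModel>   // EntriesModel is a hypothetical data context
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Anyone may query; writes are vetted by the interceptor below.
                config.SetEntitySetAccessRule("Entries", EntitySetRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }

            // Runs for every insert, update, or delete against the Entries set.
            [ChangeInterceptor("Entries")]
            public void OnChangeEntries(Entry entry, UpdateOperations operations)
            {
                if (!HttpContext.Current.User.Identity.IsAuthenticated)
                {
                    throw new DataServiceException(401, "Only authenticated users may modify entries.");
                }
            }
        }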

    Read the article

  • Best way to implement user-powered data validation

    - by vegetables
    I run a product recommendation engine and I'm hitting a few snags. I'm looking to see if anyone has any recommendations on what I should do to minimize these issues. Here's how the site works: users come to the site and are presented with product recommendations based on some criteria. If a user knows of a product that is not in our system, they can add it by providing the product name and manufacturer. We take that information, and:
    1. Hit one API to gather all the product meta-data (and to validate the product spelling, etc). If the product is not in this first API, we do not allow it in our system.
    2. Use the information from step 1 to hit another API for pricing information (gathered from many places online).
    For the sake of discussion, assume that I am searching both APIs in the most efficient/successful manner possible. For the most part, this works very well. I'd say ~80% of our data is perfectly accurate, but there are a few issues:
    - Sometimes the pricing API (step 2) doesn't have any information for the product. The way the pricing API is built, it will always return something (theoretically, the closest possible match), and there's no guarantee that the product name is spelled exactly the same way in both APIs, so there's no automated way of knowing if it's the right product.
    - When the pricing API finds the right product, occasionally it has outdated, or even invalid, pricing data (e.g. if it screen-scraped the wrong price from a website).
    Since the site was fairly small at first, I was able to manually verify every product that was added to the website. However, the site has grown to the point where this is taking several hours per day, and is just not an efficient use of my time. So, my question is: aside from hiring someone (or getting an intern) to validate all the data manually, what would be the best system for letting my user base self-manage the data? Specifically, how can I allow users to edit the data while minimizing the risk of someone ambushing my website, or accidentally setting the data incorrectly?

    Read the article

  • data handling with javascript

    - by Vincent Warmerdam
    Python has a very neat package called pandas which allows for quick data transformation; tables, aggregation, that sort of thing. A lot of these types of functionality can also be found in the Python itertools module. The plyr package in R is also very similar. Usually one would use this functionality to produce a table which is later visualized with a plot. I am personally very fond of d3, and I would like to allow the user to first indicate what type of data aggregation he wants on the dataset before it is visualized. The visualisation in question involves making a heatmap where the user gets to select the size of the bins of the heatmap beforehand (I want d3 to project this through leaflet). I want to visually select the ideal size of the bins for the heatmap. The way I work now is that I take the dataset, aggregate it with Python, and then manually load it into d3. This is a process that takes a lot of human effort and I was wondering if the data aggregation can be done through the javascript of the browser. I couldn't find a package for javascript specifically built for data, suggesting (to me) that this is a bad idea and that one should not use javascript for the data handling. Is there a good module/package for javascript to handle data aggregation? Is it a good/bad idea to do the data aggregation in javascript (performance wise)?

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently. Other test functions represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with the NUnit parametrized test case (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:
    File #1
        CheckFileTypeTest
        GetFileTopLevelStructureTest
        CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
    File #2
        CheckFileTypeTest
        GetFileTopLevelStructureTest
        CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
    This will be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?
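    For reference, a minimal sketch of the TestCaseSource approach the poster describes (the TestData folder, the file pattern, and the test bodies are hypothetical). Note that in the standard NUnit GUI this produces the opposite nesting to the one asked for: the file names appear as children under each test method, so getting a file-first tree would indeed require customizing NUnit or generating a fixture per file.

        using System.Collections.Generic;
        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class InputFileTests
        {
            // Every existing input data file becomes one test case; no new files are created.
            private static IEnumerable<string> InputFiles()
            {
                return Directory.GetFiles("TestData", "*.dat");
            }

            [TestCaseSource("InputFiles")]
            public void CheckFileTypeTest(string inputFile)
            {
                Assert.That(File.Exists(inputFile));
                // ... feed the file to the legacy software and assert on the detected type ...
            }

            [TestCaseSource("InputFiles")]
            public void StageOneTest(string inputFile)
            {
                // ... run stage one only; later stages can call Assert.Inconclusive() if this fails ...
            }
        }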

    Read the article

  • Test Data in a Distributed System

    - by Davin Tryon
    A question that has been vexing me lately has been how to effectively test (end-to-end) features in a distributed system. Particularly, how to effectively manage (through time) test data for feature testing. The system in question is a typical SOA setup. The composition is done in JavaScript with calls to several REST APIs. Each service is built as an independent block. Each service has some kind of persistent storage (SQL Server in most cases). The main issue at the moment is how to approach test data when testing end-to-end features. Functional end-to-end testing occurs through the UI, and it is therefore necessary for test data to be set up before the test run (this could be manual or automated testing). As is typical in a distributed system, identifiers from one service are used as a link in another service. So, some level of synchronization needs to be present in the data to effectively test. What is the best way to manage and set up this data after a successful deployment to a test environment? For example, is it better to manage this test data inside each service? Or package it together with the testing suite? Does that testing suite exist as a separate project? I'm interested in design guidance about how to store and manage this test data as the application features evolve.

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance
    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.
    Data Continuity Checklist:
    - Disaster Recovery Plan/Policy
    - Backups
    - Redundancy
    - Trained Staff

    Business Continuity Policies
    In order to protect data in case of any emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of the existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or parts of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policies that IT must maintain redundant hardware in case of any hardware failures and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained in regard to the disaster recovery policy to ensure that all parties involved are knowledgeable enough to execute the recovery plan.

    Business Continuity Procedures
    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow.
    Standard Business Continuity Procedures:
    - Back up data and test the backups to ensure that they work
    - Hire knowledgeable and trainable staff
    - Offer training on new and existing systems
    - Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    - Maintain redundancy regarding all data and critical business functionality

    Read the article

  • How to choose how to store data?

    - by Eldros
    Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime. - Chinese Proverb
    I could ask what kind of data storage I should use for my actual project, but I want to learn to fish, so I don't need to ask for a fish each time I begin a new project. So far I have used two methods to store data on my non-game projects: XML files and relational databases. I know that there are also other kinds of databases, of the NoSQL kind. However, I wouldn't know whether there is more choice available to me, or how to choose in the first place, aside from arbitrarily picking one. So the question is the following: How should I choose the kind of data storage for a game project? And I would be interested in the following criteria when choosing:
    - The size of the project.
    - The platform targeted by the game.
    - The complexity of the data structure.
    - (Added) Portability of data amongst many projects.
    - (Added) How often the data should be accessed.
    - (Added) Multiple types of data for the same application.
    - Any other point you think is of interest when deciding what to use.
    EDIT: I know about "Would it be better to use XML/JSON/Text or a database to store game content?", but thought it didn't address exactly my point. Now if I am wrong, I would gladly be shown the error in my ways.

    Read the article

  • Asp.Net 3.5 Routing to Asmx Webservice?

    - by Maushu
    I was looking for a way to route http://www.example.com/WebService.asmx to http://www.example.com/service/ using only the ASP.NET 3.5 Routing framework, without needing to configure the IIS server. Until now I have done what most tutorials told me: added a reference to the routing assembly, configured stuff in the web.config, added this to the Global.asax:

        protected void Application_Start(object sender, EventArgs e)
        {
            RouteCollection routes = RouteTable.Routes;
            routes.Add("WebService", new Route("service/{*Action}", new WebServiceRouteHandler()));
        }

    ...created this class:

        public class WebServiceRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                // What now?
            }
        }

    ...and the problem is right there, I don't know what to do. The tutorials and guides I've read use routing for pages, not webservices. Is this even possible?
    Ps: The route handler is working, I can visit /service/ and it throws the NotImplementedException I left in the GetHttpHandler method.
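    One direction that is sometimes suggested for exactly this gap, shown here as a sketch rather than a verified answer: hand the request off to the built-in WebServiceHandlerFactory, which knows how to build the handler for a physical .asmx file. The ~/WebService.asmx path below is taken from the question; error handling is omitted.

        using System.Web;
        using System.Web.Routing;
        using System.Web.Services.Protocols;

        public class WebServiceRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                HttpContext context = HttpContext.Current;

                // Map the friendly /service/ URL back onto the physical .asmx file and
                // let the standard web service handler factory do the rest.
                return new WebServiceHandlerFactory().GetHandler(
                    context,
                    context.Request.HttpMethod,
                    "~/WebService.asmx",
                    context.Server.MapPath("~/WebService.asmx"));
            }
        }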

    Read the article

  • Asp.net membership logout automatically

    - by alejandrobog
    Hi, I recently deployed an application that uses ASP.NET membership (SqlMembershipProvider) and I don't know why, but it automatically logs out after one minute of inactivity. This doesn't happen in my development environment. I even set the userIsOnlineTimeWindow to 60, which is supposed to be in minutes. Any ideas why this is happening? I'm deploying to a virtual directory on a shared hosting environment. Here is how I set up the membership provider:

        <membership defaultProvider="FaceMoviesMembership" userIsOnlineTimeWindow="60">
          <providers>
            <clear/>
            <add name="FaceMoviesMembership" type="System.Web.Security.SqlMembershipProvider"
                 connectionStringName="FaceMoviesAuthConnectionString"
                 enablePasswordRetrieval="true" enablePasswordReset="true"
                 requiresQuestionAndAnswer="false" maxInvalidPasswordAttempts="10"
                 passwordAttemptWindow="60" requiresUniqueEmail="false" passwordFormat="Clear"
                 applicationName="FaceMoviesWeb" minRequiredPasswordLength="5"
                 minRequiredNonalphanumericCharacters="0"/>
          </providers>
        </membership>

    Read the article

  • asp.net mvc controller actions testing

    - by Imran
    I was just wondering how others are going about testing controller actions in ASP.NET MVC. Most of my dependencies are injected into my controllers, so there is not a huge amount of logic in the action methods, but there may be some conditional logic, for example, which I think is unavoidable. In the past I have written tests for these action methods, mocked the dependencies, and tested the results. I have found this is very brittle and a real PITA to maintain. Having 'Expect' and 'Stub' methods everywhere breaks very easily, but I don't see any other way of testing controller actions. I actually think it might be easier to test some of these manually! Anyone have any suggestions? Perhaps I am missing something here? Thanks Imran
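    For comparison, a sketch of the state-based style these tests often end up as; ProductController, FakeProductRepository, and ProductListModel are hypothetical names, not from the question. The assertions look only at the returned ViewResult and its model, so there are no Expect/Stub setups to break when the implementation shifts.

        using System.Web.Mvc;
        using NUnit.Framework;

        [TestFixture]
        public class ProductControllerTests
        {
            [Test]
            public void Index_returns_default_view_with_a_product_list()
            {
                // Hand-rolled fake instead of a mocking framework's Expect/Stub calls.
                var controller = new ProductController(new FakeProductRepository());

                var result = controller.Index() as ViewResult;

                Assert.IsNotNull(result);
                Assert.IsInstanceOf<ProductListModel>(result.ViewData.Model);
            }
        }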

    Read the article

  • CompilationLock throws HttpException when registering areas for ASP.NET MVC unit tests

    - by patridge
    The moment I added a unit test to my ASP.NET MVC application to test some of the area routing, I got an HttpException coming out of the System.Web.Compilation.CompilationLock type initializer with the following stack trace.

        System.Web.HttpException : The type initializer for 'System.Web.Compilation.CompilationLock' threw an exception.
          ----> System.TypeInitializationException : The type initializer for 'System.Web.Compilation.CompilationLock' threw an exception.
          ----> System.NullReferenceException : Object reference not set to an instance of an object.
        at System.Web.Compilation.BuildManager.ReportTopLevelCompilationException()
        at System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled()
        at System.Web.Compilation.BuildManager.GetReferencedAssemblies()
        at System.Web.Mvc.BuildManagerWrapper.System.Web.Mvc.IBuildManager.GetReferencedAssemblies()
        at System.Web.Mvc.TypeCacheUtil.FilterTypesInAssemblies(IBuildManager buildManager, Predicate`1 predicate)
        at System.Web.Mvc.TypeCacheUtil.GetFilteredTypesFromAssemblies(String cacheName, Predicate`1 predicate, IBuildManager buildManager)
        at System.Web.Mvc.AreaRegistration.RegisterAllAreas(RouteCollection routes, IBuildManager buildManager, Object state)
        at System.Web.Mvc.AreaRegistration.RegisterAllAreas(Object state)
        at System.Web.Mvc.AreaRegistration.RegisterAllAreas()
        at StpWeb.MvcApplication.RegisterRoutes(RouteCollection routes) in Global.asax.cs: line 16
        at StpWeb.Tests.RoutesTest.TestFixtureSetUp() in RoutesTest.cs: line 11
        --TypeInitializationException
        at System.Web.Compilation.CompilationLock.GetLock(ref Boolean gotLock)
        at System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled()
        --NullReferenceException
        at System.Web.Compilation.CompilationLock..cctor()

    Read the article

  • asp.net mvc create a c# class in an ascx file for design purposes

    - by Julian
    Hi, I am developing a web application using ASP.NET MVC. I have come across the need to temporarily create a class in the ascx/aspx file. This class will replace the Model during the development of the page. It will also hold some test data so the user has a chance to see some results. Once we are happy with the layout on the screen, I will inherit the correct Model class through the Control tag. Can you please advise if this is possible and how to do it? This does not work:

        <%
        class Modelo
        {
            public Guid Guid { get; set; }
            public string Name { get; set; }
        }
        %>

    Thanks in advance, Be happy - Julian
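    One direction that may be worth trying with the WebForms view engine (a sketch, not verified against the poster's project): a <script runat="server"> block is compiled into the generated view class, so a throwaway class and some sample data can live there until the real Model is wired up through the Control tag. The Modelo class comes from the question; the TestData field and the markup that renders it are assumptions.

        <script runat="server">
            // Temporary stand-in for the real Model while the layout is being designed.
            class Modelo
            {
                public Guid Guid { get; set; }
                public string Name { get; set; }
            }

            // A couple of throwaway rows to render during design.
            Modelo[] TestData = new[]
            {
                new Modelo { Guid = Guid.NewGuid(), Name = "First sample" },
                new Modelo { Guid = Guid.NewGuid(), Name = "Second sample" }
            };
        </script>

        <% foreach (var item in TestData) { %>
            <li><%= item.Name %></li>
        <% } %>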

    Read the article

  • How to dispose NHibernate ISession in an ASP.NET MVC App

    - by Joe Young
    I have NHibernate hooked up in my ASP.NET MVC app. Everything works fine if I DON'T dispose the ISession. I have read, however, that you should dispose it, but when I do, I get random "Session is closed" exceptions. I am injecting the ISession into my other objects with Windsor. Here is my current NHModule:

        public class NHibernateHttpModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.BeginRequest += context_BeginRequest;
                context.EndRequest += context_EndRequest;
            }

            static void context_EndRequest(object sender, EventArgs e)
            {
                CurrentSessionContext.Unbind(MvcApplication.SessionFactory);
            }

            static void context_BeginRequest(object sender, EventArgs e)
            {
                CurrentSessionContext.Bind(MvcApplication.SessionFactory.OpenSession());
            }

            public void Dispose()
            {
                // do nothing
            }
        }

    Registering the ISession:

        container.Register(Component.For<ISession>()
            .UsingFactoryMethod(() => MvcApplication.SessionFactory.GetCurrentSession())
            .LifeStyle.Transient);

    The error happens when I tack the Dispose on the Unbind in the module. Since I keep getting the "Session is closed" error, I assume this is not the correct way to do this, so what is the correct way? Thanks, Joe
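    A variant that is often suggested for this module, shown as a sketch rather than a guaranteed fix for the poster's error: CurrentSessionContext.Unbind returns the session it removes, so EndRequest can dispose exactly the instance that BeginRequest bound. Whether the random "Session is closed" errors go away also depends on when Windsor resolves its transient ISession components during the request.

        static void context_EndRequest(object sender, EventArgs e)
        {
            // Unbind returns the ISession bound in BeginRequest (or null if none was bound).
            var session = CurrentSessionContext.Unbind(MvcApplication.SessionFactory);
            if (session != null)
            {
                session.Dispose();
            }
        }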

    Read the article

  • ASP.NET authentication login and logout with browser back button

    - by Eatdoku
    Hi, I am looking for a solution for when a user uses the browser's back button to navigate to a previous page after logging out. I have a web application built in ASP.NET, using a custom membership provider for authentication and authorization. Everything works fine, except when the user clicks on the logout link to log out of the application and is redirected to a default cover page: if the user clicks on the BACK button in their browser, it will actually go back to where they were before and the data will still show up. Of course they can't do anything on that page; if they click on any link they will be redirected to the login page again. But having that information displayed is confusing a lot of users. I am just wondering if there is any way I can either clear the browser's history so the user can't go back, or redirect them to the login page when they click the back button. Thanks.
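    One technique commonly combined with forms authentication for this (a sketch, not something from the question): mark authenticated pages as non-cacheable so that pressing Back forces a round trip to the server, which then redirects the logged-out user to the login page. This could sit in a base page class or an HttpModule; the snippet shows the raw calls.

        protected void Page_Load(object sender, EventArgs e)
        {
            // Ask the browser and any proxies not to keep a cached copy of this page,
            // so pressing Back after logout triggers a fresh, authenticated request.
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Cache.SetNoStore();
            Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
        }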

    Read the article

  • Asp.Net Program Architecture

    - by Pino
    I've just taken on a new ASP.NET MVC application and after opening it up I find the following projects:
    - [Project].Web
    - [Project].Models
    - [Project].BLL
    - [Project].DAL
    Now, something that has become clear is that the data has to do a hell of a lot before it makes it to the View (Database -> DAL -> Repo -> BLL -> Convert to Model -> Controller -> View). The DAL is SubSonic; the repositories in the DAL return the SubSonic entities to the BLL, which processes them, does crazy things, and converts them into a Model (from the .Models project), sometimes with classes that look like this:

        public DataModel GetDataModel(EntityObject Src)
        {
            var ReturnData = new DataModel();
            ReturnData.ID = Src.ID;
            ReturnData.Name = Src.Name;
            // etc etc
            return ReturnData;
        }

    Now, the question is, "Is this complete overkill"? OK, the project is of a decent size and can only get bigger, but is it worth carrying on with all this? I don't want to use AutoMapper as it just seems like it makes the complication worse. Can anyone shed any light on this?

    Read the article

  • Determine the colours used by the ASP.NET Chart control

    - by Moose Factory
    I'd like to find out which colours are used for a particular pallette in the ASP.NET Chart control. I already know there is an enum on the Chart class to set the palette, e.g. myChart.Palette = ChartColorPalette.Berry; But I'd like to know which colours belong to the palette. Before anyone asks - as I know you will - the reason I need to know the colours is because I want to create my own legend outside of the chart image. I also know that I can set my own colours on the DataPoints for the chart, but I'd rather not have to implement my own palette.
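    One approach worth noting (a sketch that assumes the chart has already been data-bound; myChart and the legend markup are placeholders): Chart.ApplyPaletteColors() resolves the selected palette into concrete colours on the series and data points, which can then be read back to build a legend outside the chart image.

        // Resolve the palette into concrete colours after data binding.
        myChart.Palette = ChartColorPalette.Berry;
        myChart.ApplyPaletteColors();

        // Read back the colour assigned to each point to drive a custom legend.
        foreach (var series in myChart.Series)
        {
            foreach (var point in series.Points)
            {
                string hex = System.Drawing.ColorTranslator.ToHtml(point.Color);
                // e.g. render a swatch with this colour next to the series/point label
            }
        }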

    Read the article

  • Avoiding the Controller with Routing Rules in ASP.NET MVC

    - by Ryan Elkins
    I've created a website with ASP.NET MVC. I have a number of static pages that I am currently serving through a single controller called Home. This creates some rather ugly URLs:
    - example.com/Home/About
    - example.com/Home/ContactUs
    - example.com/Home/Features
    You get the idea. I'd rather not have to create a controller for each one of these, as the actions simply call the View with no model being passed in. Is there a way to write a routing rule that will remove the controller from the URL? I'd like it to look like:
    - example.com/About
    - example.com/ContactUs
    - example.com/Features
    If not, how is this situation normally handled? I imagine I'm not the first person to run into this.
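    A sketch of the kind of route that usually handles this; the action-name constraint is one common way to keep the route from swallowing every single-segment URL (the page names come from the question, and the route would be registered before the generic default route):

        routes.MapRoute(
            "StaticPages",
            "{action}",                                    // matches /About, /ContactUs, /Features
            new { controller = "Home", action = "Index" },
            new { action = "About|ContactUs|Features" }    // constraint keeps other URLs unaffected
        );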

    Read the article

  • Storing Interface type in ASP.NET Profile

    - by NathanD
    In my ASP.NET website all calls into the data layer return entities as interfaces and the website does not need to know what the concrete type is. This works fine, but I have run into a problem when trying to store one of those types into the user Profile. My interface implements ISerializable, like the following: public interface IInsured : IPerson, IEntity, ISerializable and the concrete type in the datalayer does implement ISerializable. My Profile property in web.config is: <add name="ActiveInsured" type="FacadeInterfaces.IInsured" serializeAs="Binary" defaultValue="[null]"/> It compiles just fine, but I get a runtime error on Profile.Save() saying that the interface cannot be serialized. I have also tried it with serializeAs="Xml". I thought that if my interface implemented ISerializable it would work. Anybody had this problem before or know of a workaround?

    Read the article

  • ASP Mail Error: The event class for this subscription is in an invalid partition

    - by JFV
    I have some ASP code that I've "inherited" from my predecessor (no, it's not an option to update it at this time... it would take an act of not only Congress, but every other foreign country too) and I'm having an issue sending mail on one of the pages. It is an almost identical code snippet from the other page, but this one throws an error when I try to 'Send'. Code below:

        Set myMail = CreateObject("CDO.Message")
        myMail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
        'Name or IP of remote SMTP server
        myMail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "localhost"
        'Server port
        myMail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
        myMail.Configuration.Fields.Update
        myMail.Subject = "Subject"
        myMail.From = from_email
        myMail.To = email
        myMail.TextBody = "Body Text of message"
        myMail.Send

    The error thrown is:

        Error Type: (0x8004020F)
        The event class for this subscription is in an invalid partition

    I'd appreciate any and all help!!! Thanks! JFV

    Read the article

  • ASP.NET MVC 2 Areas 404

    - by Justin
    Hey, has anyone been able to get the Areas in ASP.NET MVC 2 to work? I created a new Area called "Secure" and placed a new controller in it named HomeController. I then created a new Home/Index.aspx view. However, when I browse to http://localhost/myapp/Secure/ it gives a 404 "resource cannot be found". http://localhost/myapp/Secure/Home gives the same error. My area registration looks like this:

        public override void RegisterArea(AreaRegistrationContext context)
        {
            context.MapRoute(
                "Secure_default",
                "Secure/{controller}/{action}/{id}",
                new { action = "Index", id = UrlParameter.Optional }
            );
        }

    I also tried this:

        public override void RegisterArea(AreaRegistrationContext context)
        {
            context.MapRoute(
                "Secure_default",
                "Secure/{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }
            );
        }

    Thanks, Justin
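    For completeness, a sketch of the Global.asax wiring that area routes rely on; this is a guess at one common cause rather than a confirmed diagnosis, since Application_Start isn't shown in the question. RegisterAllAreas has to run before the generic default route is added, otherwise /Secure/Home falls through to the plain {controller}/{action} route and 404s because there is no SecureController.

        protected void Application_Start()
        {
            // Area routes must be registered before the catch-all default route.
            AreaRegistration.RegisterAllAreas();

            RouteTable.Routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }
            );
        }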

    Read the article

  • ASP.NET MVC 2 - Account controller not found

    - by Chris
    Hi all, I've recently created an ASP.NET MVC 2 application, which works perfectly in the development environment. However, when I deploy it to the server (123-reg Premium Hosting), I can access all of the expected areas - except the Account controller (www.host.info/Account). This then attempts to redirect to the Error.aspx page (www.host.info/Shared/Error.aspx) which it cannot find. I've checked that all of the views have been published, and they're all in the correct place. It seems bizarre that two other controllers can be accessed with no problems, whereas the Account controller cannot be found. I have since renamed the AccountController to SecureController, and all of the dependencies, to no avail. The problem with not being able to find the Error.aspx page also occurs on the development environment. Any ideas would be greatly appreciated. Thanks, Chris

    Read the article

  • ASP.NET MVC 2: Custom client side validation rule for dropdowns

    - by Nigel
    I have some custom validation that I need to perform that involves checking the selected option of a dropdown and flagging it as invalid should the user select a specific option. I am using ASP.NET MVC 2 and have a custom Validator and custom server side and client side validation rules as described in this blog article. The server side validation is working fine, however, the client side validation is failing. Here is the javascript validation rule: Sys.Mvc.ValidatorRegistry.validators["badValue"] = function(rule) { var badValue = rule.ValidationParameters["badValue"]; return function(value, context) { if (value != badValue) { return true; } return rule.ErrorMessage; }; }; The rule is being applied to the dropdowns successfully, and placing a breakpoint within the returned function confirms that the validation is firing, and that the 'badValue' is being correctly set. However, 'value' is always null, and so the check always fails. What am I doing wrong?

    Read the article

  • Personalization in ASP.Net MVC -- friendly URLs, and skinning

    - by larryq
    Hi everyone, I haven't delved into custom generation of friendly URLs in ASP.NET MVC, and was wondering if anyone had suggestions. For example, if John Smith were to create an account on www.example.com, I'd like for his homepage to read www.example.com/JohnSmith -- along with the option for him to choose his URL. The ideal is for this to happen with no intervention on my part in the route maps. Also, does anyone have guidelines on good ways to customize an MVC site based on URL? Again, using example.com, I'd like for John to choose a color theme and logo for his homepage, then apply it accordingly. Thanks for your tips and suggestions.
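    A sketch of how the vanity-URL half of this is often wired up (the ProfileController and the theme lookup are hypothetical names, not from the question); the catch-all username route is registered last so that existing controllers keep winning:

        // Registered after all other routes, so /JohnSmith only matches when nothing else does.
        routes.MapRoute(
            "UserHomepage",
            "{username}",
            new { controller = "Profile", action = "Index" }   // hypothetical controller
        );

    Inside Profile/Index the username can be used to load that user's chosen theme and logo and pass them to the view, which keeps the skinning side out of the route table entirely.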

    Read the article

  • how big should your controllers be in asp.net-mvc

    - by ooo
    I see the new feature of areas in ASP.NET MVC 2. It got me thinking: why would I need this? I did some reading on the use cases and it came down to a specific question for me: how big and how broad in scope should my controllers be? Should I try to have many little controllers? One big controller? How do people determine the sweet spot for the number of controllers? I think mine are maybe too large (which had me questioning areas in the first place, as maybe each of my controllers should really be an area containing a number of smaller controllers).

    Read the article
