Search Results

Search found 55886 results on 2236 pages for 'advanced net'.


  • ViewContext.RouteData.Values["action"] is null on server... works fine on local machine

    - by rksprst
    I'm having a weird issue where ViewContext.RouteData.Values["action"] is null on my staging server, but works fine on my dev machine (asp.net development server). The code is simple: public string CheckActiveClass(string actionName) { string text = ""; if (ViewContext.RouteData.Values["action"].ToString() == actionName) { text = "selected"; } return text; } I get the error on the ViewContext.RouteData.Values["action"] line. The error is: Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Any help is appreciated. Thanks in advance.
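    As a defensive sketch (not necessarily the fix for the underlying server difference), the helper can tolerate a missing "action" entry instead of throwing:

```csharp
public string CheckActiveClass(string actionName)
{
    // RouteData may not carry an "action" value on every request that reaches the view.
    object action;
    if (ViewContext.RouteData.Values.TryGetValue("action", out action) && action != null)
    {
        return string.Equals(action.ToString(), actionName,
                             StringComparison.OrdinalIgnoreCase) ? "selected" : "";
    }
    return "";
}
```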

    Read the article

  • How to mask tilde (~) character in C# MVC routing table?

    - by AC
    I'm moving my home-baked web site to MVC and have run into trouble with URL routing. The site already serves several links that contain the tilde (~) character in the path, something like http://.../~files/... http://.../~ws/... and I want each of them to be handled by a separate controller, like filesController and wsController, so my route table looks like routes.MapRoute( "files", "~files/{*prms}", new { controller = "files", action = "index", prms = "" } ); routes.MapRoute( "ws", "~ws/{*prms}", new { controller = "ws", action = "index", prms = "" } ); ... but when I try to get the result I get the error saying "The route URL cannot start with a '/' or '~' character and it cannot contain a '?' character." As I understand it, those characters have special meaning in ASP.NET, but is it possible to mask them somehow, at least the tilde? Should I parse and route requests like this myself? What's the best practice for handling URLs like this? Thanks!
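    One hedged workaround (a sketch, not necessarily the poster's eventual solution): strip the leading tilde in Application_BeginRequest with HttpContext.RewritePath before routing runs, then register ordinary tilde-free routes. The route names and the query-string handling below are illustrative assumptions.

```csharp
// Global.asax.cs -- rewrite "/~files/..." to "~/files/..." before the routing module sees it.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    string path = Request.Url.AbsolutePath;                  // e.g. "/~files/report.pdf"
    if (path.StartsWith("/~", StringComparison.OrdinalIgnoreCase))
    {
        // Query-string preservation omitted for brevity.
        Context.RewritePath("~/" + path.Substring(2));
    }
}

// In RegisterRoutes, the prefixes no longer need the tilde the route engine rejects:
routes.MapRoute("files", "files/{*prms}",
    new { controller = "files", action = "index", prms = "" });
routes.MapRoute("ws", "ws/{*prms}",
    new { controller = "ws", action = "index", prms = "" });
```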

    Read the article

  • ASP .NET: SQL Server Money Type and .NET Currency Type

    - by Rudi Ramey
    MS SQL Server's Money data type seems to accept a well-formatted currency value with no problem (example: $52,334.50). From my research, MS SQL Server just ignores the "$" and "," characters. ASP.NET has a parameter object that has a Type/DbType property, and Currency is an available option to set as a value. However, when I set the parameter Type or DbType to Currency it will not accept a value like $52,334.50. I receive the error "Input string was not in a correct format." when I try to Update/Insert. If I don't include the "$" or "," characters it seems to work fine. It also seems to work fine if I don't specify the Type or DbType for the parameter at all. Is it just standard behavior in ASP.NET that the parameter object with its Type set to Currency will still reject "$" and "," characters? Here's an example of the parameter declaration (in the .aspx page): <asp:Parameter Name="ImplementCost" DbType="Currency" />
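    A common workaround (sketched here under assumptions, since the post doesn't show the data source declaration or code-behind) is to strip the formatting before the parameter sees the value, for example in an ObjectDataSource's Inserting event, using decimal.Parse with NumberStyles.Currency. The event handler name and culture are placeholders.

```csharp
using System.Globalization;

protected void ObjectDataSource1_Inserting(object sender, ObjectDataSourceMethodEventArgs e)
{
    // "ImplementCost" arrives as the raw UI text, e.g. "$52,334.50".
    string raw = e.InputParameters["ImplementCost"] as string;
    if (!string.IsNullOrEmpty(raw))
    {
        // NumberStyles.Currency tolerates the "$" and "," that DbType="Currency" rejects.
        e.InputParameters["ImplementCost"] =
            decimal.Parse(raw, NumberStyles.Currency, CultureInfo.GetCultureInfo("en-US"));
    }
}
```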

    Read the article

  • Dissecting ASP.NET Routing

    The ASP.NET Routing framework allows developers to decouple the URL of a resource from the physical file on the web server. Specifically, the developer defines routing rules, which map URL patterns to a class or ASP.NET page that generates the content. For instance, you could create a URL pattern of the form Categories/CategoryName and map it to the ASP.NET page ShowCategoryDetails.aspx; the ShowCategoryDetails.aspx page would display details about the category CategoryName. With such a mapping, users could view details about the Beverages category by visiting www.yoursite.com/Categories/Beverages. In short, ASP.NET Routing allows for readable, SEO-friendly URLs. ASP.NET Routing was first introduced in ASP.NET 3.5 SP1 and was enhanced further in ASP.NET 4.0. ASP.NET Routing is a key component of ASP.NET MVC, but can also be used with Web Forms. Two previous articles here on 4Guys showed how to get started using ASP.NET Routing: Using ASP.NET Routing Without ASP.NET MVC and URL Routing in ASP.NET 4.0. This article aims to explore ASP.NET Routing in greater depth. We'll explore how ASP.NET Routing works underneath the covers to decode a URL pattern and hand it off to the appropriate class or ASP.NET page. Read on to learn more! Read More >
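    For concreteness, the Categories/CategoryName mapping described above could be registered in a Web Forms application roughly like this (a sketch using ASP.NET 4.0's MapPageRoute; the route name and parameter name are illustrative):

```csharp
// Global.asax.cs
using System.Web.Routing;

protected void Application_Start(object sender, EventArgs e)
{
    // "Categories/Beverages" -> ShowCategoryDetails.aspx, with "Beverages"
    // available via Page.RouteData.Values["categoryName"].
    RouteTable.Routes.MapPageRoute(
        "category-details",
        "Categories/{categoryName}",
        "~/ShowCategoryDetails.aspx");
}
```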

    Read the article

  • Forms Authentication and Login controls

    - by DotnetDude
    I am upgrading a website written using ASP.NET 1.1, and the logic for the login page includes verifying the credentials, calling FormsAuthentication.SetAuthCookie() and populating the Session with the user information. I am updating this page to use Login controls and the Membership API and am trying to wrap my head around the concepts that have changed. Most of the samples I see do not do anything in the login button event handler, so is the logic of setting the cookie abstracted out into the control? Also, how do I check whether a user is logged in or not on other pages? Does it still store user information using the Session? How do I check whether a user belongs to a particular role or not? (Earlier, I would look in the Session object to do something like this.) Is the Session a bad way of storing user info? Thanks
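    As a rough sketch of where that information lives once the Login control and the Membership/Roles providers are wired up (the "Admin" role name is just an example), none of it needs to sit in Session:

```csharp
// Anywhere you have access to the current page / HttpContext:
if (User.Identity.IsAuthenticated)                 // populated from the forms-auth cookie
{
    string userName = User.Identity.Name;          // who is logged in
    bool isAdmin = User.IsInRole("Admin");         // requires roleManager enabled="true"

    // Extra details come from the Membership provider rather than Session.
    MembershipUser current = Membership.GetUser();
}
```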

    Read the article

  • MVC2: Best Way to Intercept ViewRequest and Alter ActionResult

    - by Matthew
    I'm building an ASP.NET MVC2 Web Application that requires some sophisticated authentication and business logic that cannot be achieved using the out of the box forms authentication. I'm new to MVC so bear with me... My plan was to mark all restricted View methods with one or more custom attributes (that contain additional data). The controller would then override the OnActionExecuting method to intercept requests, analyze the target view's attributes, and do a variety of different things, including re-routing the user to different places. I have the interception and attribute analysis working, but the redirection is not working as expected. I have tried setting the ActionExecutingContext.Result to null and even have tried spooling up controllers via reflection and invoking their action methods. No dice. I was able to achieve it this way... protected override void OnActionExecuting(ActionExecutingContext filterContext) { filterContext.HttpContext.Response.Redirect("/MyView", false); base.OnActionExecuting(filterContext); } This seems like a hack, and there has to be a better way...
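    For reference, the usual non-hack route is to assign the filter context's Result, which short-circuits the action entirely; a sketch (the permission check and target route values are placeholders, not taken from the post):

```csharp
protected override void OnActionExecuting(ActionExecutingContext filterContext)
{
    bool allowed = UserMeetsCustomRequirement(filterContext);   // hypothetical check

    if (!allowed)
    {
        // Setting Result stops the action from executing and issues the redirect.
        filterContext.Result = new RedirectToRouteResult(
            new RouteValueDictionary(new { controller = "Account", action = "Denied" }));
        return;
    }

    base.OnActionExecuting(filterContext);
}
```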

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

    Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I’ve been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn’t know I needed. Not to say that I wasn’t previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup – well, that’s just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of.

    This will be the first in a three part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a “Hello World” example.

    Part 1: Good Enough Is Not Great
    Part 2: A Model That Works: Branching and Multiple Deployment Environments
    Part 3: Other Considerations: SQL, Custom Tasks, Etc

    Good Enough Is Not Great

    There. I’ve said it. I certainly hope no one on the TFS team is offended, but it’s the truth. Let’s take a look under the hood and understand how it works, and then why it’s not enough to handle real world CD as-is.

    How it works. (note that I’ve skipped a couple of steps; I already have my accounts set up and something deployed to Azure)

    The first step is to establish some oAuth magic between your Azure management portal and your TFS Instance. You do this via the management portal. Once it’s done, you have a new build process template in your TFS instance. (Image lifted from the documentation) From here, you’ll get the usual prompts for security, allowing access, etc. But you’ll also get to pick which Solution in your source control to build. Here’s what the bulk of the build definition looks like. All I’ve had to do is add in the solution to build (notice that mine is from a specific branch – Release – more on that later) and I’ve changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it’s all in the cloud and I’m not waiting for my machine to compile and upload the package. (I also had to enable the build definition first – by default it is created in disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You’ll notice also that this build definition also automatically put my code in the Staging slot of my Azure deployment – more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and “production” and “staging” in Azure. More on that later too.)

    That’s it. That’s the default out-of-box experience. Easy, right? But it’s full of room for improvement, so let’s get into that….

    The Problems

    Nothing is perfect (except my code – it’s always perfect), and neither is Continuous Deployment without a bit of work to help it fit your dev team’s process. So what are the issues?

    Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build –> Stable QA –> Production) I likely have configuration changes between each environment such as database connection strings and this process (and the VIP swap) doesn’t account for this. Yet.

    Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch – Release – of my code. For the purposes of this example, I have adopted the “basic” branching strategy as defined by the ALM Rangers. This basically establishes a “Main” trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don’t want to roll to production automatically when you merge to the Release branch and check-in (unless you like the thrill of it, and in that case, I like your style, cowboy….). Rather, you have nightly build and QA environments, or if you’ve adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution to Azure deployment target.

    Issue 3: SQL and other custom tasks. Let’s be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can’t think of an application I have touched in the last 10 years that doesn’t need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they’ve added Data Tier Applications. But let’s be honest with ourselves again: no one really uses it, and it’s not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn’t fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it’s completed. Developers – not just DBAs – also like to work with SQL in SQL tools, not in Visual Studio. I’m really picking on SQL because that’s generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.

    The Solutions… ?

    We’ve taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I’ve overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app – any version, any time, to any environment.

    Read the article

  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
    Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR.  However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application.  Fixing this requires setting useLegacyV2RuntimeActivationPolicy in your app.Config for the application.  While there are many good reasons for this decision, there are times when this is extremely frustrating, especially when writing a library.  As such, there are (rare) times when it would be beneficial to set this in code, at runtime, as well as verify that it’s running correctly prior to receiving a FileLoadException. Typically, loading a pre-.NET 4 mixed mode assembly is handled simply by changing your app.Config file, and including the relevant attribute in the startup element: <?xml version="1.0" encoding="utf-8" ?> <configuration> <startup useLegacyV2RuntimeActivationPolicy="true"> <supportedRuntime version="v4.0"/> </startup> </configuration> This causes your application to run correctly, and load the older, mixed-mode assembly without issues. For full details on what’s happening here and why, I recommend reading Mark Miller’s detailed explanation of this attribute and the reasoning behind it. Before I show any code, let me say: I strongly recommend using the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While this is the supported approach to handling this issue, the CLR Hosting API includes a means of setting this programmatically via the ICLRRuntimeInfo interface.  Normally, this is used if you’re hosting the CLR in a native application in order to set this, at runtime, prior to loading the assemblies.  However, the F# Samples include a nice trick showing how to load this API and bind this policy, at runtime.  This was required in order to host the Managed DirectX API, which is built against an older version of the CLR. This is fairly easy to port to C#.
Instead of a direct port, I also added a little addition – by trapping the COM exception received if unable to bind (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this property was setup properly: public static class RuntimePolicyHelper { public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; } static RuntimePolicyHelper() { ICLRRuntimeInfo clrRuntimeInfo = (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject( Guid.Empty, typeof(ICLRRuntimeInfo).GUID); try { clrRuntimeInfo.BindAsLegacyV2Runtime(); LegacyV2RuntimeEnabledSuccessfully = true; } catch (COMException) { // This occurs with an HRESULT meaning // "A different runtime was already bound to the legacy CLR version 2 activation policy." LegacyV2RuntimeEnabledSuccessfully = false; } } [ComImport] [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")] private interface ICLRRuntimeInfo { void xGetVersionString(); void xGetRuntimeDirectory(); void xIsLoaded(); void xIsLoadable(); void xLoadErrorString(); void xLoadLibrary(); void xGetProcAddress(); void xGetInterface(); void xSetDefaultStartupFlags(); void xGetDefaultStartupFlags(); [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)] void BindAsLegacyV2Runtime(); } } Using this, it’s possible to not only set this at runtime, but also verify, prior to loading your mixed mode assembly, whether this will succeed. In my case, this was quite useful – I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed as well as a native solver.  The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach.  By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy. There are some tricks required here – To enable this sort of fallback behavior, you must make these checks in a type that doesn’t cause the mixed mode assembly to be loaded.  In my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass through the required calls to that class.  Otherwise, the library will load before the hosting process gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy before the first time you use this class – which typically means just before the first time you check the boolean value.  As a result, checking this early on in the application is more likely to allow it to work. Finally, if you’re using a library, this has to be called prior to the 2.0 CLR loading.  This will cause it to fail if you try to use it to enable this policy in a plugin for most third party applications that don’t have their app.config setup properly, as they will likely have already loaded the 2.0 runtime. As an example, take a simple audio player.  
The code below shows how this can be used to properly, at runtime, only use the “native” API if this will succeed, and fallback (or raise a nicer exception) if this will fail: public class AudioPlayer { private IAudioEngine audioEngine; public AudioPlayer() { if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully) { // This will load a CLR 2 mixed mode assembly this.audioEngine = new AudioEngineNative(); } else { this.audioEngine = new AudioEngineManaged(); } } public void Play(string filename) { this.audioEngine.Play(filename); } } Now – the warning: This approach works, but I would be very hesitant to use it in public facing production code, especially for anything other than initializing your own application.  While this should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.

    Read the article

  • Using RouteExistingFiles to block access to existing files even if no route exists

    - by Michael Stum
    In ASP.net MVC 2, I can use routes.RouteExistingFiles = true; to send all requests through the routing system, even if they exist on the file system. Usually, this ends up hitting the "{controller}/{action}/{id}" route and throws an exception as the controller cannot be found. I do not want to use that route though (I have only a few URLs and they are specifically mapped), yet I would still like to prevent access to the file system. Basically I want to Whitelist pages using IgnoreRoute. Is there a built-in way to do this? My current approach is to still have the "{controller}/{action}/{id}" route and generate a 404 when this is hit, but I'm just wondering if something is built-in already?
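    A minimal sketch of the catch-all approach described above (route and controller names are illustrative; MVC 2 has no built-in NotFound result, so the controller raises the 404 itself):

```csharp
// In RegisterRoutes: whitelist the few real routes first, then swallow everything else.
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute("Home", "", new { controller = "Home", action = "Index" });
// ...other explicitly mapped URLs...
routes.MapRoute("CatchAll", "{*url}", new { controller = "Error", action = "NotFound" });

public class ErrorController : Controller
{
    public ActionResult NotFound()
    {
        // With RouteExistingFiles = true, even requests for physical files fall
        // through to here unless an IgnoreRoute explicitly lets them pass.
        throw new HttpException(404, "Not Found");
    }
}
```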

    Read the article

  • Is using jQuery to call a WCF Data Service from the UI violating the MVC pattern?

    - by Lee Dale
    I'm fairly new to ASP.NET MVC 2 and understand the MVC pattern in itself. But my question is: what's the best way to populate dropdown lists in the UI while sticking to the MVC pattern? Should I be going through the controller? Every article I've seen on this shows how to do it using JavaScript and jQuery. I have a test application that I'm re-writing in MVC 2, and I have my dropdowns working with jQuery, basically calling a WCF Data Service that returns JSON which populates the dropdowns. It seems to me, though, that this is bypassing the controller and going straight to the model, therefore strictly violating the MVC pattern. Or am I missing something obvious here? Your thoughts or best practices would be greatly welcome here. Thanks
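    One way to keep the controller in the loop (a sketch; the action name and sample data are assumptions, not taken from the post) is to expose a JsonResult action and point the jQuery call at it instead of at the WCF Data Service:

```csharp
public class LookupController : Controller
{
    // GET /Lookup/States -- consumed by e.g. $.getJSON("/Lookup/States", ...)
    public JsonResult States()
    {
        var items = new[]
        {
            new { Id = 1, Name = "Alabama" },
            new { Id = 2, Name = "Alaska" }
        };
        // AllowGet is required in MVC 2 when returning JSON to a GET request.
        return Json(items, JsonRequestBehavior.AllowGet);
    }
}
```

    The action can still delegate to the same service or repository internally, so the view never talks to the data layer directly.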

    Read the article

  • Asp.Net MVC and ajax async callback execution order

    - by lrb
    I have been sorting through this issue all day and hope someone can help pinpoint my problem. I have created a "asynchronous progress callback" type functionality in my app using ajax. When I strip the functionality out into a test application I get the desired results. See image below: Desired Functionality When I tie the functionality into my single page application using the same code I get a sort of blocking issue where all requests are responded to only after the last task has completed. In the test app above all request are responded to in order. The server reports a ("pending") state for all requests until the controller method has completed. Can anyone give me a hint as to what could cause the change in behavior? Not Desired Desired Fiddler Request/Response GET http://localhost:12028/task/status?_=1383333945335 HTTP/1.1 X-ProgressBar-TaskId: 892183768 Accept: */* X-Requested-With: XMLHttpRequest Referer: http://localhost:12028/ Accept-Language: en-US Accept-Encoding: gzip, deflate User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0) Connection: Keep-Alive DNT: 1 Host: localhost:12028 HTTP/1.1 200 OK Cache-Control: private Content-Type: text/html; charset=utf-8 Vary: Accept-Encoding Server: Microsoft-IIS/8.0 X-AspNetMvc-Version: 3.0 X-AspNet-Version: 4.0.30319 X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcVEVNUFxQcm9ncmVzc0Jhclx0YXNrXHN0YXR1cw==?= X-Powered-By: ASP.NET Date: Fri, 01 Nov 2013 21:39:08 GMT Content-Length: 25 Iteration completed... Not Desired Fiddler Request/Response GET http://localhost:60171/_Test/status?_=1383341766884 HTTP/1.1 X-ProgressBar-TaskId: 838217998 Accept: */* X-Requested-With: XMLHttpRequest Referer: http://localhost:60171/Report/Index Accept-Language: en-US Accept-Encoding: gzip, deflate User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0) Connection: Keep-Alive DNT: 1 Host: localhost:60171 Pragma: no-cache Cookie: ASP.NET_SessionId=rjli2jb0wyjrgxjqjsicdhdi; AspxAutoDetectCookieSupport=1; TTREPORTS_1_0=CC2A501EF499F9F...; __RequestVerificationToken=6klOoK6lSXR51zCVaDNhuaF6Blual0l8_JH1QTW9W6L-3LroNbyi6WvN6qiqv-PjqpCy7oEmNnAd9s0UONASmBQhUu8aechFYq7EXKzu7WSybObivq46djrE1lvkm6hNXgeLNLYmV0ORmGJeLWDyvA2 HTTP/1.1 200 OK Cache-Control: private Content-Type: text/html; charset=utf-8 Vary: Accept-Encoding Server: Microsoft-IIS/8.0 X-AspNetMvc-Version: 4.0 X-AspNet-Version: 4.0.30319 X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcSUxlYXJuLlJlcG9ydHMuV2ViXHRydW5rXElMZWFybi5SZXBvcnRzLldlYlxfVGVzdFxzdGF0dXM=?= X-Powered-By: ASP.NET Date: Fri, 01 Nov 2013 21:37:48 GMT Content-Length: 25 Iteration completed... The only difference in the two requests headers besides the auth tokens is "Pragma: no-cache" in the request and the asp.net version in the response. 
Thanks Update - Code posted (I probably need to indicate this code originated from an article by Dino Esposito ) var ilProgressWorker = function () { var that = {}; that._xhr = null; that._taskId = 0; that._timerId = 0; that._progressUrl = ""; that._abortUrl = ""; that._interval = 500; that._userDefinedProgressCallback = null; that._taskCompletedCallback = null; that._taskAbortedCallback = null; that.createTaskId = function () { var _minNumber = 100, _maxNumber = 1000000000; return _minNumber + Math.floor(Math.random() * _maxNumber); }; // Set progress callback that.callback = function (userCallback, completedCallback, abortedCallback) { that._userDefinedProgressCallback = userCallback; that._taskCompletedCallback = completedCallback; that._taskAbortedCallback = abortedCallback; return this; }; // Set frequency of refresh that.setInterval = function (interval) { that._interval = interval; return this; }; // Abort the operation that.abort = function () { // if (_xhr !== null) // _xhr.abort(); if (that._abortUrl != null && that._abortUrl != "") { $.ajax({ url: that._abortUrl, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId } }); } }; // INTERNAL FUNCTION that._internalProgressCallback = function () { that._timerId = window.setTimeout(that._internalProgressCallback, that._interval); $.ajax({ url: that._progressUrl, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId }, success: function (status) { if (that._userDefinedProgressCallback != null) that._userDefinedProgressCallback(status); }, complete: function (data) { var i=0; }, }); }; // Invoke the URL and monitor its progress that.start = function (url, progressUrl, abortUrl) { that._taskId = that.createTaskId(); that._progressUrl = progressUrl; that._abortUrl = abortUrl; // Place the Ajax call _xhr = $.ajax({ url: url, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId }, complete: function () { if (_xhr.status != 0) return; if (that._taskAbortedCallback != null) that._taskAbortedCallback(); that.end(); }, success: function (data) { if (that._taskCompletedCallback != null) that._taskCompletedCallback(data); that.end(); } }); // Start the progress callback (if any) if (that._userDefinedProgressCallback == null || that._progressUrl === "") return this; that._timerId = window.setTimeout(that._internalProgressCallback, that._interval); }; // Finalize the task that.end = function () { that._taskId = 0; window.clearTimeout(that._timerId); } return that; };

    Read the article

  • Add client-side JavaScript code and ASP.NET validation on an ASP.NET button

    - by Vinni
    Hello guys, I want to write JavaScript code in the "OnClientClick" of an ASP.NET button, and I also want the ASP.NET validation to run for that button. But when I mix the two, the validation does not work. Please help me out. Below is my code. ASPX: <asp:Button ID="btnAddToFeatureOffers" runat="server" Text="Add to Feature Offers" OnClick="btnAddToFeatureOffers_Click" ValidationGroup="vgAddOffer" OnClientClick="add();" /> JavaScript: function add() { var selectedOrder = $('#ctl00_MainContent_ddlFeaturedHostingType option:selected')[0].index; var offer = $('#<%=txtOrder.ClientID%>').val(); var a = $("<a>").attr("href", "#").addClass("offer").text("X"); $("<div>").text(offer).append(a).appendTo($('#resultTable #resultRow td')[selectedOrder - 1]); }

    Read the article

  • ResolveURL not resolving in a user control

    - by WebJunk
    I'm trying to use ResolveUrl() to set some paths in the code behind of a custom ASP.NET user control. The user control contains a navigation menu. I'm loading it on a page that's loading a master page. When I call ResolveUrl("~") in my user control it returns "~" instead of the root of the site. When I call it in a page I get the root path as expected. I've stepped through with the debugger and confirmed, ResolveUrl("~") returns "~" in my user control code behind. Is there some other way I should be calling the function in my user control code behind to get the root path of the site?
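    One hedged alternative when Control.ResolveUrl misbehaves in a user control is to go through VirtualPathUtility, which resolves against the application root regardless of which control calls it (the CSS path below is just an example):

```csharp
// Returns "/" for a site at the root, or "/MyApp/" for an app in a virtual directory.
string appRoot = VirtualPathUtility.ToAbsolute("~/");

// Resolving a specific resource for the navigation menu.
string cssPath = VirtualPathUtility.ToAbsolute("~/styles/menu.css");
```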

    Read the article

  • ASP.NET MVC + MySql Membership Provider, user cannot login

    - by Jason Miesionczek
    Hello, I've been playing around with using MySql as the membership provider for asp.net mvc forms authentication. I've got things configured correctly as far as i can tell, and i can create users via both the register action and asp.net web config site. however, when i try to login with one of the users, it does not work. it returns an error as if i had entered a wrong password, or if the account doesn't exist. i have verified in the database that the account does exist. I've followed the instructions here for reference: http://blog.tchami.com/post/ASPNET-MVC-2-and-MySQL-Membership-Provider.aspx here is my web.config: <?xml version="1.0"?> <!-- For more information on how to configure your ASP.NET application, please visit http://go.microsoft.com/fwlink/?LinkId=152368 --> <configuration> <connectionStrings> <add name="MySQLConn" connectionString="Server=localhost;Database=intereditor;Uid=<user>;Pwd=<password>;"/> </connectionStrings> <system.web> <compilation debug="true" targetFramework="4.0"> <assemblies> <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> <authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="2880" name=".ASPXFORM$" path="/" requireSSL="false" slidingExpiration="true" enableCrossAppRedirects="false" /> </authentication> <membership defaultProvider="MySqlMembershipProvider"> <providers> <clear/> <add name="MySqlMembershipProvider" type="MySql.Web.Security.MySQLMembershipProvider,MySql.Web,Version=6.3.4.0, Culture=neutral,PublicKeyToken=c5687fc88969c44d" autogenerateschema="true" connectionStringName="MySQLConn" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" passwordStrengthRegularExpression="" applicationName="/" /> </providers> </membership> <profile defaultProvider="MySqlProfileProvider"> <providers> <clear/> <add name="MySqlProfileProvider" type="MySql.Web.Profile.MySQLProfileProvider,MySql.Web,Version=6.3.4.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" connectionStringName="MySQLConn" applicationName="/" /> </providers> </profile> <roleManager enabled="true" defaultProvider="MySqlRoleProvider"> <providers> <clear /> <add name="MySqlRoleProvider" type="MySql.Web.Security.MySQLRoleProvider,MySql.Web,Version=6.3.4.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" connectionStringName="MySQLConn" applicationName="/" /> </providers> </roleManager> <pages> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> </namespaces> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" /> </dependentAssembly> </assemblyBinding> </runtime> </configuration> Can anyone please help me identify what is 
wrong so that users can log in? UPDATE: After debugging the login process in the code of the membership provider itself, I discovered that there is a bug in the provider. There is a discrepancy between the password hash that is stored in the database and the hash that is generated based on the inputted password. As a workaround for my issue, I changed the password format to 'encrypted' and added a machine key to my web.config. I am still interested in figuring out the issue with the hashed format in the provider, and will spend some more time debugging it; if I can figure out the problem, I will put together a patch and submit it.

    Read the article

  • Which collection should I use?

    - by Masna
    Hello, I have a number of custom objects of type X. X has a number of parameters and must be unique in the collection. (I created my own Equals method based on the custom parameters to check this.) In each object of type X, I have a list of objects Y. I want to easily add/remove/modify an object Y. For example: to write the add method, it would be something like add(objTypeX, objTypeY). I would check whether the collection already has an objTypeX. If so, I would add the objTypeY to the already existing objTypeX; else, I would create objTypeX and add objTypeY to this object. To modify an objTypeY, it would be something like (objTypeX, objTypeY, newObjTypeY): I would get objTypeX out of the collection and modify objTypeY to newObjTypeY. Which collection should I use? I tried a HashSet, but I can't get a specific object out of it without running down the list until I find that object. I develop this in VB.NET 3.5.
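    A sketch of what that shape usually looks like, shown in C# rather than VB.NET for consistency with the other snippets on this page (the same types exist in .NET 3.5): a Dictionary keyed on X, holding a List(Of Y) per key. Note the key type must override both Equals and GetHashCode for lookups to behave. X and Y stand in for the poster's own types.

```csharp
Dictionary<X, List<Y>> index = new Dictionary<X, List<Y>>();

void Add(X key, Y child)
{
    List<Y> children;
    if (!index.TryGetValue(key, out children))
    {
        children = new List<Y>();   // first Y for this X: create its bucket
        index.Add(key, children);
    }
    children.Add(child);
}

void Modify(X key, Y oldChild, Y newChild)
{
    List<Y> children;
    if (index.TryGetValue(key, out children))
    {
        int i = children.IndexOf(oldChild);
        if (i >= 0) children[i] = newChild;
    }
}
```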

    Read the article

  • Using ScriptCombining through a ScriptManager on a Master Page

    - by Hmobius
    ASP.NET 3.5 SP1 adds a great new ScriptCombining feature to the ScriptManager object, as demonstrated in this video. However, he only demonstrates how to use the feature with the ScriptManager on the same page. I'd like to use this feature on a site where the ScriptManager is on the master page, but I can't figure out how to add the scripts I need for each page programmatically to the manager. I've found this post to use as a starting point, but I'm not really getting very far. Can anyone give me a helping hand? Thanks, Dan
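    From a content page, the master page's ScriptManager can be located with ScriptManager.GetCurrent and scripts added to its CompositeScript collection; a rough sketch (the script paths are placeholders, and this needs to run early enough in the page lifecycle, e.g. Page_Load):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Finds the ScriptManager declared on the master page.
    ScriptManager sm = ScriptManager.GetCurrent(this.Page);
    if (sm != null)
    {
        sm.CompositeScript.Scripts.Add(new ScriptReference("~/Scripts/PageOne.js"));
        sm.CompositeScript.Scripts.Add(new ScriptReference("~/Scripts/Shared.js"));
    }
}
```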

    Read the article

  • Advanced ASP.NET Gridview Layout

    - by chief7
    So I had a feature request to add fields to a second table row for a single data row on a GridView. At first, I looked at extending the functionality of the GridView, but soon realized this would be a huge task, and since I consider this request a shim for a larger future feature, I decided against it. I also want to move to MVC in the near future, and this would be throwaway code. So instead I created a little jQuery script to move the cell to the next row in the table. $(document).ready(function() { $(".fieldAttributesNextRow").each(function() { var parent = $(this).parent(); var newRow = $("<tr></tr>"); newRow.attr("class", $(parent).attr("class")); var headerRow = $(parent).parent().find(":first"); var cellCount = headerRow.children().length - headerRow.children().find(".hide").length; newRow.append($(this).attr("colspan", cellCount)); $(parent).after(newRow); }) }); What do you think of this? Is this a poor design decision? I am actually quite pleased with the ease of this solution. Please provide your thoughts.

    Read the article

  • Advanced tasks using Web.Config transformation

    - by dcadenas
    Does anyone know if there is a way to "transform" specific sections of values instead of replacing the whole value or an attribute? For example, I've got several appSettings entries that specify the URLs for different web services. These entries are slightly different in the dev environment than in the production environment. Some are less trivial than others: <!-- DEV ENTRY --> <appSettings> <add key="serviceName1_WebsService_Url" value="http://wsServiceName1.dev.domain.com/v1.2.3.4/entryPoint.asmx" /> <add key="serviceName2_WebsService_Url" value="http://ma1-lab.lab1.domain.com/v1.2.3.4/entryPoint.asmx" /> </appSettings> <!-- PROD ENTRY --> <appSettings> <add key="serviceName1_WebsService_Url" value="http://wsServiceName1.prod.domain.com/v1.2.3.4/entryPoint.asmx" /> <add key="serviceName2_WebsService_Url" value="http://ws.ServiceName2.domain.com/v1.2.3.4/entryPoint.asmx" /> </appSettings> So far, I know I can do something like this in the Web.Release.Config: <add xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)" key="serviceName1_WebsService_Url" value="http://wsServiceName1.prod.domain.com/v1.2.3.4/entryPoint.asmx" /> <add xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)" key="serviceName2_WebsService_Url" value="http://ws.ServiceName2.domain.com/v1.2.3.4/entryPoint.asmx" /> However, every time the Version for that web service is updated, I would have to update the Web.Release.Config as well, which defeats the purpose of simplifying my web.config updates. I know I could also split that URL into different sections and update them independently, but I'd rather have it all in one key. I've looked through the available web.config Transforms but nothing seems to be geared towards what I am trying to accomplish. These are the websites I am using as a reference: Vishal Joshi's blog, MSDN Help, and Channel9 video. Any help would be much appreciated! -D

    Read the article

  • Advanced Where Statements in Linq to Entity Framework

    - by JimJams
    Hi, I am wanting to create a Where clause within my LINQ statement, but have hit a bit of a stumbling block. I would like to split a string value and then search using each array item in the Where clause. In my normal SQL statement I would simply loop through the string array and build up the Where clause, then either pass this to a stored procedure or just execute the SQL string. But I am not sure how to do this with LINQ to Entities. ( From o In db.TableName Where o.Field LIKE Stringvalue Select o ).ToList() Hope you can help. Thanks in advance!
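    A rough sketch of one way to do this with LINQ to Entities, assuming db.TableName exposes an IQueryable and Field is a string column (those names come from the snippet above; everything else is assumed). Chaining Where calls in a loop ANDs the terms together; OR semantics would need a hand-built expression tree or a helper such as PredicateBuilder.

```csharp
string stringValue = "red;green;blue";          // example input
string[] terms = stringValue.Split(';');

IQueryable<TableName> query = db.TableName;
foreach (string term in terms)
{
    string t = term;                            // copy: avoid capturing the loop variable (C# 3/4)
    query = query.Where(o => o.Field.Contains(t));   // translates to LIKE '%...%'
}
var results = query.ToList();
```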

    Read the article

  • ObjectDataSource Insert and Update methods error

    - by Jack
    I'm developing an ASP.NET 3.5 project. When I want to insert with a DetailsView, this error occurred: Error: ObjectDataSource 'ObjectDataSource2' could not find a non-generic method 'AddCity' that has parameters: CITY_NAME. <asp:ObjectDataSource ID="ObjectDataSource2" runat="server" SelectMethod="GetCityByID" UpdateMethod="UpdateCity" InsertMethod="AddCity" TypeName="NOP_CRM.Lib.nop_cities" OldValuesParameterFormatString="original_{0}"> <SelectParameters> <asp:ControlParameter ControlID="GridView1" Name="cityid" PropertyName="SelectedValue" Type="Int32" DefaultValue="1" /> </SelectParameters> <UpdateParameters> <asp:Parameter Name="CITY_NAME" Type="String" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="CITY_NAME" Type="String" /> </InsertParameters> </asp:ObjectDataSource> ... public int AddCity(string cityname) { CITY_NAME = cityname; Insert(); return _CITY_ID; }
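    The ObjectDataSource matches InsertParameters to method arguments by name, so "CITY_NAME" in the markup needs a method argument named to match; "cityname" does not line up because of the underscore. A hedged sketch of one fix, keeping the markup as-is and renaming the method's argument (the alternative is the mirror image: leave the method alone and change the markup to Name="cityname"):

```csharp
// The parameter name now lines up with <asp:Parameter Name="CITY_NAME" ... />.
public int AddCity(string CITY_NAME)
{
    this.CITY_NAME = CITY_NAME;   // assign to the instance member of the same name
    Insert();
    return _CITY_ID;
}
```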

    Read the article

  • Which Javascript history back implementation is the best?

    - by Malcolm Frexner
    There are implementations of history.back in Microsoft AJAX and jQuery (http://www.asual.com/jquery/address/). I already have jQuery and ASP.NET AJAX included in my project, but I am not sure which implementation of history.back is better. Better for me means: already used by some large projects, wide browser support, easy to implement, little footprint. Does anybody know which one is better? EDIT: Another jQuery plugin is http://plugins.jquery.com/project/history. It is recommended in the book jQuery Cookbook. This one has worked well so far.

    Read the article

  • How to define using statements in web.config?

    - by Hasan Gürsoy
    I'm using MySql in my ASP.NET project, but I don't want to type the "using MySql.Data.MySqlClient;" statement in every aspx.cs file. How can I define these lines in the web.config file? I've defined some namespaces like below, but this only works for aspx pages: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="false" targetFramework="4.0"/> <pages> <namespaces> <add namespace="System.Web.Configuration"/> <add namespace="MySql.Data"/> <add namespace="MySql.Data.MySqlClient"/> </namespaces> </pages> </system.web> </configuration>

    Read the article

  • Why doesn't an HTML input of type file work with an Ajax UpdatePanel?

    - by Sean P
    I have an input of type file, and when I try to do a Request.Files while the input is wrapped in an update panel... it always returns an empty HttpFileCollection. Why??? This is the code-behind (at HttpContext.Current.Request.Files... it's always 0 for the count): Protected Sub btnSubmit_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnSubmit.Click Dim uploads As HttpFileCollection uploads = HttpContext.Current.Request.Files For i As Integer = 0 To (uploads.Count - 1) If (uploads(i).ContentLength > 0) Then Dim c As String = System.IO.Path.GetFileName(uploads(i).FileName) Try uploads(i).SaveAs("C:\UploadedUserFiles\" + c) Span1.InnerHtml = "File Uploaded Sucessfully." Catch Exp As Exception Span1.InnerHtml = "Some Error occured." End Try End If Next i End Sub This example comes from the ASP.NET website... but my application is very similar.
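    This is the usual symptom of file inputs not surviving asynchronous (partial) postbacks. A hedged sketch of the standard workaround is to force that button to do a full postback even though it sits inside the UpdatePanel; shown in C# (the VB equivalent is a direct translation), and the ScriptManager lookup is assumed:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Equivalent to adding <asp:PostBackTrigger ControlID="btnSubmit" />
    // to the UpdatePanel's <Triggers> section in markup.
    ScriptManager sm = ScriptManager.GetCurrent(this.Page);
    if (sm != null)
    {
        sm.RegisterPostBackControl(btnSubmit);
    }
}
```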

    Read the article

  • Why would this line throw a "type initializer failed" exception?

    - by Jaggu
    I had a class: public class Constant { public static string ConnString = ConfigurationManager.ConnectionStrings["ConnString"].ConnectionString; } which would throw an exception on LIVE: Type initialize failed for Constant ctor. If I change the class to: public class Constant { public static string ConnString { get { return ConfigurationManager.ConnectionStrings["ConnString"].ConnectionString; } } } it works. I wasted 2 hours on this, but I still don't know why this would happen. Any ideas? Note: The 1st class used to work in the DEV environment but not on LIVE. The 2nd class works on DEV and also in Production. I am using VS2010 on production and an ASP.NET 4.0 Website project. I am totally amazed by this inconsistency, to say the least! Edit: This class was in the App_Code folder.

    Read the article

  • Advanced LINQ Update Statement

    - by user1902490
    I have a database with Price_Old like:

    Date  --- Hour --- Price
    Jan 1 --- 1    --- $3.0
    Jan 1 --- 2    --- $3.1
    Jan 1 --- 3    --- $3.3
    Jan 1 --- 4    --- $3.15
    Jan 2 --- 1    --- $2.95
    Jan 2 --- 2    --- $3.2
    Jan 2 --- 3    --- $3.05

    What I then have is a spreadsheet with the same structure that I will be reading into a datatable; I'll call the new datatable Price_New. Note that Price_New may not have all the same date/hours as Price_Old. So, I end up with 2 datatables, Price_Old and Price_New, and what I need to do is update Price_Old with the new prices in Price_New, and then commit those new prices to the database. I am kinda new to LINQ (about 30 mins of experience) and would really appreciate if someone could give me a pointer or two on whether or not this is doable in LINQ and what the best method would be. Thanks
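    A sketch of one LINQ-to-DataSet approach, assuming both tables end up as DataTables named priceOld and priceNew with Date, Hour and Price columns (the variable names and column types are assumptions; adjust the Field<T> calls to the real schema):

```csharp
// Requires a reference to System.Data.DataSetExtensions.
var matches = from oldRow in priceOld.AsEnumerable()
              join newRow in priceNew.AsEnumerable()
                  on new { Date = oldRow.Field<DateTime>("Date"), Hour = oldRow.Field<int>("Hour") }
                  equals new { Date = newRow.Field<DateTime>("Date"), Hour = newRow.Field<int>("Hour") }
              select new { OldRow = oldRow, NewPrice = newRow.Field<decimal>("Price") };

foreach (var m in matches.ToList())
{
    m.OldRow.SetField("Price", m.NewPrice);   // only rows present in Price_New are touched
}

// Then push the changes back with whatever adapter/command you already use,
// e.g. adapter.Update(priceOld);
```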

    Read the article
