Search Results

Search found 108959 results on 4359 pages for 'ado net data services'.

Page 47/4359 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Windows components in .net

    - by JGC
    Hi, I need a .NET component that lets me partition a year into segments: you click at the beginning of a segment and click again at its end. The shape below is a sample of what I need, but I mocked it up with buttons and their back-colors just to show it to you. I don't know the name of this kind of component, so I can't search for it. Does anyone know this component or something like it? Thank you.

    Read the article

  • Upgrading to ASP.NET 3.5

    - by rs
    I have a server with some ASP, ASP.NET 1.0 and 2.0 applications running on it, and I'm now planning to host 3.5 apps on it as well. Do I have to make any changes to the server, other than installing the framework, for it to keep handling all my previous-version apps? Do I have to install a new IIS, or can I use the same IIS for 3.5? And do I have to install a newer AJAX version to support AJAX 3.5?

    Read the article

  • Guesses about my session value conflicts

    - by SmartestVEGA
    I have an ASP.NET web form that submits information as email: whenever a user fills in the form and clicks the submit button, the information they entered is sent as an email. The form has 4 pages, but not every request uses all 4. If the user selects a particular value on the first page, the form bypasses the 3rd page and goes straight to the 4th (pages 1, 2, 4); for any other value on the first page it navigates through pages 1, 2, 3, 4. My problem is that when multiple users access the site at the same time, the value from the first page gets mixed up between users and the form behaves abnormally: sometimes it bypasses page 3, sometimes it doesn't. Shown below are the variable declarations:

        Public strRoleType As String = String.Empty
        Protected Shared isAreaSelected As Integer = 0
        Protected Shared isStoreSelected As Integer = 0
        Protected Shared isHeadOfficeSelected As Integer = 0
        Protected Shared isRegionSelected As Integer = 0

    I guess the problem is the strRoleType variable getting values from different users. Does anyone have a workaround?
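
    A side note on the cause: Shared (static) members of a page class exist once per application domain, so every concurrent user reads and writes the same copies; per-user wizard state like this normally belongs in Session. A minimal sketch of the idea in C# (property names mirror the post; the VB.NET equivalent is direct):

        // Per-user state backed by Session, so concurrent users cannot collide.
        public string RoleType
        {
            get { return (string)(Session["RoleType"] ?? string.Empty); }
            set { Session["RoleType"] = value; }
        }

        public bool IsAreaSelected
        {
            get { return Session["IsAreaSelected"] != null && (bool)Session["IsAreaSelected"]; }
            set { Session["IsAreaSelected"] = value; }
        }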

    Read the article

  • Importing an Excel worksheet into a DataTable

    - by Nick LaMarca
    I have been asked to create import functionality in my application. I am getting an Excel worksheet as input; it has column headers followed by data. The users want to simply select an .xls file from their system and click upload, and the tool then deletes the table in the database and adds this new data. I thought the best way would be to bring the data into a DataTable object and do a foreach over every row in the DataTable, inserting row by row into the db. My question: can anyone give me code to open an Excel file, work out which line the data starts on, and import the data into a DataTable object?
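
    One common way to load an .xls worksheet into a DataTable is the Jet OLE DB provider; HDR=YES treats the first worksheet row as the column headers, so the data rows start immediately after. A minimal sketch (the file path and sheet name are assumptions, not from the original post):

        using System.Data;
        using System.Data.OleDb;

        // Load an .xls worksheet into a DataTable; HDR=YES makes row 1 the headers.
        string file = @"C:\temp\upload.xls";               // hypothetical path
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                         "Data Source=" + file + ";" +
                         "Extended Properties=\"Excel 8.0;HDR=YES\"";

        var table = new DataTable();
        using (var conn = new OleDbConnection(connStr))
        using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
        {
            adapter.Fill(table);                           // Fill opens and closes the connection itself
        }
        // table.Rows now holds the data rows; table.Columns carries the header names.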

    Read the article

  • DataSet and Hierarchical Data: How to Sort

    - by mdjtlj
    This is probably a dumb question, but I've hit a wall with it at the moment. I have some hierarchical data in an ADO.NET DataSet. The first field is the ID, the second is the Name, the third is the Parent ID:

        ID  Name          Parent ID
        1   Air Handling  NULL
        2   Compressor    1
        3   Motor         4
        4   Compressor    1
        5   Motor         2
        6   Controller    4
        7   Controller    2

    So the tree would look like the following:

        1 - Air Handling
            4 - Compressor
                6 - Controller
                3 - Motor
            2 - Compressor
                7 - Controller
                5 - Motor

    What I'm trying to figure out is how to get the DataSet into the same order this would be viewed in a treeview: parents at the appropriate levels, with their children below them sorted by name. It would be like binding this to a treeview and then simply working your way down the nodes to get the right order. Any links or direction would be greatly appreciated.
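
    The usual approach is the same depth-first walk a treeview does: start at the rows whose parent is NULL and, for each row, emit it and then recurse into its children ordered by name. A small C# sketch against a DataTable (the column names are assumed from the post's layout):

        using System.Data;

        // Depth-first: emit each row, then its children sorted by Name.
        static void AppendChildren(DataTable source, DataTable result, object parentId)
        {
            string filter = parentId == null
                ? "[Parent ID] IS NULL"
                : "[Parent ID] = " + parentId;
            foreach (DataRow row in source.Select(filter, "Name ASC"))
            {
                result.ImportRow(row);                     // parent lands before its subtree
                AppendChildren(source, result, row["ID"]);
            }
        }

        // Usage: DataTable ordered = table.Clone();
        //        AppendChildren(table, ordered, null);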

    Read the article

  • replacing data.frame element-wise operations with data.table (that used rowname)

    - by Harold
    So let's say I have the following data.frames:

        df1 <- data.frame(y = 1:10, z = rnorm(10), row.names = letters[1:10])
        df2 <- data.frame(y = c(rep(2, 5), rep(5, 5)), z = rnorm(10), row.names = letters[1:10])

    And perhaps the "equivalent" data.tables:

        dt1 <- data.table(x = rownames(df1), df1, key = 'x')
        dt2 <- data.table(x = rownames(df2), df2, key = 'x')

    If I want to do element-wise operations between df1 and df2, they look something like:

        dfRes <- df1 / df2

    And rownames() is preserved:

        R> head(dfRes)
            y          z
        a 0.5  3.1405463
        b 1.0  1.2925200
        c 1.5  1.4137930
        d 2.0 -0.5532855
        e 2.5 -0.0998303
        f 1.2 -1.6236294

    My poor understanding of data.table says the same operation should look like this:

        dtRes <- dt1[, !'x', with = F] / dt2[, !'x', with = F]
        dtRes[, x := dt1[,x,]]
        setkey(dtRes, x)

    (the setkey is optional). Is there a more data.table-esque way of doing this? As a slightly related aside, more generally I would have other columns, such as factors, in each data.table, and I would like to omit those columns while doing the element-wise operations but still have them in the result. Does this make sense? Thanks!

    Read the article

  • Changing populated DataTable column data types

    - by TonE
    Hi, I have a System.Data.DataTable which is populated by reading a CSV file, and that sets the data type of each column to string. I want to append the contents of the DataTable to an existing database table; currently this is done using SqlBulkCopy with the DataTable as the source. However, the column data types of the DataTable need to be changed to match the schema of the target database table, handling null values. I am not very familiar with ADO.NET, so I have been searching for a clean way of doing this. Thanks.
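
    A DataTable column's DataType can't be changed once the table contains rows, so one clean pattern is: Clone the table (schema only), retype the clone's columns to match the target schema, then copy the rows across, converting each value and mapping empty strings to DBNull. A rough C# sketch (the two column names and their target types are made-up examples):

        using System;
        using System.Data;

        // Clone the all-string table, retype columns, convert values row by row.
        static DataTable ConvertColumnTypes(DataTable source)
        {
            DataTable typed = source.Clone();                     // schema only, no rows yet
            typed.Columns["Amount"].DataType = typeof(decimal);   // hypothetical columns
            typed.Columns["CreatedOn"].DataType = typeof(DateTime);

            foreach (DataRow row in source.Rows)
            {
                DataRow copy = typed.NewRow();
                foreach (DataColumn col in typed.Columns)
                {
                    string raw = row[col.ColumnName] as string;
                    copy[col] = string.IsNullOrEmpty(raw)
                        ? (object)DBNull.Value                    // null-friendly
                        : Convert.ChangeType(raw, col.DataType);
                }
                typed.Rows.Add(copy);
            }
            return typed;                                         // ready for SqlBulkCopy
        }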

    Read the article

  • .NET Framework 1.1 on IIS 7

    - by Zack Peterson
    I have inherited a .NET Framework 1.1 web site that I must host with IIS 7 on Windows Server 2008, and I'm having some trouble.

    1. Installation

    I installed .NET Framework 1.1 following these instructions. The installation automatically created a new application pool, "ASP.NET 1.1", and I use that.

    2. Trouble

    When I launch the web site I see web.config runtime errors:

        The tag contains an invalid value for the 'culture' attribute.

    I fix that one and then see:

        Child nodes are not allowed.

    I don't want to keep playing this whack-a-mole game; something must be wrong.

    3. Am I sure this is .NET 1.1?

    I examined the automatically created application pool (Advanced Settings... / Basic Settings...) and see that it's 1.1. This doesn't seem right: while 1.1 is set, it's not an option in the Advanced drop-down selectors. And why does the Basic box say just "v1.1" and not ".NET Framework v1.1.4322"? That would be more consistent.

    4. I cannot create other .NET 1.1 app pools

    I cannot select .NET Framework 1.1 for other application pools; it's not an option in the drop-down selectors. What's up with that?

    What now? Why isn't v1.1 an option for all app pools? How can I verify my application is in fact using .NET Framework 1.1? Why might I get these runtime errors?
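
    For the "how can I verify" question, one quick check is to print the CLR version from inside the site itself; on .NET Framework 1.1 it reports 1.1.4322. A throwaway diagnostic page (hypothetical, not from the original post):

        <%@ Page Language="C#" %>
        <%
            // version.aspx: Environment.Version is 1.1.4322.x on .NET Framework 1.1.
            Response.Write(System.Environment.Version.ToString());
        %>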

    Read the article

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do: read the first text file and, for every line of data, add a row to a DataTable, using CarID as my unique field; then read the second text file and look for an existing CarID in the DataTable (if it exists, update that row; if it doesn't, add a new row); once I'm done, push the contents of the DataTable to the database. What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'loop: read a line of text and parse it out; gets dd, dc, and carID
            'create a new empty row
            Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()
            'update the new row with fresh data
            dsNewRow.Item("DriveDate") = dd
            dsNewRow.Item("DCode") = dc
            dsNewRow.Item("CarNum") = carID
            'about 15 more fields
            'add the filled row to the DataSet table
            ds.Tables("CarData").Rows.Add(dsNewRow)
        'end loop

        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions: (1) In constructing my table I use "SELECT * FROM tblCars", but what if that table already has millions of records? Isn't that a waste of resources, and should I be trying something different if I just want to add new records? (2) Once I'm done with the first text file I move on to the second; what's the best approach here: first look for an existing record based on CarNum, or build a second table and merge the two at the end? (3) Finally, when the DataTable is fully populated and I push it to the database, I want records that already exist with the three primary fields (DriveDate, DCode, and CarNum) to be updated with the new fields, and records that don't exist to be appended. Is that possible with my process? tia AGP
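
    For question 2, one route is to define a primary key on the in-memory table and let DataTable.Rows.Find do the lookup, which gives update-or-insert in a few lines. A C# sketch of the idea (the VB.NET translation is direct; the column names come from the post, and the string parameter types are an assumption):

        using System.Data;

        // Requires the key to be set once beforehand, e.g.:
        //     cars.PrimaryKey = new[] { cars.Columns["CarNum"] };
        static void UpsertCar(DataTable cars, string carNum, string driveDate, string dCode)
        {
            DataRow row = cars.Rows.Find(carNum);   // keyed lookup, no scanning loop
            if (row == null)
            {
                row = cars.NewRow();
                row["CarNum"] = carNum;
                cars.Rows.Add(row);                 // brand-new car: append
            }
            row["DriveDate"] = driveDate;           // existing or new: set fresh values
            row["DCode"] = dCode;
        }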

    Read the article

  • Using Active Directory Web Services in .Net application

    - by Iain Carlin
    Hello, I'm trying to build a .NET application to interrogate Active Directory. From my research, Windows 2008 R2 has Active Directory Web Services (ADWS) built in, but I can't find any details or examples anywhere on the web that tell me whether I can use ADWS in a .NET application to read/write AD information. Should I simply be able to add a web reference, or is ADWS just for PowerShell use? Cheers, Iain
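
    For context: ADWS is primarily the endpoint used by the Active Directory PowerShell module and the Administrative Center, rather than a general-purpose service you would add a web reference to. From ordinary .NET code, the well-trodden route for reading AD is System.DirectoryServices over LDAP. A minimal search sketch (the domain path and filter are placeholders):

        using System;
        using System.DirectoryServices;

        // Query AD via LDAP with System.DirectoryServices (the classic route).
        using (var root = new DirectoryEntry("LDAP://DC=example,DC=local"))    // placeholder domain
        using (var searcher = new DirectorySearcher(root))
        {
            searcher.Filter = "(&(objectClass=user)(sAMAccountName=jsmith))"; // placeholder filter
            SearchResult result = searcher.FindOne();
            if (result != null)
                Console.WriteLine(result.Properties["displayName"][0]);
        }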

    Read the article

  • ASP.NET Membership API not working on Win2008 server/IIS7

    - by Program.X
    I have a very odd problem. I have a web app that uses the .NET Membership API to provide login functionality. This works fine on my local dev machine, using the WebDev 4.0 server. I'm using .NET 4.0 with some URL rewriting, but not on the pages where login is required. The server is Windows Server 2008 with IIS 7. However, the Membership API seemingly does not work on the server. I have set up remote debugging, and the LoggedIn event of the LoginUser control gets fired okay, but the MembershipUser is null. I get no message about the username/password being invalid, so it seems to be recognising it; if I enter an invalid username/password, I do get an invalid username/password response. Some code, if it helps:

        <asp:ValidationSummary ID="LoginUserValidationSummary" runat="server"
            CssClass="validation-error-list" ValidationGroup="LoginUserValidationGroup"/>
        <div class="accountInfo">
            <fieldset class="login">
                <legend>Account Information</legend>
                <p>
                    <asp:Label ID="UserNameLabel" runat="server" AssociatedControlID="UserName">Username:</asp:Label>
                    <asp:TextBox ID="UserName" runat="server" CssClass="textEntry"></asp:TextBox>
                    <asp:RequiredFieldValidator ID="UserNameRequired" runat="server" ControlToValidate="UserName"
                        CssClass="validation-error" Display="Dynamic" ErrorMessage="User Name is required."
                        ToolTip="User Name is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator>
                </p>
                <p>
                    <asp:Label ID="PasswordLabel" runat="server" AssociatedControlID="Password">Password:</asp:Label>
                    <asp:TextBox ID="Password" runat="server" CssClass="passwordEntry" TextMode="Password"></asp:TextBox>
                    <asp:RequiredFieldValidator ID="PasswordRequired" runat="server" ControlToValidate="Password"
                        CssClass="validation-error" Display="Dynamic" ErrorMessage="Password is required."
                        ToolTip="Password is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator>
                </p>
                <p>
                    <asp:CheckBox ID="RememberMe" runat="server"/>
                    <asp:Label ID="RememberMeLabel" runat="server" AssociatedControlID="RememberMe" CssClass="inline">Keep me logged in</asp:Label>
                </p>
            </fieldset>
            <p class="login-action">
                <asp:Button ID="LoginButton" runat="server" CommandName="Login" CssClass="submitButton"
                    Text="Log In" ValidationGroup="LoginUserValidationGroup"/>
            </p>

    and the code behind:

        protected void Page_Load(object sender, EventArgs e)
        {
            LoginUser.LoginError += new EventHandler(LoginUser_LoginError);
            LoginUser.LoggedIn += new EventHandler(LoginUser_LoggedIn);
        }

        void LoginUser_LoggedIn(object sender, EventArgs e)
        {
            // this code gets run so it appears logins work
            Roles.DeleteCookie(); // this behaviour has been removed for testing - no difference
        }

        void LoginUser_LoginError(object sender, EventArgs e)
        {
            HtmlGenericControl htmlGenericControl = LoginUser.FindControl("errorMessageSpan") as HtmlGenericControl;
            if (htmlGenericControl != null)
                htmlGenericControl.Visible = true;
        }

    I have "Fiddled" with the login form response and I get the following Set-Cookie headers:

        Set-Cookie: ASP.NET_SessionId=lpyyiyjw45jjtuav1gdu4jmg; path=/; HttpOnly
        Set-Cookie: .ASPXAUTH=A7AE08E071DD20872D6BBBAD9167A709DEE55B352283A7F91E1066FFB1529E5C61FCEDC86E558CEA1A837E79640BE88D1F65F14FA8434AA86407DA3AEED575E0649A1AC319752FBCD39B2A4669B0F869; path=/; HttpOnly
        Set-Cookie: .ASPXROLES=; expires=Mon, 11-Oct-1999 23:00:00 GMT; path=/; HttpOnly

    I don't know what is useful here because it is obviously encrypted, but I find the .ASPXROLES cookie having no value interesting. It seems to fail to register that cookie, but authentication passes.

    Read the article

  • Accessing Sabre Web Services using PHP

    - by Peter
    I have been approached to create a website using Sabre Web Services to power the reservations system. All the documentation I have seen refers to .NET or Java solutions, and I was in doubt whether PHP can be used, since access is performed using SOAP. I have found no further information about this; I assume the answer is yes, but I wonder why there is not a single reference to this being possible (all the solutions seem to be .NET). Any suggestions? Thanks!

    Read the article

  • ADO.NET Entity Data Model produces "namespace cannot be found"

    - by Dave
    I've seen several possible solutions to this, but none have worked for me. After adding an ADO.NET Entity Data Model to my .NET Web Forms C# project, I am unable to use it. Perhaps I made a mistake adding it? The name of the file added is QcFormData.edmx. Or perhaps I'm instantiating it incorrectly in my code? I tried adding the line:

        QcFormDataContainer db = new QcFormDataContainer();

    It appears in IntelliSense, but when compiling I get the error:

        Error 13 The type or namespace name 'QcFormDataContainer' could not be found (are you missing a using directive or an assembly reference?)

    I've followed these suggestions that I found online, but they did not help:

    1) made sure there is "using System.Data.Entity"
    2) made sure the dll exists
    3) made sure the reference exists
    4) one post said to use "using System.Web.Data.Entity", but I do not see that available

    What am I missing? QcFormData.edmx:

        <?xml version="1.0" encoding="utf-8"?>
        <edmx:Edmx Version="3.0" xmlns:edmx="http://schemas.microsoft.com/ado/2009/11/edmx">
          <!-- EF Runtime content -->
          <edmx:Runtime>
            <!-- SSDL content -->
            <edmx:StorageModels>
              <Schema Namespace="MyCocoModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2008" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/11/edm/ssdl">
                <EntityContainer Name="MyCocoModelStoreContainer">
                  <EntitySet Name="QcFieldValues" EntityType="MyCocoModel.Store.QcFieldValues" store:Type="Tables" Schema="dbo" />
                </EntityContainer>
                <EntityType Name="QcFieldValues">
                  <Key>
                    <PropertyRef Name="ID" />
                  </Key>
                  <Property Name="ID" Type="int" Nullable="false" StoreGeneratedPattern="Identity" />
                  <Property Name="FieldID" Type="nvarchar" MaxLength="100" />
                  <Property Name="FieldValue" Type="nvarchar" MaxLength="100" />
                  <Property Name="DateTimeAdded" Type="datetime" />
                  <Property Name="OrderReserveNumber" Type="nvarchar" MaxLength="50" />
                </EntityType>
              </Schema>
            </edmx:StorageModels>
            <!-- CSDL content -->
            <edmx:ConceptualModels>
              <Schema Namespace="MyCocoModel" Alias="Self" p1:UseStrongSpatialTypes="false" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns:p1="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2009/11/edm">
                <EntityContainer Name="MyCocoEntities" p1:LazyLoadingEnabled="true">
                  <EntitySet Name="QcFieldValues" EntityType="MyCocoModel.QcFieldValue" />
                </EntityContainer>
                <EntityType Name="QcFieldValue">
                  <Key>
                    <PropertyRef Name="ID" />
                  </Key>
                  <Property Name="ID" Type="Int32" Nullable="false" p1:StoreGeneratedPattern="Identity" />
                  <Property Name="FieldID" Type="String" MaxLength="100" Unicode="true" FixedLength="false" />
                  <Property Name="FieldValue" Type="String" MaxLength="100" Unicode="true" FixedLength="false" />
                  <Property Name="DateTimeAdded" Type="DateTime" Precision="3" />
                  <Property Name="OrderReserveNumber" Type="String" MaxLength="50" Unicode="true" FixedLength="false" />
                </EntityType>
              </Schema>
            </edmx:ConceptualModels>
            <!-- C-S mapping content -->
            <edmx:Mappings>
              <Mapping Space="C-S" xmlns="http://schemas.microsoft.com/ado/2009/11/mapping/cs">
                <EntityContainerMapping StorageEntityContainer="MyCocoModelStoreContainer" CdmEntityContainer="MyCocoEntities">
                  <EntitySetMapping Name="QcFieldValues">
                    <EntityTypeMapping TypeName="MyCocoModel.QcFieldValue">
                      <MappingFragment StoreEntitySet="QcFieldValues">
                        <ScalarProperty Name="ID" ColumnName="ID" />
                        <ScalarProperty Name="FieldID" ColumnName="FieldID" />
                        <ScalarProperty Name="FieldValue" ColumnName="FieldValue" />
                        <ScalarProperty Name="DateTimeAdded" ColumnName="DateTimeAdded" />
                        <ScalarProperty Name="OrderReserveNumber" ColumnName="OrderReserveNumber" />
                      </MappingFragment>
                    </EntityTypeMapping>
                  </EntitySetMapping>
                </EntityContainerMapping>
              </Mapping>
            </edmx:Mappings>
          </edmx:Runtime>
          <!-- EF Designer content (DO NOT EDIT MANUALLY BELOW HERE) -->
          <Designer xmlns="http://schemas.microsoft.com/ado/2009/11/edmx">
            <Connection>
              <DesignerInfoPropertySet>
                <DesignerProperty Name="MetadataArtifactProcessing" Value="EmbedInOutputAssembly" />
              </DesignerInfoPropertySet>
            </Connection>
            <Options>
              <DesignerInfoPropertySet>
                <DesignerProperty Name="ValidateOnBuild" Value="true" />
                <DesignerProperty Name="EnablePluralization" Value="True" />
                <DesignerProperty Name="IncludeForeignKeysInModel" Value="True" />
                <DesignerProperty Name="CodeGenerationStrategy" Value="None" />
              </DesignerInfoPropertySet>
            </Options>
            <!-- Diagram content (shape and connector positions) -->
            <Diagrams></Diagrams>
          </Designer>
        </edmx:Edmx>

    Read the article

  • HTTP Push from SQL Server — Comet SQL

    This article provides an example solution for presenting data in "real time" from Microsoft SQL Server in an HTML browser. It shows how to implement Comet functionality in ASP.NET and how to connect Comet with Query Notifications from SQL Server.
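
    On the ADO.NET side, Query Notifications are typically consumed through SqlDependency: start the listener once per AppDomain, attach a dependency to a command, and re-query (and push to waiting clients) when OnChange fires. A bare-bones sketch (the connection string, table and columns are placeholders):

        using System;
        using System.Data.SqlClient;

        const string connStr = "Data Source=.;Initial Catalog=Demo;Integrated Security=true"; // placeholder

        static void Listen()
        {
            SqlDependency.Start(connStr);   // one-time listener setup per AppDomain

            using (var conn = new SqlConnection(connStr))
            // Notification-friendly query: explicit columns, two-part table name.
            using (var cmd = new SqlCommand("SELECT OrderId, Total FROM dbo.Orders", conn))
            {
                var dep = new SqlDependency(cmd);
                dep.OnChange += (s, e) => Console.WriteLine("Data changed: " + e.Info);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* push current rows to waiting clients */ }
                }
            }
        }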

    Read the article

  • My VS 2010 and ASP.NET 4 Talks Online

    - by ScottGu
    The past 7 years I've done an annual all-day event in Arizona, organized by the most excellent Scott Cate (who always does a phenomenal job organizing the event and making it a great one). Earlier this month I visited and presented 4+ hours of content covering VS 2010, ASP.NET 4 and ASP.NET MVC 2. NextSlide.com, a great .NET shop local to Arizona with a great product for sharing presentations, volunteered to record the talks and publish them for free using their online presentation tool. The recordings they did turned out really, really great, and their online player (which combines slides + camera of me + demos in one experience) is awesome. Below you can watch the first two segments of my event, which cover VS 2010 and ASP.NET 4, for free online using the NextSlide.com player experience. I'll post a link to my ASP.NET MVC 2 segment a little later in a separate blog post. If you've never seen me present these talks before and are interested in the content, I'd recommend checking them out, as these recordings do a really good job capturing them.

    Part 1 - VS 2010

    This is a 49 minute segment that starts the event and covers a bunch of the new improvements in VS 2010. You can launch the presentation directly here or watch it inline below. You can download PowerPoint versions of my slides here.

    Part 2 - ASP.NET 4

    This 61 minute segment comes next and drills into some of the framework improvements in ASP.NET 4. It also goes further into some of the web-specific tooling improvements in VS 2010, and towards the end demonstrates some of the great new end-to-end web deployment features provided with VS 2010 (which work for both ASP.NET Web Forms and ASP.NET MVC applications). You can launch the presentation directly here or watch it inline below.

    Learning More about VS 2010 and ASP.NET 4

    I've been working on a series of blog posts about VS 2010 and .NET 4. Many of the features I covered in my two talks above are described in more detail in posts within the series; you can read all of them here. I'll be continuing to add to the series via my blog, so stay tuned for more in-depth posts about a bunch more new features.

    Hope this helps,

    Scott

    P.S. People often ask whether they can re-use the slides + demos I use in my talks for talks of their own. The answer is always absolutely! No need to ask permission; feel free to re-use all of my slides for talks of your own.

    P.P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • PathTooLongException after migrating from ASP.NET MVC 1 to ASP.NET MVC 2

    - by admax
    I had updated my app from MVC 1 to MVC 2, and after that some pages throw a PathTooLongException:

        [PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.]
           System.IO.Path.SafeSetStackPointerValue(Char* buffer, Int32 index, Char value) +7493057
           System.IO.Path.NormalizePathFast(String path, Boolean fullCheck) +387
           System.IO.Path.NormalizePath(String path, Boolean fullCheck) +36
           System.IO.Path.GetFullPathInternal(String path) +21
           System.Security.Util.StringExpressionSet.CanonicalizePath(String path, Boolean needFullPath) +73
           System.Security.Util.StringExpressionSet.CreateListFromExpressions(String[] str, Boolean needFullPath) +278
           System.Security.Permissions.FileIOPermission.AddPathList(FileIOPermissionAccess access, AccessControlActions control, String[] pathListOrig, Boolean checkForDuplicates, Boolean needFullPath, Boolean copyPathList) +87
           System.Security.Permissions.FileIOPermission..ctor(FileIOPermissionAccess access, String path) +65
           System.Web.InternalSecurityPermissions.PathDiscovery(String path) +29
           System.Web.HttpRequest.MapPath(VirtualPath virtualPath, VirtualPath baseVirtualDir, Boolean allowCrossAppMapping) +146
           System.Web.HttpRequest.MapPath(VirtualPath virtualPath) +37
           System.Web.HttpServerUtility.Execute(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage) +43
           System.Web.HttpServerUtility.Execute(IHttpHandler handler, TextWriter writer, Boolean preserveForm) +28
           System.Web.HttpServerUtilityWrapper.Execute(IHttpHandler handler, TextWriter writer, Boolean preserveForm) +22
           System.Web.Mvc.ViewPage.RenderView(ViewContext viewContext) +284
           System.Web.Mvc.WebFormView.RenderViewPage(ViewContext context, ViewPage page) +82
           System.Web.Mvc.WebFormView.Render(ViewContext viewContext, TextWriter writer) +85
           System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) +267
           System.Web.Mvc.ControllerActionInvoker.InvokeActionResult(ControllerContext controllerContext, ActionResult actionResult) +10
           System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +320
           System.Web.Mvc.Controller.ExecuteCore() +104
           System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +36
           System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7
           System.Web.Mvc.<c_DisplayClass8.b_4() +34
           System.Web.Mvc.Async.<c_DisplayClass1.b_0() +21
           System.Web.Mvc.Async.<c__DisplayClass81.<BeginSynchronous>b__7(IAsyncResult _) +12
           System.Web.Mvc.Async.WrappedAsyncResult1.End() +53
           System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +30
           System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +7
           System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8678910
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

    I know about the 260-character URL length issue in ASP.NET, but my app worked fine before the update to ASP.NET MVC 2.0!

    Read the article

  • Partial view links not working in Firefox

    - by user329540
    I have an MVC4 ASP.NET application with two layouts: a main layout for the main page and a second layout for the nested pages. The problem I have is with the second layout: on it I call a partial view which holds my navigation links. In IE the navigation menu displays fine, and when each item is clicked it navigates as expected. However, in FF, when the page renders, the navigation bar is displayed but it has no click functionality, if you will; it's as if it's simply text. My nested-page layout:

        <header>
            <img src="../../Images/fronttop.png" id="nestedPageheader" alt="Background Img"/>
            <div class="content-wrapper">
                <section >
                    <nav>
                        <div id="navcontainer">
                        </div>
                    </nav>
                </section>
                <div>
        </header>

    The script that retrieves the partial view and the information for the dynamic links on the layout page:

        <script type="text/javascript">
            var menuLoaded = false;
            $(document).ready(function () {
                if ($('#navcontainer')[0].innerHTML.trim() == "") {
                    $.ajax({
                        url: "@Url.Content("~/Home/MenuLayout")",
                        type: "GET",
                        success: function (response, status, xhr) {
                            var nvContainer = $('#navcontainer');
                            nvContainer.html(response);
                            menuLoaded = true;
                        },
                        error: function (XMLHttpRequest, textStatus, errorThrown) {
                            var nvContainer = $('#navcontainer');
                            nvContainer.html(errorThrown);
                        }
                    });
                }
            });
        </script>

    My partial view:

        @model Mscl.OpCost.Web.Models.stuffmodel
        <div class="menu">
            <ul>
                <li><a>@Html.ActionLink("Home", "Index", "Home")</a></li>
                <li><a>@Html.ActionLink("some stuff", "stuffs", "stuff")</a></li>
                <li>
                    <h5><a><span>somestuff</span></a></h5>
                    <ul>
                        <li><a>stuffs1s</a>
                            <ul>
                                @foreach (var image in Model.stuffs.Where(g => g.Grouping == 1))
                                {
                                    <li>
                                        <a>@Html.ActionLink(image.Title, "stuffs", "stuff", new { Id = image.CategoryId }, null)</a>
                                    </li>
                                }
                            </ul>
                        </li>
                    </ul>
                </il>
            </ul>
        </div>

    I need to know why this works fine in IE but not in FF (all versions). Any assistance would be appreciated.

    Read the article

  • PHP - post data ends when '&' is in data.

    - by Phil Jackson
    Hi all, I'm posting data using jQuery/AJAX with PHP at the backend. The problem is that when I input something like 'Jack & Jill went up the hill', I'm only receiving 'Jack' when it gets to the backend. I have thrown an alert at the frontend just before the data is sent, which shows 'Jack & Jill went up the hill'. When I put die(print_r($_POST)); at the very top of my index page, I'm only getting [key] => Jack. How can I be losing the data? I thought it might have been my filter:

        <?php
        function filter( $data ) {
            $data = trim( htmlentities( strip_tags( mb_convert_encoding( $data, 'HTML-ENTITIES', "UTF-8") ) ) );
            if ( get_magic_quotes_gpc() ) {
                $data = stripslashes( $data );
            }
            //$data = mysql_real_escape_string( $data );
            return $data;
        }
        echo "<xmp>" . filter("you & me") . "</xmp>";
        ?>

    but that returns fine in the test above (you &amp; me), and it only runs after the die(print_r($_POST)); I added anyway. Can anyone think of how and why this is happening? Any help much appreciated. Regards, Phil.

    Read the article

  • Forms bound to updateable ADO recordsets are not updateable when the source includes a JOIN

    - by Art
    I'm developing an application in Access 2007. It uses an .accdb front end connecting to a SQL Server 2005 back end, with forms that are bound to ADO recordsets at runtime. For the sake of efficiency, the recordsets usually contain only one record and are queried out on the server:

        Public Sub SetUpFormRecordset(cn As ADODB.Connection, rstIn As ADODB.Recordset, rstSource As String)
            Dim cmd As ADODB.Command
            Dim I As Long
            Set cmd = New ADODB.Command
            cn.Errors.Clear
            ' Recordsets based on command object Execute method are Read Only!
            With cmd
                Set .ActiveConnection = cn
                .CommandType = adCmdText
                .CommandText = rstSource
            End With
            With rstIn
                .CursorType = adOpenKeyset
                .LockType = adLockPessimistic   'Check the locktype after opening; optimistic locking is worthless on a bound
            End With                            ' form, and ADO might open optimistically without firing an error!
            rstIn.Open cmd, , adOpenKeyset, adLockPessimistic   'This should run the query on the server and return an updatable recordset
            With cn
                If .Errors.Count <> 0 Then
                    For Each errADO In .Errors
                        Call HandleADOErrors(.Errors(I))
                        I = I + 1
                    Next errADO
                End If
            End With
        End Sub

    rstSource (the string containing the TSQL on which the recordset is based) is assembled by the calling routine, in this case from the Open event of the form being bound:

        Private Sub Form_Open(Cancel As Integer)
            Dim rst As ADODB.Recordset
            Dim strSource As String, DefaultSource As String
            Dim lngID As Long
            lngID = Forms!MyParent.CurrentID
            strSource = "SELECT TOP (100) PERCENT dbo.Customers.CustomerID, dbo.Customers.LegacyID, dbo.Customers.Active, dbo.Customers.TypeID, dbo.Customers.Category, " & _
                "dbo.Customers.Source, dbo.Customers.CustomerName, dbo.Customers.CustAddrID, dbo.Customers.Email, dbo.Customers.TaxExempt, dbo.Customers.SalesTaxCode, " & _
                "dbo.Customers.SalesTax2Code, dbo.Customers.CreditLimit, dbo.Customers.CreationDate, dbo.Customers.FirstOrder, dbo.Customers.LastOrder, " & _
                "dbo.Customers.nOrders, dbo.Customers.Concurrency, dbo.Customers.LegacyLN, dbo.Addresses.AddrType, dbo.Addresses.AddrLine1, dbo.Addresses.AddrLine2, " & _
                "dbo.Addresses.City, dbo.Addresses.State, dbo.Addresses.Country, dbo.Addresses.PostalCode, dbo.Addresses.PhoneLandline, dbo.Addresses.Concurrency " & _
                "FROM dbo.Customers INNER JOIN " & _
                "dbo.Addresses ON dbo.Customers.CustAddrID = dbo.Addresses.AddrID "
            strSource = strSource & "WHERE dbo.Customers.CustomerID= " & lngID
            With Me   'Default is set up for editing one record
                If Not Nz(.RecordSource, vbNullString) = vbNullString Then
                    If .Dirty Then .Dirty = False   'Save any changes on the form
                    .RecordSource = vbNullString
                End If
                If rst Is Nothing Then   'Might not be first time through
                    DefaultSource = .RecordSource
                Else
                    rst.Close
                    Set rst = Nothing
                End If
            End With
            Set rst = New ADODB.Recordset
            Call SetUpFormRecordset(dbconn, rst, strSource)   'dbconn is a global variable
            With Me
                Set .Recordset = rst
            End With
        End Sub

    The recordset returned from SetUpFormRecordset is fully updateable, and its .Supports property shows this: it can be edited and updated in code. The entire form, however, is read-only, even though its .AllowEdits and .AllowAdditions properties are both True. Even the fields from the right-hand side (the 'many' side) cannot be edited. Removing the INNER JOIN clause from the TSQL (restricting strSource to one table) makes the form fully editable. I've verified that the TSQL includes primary key fields from both tables, and each table includes a timestamp field for concurrency. I tried changing the .CursorType and .CursorLocation properties of the recordset to no avail. What am I doing wrong?

    Read the article

  • Ajax comments form in ASP.NET MVC2

    - by Artiom Chilaru
    I've been playing around with different aspects of MVC for some time now, and I've reached a situation where I'm not sure what would be the best way to solve a problem. I'm hoping the SO community will help me out here :P

    I've seen a number of examples of Ajax.BeginForm on the internet, and it seems like a very nifty idea: e.g. you have a dropdown where you select a customer, and on selecting one it loads that client's details into some placeholder on the page. This works perfectly fine. But what to do if you want to tie some validation into the box?

    Just hypothetically, imagine an article page with user comments at the bottom, and below the comments area an ajax-y "Add comment" box. When a user adds a comment, it should appear in the comments area, below the last comment there. If I set the Ajax.BeginForm to append the result of the call to the comments area, it works fine. But what if the data posted is not valid? Instead of appending a "successful" comment to the comments area, I have to show the user validation errors.

    At this point I decided that the area inside the Ajax.BeginForm would be a partial, and the form's submits would return this partial. Validation works fine: on each submit we reload the contents inside the form element. But how do I add the successful comment to the top?

    Other things to consider: the comment form also has a "Preview" button. When the user clicks Preview, I should load the rendered comment into a preview box, which will probably be inside the form area as well.

    I was thinking of using JSON results instead. When the user submits the form, the server code would generate a JSON object with a Success value and the rendered HTML partials as properties. Something like:

        { "success": true,
          "form": "<html form data>",
          "comment": "successful comment html to inject into the page" }

    This would be a perfect solution, except there's no way in MVC to render a partial into a string inside the controller (separation of context, remember?). So... what should I do then? Any "correct" way to implement this?
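
    On the "render a partial into a string" obstacle: a widely used workaround (not an official MVC API pattern, so treat this as a sketch against the MVC 2 types) is to ask the registered view engines for the partial and render it into a StringWriter from inside the controller:

        using System.IO;
        using System.Web.Mvc;

        // Call from inside a Controller action to get a partial's HTML as a string.
        protected string RenderPartialToString(string viewName, object model)
        {
            ViewData.Model = model;
            using (var writer = new StringWriter())
            {
                ViewEngineResult result =
                    ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
                var viewContext =
                    new ViewContext(ControllerContext, result.View, ViewData, TempData, writer);
                result.View.Render(viewContext, writer);
                return writer.ToString();   // ready to drop into the JSON payload
            }
        }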

    Read the article

  • Html.DropDownListFor not behaving as expected in ASP.NET MVC

    - by rybl
    Hello, I am new to ASP.NET MVC and I am having trouble getting dropdown lists to work correctly. I have a strongly typed view that is attempting to use Html.DropDownListFor as follows:

        <%=Html.DropDownListFor(Function(model) model.Arrdep, Model.ArrdepOptions)%>

    I am populating the list with a property in my model as follows:

        Public ReadOnly Property ArrdepOptions() As List(Of SelectListItem)
            Get
                Dim list As New List(Of SelectListItem)
                Dim arriveListItem As New SelectListItem()
                Dim departListItem As New SelectListItem()
                arriveListItem.Text = "Arrive At"
                arriveListItem.Value = ArriveDepart.Arrive
                departListItem.Text = "Depart At"
                departListItem.Value = ArriveDepart.Depart
                Select Case Me.Arrdep
                    Case ArriveDepart.Arrive : arriveListItem.Selected = True
                    Case Else : departListItem.Selected = True
                End Select
                list.Add(departListItem)
                list.Add(arriveListItem)
                Return list
            End Get
        End Property

    The Select Case works fine and sets the right SelectListItem as Selected, but when my view renders the dropdown list, no matter what is marked as selected, the generated HTML has nothing selected. Am I obviously doing something wrong, or missing something? I can't for the life of me figure out what.

    Read the article

  • ASP.NET MVC Areas Application Using Multiple Projects

    - by harrisonmeister
    Hi, I have been following this tutorial: http://msdn.microsoft.com/en-us/library/ee307987(VS.100).aspx#registering_routes_in_account_and_store_areas and have an application (a bit more complex) set up like this. All the areas are working fine; however, I have noticed that if I change the project name of the Accounts project to, say, Areas.Accounts, it won't find any of my views within the Accounts project, because the area name is no longer the same as the project name. E.g. the Accounts Routes.cs file still has this:

        public override string AreaName
        {
            get { return "Accounts"; }
        }

    Does anyone know why I would have to change it to this:

        public override string AreaName
        {
            // Needs to match the project name?
            get { return "Areas.Accounts"; }
        }

    for my views in the Accounts project to work? I would really like the AreaName to still be Accounts, but for ASP.NET MVC to look in the "Views\Areas\Areas.Accounts\" folder when it's all munged into one project, rather than trying to find it within "Views\Areas\Accounts\". Thanks, Mark

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from it and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework.

    Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, Pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software as possible, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm-implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The remaining steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part, and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a share of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating, and how much they contribute to the total time taken, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze it and track the path it follows through your application. What does the code do along the way? Verify whether it's correct, and analyze whether you have implemented the right algorithms for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, analyzing means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good tool for getting a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. Two different kinds of tools are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run them, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone; remember our penny-wise-pound-foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most of the data is produced by code in the area under investigation. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and at the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and the database contribute how much to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, check whether the RDBMS vendor ships one: for SQL Server, the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; and there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This matters because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

    Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at.

    We now first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

    Interpreting means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My earlier example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the results were unexpected, and there is room for improvement in the area contributing the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that, in the future, when someone else looks at the application and starts asking questions, you can answer them properly; new analysis is then only necessary if the situation changes.

    Fix

    After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications, it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results.

    After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect, or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling included. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.

    Performance tuning is dealing with compromises and making choices: to use one feature over another, to accept a higher memory footprint, to step away from the strict-OO path and execute queries directly against the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to it. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data access / O/R mappers / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid makes the grid fetch (indirectly) the Customer row for each Order row, so for the single list you get not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading via a prefetch path to fetch the customers together with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. (A concrete sketch of the pattern follows at the end of this article.)

    - Prefetch paths using many path nodes, or sorting, or limiting: an eager-loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. Here, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the requested page). When using paging in a web application, be sure to switch server-side paging on in the datasource control used; paging on the grid alone is not enough, as that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the data-reader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data consumption happens on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse it in-memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example:

          var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c;

      This will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T> and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

    - Fetching all entities to delete into memory first: to delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query directly on the database to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity was actually removed from the DB and can thus be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.
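
    As a concrete illustration of the SELECT N+1 item above, here is a self-contained C# sketch (the repository calls are simulated, not LLBLGen Pro API) showing why a lazily loaded column turns 100 grid rows into 101 round trips:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Customer { public int Id; public string CompanyName; }
        class Order    { public int Id; public int CustomerId; }

        class SelectNPlusOneDemo
        {
            static int queries = 0;   // counts simulated database round trips

            static List<Order> FetchOrders(List<Order> db) { queries++; return db; }
            static Customer FetchCustomer(List<Customer> db, int id) { queries++; return db.First(c => c.Id == id); }

            static void Main()
            {
                var customers = new List<Customer> { new Customer { Id = 1, CompanyName = "Acme" } };
                var orders = Enumerable.Range(1, 100)
                                       .Select(i => new Order { Id = i, CustomerId = 1 })
                                       .ToList();

                // Lazy-loading shape: 1 query for the orders + 1 per order row.
                foreach (var o in FetchOrders(orders))
                {
                    var c = FetchCustomer(customers, o.CustomerId);  // fired per row
                }
                Console.WriteLine(queries);   // prints 101 for 100 grid rows

                // An eager fetch (prefetch path / JOIN) returns the same data in 1-2 queries.
            }
        }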

    Read the article

  • ASP.NET MVC Get a list of users with particular profile properties

    - by Sam Huggill
    Hi, I'm using ASP.NET MVC 1 and I have added a custom profile class using the Web Profile Builder VS add-in (found here: http://code.msdn.microsoft.com/WebProfileBuilder/Release/ProjectReleases.aspx?ReleaseId=980). On one of my forms I want a drop-down list of all users who share a specific profile value. I can see that I can get a list of all users using:

        Membership.GetAllUsers()

    However, I cannot see how to get all users who have a specific profile value, which in my case is CellId. Am I approaching this in the right way? I have used membership roles to define which users are administrators etc., but profiles seem like the right place to group users. Any pointers, both on the specifics of how to access the user list and on whether I am pursuing the right avenue here, would be greatly appreciated. Many thanks, Sam
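
    Worth knowing: the SqlProfileProvider has no built-in query over profile properties, so the brute-force route is to enumerate the membership users and load each profile; that's fine for small user counts but slow for large ones. A sketch (CellId is the property from the question; treating it as a string is an assumption):

        using System.Collections.Generic;
        using System.Web.Profile;
        using System.Web.Security;

        // Brute force: walk all users, keep those whose profile CellId matches.
        static List<MembershipUser> UsersInCell(string cellId)
        {
            var matches = new List<MembershipUser>();
            foreach (MembershipUser user in Membership.GetAllUsers())
            {
                ProfileBase profile = ProfileBase.Create(user.UserName);
                if ((string)profile.GetPropertyValue("CellId") == cellId)
                    matches.Add(user);
            }
            return matches;
        }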

    Read the article
