Search Results

Search found 109760 results on 4391 pages for 'ado net entity data model'.


  • Add note model in Rails

    - by dannymcc
    Hi everyone, I am following the 15-minute blog tutorial on rubyonrails.com (http://media.rubyonrails.org/video/rails_blog_2.mov) and am stumbling into some issues. I am using the following alterations to the names in the tutorial: posts = kases, comments = notes. I have set up the models as follows:

        class Kase < ActiveRecord::Base
          validates_presence_of :jobno
          has_many :notes
          belongs_to :company # foreign key: company_id
          belongs_to :person  # foreign key in join table
          belongs_to :surveyor, :class_name => "Company", :foreign_key => "appointedsurveyor_id"
          belongs_to :surveyorperson, :class_name => "Person", :foreign_key => "surveyorperson_id"

          def to_param
            jobno
          end
        end

    and...

        class Note < ActiveRecord::Base
          belongs_to :kase
        end

    The Notes controller looks like this:

        # POST /notes
        # POST /notes.xml
        def create
          @kase = Kase.find(params[:kase_id])
          @note = @kase.notes.build(params[:note])
          redirect_to @kase
        end

    The database schema for notes looks like this:

        create_table "notes", :force => true do |t|
          t.integer  "kase_id"
          t.text     "body"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

    and for kases...

        create_table "kases", :force => true do |t|
          t.string   "jobno"
          t.date     "dateinstructed"
          t.string   "clientref"
          t.string   "clientcompanyname"
          t.text     "clientcompanyaddress"
          t.string   "clientcompanyfax"
          t.string   "casehandlername"
          t.string   "casehandlertel"
          t.string   "casehandleremail"
          t.text     "casesubject"
          t.string   "transport"
          t.string   "goods"
          t.string   "claimantname"
          t.string   "claimantaddressline1"
          t.string   "claimantaddressline2"
          t.string   "claimantaddressline3"
          t.string   "claimantaddresscity"
          t.string   "claimantaddresspostcode"
          t.string   "claimantcontact"
          t.string   "claimanttel"
          t.string   "claimantmob"
          t.string   "claimantemail"
          t.string   "claimanturl"
          t.string   "lyingatlocationname"
          t.string   "lyingatlocationaddressline1"
          t.string   "lyingatlocationaddressline2"
          t.string   "lyingatlocationaddressline3"
          t.string   "lyingatlocationaddresscity"
          t.string   "lyingatlocationaddresspostcode"
          t.string   "lyingatlocationcontactname"
          t.string   "lyingattel"
          t.string   "lyingatmobile"
          t.string   "lyingatlocationurl"
          t.text     "comments"
          t.string   "invoicenumber"
          t.string   "netamount"
          t.string   "vat"
          t.string   "grossamount"
          t.date     "dateclosed"
          t.date     "datepaid"
          t.datetime "filecreated"
          t.string   "avatar_file_name"
          t.string   "avatar_content_type"
          t.integer  "avatar_file_size"
          t.datetime "avatar_updated_at"
          t.datetime "created_at"
          t.datetime "updated_at"
          t.string   "kase_status"
          t.string   "invoice_date"
          t.integer  "surveyorperson_id"
          t.integer  "appointedsurveyor_id"
          t.integer  "person_id"
          t.string   "company_id"
          t.string   "dischargeamount"
          t.string   "dishchargeheader"
          t.text     "highrisesubject"
        end

    Whenever I enter a note into the kase show view's note entry form:

        <h2>Notes</h2>
        <div id="sub-notes">
          <%= render :partial => @kase.notes %>
        </div>

        <% form_for [@kase, Note.new] do |f| %>
          <p>
            <%= f.label :body, "New Note" %><br />
            <%= f.text_area :body %>
          </p>
          <p><%= f.submit "Add Note" %></p>
        <% end %>

    partial:

        <% div_for note do %>
          <p>
            <strong>Created <%= time_ago_in_words(note.created_at) %> ago</strong><br />
            <%= h(note.body) %>
          </p>
        <% end %>

    I get the following error:

        ActiveRecord::RecordNotFound in NotesController#create
        Couldn't find Kase with ID=Test Case

    I have tried removing the to_param override from the Kase model, but the same error shows. Any ideas what I'm missing? Thanks, Danny

    Read the article

  • Calling Class from another File ASP.NET VB.NET

    - by davemackey
    Let's say I have a class like this in class1.vb:

        Public Class my_class
            Public Sub my_sub()
                Dim myvar As String
                myvar = 10
                Session("myvar") = myvar
            End Sub
        End Class

    Then I have an ASP.NET page with a code-behind file, default.aspx and default.aspx.vb, and I want to call my_class. I'm doing the following, but it doesn't work:

        Imports my_app.my_class

        Partial Public Class _default
            Inherits System.Web.UI.Page

            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                my_class()
            End Sub
        End Class

    I get a "Reference to a non-shared member requires an object reference" error.

    Read the article

  • asp.net "network BIOS command limit has been reached" ASP.NET 2.0 + 3.5

    - by Fermin
    Hi, I'm trying to run the tinyMCE text editor in ASP.NET 2.0 + 3.5, but I get the following error referring to my web.config file:

        An error occurred loading a configuration file: Failed to start monitoring changes to
        '###\Visual Studio 2005\WebSites\TinyMCE\tinymce\jscripts\tiny_mce\langs' because the
        network BIOS command limit has been reached. For more information on this error, please
        refer to Microsoft knowledge base article 810886. Hosting on a UNC share is not supported
        for the Windows XP Platform.

    Any ideas how to solve this?

    Read the article

  • PHP cannot find .net assembly in the .net-4 GAC

    - by stacker
        <?php
        $console = new DOTNET("ProjectX.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=KeyTokenHere (I used a real value here)",
                              "ProjectX.Core.Services.ServicX");
        echo '1';
        ?>

    The error message:

        Fatal error: Uncaught exception 'com_exception' with message 'Failed to instantiate .Net object
        [CreateInstance] [0x80070002] The system cannot find the file specified.

    What can the problem be?

    Read the article

  • trace an asp.net website in production - c#/asp.net

    - by uno
    Is there a way that I can trace every method call, basically a line trace, in an ASP.NET web site in a production environment? I don't want to go about creating DB logging for every line - I see an intermittent error and would like to see every line called and performed by the website, per user.

    Read the article

  • WP7 “Phantom Data” Source Possibly Revealed?

    - by Bil Simser
    Recently there’s been rumours floating around regarding “phantom” Windows Phone 7 data being magically sent and received on the latest WP7 phones. The news has mostly been floating around twitter so I didn’t pay it much attention. The BBC Technology News picked it up so I thought I would look more into it myself, seeing that we have WP7 phones and maybe there was some truth to all this (and more importantly what was the cause).

    Full disclosure: I don’t have a lot of data points around this. This is from looking at a few phone logs, changing the configuration and looking back again after the change. I haven’t done a clean baseline test nor have I done testing with hundreds of phones. I leave the experience up to the reader to decide.

    So I went spelunking into the phone logs to see what was up. Most providers will show you data usage, at least on a daily basis. I lucked out with the provider and plan in that they provide hourly breakdowns. Here’s a snapshot from my usage throughout one night.

        Timestamp      Data Usage
        12:38:30 AM    2098 Kilobytes
        1:30:30 AM        2 Kilobytes
        2:38:30 AM     7118 Kilobytes
        3:38:30 AM     6622 Kilobytes
        4:38:30 AM       76 Kilobytes
        5:38:30 AM       29 Kilobytes
        6:38:30 AM       19 Kilobytes
        7:38:30 AM       20 Kilobytes

    So a few observations from this data:

    • Data seems to be collected on a regular basis. Looking at some other people's phone logs, the times vary but it’s always hourly.
    • There’s not a tremendous amount of data here (about 16 megabytes), but it seems like a lot for 7 hours.
    • The phone was connected to my home WiFi during this period.
    • Nothing was running and the phone was in a locked state.

    Like I said, not a lot of data but it adds up. 16MB over 7 hours is about 50MB in a 24 hour period. That’s just plain old data being collected (somewhere, somehow) and not actual usage (Marketplace, Email, Browsing, etc.). Besides, when connected to a WiFi network you shouldn’t be charged data usage by your phone company (in theory, YMMV).

    After reviewing the logs I formed a theory that the only thing that could possibly be sending data is the Feedback feature. With no other apps running under lock, what else could it be? On Windows Phone 7, the last option under your Settings is Feedback. This sends feedback to Microsoft to “help improve Windows Phone”. On this page you have three options:

    • Send feedback and use my cellular data connection
    • Send feedback and (presumably) use my WiFi connection
    • Don’t send feedback

    Knowing what I know about Microsoft, they do use the feedback data. For example some of the placement and inclusion of features in Office 2007 was based on the feedback data that Office sends (assuming you had opted in). However in the Privacy Statement (it’s long but a good read at least once in your life), the phone manual, and every other source I could look at, there is no information about how much data it’s planning to send, just that it’s sending some data and that “some data charges with your carrier may apply”.

    Looking back at the logs, I have to wonder. 6MB at 3:30 and *then* 7MB the next hour. That’s a lot of information. And it adds up. 50MB in a 24 hour period x 30 days puts most people over a normal 1GB plan. And frankly why am I paying for a data plan only to have 80% of it chewed up by Microsoft, with no real benefit to me. If they included porn in the 50MB daily transfer I’d be okay with this, but I don’t see any new movies on my phone.

    So I turned it off. Set Feedback to disabled and wait. I waited. And waited. And generally didn’t use the phone if I could.

    The next day I went back to look at the data usage logs from the time period after turning the feedback mechanism off. Here are the results.

        Timestamp      Data Usage
        1:19:48 PM        0 Kilobytes
        2:19:48 PM        0 Kilobytes
        3:19:48 PM        0 Kilobytes
        4:19:48 PM      678 Kilobytes (took a phone call)
        5:19:48 PM       82 Kilobytes
        6:19:48 PM       88 Kilobytes
        7:20:30 PM       86 Kilobytes (guess they changed their reporting time)
        8:20:30 PM       86 Kilobytes
        9:20:30 PM       66 Kilobytes
        10:20:30 PM      67 Kilobytes
        11:20:30 PM      49 Kilobytes
        12:20:30 AM      32 Kilobytes
        1:20:30 AM       38 Kilobytes
        2:20:31 AM       18 Kilobytes
        3:20:31 AM       27 Kilobytes
        4:20:31 AM       86 Kilobytes
        5:20:31 AM       53 Kilobytes
        6:20:31 AM       22 Kilobytes
        7:22:15 AM       30 Kilobytes (another reporting time change)
        8:22:15 AM       29 Kilobytes
        9:22:15 AM       74 Kilobytes
        10:22:15 AM     154 Kilobytes (phone call)
        11:22:15 AM      12 Kilobytes
        12:13:27 PM      49 Kilobytes
        1:13:27 PM      197 Kilobytes (phone call)

    Quite a *drastic* change from when Feedback was turned on. I mean, for a 24 hour period (sans 3 phone calls) I consumed about 1MB. Still quite a bit of transfer going on, but at least it amounts to 30MB per month, not 30MB per day! Like I said, this observation is neither scientific nor conclusive. You decide what to do, but frankly until Microsoft makes this data transfer exempt from your data plan (like that will happen) I would just turn Feedback off. YMMV.

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall.

    Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s).

    The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB and other NoSQL (Not Only SQL) databases, or a graph database like Neo4j, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures.

    What about speed (Velocity)? It’s not just high-frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data-insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch-driven queries and job-processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast-moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI?

    Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs, data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did with the good old RDBMS?

    What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • ASP.NET MVC WebService - Security for Industrial Android Clients

    - by Chris Nevill
    I'm trying to design a system that will allow a bunch of Android devices to securely log into an ASP.NET MVC REST web service. At present neither side is implemented. However, there is an ASP.NET MVC website which the web service will sit alongside. This is currently using forms authentication.

    The idea is that the Android devices will download data from the web service and then be able to work offline, storing data in their own local databases, where users will be able to make updates to that data and then sync updates back to the main server where possible. The web service will be using HTTPS to reduce the risk of calls being intercepted.

    The system is an industrial system and will not be used by the general Android population; only authorized Android devices will be allowed by the web service to make calls. As such I was thinking of using the Android device's serial number as a username, plus a generated long password which the device will be able to pick up once the device has been authorized server side. The devices will also have user logins, but these will not be to log into the web service, just the device itself, since the device and user must be able to work offline. So usernames and passwords will be downloaded and stored on the devices themselves.

    My question is... what form of security is best set up on the web service? Should it use forms authentication? Should the username and password just be passed in with each GET/POST call, or should it start a session as I have with the website?

    The Android side causes more confusion. There seem to be a number of options here: Spring-Android, Volley, Retrofit, LoopJ, and RoboSpice, which seems to use the aforementioned Spring, Retrofit or Google HttpClient. I'm struggling to find a simple example which authenticates with a forms-based authentication system. Is this because I'm going about this wrong? Is there another option that would better suit this?
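
    One stateless option for the service side, sketched here under stated assumptions rather than as a definitive design: skip forms authentication and sessions entirely, and have each HTTPS call carry the device serial and generated password in a standard Basic Authorization header, checked by a custom authorization attribute. IDeviceStore is a hypothetical repository for the server-side list of authorized devices, and DependencyResolver assumes ASP.NET MVC 3 or later.

        using System;
        using System.Text;
        using System.Web;
        using System.Web.Mvc;

        // Hypothetical lookup against the authorized-device table.
        public interface IDeviceStore
        {
            bool IsAuthorized(string serial, string password);
        }

        public class DeviceBasicAuthAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                string header = httpContext.Request.Headers["Authorization"];
                if (string.IsNullOrEmpty(header) || !header.StartsWith("Basic "))
                    return false;

                // Decode the Base64 payload into "serial:password".
                string[] parts = Encoding.UTF8.GetString(
                    Convert.FromBase64String(header.Substring(6))).Split(new[] { ':' }, 2);
                if (parts.Length != 2)
                    return false;

                var store = DependencyResolver.Current.GetService<IDeviceStore>();
                return store.IsAuthorized(parts[0], parts[1]);
            }
        }

    Decorating the REST actions with [DeviceBasicAuth] removes any dependence on cookies or session state, which tends to suit intermittently connected clients like the ones described above.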

    Read the article

  • Identity in .NET 4.5 – Part 2: Claims Transformation in ASP.NET (Beta 1)

    - by Your DisplayName here!
    In my last post I described how every identity in .NET 4.5 is now claims-based. If you are coming from WIF you might think, great – how do I transform those claims?

    Sidebar: What is claims transformation? One of the most essential features of WIF (and .NET 4.5) is the ability to transform credentials (or tokens) to claims. During that process the “low level” token details are turned into claims. An example would be a Windows token – it contains things like the name of the user and the groups he belongs to. That information will be surfaced as claims of type Name and GroupSid. Forms users will be represented as a Name claim (all the other claims that WIF provided for FormsIdentity are gone in 4.5). The issue here is that your applications typically don’t care about those low level details, but rather about “what’s the purchase limit of alice”. The process of turning the low level claims into application specific ones is called claims transformation. In pre-claims times this would have been done by a combination of Forms Authentication extensibility, role manager and maybe ASP.NET profile. With claims transformation all your identity gathering code is in one place (and the outcome can be cached in a single place as opposed to multiple ones).

    The central class for claims transformation is called ClaimsAuthenticationManager. This class has two purposes – first, looking at the incoming (low level) principal and making sure all required information about the user is present. This is your first chance to reject a request. And second, modeling identity information in a way that is relevant for the application (see also here). This class gets called (when present) during the pipeline when using WS-Federation. But not when using the standard .NET principals. I am not sure why – maybe because it is beta 1. Anyhow, a number of people asked me about it, and the following is a little HTTP module that brings that feature back in 4.5.

        public class ClaimsTransformationHttpModule : IHttpModule
        {
            public void Dispose()
            { }

            public void Init(HttpApplication context)
            {
                context.PostAuthenticateRequest += Context_PostAuthenticateRequest;
            }

            void Context_PostAuthenticateRequest(object sender, EventArgs e)
            {
                var context = ((HttpApplication)sender).Context;

                // no need to call transformation if session already exists
                if (FederatedAuthentication.SessionAuthenticationModule != null &&
                    FederatedAuthentication.SessionAuthenticationModule.ContainsSessionTokenCookie(context.Request.Cookies))
                {
                    return;
                }

                var transformer = FederatedAuthentication.FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager;
                if (transformer != null)
                {
                    var transformedPrincipal = transformer.Authenticate(context.Request.RawUrl, context.User as ClaimsPrincipal);

                    context.User = transformedPrincipal;
                    Thread.CurrentPrincipal = transformedPrincipal;
                }
            }
        }

    HTH
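
    For completeness, here is a minimal sketch of what the transformer the module invokes could look like. The override is the real .NET 4.5 API (System.Security.Claims.ClaimsAuthenticationManager); the purchase-limit claim type and the lookup are invented stand-ins for application-specific logic.

        using System.Security.Claims;

        public class AppClaimsTransformer : ClaimsAuthenticationManager
        {
            public override ClaimsPrincipal Authenticate(
                string resourceName, ClaimsPrincipal incomingPrincipal)
            {
                // First chance to reject: require an authenticated identity.
                if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
                {
                    return base.Authenticate(resourceName, incomingPrincipal);
                }

                var name = incomingPrincipal.Identity.Name;

                // Model identity the way the application needs it,
                // replacing the low-level claims.
                var identity = new ClaimsIdentity("ApplicationTransformation");
                identity.AddClaim(new Claim(ClaimTypes.Name, name));
                identity.AddClaim(new Claim("http://myapp/claims/purchaselimit",
                    LookupPurchaseLimit(name)));

                return new ClaimsPrincipal(identity);
            }

            // Stand-in for a profile/database lookup.
            private static string LookupPurchaseLimit(string userName)
            {
                return "1000";
            }
        }

    In the RTM bits a transformer like this is registered via the claimsAuthenticationManager element under system.identityModel in config, which is how the FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager property in the module above resolves it (element names may differ in beta 1).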

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage (sometimes referred to as ‘provenance’ or ‘pedigree’)?

    The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came.

    You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance.

    Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate.

    However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be a data analysis environment like R, an application, or even directly into an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with whatever tool they need.

    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs, keep your audit data, from every source, bring them together and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • Should I be learning Linq, Direct SQL Commands (in .net), EF or other?

    - by Wil
    Basically, I have a very good knowledge of plain old SQL coming from Classic ASP programming. Over the past couple of months, I have been learning C#, and today was my first full day at MVC 3 (Razor), which I am loving! I need to get back into databases, and I know that writing SqlCommand everywhere is obviously outdated (although it is nice I can still do it!).

    I used to go to a great user group as an IT pro, and the developer stuff went completely over my head; however, I do remember a few things which kept coming up, such as LINQ... However, that was some time ago, and now the same people on Twitter are saying how outdated it is. I have tried to do research on both, and I am clueless as to what direction I should go in, or when to use one over another (if learning both is a good thing). I am even more confused because I thought EF was a part of the .NET Framework; however, reading through the quick start guide, I had to download a component using NuGet.

    ... Basically I am out of my depth here and just need some honest advice of where to go!
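
    To make the vocabulary concrete, here is a small illustrative comparison (a sketch, not from the question): LINQ is the query syntax built into C#, while Entity Framework is the ORM that translates LINQ queries into SQL. The Product/ShopContext types are invented, and the DbContext API matches EF 4.1+, which is the NuGet download the quick start guide refers to.

        using System.Data.Entity;
        using System.Data.SqlClient;
        using System.Linq;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class ShopContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }

        public static class Demo
        {
            // The "plain old SQL" way, via SqlCommand.
            public static string ClassicAdo(int id, string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT Name FROM Products WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    conn.Open();
                    return (string)cmd.ExecuteScalar();
                }
            }

            // The same lookup as a LINQ query over an EF context;
            // EF generates and executes the SQL.
            public static string WithEf(int id)
            {
                using (var db = new ShopContext())
                {
                    return db.Products.Where(p => p.Id == id).Select(p => p.Name).Single();
                }
            }
        }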

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Exposing Data from Entity Framework

    To continue our series, I wanted to look next at how to expose your data from the server side of your application. The interesting data in your business applications comes from a wide variety of data sources: a SQL database, Oracle DB, SQL Azure, SharePoint, a mainframe. And you have likely already chosen a data model such as NHibernate, Linq2Sql, Entity Framework, stored procedures, or a service. The goal of RIA Services in this release is to make it easy to...

    Read the article

  • Multi tenant membership provider ASP.NET MVC

    - by Masna
    Hello, I'm building a multi-tenant app with ASP.NET MVC and have a problem with validating users.

    The situation I have:

    • a table User(ID, Name, FirstName, Email). This table exists so that a user who is registered in two tenants doesn't need to log in again.
    • a table TenantUser(ID, TenantID, UserID (FK to table User), UserName, LoginName, Password, Active). This table contains the login and password for one tenant.

    Example: UserX is registered in TenantA and TenantB.

    1. UserX logs in on TenantA, with his login and password for TenantA.
    2. The system verifies whether the login and password are correct in the table TenantUser.
    3. The system resolves UserX's UserID, which corresponds to the ID in the table User.
    4. UserX goes to TenantB and is automatically logged in.

    My problem: how can I create a custom provider so I can check the login & password within a tenant? For example:

        public abstract bool ValidateUser(string username, string password);

    How can I tell my provider which tenant the user is on? How can I change this into something like:

        public override bool ValidateUser(string username, string password, string tenant);

    Or what is another way to solve this issue?
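
    The MembershipProvider base class fixes the two-argument signature, so the tenant has to arrive some other way. Here is a minimal sketch of one approach (an assumption, not a drop-in answer, shown as a plain service class because a real MembershipProvider subclass must also implement many unrelated members): resolve the tenant from the request, e.g. the subdomain, and keep a three-argument overload for explicit calls. IUserStore is a hypothetical stand-in for the TenantUser table.

        using System.Web;

        public interface IUserStore
        {
            bool CheckTenantCredentials(string tenant, string username, string password);
        }

        public class TenantLoginService
        {
            private readonly IUserStore store;

            public TenantLoginService(IUserStore store)
            {
                this.store = store;
            }

            // The overload the question asks for: explicit tenant.
            public bool ValidateUser(string username, string password, string tenant)
            {
                // Look up TenantUser by (tenant, login) and compare credentials.
                return store.CheckTenantCredentials(tenant, username, password);
            }

            // Standard two-argument shape: infer the tenant from the request,
            // e.g. from the subdomain, so existing call sites keep working.
            public bool ValidateUser(string username, string password)
            {
                string host = HttpContext.Current.Request.Url.Host;
                string tenant = host.Split('.')[0];
                return ValidateUser(username, password, tenant);
            }
        }

    The same host-based resolution can be done inside a real MembershipProvider.ValidateUser(username, password) override if the stock Membership API must be kept.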

    Read the article

  • Problems with MembershipUser / System.Web.ApplicationServices when upgrading to .net 4

    - by DaveK
    I have a large VB.NET web project that I am trying to upgrade to .NET 4/VS2010. During compile I get the following error:

        'System.Web.Security.MembershipUser' in assembly 'System.Web, Version=4.0.0.0, Culture=neutral,
        PublicKeyToken=b03f5f7f11d50a3a' has been forwarded to assembly 'System.Web.ApplicationServices'.
        Either a reference to 'System.Web.ApplicationServices' is missing from your project or the type
        'System.Web.Security.MembershipUser' is missing from assembly 'System.Web.ApplicationServices'.

    I researched the issue and the error is accurate. I added a reference to System.Web.ApplicationServices but I am still having problems. The project does not seem to recognize that the reference has been added: IntelliSense will not pick it up, I cannot use it in an Imports statement, etc. The assembly is listed in the compilation section of my web.config:

        <assemblies>
          ...
          <add assembly="System.Web.ApplicationServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        </assemblies>

    Any ideas?

    Read the article

  • asp.net ajax control toolkit combobox displays incorrectly when in fieldset with style of position:relative

    - by Jon P
    I currently have an instance of the ASP.NET AJAX Control Toolkit combobox residing in a fieldset with a style of position:relative applied. The control also sits in a very plain table (please, no comments about using tables for layout; I know it is evil and try to avoid it). There are two problems with the display of the list:

    1. The list does not sit flush with the text box. In IE7 (the majority of my target audience; this is an intranet where IE7 is the company standard) the list displays about 10px below the fieldset, which is what the bottom margin of the fieldset is set to. In FF 2.0 the list sits significantly lower and offset to the right.
    2. Below the fieldset there is more content in a div, also with a style of position:relative applied. The list from the combobox displays behind the content of this div, which is obviously an issue.

    Removing position:relative from the fieldset resolves the display issue of the combobox, but results in other unwanted display side effects. My interim workaround is to specifically restyle this fieldset without the position:relative style, but I'm hoping for a better solution. Thanks

    Read the article

  • RSACryptoServiceProvider CryptographicException System Cannot Find the File Specified under ASP.NET

    - by Will Hughes
    I have an application which is making use of the RSACryptoServiceProvider to decrypt some data using a known private key (stored in a variable). When the IIS application pool is configured to use Network Service, everything runs fine. However, when we configure the IIS application pool to run the code under a different identity, we get the following:

        System.Security.Cryptography.CryptographicException: The system cannot find the file specified.
          at System.Security.Cryptography.Utils.CreateProvHandle(CspParameters parameters, Boolean randomKeyContainer)
          at System.Security.Cryptography.RSACryptoServiceProvider.ImportParameters(RSAParameters parameters)
          at System.Security.Cryptography.RSA.FromXmlString(String xmlString)

    The code is something like this:

        byte[] input;
        byte[] output;
        string private_key_xml;

        var provider = new System.Security.Cryptography.RSACryptoServiceProvider(this.m_key.Key_Size);
        provider.FromXmlString(private_key_xml); // Fails here when application pool identity != Network Service
        output = provider.Decrypt(input, false); // false = use PKCS#1 v1.5 padding

    There are resources which attempt to answer this by stating that you should give the user read access to the machine key store; however, there is no definitive answer to solve this issue.

    Environment: IIS 6.0, Windows Server 2003 R2, .NET 3.5 SP1
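
    For what it's worth, this failure pattern usually comes down to the CSP trying to create a key container in the user profile of the new identity, which is not loaded for an IIS worker process. A commonly suggested fix, sketched here as an assumption rather than a verified answer for this exact setup, is to point the provider at the machine-level key store:

        using System.Security.Cryptography;

        public static RSACryptoServiceProvider CreateRsa(int keySize, string privateKeyXml)
        {
            // Use the machine key store so no user profile needs to be
            // loaded for the application pool identity.
            var cspParams = new CspParameters { Flags = CspProviderFlags.UseMachineKeyStore };

            var provider = new RSACryptoServiceProvider(keySize, cspParams);
            provider.FromXmlString(privateKeyXml);
            return provider;
        }

    The alternative direction, granting the identity access to (and a loaded profile for) its user key store, is what the "read access" resources mentioned above are getting at.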

    Read the article

  • Change the default SqlCommand CommandTimeout with configuration rather than recompile?

    - by robertc
    I am supporting an ASP.NET 3.5 web application, and users are experiencing a timeout error after 30 seconds when trying to run a report. Looking around the web, it seems it's easy enough to change the timeout in the code; unfortunately I'm not able to access the code and recompile. Is there any way to configure the default for either the web app, the worker process, IIS or the whole machine?

    Here is the stack trace up to the point where it's in System.Data, in case I'm missing some other problem:

        [SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.]
          System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1948826
          System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4844747
          System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
          System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
          System.Data.SqlClient.SqlDataReader.ConsumeMetaData() +33
          System.Data.SqlClient.SqlDataReader.get_MetaData() +83
          System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +297
          System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +954
          System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162
          System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +32
          System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +141
          System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +12
          System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) +10
          System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) +130
          System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior) +162
          System.Data.Common.DbDataAdapter.Fill(DataTable dataTable) +115

    Edit: There must be something outside the code itself. I've downloaded the database and run it against the same web site installed on a test server, and it runs for longer than 30 seconds and returns the report. I've compared the machine.config and web.config files from the .NET directory on the live and test servers and they seem the same, and compared the two IIS setups. I also looked at the SQL Server configuration, and the only difference is that the live server is clustered on 64-bit W2K3 while the test server is on 32-bit.
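
    Background that frames the question (general .NET behaviour, not a solution found in this thread): SqlCommand.CommandTimeout defaults to 30 seconds and has no machine.config or web.config equivalent, so it can only be changed on the command object itself. If the source could be rebuilt just once, a pattern like the sketch below would at least move the knob into configuration from then on; the appSettings key name is invented.

        using System.Configuration;
        using System.Data.SqlClient;

        public static class CommandFactory
        {
            public static SqlCommand CreateReportCommand(string sql, SqlConnection conn)
            {
                var cmd = new SqlCommand(sql, conn);

                // "ReportCommandTimeout" is a hypothetical appSettings key;
                // fall back to the framework default of 30 seconds.
                cmd.CommandTimeout = int.Parse(
                    ConfigurationManager.AppSettings["ReportCommandTimeout"] ?? "30");

                return cmd;
            }
        }

    Note that the connection string's Connect Timeout setting governs only how long opening the connection may take, not query execution, so changing it would not help here.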

    Read the article

  • Core Data Error "Fetch Request must have an entity"

    - by Graeme
    Hi, I've attempted to add the TopSongs parser and Core Data files into my application, and it now builds successfully, with no errors or warning messages. However, as soon as the app loads, it crashes, giving the following reason:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: 'executeFetchRequest:error: A fetch request must have an entity.'

    I have renamed all files, including the .xcdatamodel file. Could this be the problem (the renaming of the .xcdatamodel)? I'm assuming this error means that no data can be found. Thanks.

    Read the article

  • Where does lucene .net cache the search results?

    - by Lanceomagnifico
    Hi, I'm trying to figure out where Lucene stores its cached query results, how it's configured to do so, and how long it caches for. This is for an ASP.NET 3.5 solution. I'm getting this problem: if I run a search and sort the results by a particular product field, it seems to work the very first time each search-and-sort combination is used. If I then go in and change some product attributes, reindex and run the same search and sort, I get the products returned in the same order as the very first result.

    Example:

        Product A is named: foo
        Product B is named: bar

    For the first search, sort by name descending. This results in:

        Product A
        Product B

    Now mix up the data a bit. Change the names to:

        Product A named: bar
        Product B named: foo

    Reindex, and verify that the index contains the changes for these two products. Search. Result:

        Product A
        Product B

    Since I changed the alphabetical order of the names, I expected:

        Product B
        Product A

    So I think that Lucene is caching the search results (which, by the way, would be a very good thing). I just need to know where/how to clear these results. I've tried deleting the index files and doing an iisreset to clear the memory, but it seems to have no effect. So I'm thinking there is another set of Lucene files outside of the indexes that Lucene uses for caching.

    EDIT: I just found out that you must index the field you wish to sort on as untokenized. I had the field tokenized, so sorting didn't work.
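
    A sketch of what the EDIT describes, using the Lucene.Net 2.x-era API that an ASP.NET 3.5 project would likely have (field names and the sorting scheme are assumptions): keep a tokenized copy of the field for searching, and a separate untokenized copy purely for sorting.

        using Lucene.Net.Documents;
        using Lucene.Net.Search;

        public static class SortableIndexing
        {
            public static Document BuildProductDoc(string productName)
            {
                var doc = new Document();

                // Tokenized copy for full-text matching:
                doc.Add(new Field("name", productName,
                    Field.Store.YES, Field.Index.TOKENIZED));

                // Untokenized copy used only for sorting:
                doc.Add(new Field("nameSort", productName,
                    Field.Store.NO, Field.Index.UN_TOKENIZED));

                return doc;
            }

            // Pass this Sort to the IndexSearcher overload that accepts one;
            // true = descending order.
            public static Sort NameDescending()
            {
                return new Sort(new SortField("nameSort", SortField.STRING, true));
            }
        }

    Sorting on a tokenized field reads individual terms rather than the whole value, which is why the order looked stale and arbitrary rather than cached.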

    Read the article

  • Custom ViewModel with MVC 2 Strongly Typed HTML Helpers return null object on Create ?

    - by Barbaros Alp
    Hi, I am having trouble while trying to create an entity with a custom-view-modeled create form. Below is my custom view model for the category creation form:

        public class CategoryFormViewModel
        {
            public CategoryFormViewModel(Category category, string actionTitle)
            {
                Category = category;
                ActionTitle = actionTitle;
            }

            public Category Category { get; private set; }
            public string ActionTitle { get; private set; }
        }

    and this is my user control where the UI is:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<CategoryFormViewModel>" %>

        <h2>
            <span><%= Html.Encode(Model.ActionTitle) %></span>
        </h2>

        <%= Html.ValidationSummary() %>

        <% using (Html.BeginForm()) {%>
            <p>
                <span class="bold block">Baslik:</span>
                <%= Html.TextBoxFor(model => Model.Category.Title, new { @class = "width80 txt-base" }) %>
            </p>
            <p>
                <span class="bold block">Sira Numarasi:</span>
                <%= Html.TextBoxFor(model => Model.Category.OrderNo, new { @class = "width10 txt-base" }) %>
            </p>
            <p>
                <input type="submit" class="btn-admin cursorPointer" value="Save" />
            </p>
        <% } %>

    When I click on the save button, it doesn't bind the Category for me, because I am using a custom view model and strongly typed HTML helpers like this:

        <%= Html.TextBoxFor(model => Model.Category.OrderNo) %>

    How can I fix this? Thanks in advance
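
    A plausible fix, sketched below as an assumption rather than a confirmed answer: TextBoxFor(model => Model.Category.Title) renders inputs named "Category.Title", and the view model itself has private setters and no parameterless constructor, so the default model binder cannot rebuild it on POST. Binding the inner Category directly with a matching prefix sidesteps both problems; the repository interface is a hypothetical placeholder.

        using System.Web.Mvc;

        public interface ICategoryRepository
        {
            void Add(Category category);
        }

        public class CategoriesController : Controller
        {
            private readonly ICategoryRepository categoryRepository;

            public CategoriesController(ICategoryRepository repository)
            {
                categoryRepository = repository;
            }

            [HttpPost]
            public ActionResult Create([Bind(Prefix = "Category")] Category category)
            {
                // The "Category" prefix matches the field names the
                // strongly typed helpers generated in the view.
                if (!ModelState.IsValid)
                    return View(new CategoryFormViewModel(category, "Create"));

                categoryRepository.Add(category); // hypothetical persistence call
                return RedirectToAction("Index");
            }
        }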

    Read the article

  • Problem binding action parameters using FCKeditor, AJAX and ASP.NET MVC

    - by TonE
    I have a simple ASP.NET MVC view which contains an FCKeditor text box (created using FCKeditor's JavaScript ReplaceTextarea() function). These are included within an Ajax.BeginForm helper:

        <% using (Ajax.BeginForm("AddText", "Letters", new AjaxOptions() { UpdateTargetId = "addTextResult" })) {%>
            <div>
                <input type="submit" value="Save" />
            </div>
            <div>
                <%= Html.TextArea("testBox", "Content", new { @name = "testBox" }) %>
                <script type="text/javascript">
                    window.onload = function() {
                        var oFCKeditor = new FCKeditor('testBox');
                        var sBasePath = '<%= Url.Content("~/Content/FCKeditor/") %>';
                        oFCKeditor.BasePath = sBasePath;
                        oFCKeditor.ToolbarSet = "Basic";
                        oFCKeditor.Height = 400;
                        oFCKeditor.ReplaceTextarea();
                    }
                </script>
                <div id="addTextResult">
                </div>
        <%} %>

    The controller action handling this is:

        [ValidateInput(false)]
        public ActionResult AddText(string testBox)
        {
            return Content(testBox);
        }

    Upon initial submission of the Ajax form, the testBox string in the AddText action is always "Content", whatever the contents of the FCKeditor have been changed to. If the Ajax form is submitted again a second time (without further changes), the testBox parameter correctly contains the actual contents of the FCKeditor. If I use an Html.TextArea without replacing it with FCKeditor, it works correctly, and if I use a standard POST form submit in place of AJAX, all works as expected. Am I doing something wrong? If not, is there a suitable/straightforward workaround for this problem?

    Read the article
