Search Results

Search found 63414 results on 2537 pages for 'net ria services'.


  • WCF RIA Services Custom Type with Collection of Custom Types

    - by Blakewell
    Is it possible to have a custom type within a custom type and have the result returned via WCF RIA Services? I have the following two classes below, but I can't gain access to the Vertices property within the Polygon class. I assume it is because it is a custom class, or something to do with it being a List collection.

    Polygon Class

    public class Polygon
    {
        public Polygon()
        {
            _vertices = new List<Location>();
        }

        private int _id;
        [Key]
        public int Id { get; set; }

        private List<Location> _vertices;
        public List<Location> Vertices
        {
            get { return _vertices; }
            set { _vertices = value; }
        }
    }

    Location Class

    public class Location
    {
        public Location() { }

        /// <summary>
        /// Default constructor for creating a Location object
        /// </summary>
        /// <param name="latitude"></param>
        /// <param name="longitude"></param>
        public Location(double latitude, double longitude)
        {
            _latitude = latitude;
            _longitude = longitude;
        }

        private int _id;
        [Key]
        public int Id
        {
            get { return _id; }
            set { _id = value; }
        }

        private double _latitude;
        /// <summary>
        /// Latitude coordinate of the location
        /// </summary>
        public double Latitude
        {
            get { return _latitude; }
            set { _latitude = value; }
        }

        private double _longitude;
        /// <summary>
        /// Longitude coordinate of the location
        /// </summary>
        public double Longitude
        {
            get { return _longitude; }
            set { _longitude = value; }
        }
    }
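    A frequent cause of this in WCF RIA Services is missing association metadata on the child collection. Purely as a hedged sketch (the [Include]/[Association] attributes and the added PolygonId foreign key are assumptions, not part of the question), the entities might be annotated like this so the generated client code exposes the Vertices collection:

    // Hypothetical sketch: RIA Services generally needs an explicit association
    // (and a foreign key on the child entity) before it serializes a child collection.
    public class Polygon
    {
        [Key]
        public int Id { get; set; }

        [Include] // send the child Location entities along with the Polygon
        [Association("Polygon_Vertices", "Id", "PolygonId")]
        public List<Location> Vertices { get; set; }
    }

    public class Location
    {
        [Key]
        public int Id { get; set; }

        public int PolygonId { get; set; } // assumed foreign key back to the parent Polygon

        public double Latitude { get; set; }
        public double Longitude { get; set; }
    }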

    Read the article

  • What's new in ASP.Net 4.5 and VS 2012 - part 1

    - by nikolaosk
    I have downloaded .NET Framework 4.5 and Visual Studio 2012 since it was released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into new features available in ASP.NET 4.5 and VS 2012. In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. Please find all my posts regarding VS 2012 here. Well, I have not exactly kept my promise of writing short blog posts, so I will try to keep this one short.

    1) Launch VS 2012 and create a new Web Forms application by going to File -> New Web Site -> ASP.NET Web Forms Site.

    2) Choose an appropriate name for your web site.

    3) Build and run your site (CTRL+F5). Then go to View -> Source to see the HTML markup (JavaScript etc.) that is rendered through the browser. You will see that the ASP.NET team has done a good job of making the markup cleaner and more readable. The ViewState size is significantly smaller compared to earlier versions. Have a look at the picture below.

    4) Another thing that you must notice is that the new template makes good use of HTML5 elements. When you view the application through the browser and then go to View Page Source you will see HTML5 elements like nav, header and section. Have a look at the picture below.

    5) In VS 2012 we can browse with multiple browsers. There is a very handy dropdown that shows all the browsers available for viewing the website. Have a look at the picture below. When I select the option Browse With... I see another window where I can select any of the installed browsers I want and also set the default browser. Have a look at the picture below. When I click Browse, all the selected browsers fire up and I can view the website in all of them. Have a look at the picture below.

    There will be more posts soon looking into new features of ASP.NET 4.5 and VS 2012. Hope it helps!!!

    Read the article

  • Lucene.net create+lock errors in ASP.NET

    - by acidzombie24
    I have an issue with Lucene.net: it throws a lock exception. After poking around I noticed these things. My code below works in an app, but when calling it in Application_Start I get a NoSuchDirectoryException. If I don't close the writer (as my code below doesn't do), I WILL get a LockObtainFailedException with the message Lock obtain timed out: SimpleFSLock@<FULL_PATH> from either the app or ASP.NET. The threads below hinted that spawned threads get fewer permissions than I do (but! my main thread has problems as well...) and one solution is to impersonate IIS. I am using Visual Studio 2010. I am not sure how full blown it is, but my attempt to impersonate failed. So my question is: how do I have Lucene create the directory and not throw an exception if the writer isn't closed for some reason (such as the power going out)?

    http://stackoverflow.com/questions/2341163/why-is-my-lucene-index-getting-locked/2499285#2499285
    http://stackoverflow.com/questions/1123517/lucene-net-and-i-o-threading-issue/1123981#1123981

    static IndexWriter writer = null;

    static void lucene_init()
    {
        bool create = false;
        //I now use a full path. I still get NoSuchDirectoryException
        //string dirname = "LuceneIndex_z";
        if (System.IO.Directory.Exists(dirname) == false)
            create = true;
        var directory = FSDirectory.GetDirectory(dirname);
        var analyzer = new StandardAnalyzer();
        writer = new IndexWriter(directory, analyzer, create);
    }
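    One way to address both symptoms is to create the index directory explicitly from an absolute path and to clear a stale lock before opening the writer, closing the writer on shutdown. The sketch below is only a suggestion and assumes a Lucene.NET 2.9-era API (IndexWriter.IsLocked/Unlock and IndexReader.IndexExists are worth verifying against the version in use):

    // Hedged sketch: absolute path, directory created up front, stale lock cleared,
    // and the writer closed when the application shuts down.
    static void lucene_init()
    {
        // Resolve relative to the web app so the worker-process identity and
        // current directory don't matter; avoids NoSuchDirectoryException.
        string dirname = System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data/LuceneIndex_z");
        if (!System.IO.Directory.Exists(dirname))
            System.IO.Directory.CreateDirectory(dirname);

        var directory = FSDirectory.GetDirectory(dirname);

        // Create the index only if none exists yet, and clear a lock orphaned
        // by a previous crash (e.g. the power going out).
        bool create = !IndexReader.IndexExists(directory);
        if (IndexWriter.IsLocked(directory))
            IndexWriter.Unlock(directory);

        var analyzer = new StandardAnalyzer();
        writer = new IndexWriter(directory, analyzer, create);
    }

    protected void Application_End()
    {
        if (writer != null)
            writer.Close(); // releases the SimpleFSLock
    }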

    Read the article

  • NoSQL with RavenDB and ASP.NET MVC - Part 1

    - by shiju
    A while back I blogged NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2 on how to use MongoDB with an ASP.NET MVC application. The NoSQL movement is getting big attention, and RavenDB is the latest addition to the NoSQL and document database world. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. Raven stores schema-less JSON documents, allows you to define indexes using LINQ queries, and focuses on low latency and high performance. RavenDB is a .NET-focused document database which comes with a fully functional .NET client API and supports LINQ. RavenDB comes with two components, a server and a client API. RavenDB is a REST-based system, so you can write your own HTTP client API. As a .NET developer, RavenDB is becoming my favorite document database. Unlike other document databases, RavenDB supports transactions using System.Transactions. It also supports both embedded and server modes of the database. You can access the RavenDB site at http://ravendb.net

    A demo app with ASP.NET MVC

    Let's create a simple demo app with RavenDB and ASP.NET MVC. To work with RavenDB, do the following steps. Go to http://ravendb.net/download and download the latest build. Unzip the downloaded file, go to the /Server directory and run RavenDB.exe. This will start the RavenDB server listening on localhost:8080. You can change the port of RavenDB by modifying the "Raven/Port" appSetting value in the RavenDB.exe.config file. When running RavenDB, it will automatically create a database in the /Data directory. You can change the data directory name by modifying the "Raven/DataDir" appSetting value in the RavenDB.exe.config file. RavenDB provides a browser-based admin tool. When the Raven server is running, you can access the browser-based admin tool to view and edit documents and indexes. The web admin tool is available at http://localhost:8080. Below are some screenshots of the web admin tool.

    Working with ASP.NET MVC

    To work with RavenDB in our demo ASP.NET MVC application, do the following steps.

    Step 1 - Add a reference to the Raven client API. In our ASP.NET MVC application, add a reference to Raven.Client.Lightweight.dll from the Client directory.

    Step 2 - Create a DocumentStore. The document store should be created once per application. Let's create a DocumentStore on application start-up in the Global.asax.cs:

    documentStore = new DocumentStore { Url = "http://localhost:8080/" };
    documentStore.Initialise();

    The above code will create a RavenDB document store listening to the server localhost at port 8080.

    Step 3 - Create a DocumentSession on BeginRequest. Let's create a DocumentSession on the BeginRequest event in the Global.asax.cs. We are using the document session for every unit of work. In our demo app, every HTTP request will be a single Unit of Work (UoW).
BeginRequest += (sender, args) =>
    HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession();

Step 4 - Destroy the DocumentSession on EndRequest.

EndRequest += (o, eventArgs) =>
{
    var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable;
    if (disposable != null)
        disposable.Dispose();
};

At the end of the HTTP request, we destroy the DocumentSession object. The code block below shows all the code in the Global.asax.cs:

private const string RavenSessionKey = "RavenMVC.Session";
private static DocumentStore documentStore;

protected void Application_Start()
{
    //Create a DocumentStore in Application_Start
    //DocumentStore should be created once per application and stored as a singleton.
    documentStore = new DocumentStore { Url = "http://localhost:8080/" };
    documentStore.Initialise();
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
    //DI using Unity 2.0
    ConfigureUnity();
}

public MvcApplication()
{
    //Create a DocumentSession on BeginRequest
    //create a document session for every unit of work
    BeginRequest += (sender, args) =>
        HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession();
    //Destroy the DocumentSession on EndRequest
    EndRequest += (o, eventArgs) =>
    {
        var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable;
        if (disposable != null)
            disposable.Dispose();
    };
}

//Getting the current DocumentSession
public static IDocumentSession CurrentSession
{
    get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; }
}

We have set up all the necessary code in the Global.asax.cs for working with RavenDB. For our demo app, let's write a domain class:

public class Category
{
    public string Id { get; set; }

    [Required(ErrorMessage = "Name Required")]
    [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
    public string Name { get; set; }

    public string Description { get; set; }
}

We have created a simple domain entity, Category. Let's create a repository class for performing CRUD operations against our domain entity Category.
public interface ICategoryRepository
{
    Category Load(string id);
    IEnumerable<Category> GetCategories();
    void Save(Category category);
    void Delete(string id);
}

public class CategoryRepository : ICategoryRepository
{
    private IDocumentSession session;

    public CategoryRepository()
    {
        session = MvcApplication.CurrentSession;
    }

    //Load category based on Id
    public Category Load(string id)
    {
        return session.Load<Category>(id);
    }

    //Get all categories
    public IEnumerable<Category> GetCategories()
    {
        var categories = session.LuceneQuery<Category>()
            .WaitForNonStaleResults()
            .ToArray();
        return categories;
    }

    //Insert/Update category
    public void Save(Category category)
    {
        if (string.IsNullOrEmpty(category.Id))
        {
            //insert new record
            session.Store(category);
        }
        else
        {
            //edit record
            var categoryToEdit = Load(category.Id);
            categoryToEdit.Name = category.Name;
            categoryToEdit.Description = category.Description;
        }
        //save the document session
        session.SaveChanges();
    }

    //delete a category
    public void Delete(string id)
    {
        var category = Load(id);
        session.Delete<Category>(category);
        session.SaveChanges();
    }
}

For every CRUD operation, we take the current document session object from the HttpContext:

session = MvcApplication.CurrentSession;

We are calling the static CurrentSession property from the Global.asax.cs:

public static IDocumentSession CurrentSession
{
    get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; }
}

Retrieve entities

The Load method gets a single Category object based on the Id. RavenDB works on REST principles, and the Id will look like categories/1. The Id is created automatically when a new object is inserted into the document store. The REST URI categories/1 represents a single category object with an Id of 1.

public Category Load(string id)
{
    return session.Load<Category>(id);
}

The GetCategories method returns all the categories by calling the session.LuceneQuery method. RavenDB uses a Lucene query syntax for querying. I will explain more details about querying and indexing in my future posts.

public IEnumerable<Category> GetCategories()
{
    var categories = session.LuceneQuery<Category>()
        .WaitForNonStaleResults()
        .ToArray();
    return categories;
}

Insert/update entity

To insert or update a Category entity, we have created the Save method in the repository class. If the Id property of Category is null, we call the Store method of the DocumentSession to insert a new record. To edit an existing record, we load the Category object and assign the new values to the loaded object. The session.SaveChanges() call saves the changes to the document store.
//Insert/Update category public void Save(Category category) {     if (string.IsNullOrEmpty(category.Id))     {         //insert new record         session.Store(category);     }     else     {         //edit record         var categoryToEdit = Load(category.Id);         categoryToEdit.Name = category.Name;         categoryToEdit.Description = category.Description;     }     //save the document session     session.SaveChanges(); }  Delete Entity  In the Delete method, we call the document session's delete method and call the SaveChanges method to reflect changes in the document store.  public void Delete(string id) {     var category = Load(id);     session.Delete<Category>(category);     session.SaveChanges(); }  Let’s create ASP.NET MVC controller and controller actions for handling CRUD operations for the domain class Category  public class CategoryController : Controller { private ICategoryRepository categoyRepository; //DI enabled constructor public CategoryController(ICategoryRepository categoyRepository) {     this.categoyRepository = categoyRepository; } public ActionResult Index() {         var categories = categoyRepository.GetCategories();     if (categories == null)         return RedirectToAction("Create");     return View(categories); }   [HttpGet] public ActionResult Edit(string id) {     var category = categoyRepository.Load(id);         return View("Save",category); } // GET: /Category/Create [HttpGet] public ActionResult Create() {     var category = new Category();     return View("Save", category); } [HttpPost] public ActionResult Save(Category category) {     if (!ModelState.IsValid)     {         return View("Save", category);     }           categoyRepository.Save(category);         return RedirectToAction("Index");     }        [HttpPost] public ActionResult Delete(string id) {     categoyRepository.Delete(id);     var categories = categoyRepository.GetCategories();     return PartialView("CategoryList", categories);      }        }  RavenDB is an awesome document database and I hope that it will be the winner in .NET space of document database world.  The source code of demo application available at http://ravenmvc.codeplex.com/
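One piece referenced in the Global.asax.cs above but not shown in the post is the ConfigureUnity() call. Purely as an illustration of how that Unity 2.0 wiring might look (the controller factory and the registration below are assumptions, not code from the post):

// Hypothetical ConfigureUnity() sketch: register the repository so the
// CategoryController's constructor dependency can be resolved.
private void ConfigureUnity()
{
    var container = new UnityContainer();
    container.RegisterType<ICategoryRepository, CategoryRepository>();
    ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(container));
}

// Minimal controller factory that resolves controllers from the Unity container.
public class UnityControllerFactory : DefaultControllerFactory
{
    private readonly IUnityContainer container;

    public UnityControllerFactory(IUnityContainer container)
    {
        this.container = container;
    }

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        if (controllerType == null)
            return base.GetControllerInstance(requestContext, controllerType);
        return (IController)container.Resolve(controllerType);
    }
}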

    Read the article

  • Talks Submitted for Ann Arbor Day of .NET 2010

    - by PSteele
    Just submitted my session abstracts for Ann Arbor's Day of .NET 2010.   Getting up to speed with .NET 3.5 -- Just in time for 4.0! Yes, C# 4.0 is just around the corner.  But if you haven't had the chance to use C# 3.5 extensively, this session will start from the ground up with the new features of 3.5.  We'll assume everyone is coming from C# 2.0.  This session will show you the details of extension methods, partial methods and more.  We'll also show you how LINQ -- Language Integrated Query -- can help decrease your development time and increase your code's readability.  If time permits, we'll look at some .NET 4.0 features, but the goal is to get you up to speed on .NET 3.5.   Go Ahead and Mock Me! When testing specific parts of your application, there can be a lot of external dependencies required to make your tests work.  Writing fake or mock objects that act as stand-ins for the real dependencies can waste a lot of time.  This is where mocking frameworks come in.  In this session, Patrick Steele will introduce you to Rhino Mocks, a popular mocking framework for .NET.  You'll see how a mocking framework can make writing unit tests easier and leads to less brittle unit tests.   Inversion of Control: Who's got control and why is it being inverted? No doubt you've heard of "Inversion of Control".  If not, maybe you've heard the term "Dependency Injection"?  The two usually go hand-in-hand.  Inversion of Control (IoC) along with Dependency Injection (DI) helps simplify the connections and lifetime of all of the dependent objects in the software you write.  In this session, Patrick Steele will introduce you to the concepts of IoC and DI and will show you how to use a popular IoC container (Castle Windsor) to help simplify the way you build software and how your objects interact with each other. If you're interested in speaking, hurry up and get your submissions in!  The deadline is Monday, April 5th! Technorati Tags: .NET,Ann Arbor,Day of .NET

    Read the article

  • ASP.NET Asynchronous Pages and when to use them

    - by rajbk
    There have been several articles posted about using  asynchronous pages in ASP.NET but none of them go into detail as to when you should use them. I finally found a great post by Thomas Marquardt that explains the process in depth. He addresses a key misconception also: So, in your ASP.NET application, when should you perform work asynchronously instead of synchronously? Well, only 1 thread per CPU can execute at a time.  Did you catch that?  A lot of people seem to miss this point...only one thread executes at a time on a CPU. When you have more than this, you pay an expensive penalty--a context switch. However, if a thread is blocked waiting on work...then it makes sense to switch to another thread, one that can execute now.  It also makes sense to switch threads if you want work to be done in parallel as opposed to in series, but up until a certain point it actually makes much more sense to execute work in series, again, because of the expensive context switch. Pop quiz: If you have a thread that is doing a lot of computational work and using the CPU heavily, and this takes a while, should you switch to another thread? No! The current thread is efficiently using the CPU, so switching will only incur the cost of a context switch. Ok, well, what if you have a thread that makes an HTTP or SOAP request to another server and takes a long time, should you switch threads? Yes! You can perform the HTTP or SOAP request asynchronously, so that once the "send" has occurred, you can unwind the current thread and not use any threads until there is an I/O completion for the "receive". Between the "send" and the "receive", the remote server is busy, so locally you don't need to be blocking on a thread, but instead make use of the asynchronous APIs provided in .NET Framework so that you can unwind and be notified upon completion. Again, it only makes sense to switch threads if the benefit from doing so out weights the cost of the switch. Read more about it in these posts: Performing Asynchronous Work, or Tasks, in ASP.NET Applications http://blogs.msdn.com/tmarq/archive/2010/04/14/performing-asynchronous-work-or-tasks-in-asp-net-applications.aspx ASP.NET Thread Usage on IIS 7.0 and 6.0 http://blogs.msdn.com/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx   PS: I generally do not write posts that simply link to other posts but think it is warranted in this case.
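    As a concrete illustration of the I/O-bound case described above (this is a generic sketch, not code from either linked post), an ASP.NET Web Forms page marked with Async="true" in its @Page directive can release its worker thread while a remote HTTP call is in flight; resultLabel is a hypothetical control:

    // Sketch of an asynchronous page task around a remote HTTP request.
    protected void Page_Load(object sender, EventArgs e)
    {
        RegisterAsyncTask(new PageAsyncTask(BeginWork, EndWork, TimeoutWork, null));
    }

    private HttpWebRequest _request;

    private IAsyncResult BeginWork(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        _request = (HttpWebRequest)WebRequest.Create("http://example.com/slow-service");
        return _request.BeginGetResponse(cb, state); // no thread is blocked during the wait
    }

    private void EndWork(IAsyncResult ar)
    {
        using (var response = (HttpWebResponse)_request.EndGetResponse(ar))
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            resultLabel.Text = Server.HtmlEncode(reader.ReadToEnd());
        }
    }

    private void TimeoutWork(IAsyncResult ar)
    {
        resultLabel.Text = "The remote call timed out.";
    }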

    Read the article

  • Handling HTTP 404 Error in ASP.NET Web API

    - by imran_ku07
    Introduction: Building modern HTTP/RESTful/RPC services has become very easy with the new ASP.NET Web API framework. Using the ASP.NET Web API framework, you can create HTTP services which can be accessed from browsers, machines, mobile devices and other clients. Developing HTTP services has become easier for ASP.NET MVC developers because ASP.NET Web API is now included in ASP.NET MVC. In addition to developing HTTP services, it is also important to return a meaningful response to the client if a resource (URI) is not found (HTTP 404) for some reason (for example, a mistyped resource URI). It is also important to centralize this response so you can configure all 'HTTP 404 Not Found' handling in one place. In this article, I will show you how to handle 'HTTP 404 Not Found' in one place.

    Description: Let's say that you are developing an HTTP RESTful application using the ASP.NET Web API framework, and in this application you need to handle HTTP 404 errors in a centralized location. From the ASP.NET Web API point of view, you need to handle these situations:

    No route matched.
    A route is matched but no {controller} has been found on the route.
    No type with the {controller} name has been found.
    No matching action method was found in the selected controller, because no action method starts with the request's HTTP method verb, no action method carries a matching IActionHttpMethodProvider attribute, or no method with the matching {action} name was found.

    Now, let's create an ErrorController with a Handle404 action method. This action method will be used in all of the above cases for sending an HTTP 404 response message to the client.

    public class ErrorController : ApiController
    {
        [HttpGet, HttpPost, HttpPut, HttpDelete, HttpHead, HttpOptions, AcceptVerbs("PATCH")]
        public HttpResponseMessage Handle404()
        {
            var responseMessage = new HttpResponseMessage(HttpStatusCode.NotFound);
            responseMessage.ReasonPhrase = "The requested resource is not found";
            return responseMessage;
        }
    }

    You can easily change the above action method to send some other specific HTTP 404 error response. If a client of your HTTP service sends a request to a resource (URI) and no route matches this URI on the server, then you can route the request to the above Handle404 method using a custom route. Put this route at the very bottom of the route configuration:

    routes.MapHttpRoute(
        name: "Error404",
        routeTemplate: "{*url}",
        defaults: new { controller = "Error", action = "Handle404" }
    );

    Now you need to handle the case when there is no {controller} in the matching route, or when no type with the {controller} name is found. You can easily handle this case and route the request to the above Handle404 method using a custom IHttpControllerSelector.
Here is the definition of a custom IHttpControllerSelector, public class HttpNotFoundAwareDefaultHttpControllerSelector : DefaultHttpControllerSelector { public HttpNotFoundAwareDefaultHttpControllerSelector(HttpConfiguration configuration) : base(configuration) { } public override HttpControllerDescriptor SelectController(HttpRequestMessage request) { HttpControllerDescriptor decriptor = null; try { decriptor = base.SelectController(request); } catch (HttpResponseException ex) { var code = ex.Response.StatusCode; if (code != HttpStatusCode.NotFound) throw; var routeValues = request.GetRouteData().Values; routeValues["controller"] = "Error"; routeValues["action"] = "Handle404"; decriptor = base.SelectController(request); } return decriptor; } }                     Next, it is also required to pass the request to the above Handle404 method if no matching action method found in the selected controller due to the reason discussed above. This situation can also be easily handled through a custom IHttpActionSelector. Here is the source of custom IHttpActionSelector,  public class HttpNotFoundAwareControllerActionSelector : ApiControllerActionSelector { public HttpNotFoundAwareControllerActionSelector() { } public override HttpActionDescriptor SelectAction(HttpControllerContext controllerContext) { HttpActionDescriptor decriptor = null; try { decriptor = base.SelectAction(controllerContext); } catch (HttpResponseException ex) { var code = ex.Response.StatusCode; if (code != HttpStatusCode.NotFound && code != HttpStatusCode.MethodNotAllowed) throw; var routeData = controllerContext.RouteData; routeData.Values["action"] = "Handle404"; IHttpController httpController = new ErrorController(); controllerContext.Controller = httpController; controllerContext.ControllerDescriptor = new HttpControllerDescriptor(controllerContext.Configuration, "Error", httpController.GetType()); decriptor = base.SelectAction(controllerContext); } return decriptor; } }                     Finally, we need to register the custom IHttpControllerSelector and IHttpActionSelector. Open global.asax.cs file and add these lines,  configuration.Services.Replace(typeof(IHttpControllerSelector), new HttpNotFoundAwareDefaultHttpControllerSelector(configuration)); configuration.Services.Replace(typeof(IHttpActionSelector), new HttpNotFoundAwareControllerActionSelector());         Summary:                       In addition to building an application for HTTP services, it is also important to send meaningful centralized information in response when something goes wrong, for example 'HTTP 404 Not Found' error.  In this article, I showed you how to handle 'HTTP 404 Not Found' error in a centralized location. Hopefully you will enjoy this article too.

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the windows.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You only can add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.
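The SaveChanges() override above casts each changed entity to an IEntityTracking interface that is not shown in the article. A minimal sketch of what it might look like, inferred from how it is used (the partial-class arrangement is an assumption):

// Assumed shape of the tracking interface used by the SaveChanges() override.
public interface IEntityTracking
{
    long LastUpdated { get; set; } // DateTime.Now.Ticks, stored in a bigint column
}

// Entity Framework generates Entry as a partial class, so the interface can be
// attached without modifying the generated code.
public partial class Entry : IEntityTracking
{
}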

    Read the article

  • The configuration section 'system.web.extensions' cannot be read because it is missing a section declaration

    - by Jeff Widmer
    After upgrading an ASP.NET application from .NET Framework 3.5 to .NET Framework 4.0 I ran into the following error message dialog when trying to view any of the modules in IIS on the server. What happened is during the upgrade, the web.config file was automatically converted for .NET Framework 4.0.  This left behind an empty system.web.extensions section: It was an easy fix to resolve, just remove the unused system.web.extensions section.   ERROR DIALOG TEXT: --------------------------- ASP --------------------------- There was an error while performing this operation. Details: Filename: \\?\web.config Line number: 171 Error: The configuration section 'system.web.extensions' cannot be read because it is missing a section declaration --------------------------- OK   ---------------------------

    Read the article

  • Converting a generic list into JSON string and then handling it in java script

    - by Jalpesh P. Vadgama
    We all know that JSON (JavaScript Object Notation) is very useful for manipulating strings on the client side with JavaScript, and its performance is very good across browsers. So let's create a simple example where we build a generic list, convert that list into a JSON string, and then call the web service from JavaScript and handle the result in JavaScript. To do this we need an info class (type) for which we are going to create the generic list. Here is the code; I have created a simple class with two properties, UserId and UserName:

    public class UserInfo
    {
        public int UserId { get; set; }
        public string UserName { get; set; }
    }

    Now let's create a web service whose web method builds the list and then converts it into a JSON string with the JavaScriptSerializer class. Here is the web service class:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Services;

    namespace Experiment.WebService
    {
        /// <summary>
        /// Summary description for WsApplicationUser
        /// </summary>
        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [System.ComponentModel.ToolboxItem(false)]
        // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line.
        [System.Web.Script.Services.ScriptService]
        public class WsApplicationUser : System.Web.Services.WebService
        {
            [WebMethod]
            public string GetUserList()
            {
                List<UserInfo> userList = new List<UserInfo>();
                for (int i = 1; i <= 5; i++)
                {
                    UserInfo userInfo = new UserInfo();
                    userInfo.UserId = i;
                    userInfo.UserName = string.Format("{0}{1}", "J", i.ToString());
                    userList.Add(userInfo);
                }
                System.Web.Script.Serialization.JavaScriptSerializer jSearializer = new System.Web.Script.Serialization.JavaScriptSerializer();
                return jSearializer.Serialize(userList);
            }
        }
    }

    Note: you must have the '[System.Web.Script.Services.ScriptService]' attribute on the web service class, as this attribute enables the web service to be called from the client side. Now that we have created the web service class, let's create a JavaScript function 'GetUserList' which will call the web service from JavaScript like the following:

    function GetUserList() {
        Experiment.WebService.WsApplicationUser.GetUserList(ReuqestCompleteCallback, RequestFailedCallback);
    }

    As you can see, we have passed two callback functions, ReuqestCompleteCallback and RequestFailedCallback, which handle the result and errors from the web service. ReuqestCompleteCallback handles the result of the web service, and if an error occurs then RequestFailedCallback prints the error. Following is the code for both functions:

    function ReuqestCompleteCallback(result) {
        result = eval(result);
        var divResult = document.getElementById("divUserList");
        CreateUserListTable(result);
    }

    function RequestFailedCallback(error) {
        var stackTrace = error.get_stackTrace();
        var message = error.get_message();
        var statusCode = error.get_statusCode();
        var exceptionType = error.get_exceptionType();
        var timedout = error.get_timedOut();
        // Display the error.
        var divResult = document.getElementById("divUserList");
        divResult.innerHTML = "Stack Trace: " + stackTrace + "<br/>" +
            "Service Error: " + message + "<br/>" +
            "Status Code: " + statusCode + "<br/>" +
            "Exception Type: " + exceptionType + "<br/>" +
            "Timedout: " + timedout;
    }

    In the success callback above you can see that we have used the 'eval' function, which parses the JSON string into an enumerable (array) form.
Then we call a function named 'CreateUserListTable' which builds a table string and puts that string into a div. Here is the code for that function:

function CreateUserListTable(userList) {
    var tablestring = '<table ><tr><td>UserID</td><td>UserName</td></tr>';
    for (var i = 0, len = userList.length; i < len; ++i) {
        tablestring = tablestring + "<tr>";
        tablestring = tablestring + "<td>" + userList[i].UserId + "</td>";
        tablestring = tablestring + "<td>" + userList[i].UserName + "</td>";
        tablestring = tablestring + "</tr>";
    }
    tablestring = tablestring + "</table>";
    var divResult = document.getElementById("divUserList");
    divResult.innerHTML = tablestring;
}

Now let's create the div which will hold all the HTML generated by this function. Here is the code of my web page; we also need to add a script reference to enable the web service from the client side. Here is all the HTML code we have:

<form id="form1" runat="server">
    <asp:ScriptManager ID="myScirptManger" runat="Server">
        <Services>
            <asp:ServiceReference Path="~/WebService/WsApplicationUser.asmx" />
        </Services>
    </asp:ScriptManager>
    <div id="divUserList">
    </div>
</form>

Since we have not yet defined where the 'GetUserList' function is called, let's call it on the window onload event in JavaScript like the following:

window.onload=GetUserList();

That's it. Now let's run it in the browser to see whether it works, and here is the output in the browser, as expected. This was a very basic example, but you can create your own JavaScript-enabled grid from this, and the possibilities are unlimited here. Stay tuned for more.. Happy programming.. Technorati Tags: JSON,Javascript,ASP.NET,WebService
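For completeness, the same JavaScriptSerializer can also turn a JSON string back into the typed list on the server. This is only an illustrative sketch, not part of the original post:

// Round-tripping the generic list with JavaScriptSerializer.
var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();

// Serialize, as the GetUserList web method above does.
var users = new List<UserInfo> { new UserInfo { UserId = 1, UserName = "J1" } };
string json = serializer.Serialize(users);

// Deserialize back into a typed list, e.g. when a client posts the JSON string back.
List<UserInfo> roundTripped = serializer.Deserialize<List<UserInfo>>(json);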

    Read the article

  • Course/Points to include in a educational session on Asp.net MVC 4 to be given to office colleagues

    - by bhuvin
    I am planning to give an educational session on ASP.NET MVC, and I am confused about what to include. Very few people in the office know about it, and they are somewhat closed to it, so I want to run a session that gives them the "tip of the iceberg". I would like some suggestions for the session content. It's just a 1-hour session, so I don't want to go into the nitty-gritty details; I just want to make them curious, with content that amazes them. For example: serving the same code to different devices, such as mobiles.

    Read the article

  • WCF Ria Services Error : Load operation failed for query 'GetTranslationProgress'

    - by Manoj
    Hello, I am using the WCF RIA Services beta in my Silverlight application. I have a long-running task on the server which keeps writing its status to a database table. I have a "GetTranslationProgress" RIA Services query method which queries that state to get the status. This query method is called continuously at a regular interval of 5 seconds until a status of Finished or Error is retrieved. In some cases, if the task on the server runs for a long duration (2-3 minutes), I receive this error:

    Load operation failed for query 'GetTranslationProgress'. The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.
    at System.Windows.Ria.OperationBase.Complete(Exception error)
    at System.Windows.Ria.LoadOperation.Complete(Exception error)
    at System.Windows.Ria.DomainContext.CompleteLoad(IAsyncResult asyncResult)
    at System.Windows.Ria.DomainContext.<c_DisplayClass17.b_13(Object )

    The error occurs intermittently and I have no idea what could be causing it. I have checked most of the forums and sites but I have not been able to find a solution to this issue. Please help.
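    For reference, the 5-second polling loop described above might look roughly like the sketch below in the Silverlight client. The DomainContext name, the Status property and the entity shape are assumptions; the query method name follows the RIA Services convention of appending "Query" on the generated client:

    // Hypothetical polling sketch; stops when the server reports Finished or Error.
    var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(5) };
    timer.Tick += (s, e) =>
    {
        var context = new TranslationDomainContext(); // assumed context name
        context.Load(context.GetTranslationProgressQuery(), op =>
        {
            if (op.HasError)
            {
                // op.Error carries the underlying fault; the generic "did not provide a
                // meaningful reply" text usually hides a server-side exception or timeout.
                // Depending on the RIA Services build, op.MarkErrorAsHandled() may be needed here.
                return;
            }
            var progress = op.Entities.FirstOrDefault();
            if (progress != null && (progress.Status == "Finished" || progress.Status == "Error"))
                timer.Stop();
        }, null);
    };
    timer.Start();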

    Read the article

  • WCF RIA Silverlight deployment issues

    - by Handleman
    It seems the world is awash with people having problems deploying RIA WCF services, and now I'm one too. I've already tried a bunch of things, but to no avail. I need WCF RIA to support a Silverlight 3 application I've built. The short story is, using the new WCF RIA services (Nov 09?) I open VS 2008, create new project (silverlight application), enabling ".NET RIA services". Add new item to web project - Linq2SQL dbml file (from SQL 2005 DB prepared earlier) and compile. I add a new item to the web project - domain service (link the tables I need) and compiled. Using the domain context I "Load" data with a standard RIA get query in the MainPage and add a TextBlock to display returned data. Build & run (cassini) - success. Using VS to publish to IIS on local PC - success. Using VS to publish to test server (IIS6) - browse to location and the Silverlight app loads but Fiddler tells me I've got a 404 on all the the WCF .svc requests. Use Fiddler to "launch IE" on the service request and it's true - 404. I have already run aspnet_regiis, ServiceModelReg and added mime types for .xap, .xaml, .xbap and .svc. I have included the System.Web.Ria and System.Web.DomainServices DLL with copy local true. I need help with either a) a solution b) an approach to find a solution

    Read the article

  • Launch Reporting Services Reports from .Net Code

    - by Jeff
    What is the best way to launch reporting services reports from .Net code? One method would be to dynamically build a URL and launch a browser. Something like this: http://server/ReportServer/Pages/ReportViewer.aspx?%2fReport+Directory%2fReport%20Name&FirstParameter=1,2,3&SecondParameter=8/30/2009&rs%3aCommand=Render I don't like how it creates a dependency on the specific URL--especially report parameters which are very likely to change. Is there a better way? The reports I want to link to are in several reporting services projects hosted on one (eventually two) servers.
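    One alternative to hand-building the URL is to embed the ReportViewer control in remote processing mode, which keeps the server address, report path and parameter names in code rather than in a URL string. A rough sketch (using the server, report path and parameter names from the URL above; the WebForms control is assumed, the WinForms one is analogous):

    // Hedged sketch: launching a server-side Reporting Services report via ReportViewer.
    using Microsoft.Reporting.WebForms;

    void ShowReport(ReportViewer viewer)
    {
        viewer.ProcessingMode = ProcessingMode.Remote;
        viewer.ServerReport.ReportServerUrl = new Uri("http://server/ReportServer");
        viewer.ServerReport.ReportPath = "/Report Directory/Report Name";
        viewer.ServerReport.SetParameters(new[]
        {
            new ReportParameter("FirstParameter", "1,2,3"),
            new ReportParameter("SecondParameter", "8/30/2009")
        });
        viewer.ServerReport.Refresh();
    }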

    Read the article

  • Class-Level Model Validation with EF Code First and ASP.NET MVC 3

    - by ScottGu
    Earlier this week the data team released the CTP5 build of the new Entity Framework Code-First library.  In my blog post a few days ago I talked about a few of the improvements introduced with the new CTP5 build.  Automatic support for enforcing DataAnnotation validation attributes on models was one of the improvements I discussed.  It provides a pretty easy way to enable property-level validation logic within your model layer. You can apply validation attributes like [Required], [Range], and [RegularExpression] – all of which are built-into .NET 4 – to your model classes in order to enforce that the model properties are valid before they are persisted to a database.  You can also create your own custom validation attributes (like this cool [CreditCard] validator) and have them be automatically enforced by EF Code First as well.  This provides a really easy way to validate property values on your models.  I showed some code samples of this in action in my previous post. Class-Level Model Validation using IValidatableObject DataAnnotation attributes provides an easy way to validate individual property values on your model classes.  Several people have asked - “Does EF Code First also support a way to implement class-level validation methods on model objects, for validation rules than need to span multiple property values?”  It does – and one easy way you can enable this is by implementing the IValidatableObject interface on your model classes. IValidatableObject.Validate() Method Below is an example of using the IValidatableObject interface (which is built-into .NET 4 within the System.ComponentModel.DataAnnotations namespace) to implement two custom validation rules on a Product model class.  The two rules ensure that: New units can’t be ordered if the Product is in a discontinued state New units can’t be ordered if there are already more than 100 units in stock We will enforce these business rules by implementing the IValidatableObject interface on our Product class, and by implementing its Validate() method like so: The IValidatableObject.Validate() method can apply validation rules that span across multiple properties, and can yield back multiple validation errors. Each ValidationResult returned can supply both an error message as well as an optional list of property names that caused the violation (which is useful when displaying error messages within UI). Automatic Validation Enforcement EF Code-First (starting with CTP5) now automatically invokes the Validate() method when a model object that implements the IValidatableObject interface is saved.  You do not need to write any code to cause this to happen – this support is now enabled by default. This new support means that the below code – which violates one of our above business rules – will automatically throw an exception (and abort the transaction) when we call the “SaveChanges()” method on our Northwind DbContext: In addition to reactively handling validation exceptions, EF Code First also allows you to proactively check for validation errors.  Starting with CTP5, you can call the “GetValidationErrors()” method on the DbContext base class to retrieve a list of validation errors within the model objects you are working with.  GetValidationErrors() will return a list of all validation errors – regardless of whether they are generated via DataAnnotation attributes or by an IValidatableObject.Validate() implementation.  
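    The Product class and its Validate() method appear only as screenshots in the original post, so the sketch below is a rough reconstruction based on the two business rules described above; the property names (Discontinued, UnitsInStock, UnitsOnOrder) follow the Northwind-style model the post discusses rather than being copied from the actual code:

    // Reconstruction sketch of class-level validation with IValidatableObject.
    public class Product : IValidatableObject
    {
        public int ProductID { get; set; }

        [Required]
        public string ProductName { get; set; }

        public bool Discontinued { get; set; }
        public int UnitsInStock { get; set; }
        public int UnitsOnOrder { get; set; }

        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            // Rule 1: new units can't be ordered for a discontinued product.
            if (Discontinued && UnitsOnOrder > 0)
                yield return new ValidationResult(
                    "Units can not be ordered for a discontinued product",
                    new[] { "UnitsOnOrder" });

            // Rule 2: new units can't be ordered when more than 100 are already in stock.
            if (UnitsInStock > 100 && UnitsOnOrder > 0)
                yield return new ValidationResult(
                    "Units can not be ordered when more than 100 are already in stock",
                    new[] { "UnitsOnOrder" });
        }
    }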
Below is an example of proactively using the GetValidationErrors() method to check (and handle) errors before trying to call SaveChanges(): ASP.NET MVC 3 and IValidatableObject ASP.NET MVC 2 included support for automatically honoring and enforcing DataAnnotation attributes on model objects that are used with ASP.NET MVC’s model binding infrastructure.  ASP.NET MVC 3 goes further and also honors the IValidatableObject interface.  This combined support for model validation makes it easy to display appropriate error messages within forms when validation errors occur.  To see this in action, let’s consider a simple Create form that allows users to create a new Product: We can implement the above Create functionality using a ProductsController class that has two “Create” action methods like below: The first Create() method implements a version of the /Products/Create URL that handles HTTP-GET requests - and displays the HTML form to fill-out.  The second Create() method implements a version of the /Products/Create URL that handles HTTP-POST requests - and which takes the posted form data, ensures that is is valid, and if it is valid saves it in the database.  If there are validation issues it redisplays the form with the posted values.  The razor view template of our “Create” view (which renders the form) looks like below: One of the nice things about the above Controller + View implementation is that we did not write any validation logic within it.  The validation logic and business rules are instead implemented entirely within our model layer, and the ProductsController simply checks whether it is valid (by calling the ModelState.IsValid helper method) to determine whether to try and save the changes or redisplay the form with errors. The Html.ValidationMessageFor() helper method calls within our view simply display the error messages our Product model’s DataAnnotations and IValidatableObject.Validate() method returned.  We can see the above scenario in action by filling out invalid data within the form and attempting to submit it: Notice above how when we hit the “Create” button we got an error message.  This was because we ticked the “Discontinued” checkbox while also entering a value for the UnitsOnOrder (and so violated one of our business rules).  You might ask – how did ASP.NET MVC know to highlight and display the error message next to the UnitsOnOrder textbox?  It did this because ASP.NET MVC 3 now honors the IValidatableObject interface when performing model binding, and will retrieve the error messages from validation failures with it. The business rule within our Product model class indicated that the “UnitsOnOrder” property should be highlighted when the business rule we hit was violated: Our Html.ValidationMessageFor() helper method knew to display the business rule error message (next to the UnitsOnOrder edit box) because of the above property name hint we supplied: Keeping things DRY ASP.NET MVC and EF Code First enables you to keep your validation and business rules in one place (within your model layer), and avoid having it creep into your Controllers and Views.  Keeping the validation logic in the model layer helps ensure that you do not duplicate validation/business logic as you add more Controllers and Views to your application.  It allows you to quickly change your business rules/validation logic in one single place (within your model layer) – and have all controllers/views across your application immediately reflect it.  
    This helps keep your application code clean and easily maintainable, and makes it much easier to evolve and update your application in the future.
    Summary
    EF Code First (starting with CTP5) now has built-in support for both DataAnnotations and the IValidatableObject interface.  This allows you to easily add validation and business rules to your models, and have EF automatically ensure that they are enforced anytime someone tries to persist changes to them to a database.  ASP.NET MVC 3 also now supports both DataAnnotations and IValidatableObject, which makes it even easier to use them with your EF Code First model layer – and then have the controllers/views within your web layer automatically honor and support them as well.  This makes it easy to build clean and highly maintainable applications. You don’t have to use DataAnnotations or IValidatableObject to perform your validation/business logic.  You can always roll your own custom validation architecture and/or use other more advanced validation frameworks/patterns if you want.  But for a lot of applications this built-in support will probably be sufficient – and provide a highly productive way to build solutions.
    Hope this helps, Scott
    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Parallelism in .NET – Introduction

    - by Reed
    Parallel programming is something that every professional developer should understand, but is rarely discussed or taught in detail in a formal manner.  Software users are no longer content with applications that lock up the user interface regularly, or take large amounts of time to process data unnecessarily.  Modern development requires the use of parallelism.  There are no longer any excuses for us as developers. Learning to write parallel software is challenging.  It requires more than reading that one chapter on parallelism in our programming language book of choice… Today’s systems are no longer getting faster with each generation; in many cases, newer computers are actually slower than previous generation systems.  Modern hardware is shifting towards conservation of power, with processing scalability coming from having multiple computer cores, not faster and faster CPUs.  Our CPU frequencies no longer double on a regular basis, but Moore’s Law is still holding strong.  Now, however, instead of scaling transistors in order to make processors faster, hardware manufacturers are scaling the transistors in order to add more discrete hardware processing threads to the system. This changes how we should think about software.  In order to take advantage of modern systems, we need to redesign and rewrite our algorithms to work in parallel.  As with any design domain, it helps tremendously to have a common language, as well as a common set of patterns and tools. For .NET developers, this is an exciting time for parallel programming.  Version 4 of the .NET Framework is adding the Task Parallel Library.  This has been back-ported to .NET 3.5sp1 as part of the Reactive Extensions for .NET, and is available for use today in both .NET 3.5 and .NET 4.0 beta. In order to fully utilize the Task Parallel Library and parallelism, both in .NET 4 and previous versions, we need to understand the proper terminology.  For this series, I will provide an introduction to some of the basic concepts in parallelism, and relate them to the tools available in .NET.
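    (Not from the article itself, but as a flavour of the programming model the series goes on to discuss, a minimal Task Parallel Library data-parallel loop looks something like the following.)

```csharp
using System;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        var results = new double[10000];

        // Parallel.For partitions the index range across the available cores.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i) * Math.Sin(i);
        });

        Console.WriteLine("Processed {0} items.", results.Length);
    }
}
```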

    Read the article

  • .Net Intermittent System.Web.Services.Protocols.SoapHeaderException

    - by ScottE
    We have a .net 3.5 web app that consumes third party web services. The proxy was created by adding a web reference to their wsdl. This proxy is not compiled. Our error logging is picking up frequent but intermittent exceptions: An exception of type 'System.Web.Services.Protocols.SoapHeaderException' occurred and was caught If I follow the url to the page that generated the exception, I can't recreate it. Edit: Here is most of the exception - where it bubbled up from Message : Internal Error Type : System.Web.Services.Protocols.SoapHeaderException, System.Web.Services, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Source : System.Web.Services Help link : Actor : Code : http://schemas.xmlsoap.org/soap/envelope/:Client Detail : Lang : Node : Role : SubCode : Data : System.Collections.ListDictionaryInternal TargetSite : System.Object[] ReadResponse(System.Web.Services.Protocols.SoapClientMessage, System.Net.WebResponse, System.IO.Stream, Boolean) Stack Trace : at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Vendor.getSearch(getSearchRequest getSearchRequest) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\be43c34e\b09edc7e\App_WebReferences.pww-cf-q.0.cs:line 73 Edit 2: Inner exceptions: I sometimes get the following inner exceptions logged: Message : Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. Type : System.IO.IOException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source : System Help link : Data : System.Collections.ListDictionaryInternal TargetSite : Int32 Read(Byte[], Int32, Int32) Stack Trace : at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.TlsStream.CallProcessAuthentication(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) And/Or: Message : An existing connection was forcibly closed by the remote host Type : System.Net.Sockets.SocketException, System, Version=2.0.0.0, Culture=neutral, 
PublicKeyToken=b77a5c561934e089 Source : System Help link : ErrorCode : 10054 SocketErrorCode : ConnectionReset NativeErrorCode : 10054 Data : System.Collections.ListDictionaryInternal TargetSite : Int32 Receive(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags) Stack Trace : at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) Update We're still working on it. Originally there was a route issue, which was resolved. We're still getting the inner exception with socket errors. We had MS support involved today, and they looked at some traces and network captures. The web service host does round-robin DNS, and they may be responding from a different IP address - the SYN/ACK coming from one IP, and the next from a different IP. This is not good. This is likely quite specific to our situation, but perhaps it applies to others as well. Microsoft Network Monitor and an application trace got us the information we needed.

    Read the article

  • Migrated SCOM 2007 R2 Reporting Services but reports are gone

    - by Gabriel Guimarães
    I've migrated Reporting Services on a SCOM 2007 R2 install, and noticed that the reports have not been copied. I can create a new report, but the ones I had from the management packs are gone. I've tried re-applying the Management Packs, however it doesn't re-deploy them, and when I try to access for example: Monitoring - Microsoft Windows Print Server - Microsoft Windows Server 2000 and 2003 Print Services - State View - select any item and click Alerts on the right menu, I get the following error: Date: 12/24/2010 12:40:35 PM Application: System Center Operations Manager 2007 R2 Application Version: 6.1.7221.0 Severity: Error Message: Cannot initialize report. Microsoft.Reporting.WinForms.ReportServerException: The item '/Microsoft.SystemCenter.DataWarehouse.Report.Library/Microsoft.SystemCenter.DataWarehouse.Report.Alert' cannot be found. (rsItemNotFound) at Microsoft.Reporting.WinForms.ServerReport.GetExecutionInfo() at Microsoft.Reporting.WinForms.ServerReport.GetParameters() at Microsoft.EnterpriseManagement.Mom.Internal.UI.Reporting.Parameters.ReportParameterBlock.Initialize(ServerReport serverReport) at Microsoft.EnterpriseManagement.Mom.Internal.UI.Console.ReportForm.SetReportJob(Object sender, ConsoleJobEventArgs args) The report doesn't exist on the Reporting Services side. How do I re-deploy these reports? Any help is appreciated. Thanks in advance.

    Read the article

  • On Her Majesty's Secret Source Code: .NET Reflector 7 Early Access Builds Now Available

    - by Bart Read
    Dodgy Bond references aside, I'm extremely happy to be able to tell you that we've just released our first .NET Reflector 7 Early Access build. We're going to make these available over the coming weeks via the main .NET Reflector download page at: http://reflector.red-gate.com/Download.aspx Please have a play and tell us what you think in the forum we've set up. Also, please let us know if you run into any problems in the same place. The new version so far comes with numerous decompilation improvements including (after 5 years!) support for iterator blocks - i.e., the yield statement first seen in .NET 2.0. We've also done a lot of work to solidify the support for .NET 4.0. Clive's written about the work he's done to support iterator blocks in much more detail here, along with the odd problem he's encountered when dealing with compiler generated code: http://www.simple-talk.com/community/blogs/clivet/96199.aspx. On the UI front we've started what will ultimately be a rewrite of the entire front-end, albeit broken into stages over two or three major releases. The most obvious addition at the moment is tabbed browsing, which you can see in Figure 1. Figure 1. .NET Reflector's new tabbed decompilation feature. Use CTRL+Click on any item in the assembly browser tree, or any link in the source code view, to open it in a new tab. This isn't by any means finished. I'll be tying up loose ends for the next few weeks, with a major focus on performance and resource usage. .NET Reflector has historically been a largely single-threaded application which has been fine up until now but, as you might expect, the addition of browser-style tabbing has pushed this approach somewhat beyond its limit. You can see this if you refresh the assemblies list by hitting F5. This shows up another problem: we really need to make Reflector remember everything you had open before you refreshed the list, rather than just the last item you viewed - I discovered that it's always done the latter, but it used to hide all panes apart from the treeview after a Refresh, including the decompiler/disassembler window. Ultimately I've got plans to add the whole VS/Chrome/Firefox style ability to drag a tab into the middle of nowhere to spawn a new window, but I need to be mindful of the add-ins, amongst other things, so it's possible that might slip to a 7.5 or 8.0 release. You'll also notice that .NET Reflector 7 now needs .NET 3.5 or later to run. We made this jump because we wanted to offer ourselves a much better chance of adding some really cool functionality to support newer technologies, such as Silverlight and Windows Phone 7. We've also taken the opportunity to start using WPF for UI development, which has frankly been a godsend. The learning curve is practically vertical but, I kid you not, it's just a far better world. Really. Stop using WinForms. Now. Why are you still using it? I had to go back and work on an old WinForms dialog for an hour or two yesterday and it really made me wince. The point is we'll be able to move the UI in some exciting new directions that will make Reflector easier to use whilst continuing to develop its functionality without (and this is key) cluttering the interface. The 3.5 language enhancements should also enable us to be much more productive over the longer term. 
I know most of you have .NET Fx 3.5 or 4.0 already but, if you do need to install a new version, I'd recommend you jump straight to 4.0 because, for one thing, it's faster, and if you're starting afresh there's really no reason not to. Despite the Fx version jump the Visual Studio add-in should still work fine in Visual Studio 2005, and obviously will continue to work in Visual Studio 2008 and 2010. If you do run into problems, again, please let us know here. As before, we continue to support every edition of Visual Studio except the Express Editions. Speaking of Visual Studio, we've also been improving the add-in. You can now open and explore decompiled code for any referenced assembly in any project in your solution. Just right-click on the reference, then click Decompile and Explore on the context menu. Reflector will pop up a progress box whilst it decompiles your assembly (Figure 2) - you can move this out of the way whilst you carry on working. Once it's done you can explore your assembly in the Reflector treeview (Figure 3), also accessible via the .NET Reflector Explore Decompiled Assemblies main menu item. Double-click on any item to open decompiled source in the Visual Studio source code view. Use right-click and Go To Definition on the source view context menu to navigate through the code. Figure 2. Decompilation progress. This isn't modal so you can just move it out of the way and carry on working. Figure 3. Using the .NET Reflector treeview within Visual Studio. Double-click on any item to open decompiled source in the source code view. There are loads of other changes and fixes that have gone in, often under the hood, which I don't have room to talk about here, and plenty more to come over the next few weeks. I'll try to keep you abreast of new functionality and changes as they go in. There are a couple of smaller things worth mentioning now though. Firstly, we've reorganised the menus and toolbar in Reflector itself to more closely mirror what you might be used to in other applications. Secondly, we've tried to make some of the functionality more discoverable. For example, you can now switch decompilation target framework version directly from the toolbar - and the default is now .NET 4.0. I think that about covers it for the moment. As I said, please use the new version, and send us your feedback. Here's that download URL again: http://reflector.red-gate.com/Download.aspx. Until next time! Technorati Tags: .net reflector,7,early access,new version,decompilation,tabbing,visual studio,software development,.net,c#,vb

    Read the article

  • Receive XML via POST with ASP.NET

    - by Mark Hurd
    I have to set up an XML "web service" that receives a POST where the 'Content-type header will specify “text/xml”.' What is the simplest way to get the XML into an XDocument for access by VB.NET's axis queries? I don't believe the web service is guaranteed to follow any protocol (e.g. SOAP, etc); just specific tags and sub-tags for various requests, and it will use Basic Authentication, so I will have to process the headers. (If it matters: * the live version will use HTTPS, and * the response will also be XML.)
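    (The question doesn't include code; purely as an illustrative sketch - in C# rather than the asker's VB.NET, and with a hypothetical handler name and placeholder credential check - one way to read such a POST into an XDocument from a generic handler is shown below.)

```csharp
using System;
using System.Text;
using System.Web;
using System.Xml;
using System.Xml.Linq;

// Hypothetical endpoint name; the real handler, realm and credential store will differ.
public class XmlEndpoint : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Basic Authentication arrives as "Authorization: Basic base64(user:pass)".
        string header = context.Request.Headers["Authorization"];
        if (header == null || !header.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
        {
            context.Response.StatusCode = 401;
            context.Response.AddHeader("WWW-Authenticate", "Basic realm=\"service\"");
            return;
        }

        string[] credentials = Encoding.UTF8.GetString(
            Convert.FromBase64String(header.Substring("Basic ".Length))).Split(':');
        // TODO: validate credentials[0] / credentials[1] against the user store.

        // Load the posted body (Content-Type: text/xml) into an XDocument.
        XDocument doc;
        using (XmlReader reader = XmlReader.Create(context.Request.InputStream))
        {
            doc = XDocument.Load(reader);
        }

        // ... inspect doc's tags/sub-tags here, then write an XML response ...
        context.Response.ContentType = "text/xml";
        context.Response.Write(new XDocument(new XElement("Response", "OK")).ToString());
    }

    public bool IsReusable { get { return false; } }
}
```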

    Read the article

  • Asp.net override Membership settings at runtime (asp.net mvc)

    - by minal
    I had an application that hooked onto 1 single database. The app now needs to hook into multiple databases. What we want to do is, using the same application/domain/hostname/virtual dir give the user the option on the login screen to select the "App/Database" they want to connect into. Each database has the App tables/data/procs/etc as well as the aspnet membership/roles stuff. When the user enters the username/password and selects (select list) the application, I want to validate the user against the selected applications database. Presently the database connection string for membership services is saved in the web.config. Is there any way I can override this at login time? Also, I need the "remember me" function to work smoothly as well. How does this work when the user comes back to the app in 5 hours... This process should be able to identify the user and application and log in appropriately.
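    (Sketching one possible approach rather than anything taken from the question: register one membership provider per application database in web.config, validate against the provider picked on the login screen, and stash the chosen application in the forms-authentication ticket's UserData so "remember me" carries it across visits. The provider names and timeout below are made up.)

```csharp
using System;
using System.Web;
using System.Web.Security;

public static class MultiAppLogin
{
    // providerName comes from the application dropdown on the login form,
    // e.g. "App1Membership" / "App2Membership" as configured in web.config.
    public static bool SignIn(string providerName, string userName, string password, bool rememberMe)
    {
        MembershipProvider provider = Membership.Providers[providerName];
        if (provider == null || !provider.ValidateUser(userName, password))
        {
            return false;
        }

        // Store the chosen application in the ticket's UserData so later requests
        // (including persistent "remember me" cookies) know which database to use.
        var ticket = new FormsAuthenticationTicket(
            1, userName, DateTime.Now, DateTime.Now.AddHours(5),
            rememberMe, providerName);

        var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
            FormsAuthentication.Encrypt(ticket))
        {
            HttpOnly = true
        };
        if (rememberMe)
        {
            cookie.Expires = ticket.Expiration;
        }
        HttpContext.Current.Response.Cookies.Add(cookie);
        return true;
    }
}
```

    On later requests the selected application/database can be recovered by decrypting the cookie with FormsAuthentication.Decrypt and reading ticket.UserData.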

    Read the article

  • ASP.NET MVC 2 Released

    - by Latest Microsoft Blogs
    I’m happy to announce that the final release of ASP.NET MVC 2 is now available for VS 2008/Visual Web Developer 2008 Express with ASP.NET 3.5.  You can download and install it from the following locations: Download ASP.NET MVC 2 using the Microsoft… (read more)

    Read the article

  • Daily tech links for .net and related technologies - Apr 26-28, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Apr 26-28, 2010
    Web Development
    MVC: Unit Testing Action Filters - Donn
    ASP.NET MVC 2: Ninja Black Belt Tips - Scott Hanselman
    Turn on Compile-time View Checking for ASP.NET MVC Projects in TFS Build 2010 - Jim Lamb
    Web Design
    List of 25+ New tags introduced in HTML 5 - techfreakstuff
    15 CSS Habits to Develop for Frustration-Free Coding - noupe
    Silverlight, WPF & RIA
    Essential Silverlight and WPF Skills: The UI Thread, Dispatchers, Background...(read more)

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >