Search Results

Search found 108959 results on 4359 pages for 'ado net data services'.


  • Exchange Web Services, try to use ExchangeImpersonationType ...

    - by howmanytimes
    I am trying to use EWS; this is my first time using the ExchangeServiceBinding. The code I am using is below:

    ```csharp
    _service = new ExchangeServiceBinding();
    //_service.Credentials = new NetworkCredential(userName, userPassword, this.Domain);
    _service.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
    _service.Url = this.ServiceURL;

    ExchangeImpersonationType ei = new ExchangeImpersonationType();
    ConnectingSIDType sid = new ConnectingSIDType();
    sid.PrimarySmtpAddress = this.ExchangeAccount;
    ei.ConnectingSID = sid;
    _service.ExchangeImpersonation = ei;
    ```

    The application is an ASP.NET 3.5 site trying to create a task using EWS. I am trying impersonation because I will not know the logged-on user's domain password, so impersonation seemed like the best fit. Any thoughts on how I can utilize impersonation? Am I setting this up correctly? I get an error while trying to run my application. I also tried without impersonation, just to see if I could create a task, with no luck either. Any help would be appreciated. Thanks.
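
    For reference, a rough sketch of what the task-creation call could look like with the proxy classes generated from the EWS Services.wsdl; member names can differ slightly between generated proxies, and the impersonated call also requires the service account to be granted Exchange impersonation rights, so treat this as a starting point rather than a known-good answer:

    ```csharp
    // Rough sketch (untested): create a task in the impersonated mailbox using
    // the proxy classes generated from Services.wsdl. Sample data is made up.
    TaskType task = new TaskType
    {
        Subject = "Follow up with customer",
        Body = new BodyType { BodyType1 = BodyTypeType.Text, Value = "Created via EWS" }
    };

    CreateItemType request = new CreateItemType
    {
        SavedItemFolderId = new TargetFolderIdType
        {
            Item = new DistinguishedFolderIdType { Id = DistinguishedFolderIdNameType.tasks }
        },
        Items = new NonEmptyArrayOfAllItemsType { Items = new ItemType[] { task } }
    };

    CreateItemResponseType response = _service.CreateItem(request);
    ResponseMessageType message = response.ResponseMessages.Items[0];
    if (message.ResponseClass != ResponseClassType.Success)
    {
        // Impersonation failures typically surface here, e.g. when the service
        // account is not allowed to impersonate this mailbox.
        throw new InvalidOperationException(message.MessageText);
    }
    ```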

    Read the article

  • Do I really need an ORM?

    - by alchemical
    We're about to begin development on a mid-size ASP.NET MVC 2 web site. For a typical page, we grab data and throw it up on the web page, i.e. there is not much pre-processing of the data before it is sent to the UI. We're now deciding whether or not to use an ORM and, if yes, which one. We had been looking at EF2, a.k.a. EF4 (the Entity Framework in VS 2010), as one possibility. However, I'm thinking a simpler solution in this case may be to just use DataTables. The reasoning is that we don't plan to move the data around or process it much once we fetch it, so I'm not sure there is much value in having strongly-typed objects as DTOs. Also, this way we avoid mapping altogether, which I think simplifies the code and allows for faster development. I should mention that budget is an issue on this project, as well as speed of execution. We are striving for simplicity anywhere we can, to keep the budget smaller, the schedule shorter, and performance fast. We haven't fully decided yet, but are currently leaning towards no ORM. Will we be OK with the no-ORM approach, or is an ORM worth it?
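
    For comparison, a minimal sketch of the "no ORM" route being considered: query straight into a DataTable and bind it to the page. The connection string name, table and SQL are hypothetical examples, not from the question.

    ```csharp
    // Minimal "no ORM" sketch: fill a DataTable and hand it to the page for binding.
    using System.Configuration;
    using System.Data;
    using System.Data.SqlClient;

    public static class ReportData
    {
        public static DataTable GetOrders(int customerId)
        {
            const string sql =
                "SELECT OrderId, OrderDate, Total FROM dbo.Orders WHERE CustomerId = @customerId";

            using (var connection = new SqlConnection(
                ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
            using (var command = new SqlCommand(sql, connection))
            using (var adapter = new SqlDataAdapter(command))
            {
                command.Parameters.AddWithValue("@customerId", customerId);

                var table = new DataTable();
                adapter.Fill(table);   // the adapter opens and closes the connection itself
                return table;
            }
        }
    }
    ```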

    Read the article

  • Entity Framework Model with inheritance and RIA Services

    - by TimothyP
    Hi, we have an Entity Framework model which has some inheritance in it. The following example is not the actual model, just enough to make my point. Let's say:

    Base class: Person
    Child classes: Employee, Customer

    The database has been generated, the DomainService has been created, and we can get to the data:

    ```csharp
    lstCustomers.ItemsSource = context.Persons;
    EntityQuery<Person> query = context.GetPeopleQuery().Take(4);
    context.Load(query);
    ```

    But how can I modify the query to only return Customers?
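
    One possible direction, sketched with assumed names (domain service, entity set and generated client methods are not from the question): expose a second query on the domain service that filters the hierarchy with OfType<T>().

    ```csharp
    // Sketch: a dedicated query that returns only the Customer subtype.
    [EnableClientAccess]
    public class PeopleDomainService : LinqToEntitiesDomainService<DatabaseEntities>
    {
        public IQueryable<Person> GetPeople()
        {
            return ObjectContext.Persons;
        }

        public IQueryable<Customer> GetCustomers()
        {
            // OfType<T>() filters the inheritance hierarchy down to Customer rows.
            return ObjectContext.Persons.OfType<Customer>();
        }
    }
    ```

    On the client, RIA Services should then generate a GetCustomersQuery() method, so the load would become something like context.Load(context.GetCustomersQuery().Take(4)), and the loaded entities (the LoadOperation's Entities collection) can be bound to lstCustomers.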

    Read the article

  • C# exception when calling stored procedure: ORA-01460 - unimplemented or unreasonable conversion req

    - by Taylor L
    I'm trying to call a stored procedure using ADO.NET and I'm getting the following error:

        ORA-01460: unimplemented or unreasonable conversion requested

    The stored procedure I'm trying to call has the following parameters:

    ```sql
    param1 IN VARCHAR2,
    param2 IN NUMBER,
    param3 IN VARCHAR2,
    param4 OUT NUMBER,
    param5 OUT NUMBER,
    param6 OUT NUMBER,
    param7 OUT VARCHAR2
    ```

    Below is the C# code I'm using to call the stored procedure:

    ```csharp
    OracleCommand command = connection.CreateCommand();
    command.CommandType = CommandType.StoredProcedure;
    command.CommandText = "MY_PROC";

    OracleParameter param1 = new OracleParameter() { ParameterName = "param1", Direction = ParameterDirection.Input, Value = p1, OracleDbType = OracleDbType.Varchar2, Size = p1.Length };
    OracleParameter param2 = new OracleParameter() { ParameterName = "param2", Direction = ParameterDirection.Input, Value = p2, OracleDbType = OracleDbType.Decimal };
    OracleParameter param3 = new OracleParameter() { ParameterName = "param3", Direction = ParameterDirection.Input, Value = p3, OracleDbType = OracleDbType.Varchar2, Size = p3.Length };
    OracleParameter param4 = new OracleParameter() { ParameterName = "param4", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param5 = new OracleParameter() { ParameterName = "param5", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param6 = new OracleParameter() { ParameterName = "param6", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal };
    OracleParameter param7 = new OracleParameter() { ParameterName = "param7", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Varchar2, Size = 32767 };

    command.Parameters.Add(param1);
    command.Parameters.Add(param2);
    command.Parameters.Add(param3);
    command.Parameters.Add(param4);
    command.Parameters.Add(param5);
    command.Parameters.Add(param6);
    command.Parameters.Add(param7);

    command.ExecuteNonQuery();
    ```

    Any ideas what I'm doing wrong?
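
    Not a definitive fix, but two things worth trying in similar code, sketched below under the assumption that this is ODP.NET (Oracle.DataAccess.Client): bind the parameters by name explicitly, and keep the OUT VARCHAR2 size at or below 4000, since 32767 is larger than some client/server combinations accept.

    ```csharp
    // Sketch (assumes ODP.NET; BindByName does not exist in System.Data.OracleClient).
    OracleCommand command = connection.CreateCommand();
    command.CommandType = CommandType.StoredProcedure;
    command.CommandText = "MY_PROC";
    command.BindByName = true;   // ODP.NET binds by position by default

    // Inputs: let the provider infer string sizes from the values.
    command.Parameters.Add("param1", OracleDbType.Varchar2, p1, ParameterDirection.Input);
    command.Parameters.Add("param2", OracleDbType.Decimal, p2, ParameterDirection.Input);
    command.Parameters.Add("param3", OracleDbType.Varchar2, p3, ParameterDirection.Input);

    // Numeric outputs need no size.
    command.Parameters.Add("param4", OracleDbType.Decimal, ParameterDirection.Output);
    command.Parameters.Add("param5", OracleDbType.Decimal, ParameterDirection.Output);
    command.Parameters.Add("param6", OracleDbType.Decimal, ParameterDirection.Output);

    // String output: try a 4000-byte ceiling first instead of 32767.
    command.Parameters.Add("param7", OracleDbType.Varchar2, 4000, null, ParameterDirection.Output);

    command.ExecuteNonQuery();
    ```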

    Read the article

  • Tutorials for .NET database app

    - by ChrisC
    My earlier question and comments are at "What is ADO.NET?". That shows my level of knowledge about C# database apps. Can someone point me to tutorials and/or primers that can help me proceed from where I am (also stated in the other question)? I've looked, and all I've found are tutorials that cover general database basics (i.e., they don't help with Visual Studio/C#) or that talk about connecting to existing SQL databases. I need help setting one up and configuring it, as well as help on how to create and use queries to test the database schema.
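
    Since the earlier question was about what ADO.NET is, here is a minimal, self-contained sketch of the basic pattern most tutorials build on: connect, execute a parameterized command, and read the results. The connection string, database and Pet table are made-up examples, not something from the question.

    ```csharp
    // Tiny end-to-end ADO.NET sketch against a local SQL Server Express instance.
    using System;
    using System.Data.SqlClient;

    class Demo
    {
        static void Main()
        {
            var connStr = @"Data Source=.\SQLEXPRESS;Initial Catalog=PetShop;Integrated Security=True";
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();

                // Insert a row with parameters (never string-concatenate values into SQL).
                using (var cmd = new SqlCommand(
                    "INSERT INTO Pet (Name, Species) VALUES (@name, @species)", conn))
                {
                    cmd.Parameters.AddWithValue("@name", "Rex");
                    cmd.Parameters.AddWithValue("@species", "Dog");
                    cmd.ExecuteNonQuery();
                }

                // Read the rows back.
                using (var cmd = new SqlCommand("SELECT Name, Species FROM Pet", conn))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0} ({1})", reader.GetString(0), reader.GetString(1));
                }
            }
        }
    }
    ```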

    Read the article

  • How do I do client-side validation in WPF using WCF RIA Services

    - by lsb
    Hi! I've created a WCF RIA Service that I'd like to use with a WPF application. I've added several System.ComponentModel.DataAnnotations validation rules to the entities' metadata, all of which work great on the server when I call .SubmitChanges(changeSet) from the client. I'd also like to validate my entities on the client side before I submit my changes to the server, but I have no idea how to do so. Any help in this regard would be greatly appreciated! Thanks.
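
    One option that needs no extra framework: the same DataAnnotations attributes the server checks can be evaluated locally with Validator.TryValidateObject before calling SubmitChanges. A minimal sketch; the helper class is an illustration, not part of RIA Services.

    ```csharp
    // Run the DataAnnotations rules on the client before submitting.
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    public static class ClientValidation
    {
        public static bool TryValidate(object entity, out List<ValidationResult> errors)
        {
            errors = new List<ValidationResult>();
            var context = new ValidationContext(entity, serviceProvider: null, items: null);

            // validateAllProperties: true evaluates every [Required], [Range],
            // [StringLength], etc. on the entity's properties.
            return Validator.TryValidateObject(entity, context, errors, validateAllProperties: true);
        }
    }
    ```

    Usage would be along the lines of: call TryValidate on each changed entity, show the ValidationResult messages in the UI, and only call SubmitChanges when everything passes.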

    Read the article

  • RESTful services and update operations

    - by Igor Brejc
    I know REST is meant to be resource-oriented, which roughly translates to CRUD operations on those resources using standard HTTP methods. But what if I just want to update part of a resource? For example, let's say I have a Payment resource and I want to mark its status as "paid". I don't want to POST the whole Payment object over HTTP (sometimes I don't even have all the data). What would be the RESTful way of doing this? I've seen that Twitter uses the following approach for updating statuses: "http://api.twitter.com/1/statuses/update.xml?status=playing with cURL and the Twitter API". Is this approach in "the spirit" of REST?

    UPDATE (PUT vs. POST): some links I found in the meantime:
    - PUT or POST: The REST of the Story
    - PUT is not UPDATE
    - PATCH Method for HTTP
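
    For what it's worth, a common RESTful answer is to expose the part being changed as its own sub-resource and PUT (or PATCH, where supported) just that representation. A rough C# sketch; the URL, verb support and payload shape are assumptions, not something from the question.

    ```csharp
    // Sketch: update only the payment status by addressing it as a sub-resource.
    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class PartialUpdateDemo
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/payments/42/status");
            request.Method = "PUT";              // or "PATCH" where the server supports it
            request.ContentType = "application/json";

            byte[] body = Encoding.UTF8.GetBytes("{\"status\":\"paid\"}");
            request.ContentLength = body.Length;
            using (Stream stream = request.GetRequestStream())
                stream.Write(body, 0, body.Length);

            using (var response = (HttpWebResponse)request.GetResponse())
                Console.WriteLine(response.StatusCode);
        }
    }
    ```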

    Read the article

  • How to pass parameters to report model in Reporting Services

    - by savras
    I'm developing a report in Reporting Services that shows the top N customers based on some criteria. It also lets the user select the number of customers and the period of time. Is it possible to do this using a report model? The thing that seems difficult is how to pass parameters chosen by the user. Another thing that is, in my opinion, very disappointing is that I cannot use a SQL query as the dataset query, because a report model uses odd and elaborate XML, although report model items do seem to map their fields to query or table fields. I am considering report models because I need to provide a uniform data model (the same tables and fields) over more or less different database schemas. It would be very nice if somebody could explain what can and cannot be done with report models.

    Read the article

  • Problem using SQLDataReader with Sybase ASE

    - by John K.
    We're developing a reporting application that uses ASP.NET MVC (.NET 4). We connect through DDTEK.Sybase middleware to a Sybase ASE 12.5 database. We're having a problem pulling data into a DataReader (from a stored procedure). The stored procedure computes values (approximately 50 columns) by doing sums, counts, and calling other stored procedures. The problem we're experiencing is that certain columns (maybe 5% of them) come back with NULL or 0. If we debug, copy the SQL statement being used for the DataReader and run it in another SQL tool, we get valid values for all columns.

    ```csharp
    conn = new SybaseConnection
    {
        ConnectionString = ConfigurationManager.ConnectionStrings[ConnectStringName].ToString()
    };
    conn.Open();

    cmd = new SybaseCommand
    {
        CommandTimeout = cmdTimeout,
        Connection = conn,
        CommandText = mainSql
    };
    reader = cmd.ExecuteReader();
    // AT THIS POINT, IMMEDIATELY AFTER THE EXECUTEREADER COMMAND,
    // THE READER CONTAINS THE BAD (NULL OR 0) DATA FOR THESE COLUMNS.

    DataTable schemaTable = reader.GetSchemaTable();
    // AT THIS POINT WE CAN VIEW THE DATATABLE FOR THE SCHEMA AND IT APPEARS CORRECT.
    // THE COLUMNS THAT DON'T WORK HAVE SPECIFICATIONS IDENTICAL TO THE COLUMNS THAT DO WORK.
    ```

    Has anyone had problems like this using Sybase and ADO.NET? Thanks, John K.

    Read the article

  • Getting the data inside the C# web service from Jsonified string

    - by gnomixa
    In my JS I use jQuery's $.ajax function to call the web service and send the JSON-ified data to it. The data is in the following format:

    ```javascript
    var countries = {
        "1A": { id: "1A", name: "Andorra" },
        "2B": { id: "2B", name: "Belgium" }
        // ...etc
    };

    var jsonData = JSON.stringify({ data: data });
    // then $.ajax is called and it makes the call to the C# web service
    ```

    On the C# side the web service needs to unpack this data; currently it comes in as a string[][]. How do I convert it to a format where I can refer to properties such as .id and .name, assuming I have a class called Sample with these properties? Thanks!

    EDIT: Here is my JS code:

    ```javascript
    var jsonData = JSON.stringify(countries);
    $.ajax({
        type: 'POST',
        url: 'http://localhost/MyService.asmx/Foo',
        contentType: 'application/json; charset=utf-8',
        data: jsonData,
        success: function (msg) {
            alert(msg.d);
        },
        error: function (xhr, status) {
            switch (status) {
                case 404: alert('File not found'); break;
                case 500: alert('Server error'); break;
                case 0: alert('Request aborted'); break;
                default: alert('Unknown error ' + status);
            }
        }
    });
    ```

    Inside the C# web service I have:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Services;
    using System.Data;
    using System.Collections;
    using System.IO;
    using System.Web.Script.Services;

    [WebMethod]
    [ScriptMethod]
    public string Foo(IDictionary<string, Country> countries)
    {
        return "success";
    }
    ```
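
    A possible shape for the server side, sketched under the assumption that the ASMX ScriptService JSON binder is in play: it matches top-level JSON properties to parameter names, so the client would need to send JSON.stringify({ countries: countries }) rather than JSON.stringify(countries). The Country class below is illustrative, not from the question.

    ```csharp
    // Sketch: a DTO shaped like the JavaScript objects, received as a dictionary
    // keyed by country code.
    public class Country
    {
        public string id { get; set; }
        public string name { get; set; }
    }

    [WebMethod]
    [ScriptMethod]
    public string Foo(Dictionary<string, Country> countries)
    {
        // e.g. countries["1A"].name == "Andorra"
        return "received " + countries.Count + " countries";
    }
    ```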

    Read the article

  • How Can I Find What's Causing My Transaction to Get Promoted?

    - by Damian Powell
    I have a web site which serves web services (a mixture of .asmx and WCF) and which mostly uses LINQ to SQL and System.Transactions. Occasionally we see a transaction get promoted to a distributed transaction, which causes problems because our web servers are isolated from our databases in such a way that it is not possible for us to use MSDTC. I have configured tracing for System.Transactions by adding the following to my web.config:

    ```xml
    <system.diagnostics>
      <sources>
        <source name="System.Transactions" switchValue="Information">
          <listeners>
            <add name="tx"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="tx.log" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>
    ```

    It's very interesting and shows me when the transaction is promoted, but I find that it doesn't really help me discover why. Is there an equivalent tracing mechanism for ADO.NET that will show me when connections are created, including the variables that affect pooling (user, connection string, transaction scope)?
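
    As far as I know, ADO.NET's built-in data tracing is ETW-based rather than a simple TraceSource, so it is heavier to set up than the System.Transactions source above. What usually matters in practice on SQL Server 2005 is that promotion happens as soon as a second connection (or one with a different connection string) enlists in the same transaction. A hedged sketch of keeping LINQ to SQL on one explicitly opened connection for the life of the scope; the DataContext type and connection string name are hypothetical.

    ```csharp
    // Sketch: one connection, opened once inside the scope, shared by the DataContext,
    // so the transaction has no reason to promote to MSDTC.
    public void SaveOrders()
    {
        using (var scope = new TransactionScope())
        using (var connection = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
        {
            connection.Open();                          // opened once, inside the scope

            using (var db = new OrdersDataContext(connection))
            {
                // ... queries and SubmitChanges() reuse the same enlisted connection ...
                db.SubmitChanges();
            }

            scope.Complete();
        }
    }
    ```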

    Read the article

  • Strategies for Synchronizing Data Between a Rails App and iPhone App

    - by jessecurry
    I've written many iPhone Applications that have pulled data from web services and I've worked on synchronizing data between an iPhone App and a Web Application, but I've always felt that there is probably a better way to handle the synchronization. I'd like to know what strategies you have used to synchronize data between your iPhone(read: mobile) Apps and your Rails(read: web) Applications. Are there any strategies that scale particularly well? How have you dealt with large amounts of data? (Do you use paged responses?) How do you make sure that data is not overwritten? Is there a reason to avoid Ruby on Rails? if so, can you suggest an alternative? What is better about the alternative? What strategies have failed? Why do you believe that those strategies failed? I would like to be able to keep all of the data modifications on the server, but the particular application I am about to start work on will need the ability to operate while disconnected from the network. The user will be able to update data on the mobile device and update data through the web application. When the user's mobile device connects to the server any local changes will be pushed to the server.

    Read the article

  • How can you get the average of the count of records in Reporting Services

    - by Tim Coker
    I have a report where I'm counting the number of records for a day. At the bottom, I have the total records for that time period. I'd like to display the average records for each day. An example:

        Day         Tests
        1/1/2010    5
        1/2/2010    10
        1/3/2010    8
        Average     7.7
        Total       23

    I can accomplish this by aggregating the data by date in SQL and then just doing the Average and Sum of the count from SQL, but that would complicate the report considerably, as I'm doing other column filters for the types of tests performed. There seems to be a simple way to accomplish this that I'm missing. I can't do Average(Count(field)) in SSRS, unfortunately. Is there something I'm missing, or is there really not a way to accomplish this simply?

    Read the article

  • System.Net.WebException: The remote server returned an error: (405) Method Not Allowed .exception occurred during the execution of the web request

    - by user88
    When I run my web application code I get this error on the line with `request.GetResponse()`. When I open the URL directly in the browser it returns the proper output, but when I request it in code it throws the exception. Here is my code:

    ```csharp
    string service = "http://api.ean.com/ean-services/rs/hotel/";
    string version = "v3/";
    string method = "info/";
    string hotelId1 = "188603";
    int hotelId = Convert.ToInt32(hotelId1);
    string otherElemntsStr = "&cid=411931&minorRev=[12]&customerUserAgent=[hotel]&locale=en_US&currencyCode=INR";
    string apiKey = "tzyw4x2zspckjayrbjekb397";
    string sig = "a6f828b696ae6a9f7c742b34538259b0";
    string url = service + version + method + "?&type=xml" + "&apiKey=" + apiKey + "&sig=" + sig + otherElemntsStr + "&hotelId=" + hotelId;

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "POST";
    request.ContentType = "text/xml";
    request.ContentLength = 0;

    XmlDocument xmldoc = new XmlDocument();
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        StreamReader responsereader = new StreamReader(response.GetResponseStream());
        var responsedata = responsereader.ReadToEnd();
        xmldoc = (XmlDocument)JsonConvert.DeserializeXmlNode(responsedata);
        xmldoc.Save(@"D:\FlightSearch\myfile.xml");
        xmldoc.Load(@"D:\FlightSearch\myfile.xml");

        DataSet ds = new DataSet();
        ds.ReadXml(Request.PhysicalApplicationPath + "myfile.xml");
        GridView1.DataSource = ds.Tables["HotelSummary"];
        GridView1.DataBind();
    }
    ```
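
    A hedged guess at the difference from the browser test: the browser issues a GET, while the code above POSTs, and a 405 is exactly what a GET-only resource returns for POST. A sketch of the same call as a GET with no request body, still using the `url` built above:

    ```csharp
    // Sketch: request the resource the same way the browser does - a plain GET.
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";
    request.Accept = "application/xml";

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string payload = reader.ReadToEnd();

        // type=xml was requested, so the body should already be XML and can be
        // loaded directly, without the JSON-to-XML conversion step.
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(payload);
    }
    ```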

    Read the article

  • Ensure that my C# desktop application is making requests to my ASP .NET MVC action?

    - by Mathias Lykkegaard Lorenzen
    I've seen questions that are almost identical to this one, except minor but important differences that I would like to get detailed. Let's say that I have a controller and an action method in MVC which therefore accepts requests on the following URL: http://example.com/api/myapimethod?data=some-data-here. This URL is then being called regularly by 1000 clients or more spread out in the public. The reason for this is crowdsourcing. The clients around the globe help feed a global cache on my server, which makes it faster for the rest of the clients to fetch the data. Now, if I'm sneaky (and I am), I can go into Fiddler, Ethereal, Wireshark or any other packet sniffing tool and figure out which requests the program is making. By figuring that out, I can also replicate them, and fill the service with false corrupted data. What is the best approach to ensuring that the data received in my ASP .NET MVC action method is actually from the desktop client application, and not some falsely generated data that the user invented? Since it is all based on crowdsourcing, would it be a good idea for my users to be able to "vote" if some data is falsified, and then let an automatic cleanup commence if there are enough votes? I do not have access to a tool like SmartAssembly, so unfortunately my .NET program is fully decompilable. I realize this might be impossible to accomplish in an error-proof manner, but I would like to know where my best chances are.
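
    One common mitigation, sketched below with hypothetical key handling and header names: have the desktop client sign each request with an HMAC over the payload and a key embedded in the client, and verify the signature in the MVC action. As the question itself anticipates, a determined user can still decompile the client and extract the key, so this only raises the bar; combining it with the crowd-voting/cleanup idea is a reasonable layered approach.

    ```csharp
    // Sketch: server-side verification of an HMAC-SHA256 signature sent by the client.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class RequestSigning
    {
        public static bool IsValid(string data, string signatureBase64, byte[] sharedKey)
        {
            using (var hmac = new HMACSHA256(sharedKey))
            {
                byte[] expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
                byte[] presented = Convert.FromBase64String(signatureBase64);
                if (expected.Length != presented.Length) return false;

                // Constant-time comparison so the check doesn't leak where it fails.
                int diff = 0;
                for (int i = 0; i < expected.Length; i++)
                    diff |= expected[i] ^ presented[i];
                return diff == 0;
            }
        }
    }
    ```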

    Read the article

  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here's a nice and simple path utility that I've needed in a number of applications: I need to find a relative path based on a base path. So if I'm working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt, I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ - in other words, always return a non-hardcoded path based on some other known directory. I've had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here's the simple static method:

    ```csharp
    /// <summary>
    /// Returns a relative path string from a full path based on a base path
    /// provided.
    /// </summary>
    /// <param name="fullPath">The path to convert. Can be either a file or a directory.</param>
    /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param>
    /// <returns>
    /// String of the relative path.
    ///
    /// Examples of returned values:
    /// test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt
    /// </returns>
    public static string GetRelativePath(string fullPath, string basePath)
    {
        // Force basePath to a directory path
        if (!basePath.EndsWith("\\"))
            basePath += "\\";

        Uri baseUri = new Uri(basePath);
        Uri fullUri = new Uri(fullPath);

        Uri relativeUri = baseUri.MakeRelativeUri(fullUri);

        // Uris use forward slashes so convert back to backward slashes
        return relativeUri.ToString().Replace("/", "\\");
    }
    ```

    You can then call it like this:

    ```csharp
    string relPath = FileUtils.GetRelativePath(@"c:\temp\templates\subdir\test.txt", @"c:\temp\templates");
    ```

    It's not exactly rocket science but it's useful in many scenarios where you're working with files based on an application base directory. Right now I'm working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for the existence of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp

    Read the article

  • PHP Web Services - Nice try

    Thanks to the membership in the O'Reilly User Group Programme the Mauritius Software Craftsmanship Community (short: MSCC) recently received a welcome package with several book titles. Among them is the latest publication of Lorna Jane Mitchell - 'PHP Web Services: APIs for the Modern Web'. Following is the book review I put on Amazon:

    Nice try! Initially, I was astonished that a small book like 'PHP Web Services' would be able to cover all the interesting topics about APIs and Web Services, independently whether they are written in PHP or not. And unfortunately, the title isn't able to stand up to the readers (or at least my) expectations. Maybe as a light defense, there is no usual paragraph about the intended audience of that book, but still I have to admit that the first half (chapters 1 to 8) are well written and Lorna has her points on the various technologies. Also, the code samples in PHP are clean and easy to understand.

    With chapter 'Debugging Web Services' the book started to change my mind about the clarity of advice and the instructions on designing and developing good APIs. Eventually, this might be related to the fact that I'm used to other tools since years, like Telerik Fiddler as HTTP proxy in order to trace and inspect any kind of request/response handling. Including localhost monitoring, SSL certification acceptance, and the ability to debug mobile devices, especially iOS-based ones. Compared to Charles, Fiddler is available for free.

    What really got me off the hook is the following statement in chapter 10 about Service Type Decisions: "For users who have larger systems using technology stacks such as Java, C++, or .NET, it may be easier for them to integrate with a SOAP service." WHAT? A couple of pages earlier the author recommends to stay away from 'old-fashioned' API styles like SOAP (if possible). And on top of that I wonder why there are tons of documentation towards development of RESTful Web Services based on WebAPI. The ASP.NET stack clearly moves away from SOAP to JSON and REST since years! Honestly, as a software developer on the .NET stack this leaves a mixed feeling after all.

    As for the remaining chapters I simply consider them as 'blah blah' without any real value and lots of theoretical advice. Related to the chapter 13 about 'Documentation', I just had the 'pleasure' to write a C#-based client against a Java-based SOAP Web Service. Personally, I take the WSDL as the master reference in the first place and Visual Studio generates all the stub types involved in the communication. During the implementation and testing I came across a 'java.lang.NullPointerException' in various methods and for various method parameters. The WSDL and the generated types were declared as Nullable, so nothing to worry about, or? Well, I logged in a support ticket, and guess what was the response to that scenario? "The service definition in the WSDL is wrong, please refer to the documentation in order to use the methods and parameters correctly" - No comment!

    Lorna's title is a quick read and in some areas she has good advice on designing and implementing Web Services and APIs. But roughly 100 pages aren't enough to cover a vast topic like that. After all, nice try and I'm looking forward to an improved second edition.

    Honestly, I never thought that I would come across a poor review. In general, it's a good book but it clearly has a lack of depth, the PHP code samples are incomplete (closing tags missing), and there are too many assumptions and theoretical statements.

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound to an ASP.NET web site map file? Ever wanted to have cool CSS design stuff implemented on your ASP.NET data-bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound with ASP.NET data controls? If your answers to the above questions are 'yes', then you will probably be happy to try out the latest release (v5.0) of the Employee Info Starter Kit, which is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table 'Employee', the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.

    Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the 'Pareto Principle' or 80-20 rule, in that it aims to enable a web developer to gain 80% productivity with 20% of the effort with respect to learning curve and production. The project template was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since. The latest version of this starter kit is hosted on CodePlex.

    Release Highlights

    User End Functional Specification
    The user end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as described below:
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records

    Architectural Overview
    - Simple 3-layer architecture (presentation, business logic and data access layer)
    - ASP.NET Web Form based user interface
    - Built-in code generators for logical layers, implemented in the Visual Studio default template engine (T4)
    - Built-in Entity Framework entities as business entities (aka data containers)
    - Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework
    - Domain Model design pattern based Business Logic Layer, implemented in C#
    - Object Model for cross-cutting concerns (such as validation, logging, exception management)

    Minimum System Requirements
    - Visual Studio 2010 (Web Developer Express Edition) or higher
    - SQL Server 2005 (Express Edition) or higher

    Technology Utilized

    Programming Languages/Scripts
    - Browser side: JavaScript
    - Web server side: C#
    - Code generation template: T4 Template

    Frameworks
    - .NET Framework 4.0
    - JavaScript framework: jQuery 1.5.1
    - CSS framework: 960 Grid System

    .NET Framework Components
    - .NET Entity Framework
    - .NET Optional/Named Parameters (new in .NET 4.0)
    - .NET Tuple (new in .NET 4.0)
    - .NET Extension Method
    - .NET Lambda Expressions
    - .NET Anonymous Type
    - .NET Query Expressions
    - .NET Automatically Implemented Properties
    - .NET LINQ
    - .NET Partial Classes and Methods
    - .NET Generic Type
    - .NET Nullable Type
    - ASP.NET Meta Description and Keyword Support (new in .NET 4.0)
    - ASP.NET Routing (new in .NET 4.0)
    - ASP.NET GridView (CSS support for sorting - new in .NET 4.0)
    - ASP.NET Repeater
    - ASP.NET FormView
    - ASP.NET LoginView
    - ASP.NET SiteMapPath
    - ASP.NET Skin
    - ASP.NET Theme
    - ASP.NET Master Page
    - ASP.NET ObjectDataSource
    - ASP.NET Role Based Security

    Getting Started Guide
    Seeing Employee Info Starter Kit in action is pretty easy!
    1. Download the latest version.
    2. Extract the file.
    3. From the extracted folder, click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010.
    4. Hit Ctrl+F5!

    The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page, Installation Walkthrough, Hands-on Coding Walkthrough, Technical Reference. Enjoy!

    Read the article

  • Best practices for logging and tracing in .NET

    - by Levidad
    I've been reading a lot about tracing and logging, trying to find some golden rule for best practices in the matter, but there isn't any. People say that good programmers produce good tracing, but put that way, it has to come from experience. I've also read similar questions here and around the internet, and they are not really the same thing I am asking or do not have a satisfying answer, maybe because the questions lack some detail.

    So, folks say that tracing should more or less replicate the experience of debugging the application in cases where you can't attach a debugger. It should provide enough context so that you can see which path is taken at each control point in the application. Going deeper, you can even distinguish between tracing and event logging, in that "event logging is different from tracing in that it captures major states rather than detailed flow of control".

    Now, say I want to do my tracing and logging using only the standard .NET classes, those in the System.Diagnostics namespace. I figured that the TraceSource class is better for the job than the static Trace class, because I want to differentiate among the trace levels, and using the TraceSource class I can pass in a parameter informing the event type, while using the Trace class I must use Trace.WriteLineIf and then verify things like SourceSwitch.TraceInformation and SourceSwitch.TraceErrors, and it doesn't even have properties like TraceVerbose or TraceStart.

    With all that in mind, would you consider it good practice to do the following?
    - Trace a "Start" event when beginning a method, which should represent a single logical operation or a pipeline, along with a string representation of the parameter values passed in to the method.
    - Trace an "Information" event when inserting an item into the database.
    - Trace an "Information" event when taking one path or another in an important if/else statement.
    - Trace a "Critical" or "Error" event in a catch block, depending on whether this is a recoverable error.
    - Trace a "Stop" event when finishing the execution of the method.

    Also, please clarify when it is best to trace Verbose and Warning event types. If you have examples of code with nice trace/logging and are willing to share, that would be excellent.

    Note: I've found some good information here, but still not what I am looking for: http://msdn.microsoft.com/en-us/magazine/ff714589.aspx

    Thanks in advance!
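
    As a concrete illustration of the list above, here is a sketch using a single TraceSource; the source name and the sample operation are invented for the example. A common convention (not a rule) is Verbose for chatty flow detail you only want while diagnosing, and Warning for unexpected but recoverable situations such as a retry or a fallback path.

    ```csharp
    // Sketch of the Start/Information/Error/Stop pattern with one TraceSource.
    using System;
    using System.Diagnostics;

    public class OrderService
    {
        private static readonly TraceSource Trace = new TraceSource("MyApp.Orders");

        public void PlaceOrder(int customerId, decimal amount)
        {
            Trace.TraceEvent(TraceEventType.Start, 0,
                "PlaceOrder(customerId={0}, amount={1})", customerId, amount);
            try
            {
                // ... insert the order ...
                Trace.TraceEvent(TraceEventType.Information, 0,
                    "Order row inserted for customer {0}", customerId);

                // ... an important branch ...
                Trace.TraceEvent(TraceEventType.Verbose, 0, "Taking the discount branch");
            }
            catch (Exception ex)
            {
                Trace.TraceEvent(TraceEventType.Error, 0, "PlaceOrder failed: {0}", ex);
                throw;
            }
            finally
            {
                Trace.TraceEvent(TraceEventType.Stop, 0, "PlaceOrder");
            }
        }
    }
    ```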

    Read the article

  • Design pattern for an ASP.NET project using Entity Framework

    - by MPelletier
    I'm building a website in ASP.NET (Web Forms) on top of an engine with business rules (which basically resides in a separate DLL), connected to a database mapped with Entity Framework (in a third, separate project). I designed the Engine first, which has an Entity Framework context, and then went on to work on the website, which presents various reports. I believe I made a terrible design mistake in that the website has its own context (which sounded normal at first). I present this mockup of the engine and a report page's code-behind.

    Engine (in a separate DLL):

    ```csharp
    public class Engine
    {
        DatabaseEntities _engineContext;

        public Engine()
        {
            // Connection string and procedure managed in DB layer
            _engineContext = DatabaseEntities.Connect();
        }

        public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
        {
            // Suppose there's some validation too, non-trivial stuff
            someEntity.Value = newValue;
            _engineContext.SaveChanges();
        }
    }
    ```

    And the report:

    ```csharp
    public partial class MyReport : Page
    {
        Engine _engine;
        DatabaseEntities _webpageContext;

        public MyReport()
        {
            _engine = new Engine();
            _webpageContext = DatabaseEntities.Connect();
        }

        public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e)
        {
            SomeEntity someEntity;

            // Wrong way:
            // Get the entity from the webpage context
            someEntity = _webpageContext.SomeEntities.Single(s => s.Id == SomeEntityId);
            // Send the entity from _webpageContext to the engine
            _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of contexts

            // Right(?) way:
            // Get the entity from the engine context
            someEntity = _engine.GetSomeEntity(SomeEntityId); // undefined above
            // Send the entity from the engine's context to the engine
            _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue);
        }
    }
    ```

    Because the webpage has its own context, giving the Engine an entity from a different context will cause an error. I happen to know not to do that, and to only give the Engine entities from its own context, but this is a very error-prone design. I see the error of my ways now; I just don't know the right path. I'm considering:

    1. Creating the connection in the Engine and passing it off to the webpage: always instantiate an Engine, make its context accessible from a property, and share it. Possible problems: other conflicts? Slow? Concurrency issues if I want to expand to AJAX?
    2. Creating the connection in the webpage and passing it off to the Engine (I believe that's dependency injection?).
    3. Only talking through IDs. This creates redundancy, is not always practical, and sounds archaic - but at the same time, I already pull things from the page as IDs that I need to fetch anyway.

    What would be the best compromise here for safety, ease of use and understanding, stability, and speed?
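
    For what it's worth, a minimal sketch of option 2 (one context per request, injected into the Engine), reusing the mockup's names; whether DatabaseEntities.Connect() is the right factory to call from the page is an assumption here.

    ```csharp
    // Sketch: the page owns one context for the request and hands it to the Engine,
    // so page and engine never hold entities from different contexts.
    public class Engine
    {
        private readonly DatabaseEntities _context;

        public Engine(DatabaseEntities context)    // injected, not created here
        {
            _context = context;
        }

        public SomeEntity GetSomeEntity(int id)
        {
            return _context.SomeEntities.Single(s => s.Id == id);
        }

        public void ChangeSomeEntity(SomeEntity entity, int newValue)
        {
            entity.Value = newValue;               // entity is attached to _context
            _context.SaveChanges();
        }
    }

    public partial class MyReport : Page
    {
        private DatabaseEntities _context;
        private Engine _engine;

        protected void Page_Load(object sender, EventArgs e)
        {
            _context = DatabaseEntities.Connect(); // one context for this request
            _engine = new Engine(_context);
        }

        public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e)
        {
            var entity = _engine.GetSomeEntity(SomeEntityId);
            _engine.ChangeSomeEntity(entity, SomeEntityNewValue);
        }
    }
    ```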

    Read the article

  • Exadata X3, 11.2.3.2 and Oracle Platinum Services

    - by Rene Kundersma
    Oracle recently announced an Exadata hardware update. The overall architecture will remain the same, but some interesting hardware refreshes have been done, especially for the storage server (X3-2L). Each cell will now have 1600 GB of flash, which means an X3-2 full rack will have 20.3 TB of total flash! For all the details I would like to refer to the Oracle Exadata product page: www.oracle.com/exadata

    Together with the announcement of the X3 generation, a new Exadata release, 11.2.3.2, is made available. New Exadata systems will be shipped with this release and existing installations can be updated to it. As always there is a storage cell patch and a patch for the compute node, which again needs to be applied using YUM. Instructions and requirements for patching existing Exadata compute nodes to 11.2.3.2 using YUM can be found in the patch README. Depending on the release you have installed on your compute nodes, the README will direct you to a particular section in MOS note 1473002.1. MOS 1473002.1 should only be followed together with the instructions from the 11.2.3.2 patch README. As with 11.2.3.1.0 and 11.2.3.1.1, instructions are included to prepare your systems to use YUM for the first time in case you are still on release 11.2.2.4.2 or earlier. You will also find these One Time Setup instructions in MOS note 1473002.1.

    By default, compute nodes that are updated to 11.2.3.2.0 will have the UEK kernel. Before 11.2.3.2.0 the 'compatible kernel' was used for the compute nodes. For 11.2.3.2.0, customers have the choice to replace the UEK kernel with the Exadata standard 'compatible kernel', which is also in the ULN 11.2.3.2 channel. The recommendation is to use the kernel that is installed by default.

    One of the other great new things 11.2.3.2 brings is Writeback FlashCache (wbfc). By default wbfc is disabled after the upgrade to 11.2.3.2. Enable wbfc after patching on the storage servers of your test environment and see the improvements this brings for your applications. Writeback FlashCache can be enabled by dropping the existing FlashCache, stopping the cellsrv process and changing the FlashCacheMode attribute of the cell. Of course, stopping cellsrv can only be done in a controlled manner. Steps:

        drop flashcache
        alter cell shutdown services cellsrv      (again, cellsrv can only be stopped in a controlled manner)
        alter cell flashCacheMode = WriteBack
        alter cell startup services cellsrv
        create flashcache all

    Going back to WriteThrough FlashCache is also possible, but only after flushing the FlashCache:

        alter cell flashcache all flush

    The last item I would like to highlight in particular is already from a while ago, but a great thing to emphasize: Oracle Platinum Services. On top of the remote fault monitoring with faster response times, Oracle has included update and patch deployment services. These services are delivered by Oracle Advanced Customer Support at no additional cost for qualified Oracle Premier Support customers.

    References:
    - 11.2.3.2.0 README
    - Exadata YUM Repository Population, One-Time Setup Configuration and YUM upgrades (MOS note 1473002.1)
    - Oracle Platinum Services

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
    My technology predictions for 2011: cloud computing and mobile application development will be the hottest trends of 2011. I expect Windows Azure to be very hot in 2011 and a lot of cloud computing adoption to happen on Windows Azure. Web application scalability will be the big challenge for architects in the next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack.

    Windows Azure
    Windows Azure will be one of the hottest technologies of 2011. Adoption of the cloud and Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs. There is no need to invest upfront in expensive infrastructure: pay only for what you use, scale up when you need capacity and pull it back when you don't, with patches and maintenance handled for you in a secure environment with over 99.9% uptime.

    Silverlight 5
    Silverlight is becoming a common technology for a variety of development platforms. You can develop Silverlight applications for the web, the desktop and Windows Phone. The new Silverlight 5 beta will be available during the first quarter of next year with many new capabilities and features. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011.

    Windows Phone 7 Development Tools
    Mobile application development will be very hot in 2011 and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar's blog post and from the MSDN documentation available here.

    EF Code First
    I am a big fan of Entity Framework's Code First approach and hope that it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications. I hope that DDD fans will love the EF Code First approach. Entity Framework 4 now supports three approaches, and these will attract different types of developer audiences.

    ASP.NET MVC 3
    ASP.NET MVC 3 will be the hottest technology of the Microsoft web stack next year. ASP.NET developers will widely move to the ASP.NET MVC Framework from WebForms development. The new Razor view engine is great and it will increase the adoption of ASP.NET MVC 3; Razor will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, generation of clean HTML and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year, you can expect more articles from my blog on ASP.NET MVC 3 and Entity Framework 4 Code First.

    RavenDB
    NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database on the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. RavenDB is a .NET-focused document database that comes with a fully functional .NET client API and supports LINQ. I have written a few articles on RavenDB and you can read them here.

    Managed Extensibility Framework (MEF)
    Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution to the runtime extensibility problem. I hope that .NET developers will adopt MEF more in the next year for their .NET applications. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post "MEF or Managed Extensibility Framework – Creating a Zoo and Animals".

    Read the article

  • net.tcp Listener Adapter and net.tcp Port Sharing Service not starting on reboot

    - by Peter K.
    I am using the net.tcp protocol for various web services. When I reboot my Windows 7 Ultimate (64-bit) MacBook Pro, the services never restart automatically, even though they are set to start automatically. The only relevant events I can see are in the System event log:

    - Error, 6/9/2011 19:47, Service Control Manager, Event 7001: "The Net.Tcp Listener Adapter service depends on the Net.Tcp Port Sharing Service service which failed to start because of the following error: The service did not respond to the start or control request in a timely fashion."
    - Error, 6/9/2011 19:47, Service Control Manager, Event 7000: "The Net.Tcp Port Sharing Service service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion."
    - Error, 6/9/2011 19:47, Service Control Manager, Event 7009: "A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect."

    This post suggests that it's something else blocking the port (in that post it was the SCCM 2007 R3 client, which I don't use). What else could be the problem? If it is something else blocking the port, how do I figure out what? When I manually start the services, they start correctly. The dependency chain is:

    - Net.Tcp Port Sharing Service
    - Net.Tcp Listener Adapter

    Still no luck, but I think the problem might be that my network connection takes too long to come up. I put a custom view on the event log and found these items; the first in the series says: "A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect."

    Read the article

  • Import exponetial fixed width format data into Excel

    - by Tom Daniel
    I've received a bunch of text data files consisting of lots of records (30K per file) of 3 fields each, with 5-place numbers in exponential format: s0.nnnnnEsee (where s is +/-, n is a digit and ee is the exponent, always 2 digits). When I open a file in Notepad, the format is perfectly uniform throughout, but when I import it into Excel using Data | Import | Fixed Width, many of the data values get messed up, no matter what format (text, exponential, various custom tries) I assign to the cells. Looking at the Notepad version, it appears that leading + signs were replaced with a space in the data file, but the sign of the exponent is always there. This means that some fields begin with a space, and this appears to confuse the Excel import routine. I get the same result in Excel 2003 and 2007. I'm sure there's a straightforward solution (hopefully without a messy VBA routine), but I can't figure out what to try next. :-) To clarify (hopefully), here are some input records and the corresponding text that lands in Excel:

        Notepad                                  Excel
        -0.11311E+01 0.10431E-04 0.27018E-03     -0.11311E   1.0431E-05   2.7018E-04
         0.19608E+00-0.81414E-02-0.89553E-02      0.19608E  -8.1414E-03   8.9553E-03
        etc.

    Whoopee! Solved my own problem - in the spirit of Jeopardy, now that I've begun the question, here's the answer: use a different "File Origin". Several options other than the default "Unicode UTF..." work fine! What a pain. Hope this helps somebody else avoid a few unpleasant hours! Aloha from Kona, Tom

    Read the article

  • Recover data from quick formatted DVD-R

    - by Andrii Kalytiiuk
    I need to recover data from a quick-formatted DVD-R. Please advise a free-of-charge option (cheap commercial tools will be OK too). The disc was partially recorded with the Windows built-in disc recorder and the recording most likely was not complete. Afterwards I inserted the partially recorded DVD again and, on the Windows recorder's 'How to use this disc?' message box, selected 'use for CD/DVD player', and the data was completely lost, as a new recording session was started. Photo (JPG) files were recorded on the disc. What I have tried so far:

    - DiskInternals CD-DVD Recovery - sees 5 jpg files but can't show a preview. The tool is commercial; the trial version does not allow recovering files.
    - CDCheck - doesn't see any files and reports errors when attempting to scan the DVD.
    - CD Recovery Toolbox Free - does not even recognize the DVD drive.
    - ISOBuster - recognizes two files: one MP3 file for 99% of the recorded size and one ARC file of about 100 KB.
    - MiniTool Power Data Recovery, Free Edition - does not see any files on the DVD.
    - Stellar Phoenix CD DVD Data Recovery - does not see any files.
    - BinaryBiz Virtual Lab - sees the DVD disc but needs a license to browse the content.

    Please advise how it is possible to recover files from this DVD.

    Read the article
