Search Results

Search found 1221 results on 49 pages for 'contract'.


  • WCF 3.5 Service and multiple http bindings

    - by mortenvpdk
    Hi, I can't get my WCF service to work with more than one HTTP binding. In IIS 7 I have two bindings, http://service and http://service.test, both on port 80. In my web.config I added the baseAddressPrefixFilters, but I can't add more than one:

        <serviceHostingEnvironment>
          <baseAddressPrefixFilters>
            <add prefix="http://service"/>
            <add prefix="http://service.test"/>
          </baseAddressPrefixFilters>
        </serviceHostingEnvironment>

    This gives almost the same error as if no filters were specified at all: "This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item". If I add only one filter, then the service works, but it responds only on the filtered address. I've also tried specifying multiple endpoints (with only one filter):

        <endpoint address="http://service.test" binding="basicHttpBinding" bindingConfiguration="" contract="IService" />
        <endpoint address="http://service" binding="basicHttpBinding" bindingConfiguration="" contract="IService" />

    Still, only the address also named in the filter works; the other returns this error: Server Error in Application "ISPSERVICE", HTTP Error 400.0 - Bad Request. Regards, Morten
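
    WCF 3.5 enforces one base address per scheme, which is why both prefix filters collide. A common workaround (a sketch only; the class name is illustrative, and it must be wired up via the Factory attribute in the .svc file) is a custom ServiceHostFactory that keeps a single HTTP base address:

        using System;
        using System.Linq;
        using System.ServiceModel;
        using System.ServiceModel.Activation;

        // Keeps only the first HTTP base address IIS hands us, so the
        // one-address-per-scheme rule is never violated.
        public class SingleHttpHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                Uri http = baseAddresses.First(u => u.Scheme == Uri.UriSchemeHttp);
                return new ServiceHost(serviceType, http);
            }
        }

    The .svc directive would then read something like <%@ ServiceHost Service="Service" Factory="SingleHttpHostFactory" %>, and any additional addresses can be exposed as explicit endpoints.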

    Read the article

  • search dataset from xml file

    - by Anelim
    Hi, I need to filter the results I obtain when I load my XML file. For example, I need to search the XML data for items with the keyword "Chemistry". The XML example below is a summary of my XML file. The data is loaded in a GridView. Could you help? Thanks!

    XML file (summary):

        <CONTRACTS>
          <CONTRACT>
            <CONTRACTID>779</CONTRACTID>
            <NAME>ContractName</NAME>
            <KEYWORDS>Chemistry, Engineering, Chemical</KEYWORDS>
            <CONTRACTSTARTDATE>1/8/2005</CONTRACTSTARTDATE>
            <CONTRACTENDDATE>31/7/2008</CONTRACTENDDATE>
            <COMMODITIES>
              <COMMODITY>
                <COMMODITYCODE>CHEM</COMMODITYCODE>
                <COMMODITYNAME>Chemicals</COMMODITYNAME>
              </COMMODITY>
            </COMMODITIES>
          </CONTRACT>
        </CONTRACTS>

    My code-behind is:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            Dim ds As DataSet = New DataSet()
            ds.ReadXml(AppDomain.CurrentDomain.BaseDirectory + "/testxml.xml")
            Dim dtContract As DataTable = ds.Tables(0)
            Dim dtJoinCommodities As DataTable = ds.Tables(1)
            Dim dtCommodity As DataTable = ds.Tables(2)
            dtContract.Columns.Add("COMMODITYCODE")
            dtContract.Columns.Add("COMMODITYNAME")
            Dim count As Integer = 0
            Dim commodityCode As String = Nothing
            Dim commodityName As String = Nothing
            Dim dRowJoinCommodity As DataRow
            Dim trimChar As Char() = {","c, " "c}
            Dim textboxstring As String = "KEYWORDS like 'pencil'"
            For Each dRow As DataRow In dtContract.Select(textboxstring)
                commodityCode = ""
                commodityName = ""
                count = dtContract.Rows.IndexOf(dRow)
                dRowJoinCommodity = dtJoinCommodities.Rows(count)
                For Each dRowCommodities As DataRow In dtCommodity.Rows
                    If dRowCommodities("COMMODITIES_Id").ToString() = dRowJoinCommodity("COMMODITIES_ID").ToString() Then
                        commodityCode = commodityCode + dRowCommodities("COMMODITYCODE").ToString() + ", "
                        commodityName = commodityName + dRowCommodities("COMMODITYNAME").ToString() + ", "
                    End If
                Next
                commodityCode = commodityCode.TrimEnd(trimChar)
                commodityName = commodityName.TrimEnd(trimChar)
                dRow("COMMODITYCODE") = commodityCode
                dRow("COMMODITYNAME") = commodityName
            Next
            GridView1.DataSource = dtContract
            GridView1.DataBind()
        End Sub
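
    For the keyword filter itself, DataColumn expressions support % wildcards with LIKE, so a contains-match is a single Select call; a minimal sketch (the keyword value is illustrative):

        ' Rows whose KEYWORDS column contains "Chemistry" anywhere in the list.
        Dim keyword As String = "Chemistry"
        Dim matches As DataRow() = dtContract.Select("KEYWORDS LIKE '%" & keyword & "%'")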

    Read the article

  • Silverlight WCF service consuming inherited types in datacontract

    - by RemotecUk
    Hi, I'm trying to consume a WCF service in Silverlight. What I have done is to create two separate assemblies for my data contracts:

    1. An assembly that contains all of my types marked with data contracts, built against .NET 3.5.
    2. A Silverlight assembly which links to the files in the first assembly.

    This means my .NET app can reference assembly 1 and my Silverlight app assembly 2. This works fine and I can communicate across the service. The problems occur when I try to transfer inherited classes. I have the following class structure:

    - IFlight - an interface for all types of flights.
    - BaseFlight : IFlight - a base flight that implements IFlight.
    - AdhocFlight : BaseFlight, IFlight - an ad-hoc flight that inherits from BaseFlight and also implements IFlight.

    I can successfully transfer base flights across the service. However, I really need to be able to transfer objects of IFlight across the interface, as I want one operation contract that can transfer many types of flight:

        public IFlight GetFlightBooking()
        {
            AdhocFlight af = new AdhocFlight();
            return af;
        }

    ...should work, I think? However I get the error: "The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error." Any ideas would be appreciated.
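
    A direction that usually unblocks this (a sketch; IFlightService is an invented name): interfaces themselves cannot be data contracts, so the serializer has to be told which concrete types may stand in for IFlight, via KnownType on the base class and ServiceKnownType on the operation:

        [DataContract]
        [KnownType(typeof(AdhocFlight))]
        public class BaseFlight : IFlight { /* ... */ }

        [ServiceContract]
        public interface IFlightService
        {
            // Declares the concrete types that may travel wherever IFlight is declared.
            [OperationContract]
            [ServiceKnownType(typeof(BaseFlight))]
            [ServiceKnownType(typeof(AdhocFlight))]
            IFlight GetFlightBooking();
        }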

    Read the article

  • Extending timeout and message size in WCF service generated by Biztalk 2006 R2

    - by Sergej Andrejev
    Hi, I'm generating a WCF service using BizTalk. The code I get is this:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="ServiceBehaviorConfiguration">
                <serviceDebug httpHelpPageEnabled="true" httpsHelpPageEnabled="false" includeExceptionDetailInFaults="false" />
                <serviceMetadata httpGetEnabled="true" httpsGetEnabled="false" externalMetadataLocation="" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <!-- Note: the service name must match the configuration name for the service implementation. -->
            <service name="Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance" behaviorConfiguration="ServiceBehaviorConfiguration">
              <endpoint name="HttpMexEndpoint" address="mex" binding="mexHttpBinding" bindingConfiguration="" contract="IMetadataExchange" />
              <!--<endpoint name="HttpsMexEndpoint" address="mex" binding="mexHttpsBinding" bindingConfiguration="" contract="IMetadataExchange" />-->
            </service>
          </services>
        </system.serviceModel>

    Maybe it's not the most beautiful configuration, but it works. The problem is I don't know how to modify the timeouts and maximum message size, because it has only a mex endpoint. I'm surprised this works at all with just a mex endpoint. So, two questions:

    1. Why does this work at all?
    2. What should I add to extend the timeouts and message size?
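
    For reference, timeouts and message size live on a named binding configuration that an application endpoint then points at; a sketch (names and limits illustrative, and the BizTalk-published service needs an application endpoint, not just mex, for this to take effect):

        <bindings>
          <basicHttpBinding>
            <binding name="LargeMessageBinding"
                     sendTimeout="00:10:00" receiveTimeout="00:10:00"
                     maxReceivedMessageSize="10485760">
              <readerQuotas maxStringContentLength="10485760" maxArrayLength="10485760" />
            </binding>
          </basicHttpBinding>
        </bindings>
        ...
        <endpoint address="" binding="basicHttpBinding"
                  bindingConfiguration="LargeMessageBinding"
                  contract="..." />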

    Read the article

  • AspNetMembership provider with WCF service

    - by Sly
    I'm trying to configure AspNetMembershipProvider to be used for authentication in my WCF service, which is using basicHttpBinding. I have the following configuration:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <bindings>
            <basicHttpBinding>
              <binding name="basicSecureBinding">
                <security mode="Message"></security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="MyApp.Services.ComputersServiceBehavior">
                <serviceAuthorization roleProviderName="AspNetSqlRoleProvider" principalPermissionMode="UseAspNetRoles" />
                <serviceCredentials>
                  <userNameAuthentication userNamePasswordValidationMode="MembershipProvider" membershipProviderName="AspNetSqlMembershipProvider"/>
                </serviceCredentials>
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service behaviorConfiguration="MyApp.Services.ComputersServiceBehavior" name="MyApp.Services.ComputersService">
              <endpoint binding="basicHttpBinding" contract="MyApp.Services.IComputersService" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
        </system.serviceModel>

    Roles are enabled and the membership provider is configured (it is working for the web site). But the authentication process is not fired at all. There are no calls to the database during the request, and when I try to set the following attribute on a method:

        [PrincipalPermission(SecurityAction.Demand, Authenticated = true)]
        public bool Test()
        {
            return true;
        }

    I get an access-denied exception. Any thoughts on how to fix it?
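
    One detail stands out in the posted config (an observation, sketched rather than verified): the basicSecureBinding configuration is declared but never referenced by the endpoint, so the service still runs on the default, unsecured basicHttpBinding. Message security with username credentials also needs an explicit clientCredentialType (and, over basicHttpBinding, a service certificate):

        <binding name="basicSecureBinding">
          <security mode="Message">
            <message clientCredentialType="UserName" />
          </security>
        </binding>
        ...
        <endpoint binding="basicHttpBinding"
                  bindingConfiguration="basicSecureBinding"
                  contract="MyApp.Services.IComputersService" />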

    Read the article

  • TCL TDom: Looping through Objects

    - by pws5068
    Using tDOM, I would like to cycle through a list of objects in the following format:

        <object>
          <type>Hardware</type>
          <name>System Name</name>
          <description>Basic Description of System.</description>
          <attributes>
            <vendor>Dell</vendor>
            <contract>MM/DD/YY</contract>
            <supportExpiration>MM/DD/YY</supportExpiration>
            <location>Building 123</location>
            <serial>xxx-xxx-xxxx</serial>
            <mac>some-mac-address</mac>
          </attributes>
        </object>
        <object>
          <type>Software</type>
          <name>Second Object</name>
          ...

    Then I use tDOM to make a list of objects:

        set dom [dom parse $xml]
        set doc [$dom documentElement]
        set nodeList [$doc selectNodes /systems/object]

    So far I've done this to (theoretically) select every "object" node from the list. How can I loop through them? Is it just:

        foreach node $nodeList {

    For each object, I need to retrieve the association of each attribute. From the example, I need to remember that the "name" is "System Name", the "vendor" is "Dell", etc. I'm new to Tcl, but in other languages I would use an object or an associative list to store these. Is this possible? Can you show me an example of the syntax to select an attribute in this manner?

    Read the article

  • Call WCF Service Through Javascript, AJAX, or JQuery

    - by obautista
    I created a number of standard WCF services (the service contract and host (.svc) are in separate assemblies). I fired up a web site in IIS to host the services (i.e., the address is http://services:1000/wcfservices.svc). Then in my web site project I added the reference. I am able to call the services normally. Now I need to call some of the services client-side. I am not sure if I should be looking at articles on calling WCF services through AJAX, jQuery, or JSON-enabled WCF services. Can anyone provide any thoughts or experience with configuring this?

    Some of the changes I made: adding the following to the operation contract:

        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "SetFoo")]
        void SetFoo(string Id);

    Then this above the implementation of the interface:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    Then in the service web.config I have this:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
          <baseAddressPrefixFilters>
            <add prefix="http://services:1000/wcfservices.svc/"/>
          </baseAddressPrefixFilters>
        </serviceHostingEnvironment>
        <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

    Then on the client side I attempted this:

        <asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server">
          <CompositeScript>
            <Scripts>
              <asp:ScriptReference Path="http://Flixsit:1000/FlixsitWebServices.svc" />
            </Scripts>
          </CompositeScript>
        </asp:ScriptManagerProxy>

    I am attempting to call the service like this in JavaScript:

        wcfservices.SetFoo(string Id);

    Nothing is working. If JSON-enabled services, jQuery, etc. are a better solution, I am willing to make any changes. Thanks for any suggestions/tips provided...
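
    For script-side calls, the usual shape (a sketch; the behavior and contract names are illustrative) is a webHttpBinding endpoint with an enableWebScript behavior, which is what lets the ScriptManager generate a JavaScript proxy to call:

        <behaviors>
          <endpointBehaviors>
            <behavior name="AjaxBehavior">
              <enableWebScript />
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <services>
          <service name="Flixsit.FlixsitWebServices">
            <endpoint address="" binding="webHttpBinding"
                      behaviorConfiguration="AjaxBehavior"
                      contract="Flixsit.IFlixsitWebServices" />
          </service>
        </services>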

    Read the article

  • multi-part identifier could not be bound error

    - by vishal Shah
    Here is my query:

        IF OBJECT_ID('NPWAS1513.dbo.usp_MSPEX_QLK_Billing_Fact_Load') IS NOT NULL
            DROP PROCEDURE dbo.usp_MSPEX_QLK_Billing_Fact_Load;
        GO

        CREATE PROCEDURE usp_MSPEX_QLK_Billing_Fact_Load
            @create_timestamp datetime,
            @update_timestamp datetime,
            @create_user varchar(50),
            @update_user varchar(50),
            @dbProdServ varchar(50)
        AS
        print 'dbProdServ is:' + @dbProdServ;
        print 'current_user is:' + @current_user;

        DECLARE @sSQL AS VARCHAR(MAX);
        SET @sSQL = ''
        SET @sSQL = 'set identity_insert ' + @dbProdServ + '.mspex_qlk_billing_fact ON'
        EXEC(@sSQL);

        SET @sSQL = 'INSERT INTO ' + @dbProdServ + '.mspex_qlk_billing_fact
            (project_id, billing_year, billing_month, billing_month_desc, billing_date_id,
             projected_bill_amount, billed_year_to_date_amount, billed_inception_to_date_amount,
             remaining_bill_amount, actual_billed_amount, current_billing_percent,
             previous_billing_percent, billing_pct_diff, billing_type, final_bill_ind,
             last_in_progress_date, current_record_ind, load_time_stamp, total_contract_period,
             contract_period_current_year, partial_bill, create_timestamp, create_user,
             update_timestamp, update_user)
            SELECT project_dim.project_id, billing_final_data.billingyear, billing_final_data.billingmonth,
                billing_final_data.billingmonthdesc, Time_Dim.Time_ID,
                billing_final_data.projected_bill_amount, billing_final_data.billed_year_todate_amount,
                billing_final_data.billed_inception_todate_amount, billing_final_data.remaining_bill_amount,
                billing_final_data.actual_billed_amount, billing_final_data.current_billing_percent,
                billing_final_data.previous_billing_percent, billing_final_data.billing_pct_diff,
                billing_final_data.billing_type, billing_final_data.final_bill_ind,
                billing_final_data.last_in_progress_date, billing_final_data.current_record_ind,
                billing_final_data.load_time_stamp, billing_final_data.[Total Contract Period],
                billing_final_data.[Contract Period Current Year], billing_final_data.[Partial Bill],'''
                + CAST(@create_timestamp as varchar(max)) + ''',''' + @create_user + ''','''
                + CAST(@update_timestamp as varchar(50)) + ''',''' + @update_user + '''
            FROM ' + @dbProdServ + '.mspex_qlk_project_dim project_dim,'
                + @dbProdServ + '.mspex_rpt_billing_final_data billing_final_data,'
                + @dbProdServ + '.MSPEX_QLK_Time_Dim Time_Dim
            WHERE project_dim.myproject_project_uid = billing_final_data.projectuid
            AND ''' + convert(datetime, cast(billing_final_data.[BillingMonth] as nvarchar(2))
                + '''/01/''' + cast(billing_final_data.[billingyear] as nvarchar(4)), 101) + ''' +
            = Time_Dim.Time_Date';

        BEGIN TRANSACTION
        EXEC(@sSQL)
        COMMIT TRANSACTION

    I get the error messages:

        Msg 4104, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 23
        The multi-part identifier "mspex_rpt_billing_final_data.BillingMonth" could not be bound.
        Msg 4104, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 23
        The multi-part identifier "billing_final_data.billingyear" could not be bound.
        Msg 207, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 83
        Invalid column name 'BillingMonth'.
        Msg 207, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 84
        Invalid column name 'billingyear'.

    I checked the column names, etc., and they are fine. In fact, I directly dragged the table name and column name to ensure they are correct. Please help ASAP. Calling my cell at 630-338-9427 would be great, but an urgent response is absolutely necessary. Thanks guys.
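
    The pattern behind those errors (an observation from the posted code, offered as a sketch): the CONVERT(...) over billing_final_data.[BillingMonth] sits outside the quoted string, so SQL Server tries to resolve the billing_final_data alias at procedure compile time, where it does not exist. Moving the comparison inside the dynamic SQL (rest of the statement unchanged) would look like:

        SET @sSQL = @sSQL + ' AND CONVERT(datetime,
            CAST(billing_final_data.BillingMonth AS nvarchar(2)) + ''/01/''
            + CAST(billing_final_data.billingyear AS nvarchar(4)), 101) = Time_Dim.Time_Date';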

    Read the article

  • DB2 Child Table Not Working - Create Table

    - by gamerzfuse
    I have a bit of a task before me. (DB2 database.) I need to create a table that will be a child table (is that what it is called in SQL?). I need it to have a foreign key constraint with my other table, so that when the parent table is modified (a record deleted), the child table also loses that record. Once I have the table, I also need to populate it with the data from the other table (if there is an easy way to do this). If you could point me in the right direction, this would help a lot, as I do not even know what syntax to look for. Thanks in advance.

    The table I have in place:

        create table titleauthors (
            au_id char(11),
            title_id char(6),
            au_ord integer,
            royaltyshare decimal(5,2));

    The table I am creating:

        create table titles (
            title_id char(6),
            title varchar(80),
            type varchar(12),
            pub_id char(4),
            price decimal(9,2),
            advance decimal(9,2),
            ytd_sales integer,
            contract integer,
            notes varchar(200),
            pubdate date);

    I need the title_id to be matched with the title_id from the parent table AND use the ON DELETE CASCADE syntax to delete from this table when the parent table is deleted from. My attempt:

        CREATE TABLE BookTitles (
            title_id char(6) NOT NULL
                CONSTRAINT BookTitles_title_id_pk
                REFERENCES titleauthors(title_id) ON DELETE CASCADE,
            title varchar(80) NOT NULL,
            type varchar(12),
            pub_id char(4),
            price decimal(9,2),
            advance decimal(9,2),
            ytd_sales integer,
            contract integer,
            notes varchar(200),
            pubdate date);

    Thanks in advance!
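
    For reference, a sketch of the pieces (untested against this schema): the referenced parent column must carry a PRIMARY KEY or UNIQUE constraint, the cascade clause belongs on a FOREIGN KEY definition, and INSERT ... SELECT copies the existing data:

        -- The parent column must be unique (and NOT NULL) before it can be referenced.
        ALTER TABLE titleauthors ADD CONSTRAINT titleauthors_uk UNIQUE (title_id);

        CREATE TABLE booktitles (
            title_id CHAR(6) NOT NULL,
            title    VARCHAR(80),
            -- ... remaining columns as posted ...
            CONSTRAINT booktitles_fk FOREIGN KEY (title_id)
                REFERENCES titleauthors (title_id)
                ON DELETE CASCADE
        );

        -- Populate the child from the parent's existing keys.
        INSERT INTO booktitles (title_id)
            SELECT DISTINCT title_id FROM titleauthors;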

    Read the article

  • A moral dilemma - What job to go for?

    - by StefanE
    Here is the story: I have accepted an offer from a gaming company to work as a senior test engineer / developer. I have not yet received a signed copy of the contract. I will get a bit less salary than I asked for, and it is also less than I earn today. The company has booked flight tickets for my move over there. Now comes the problem. I did a telephone interview with another company last week; they have asked me for an in-person interview and are willing to pay for flights for the meeting. This company is my first choice (and has been for a few years); working there would also benefit my career, and I believe I would enjoy it more. What should I do here? I feel uncomfortable giving a last-minute rejection when I have accepted the offer over the phone, but on the other hand they have yet to produce a signed contract, and they are paying me a bit less than I think I'm worth. The business is small in many ways, and I don't want to end up with a bad reputation. It would be great to hear your opinions!

    Read the article

  • Extension methods for encapsulation and reusability

    - by tzaman
    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" instead of instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it.

    Now, ever since I've started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix, the first being interfaces, and the second extension methods:

    - Interfaces allow a class to formally specify their public contract, the methods and properties that they're exposing to the world. Any other class can choose to implement the same interface and fulfill that same contract.
    - Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!

    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself provides a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states:

    In general, we recommend that you implement extension methods sparingly and only when you have to.

    So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?
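
    A minimal illustration of the pattern being discussed (names invented): anything expressible through an interface's public contract can live in an extension method, and every implementer picks it up with instance syntax:

        public interface IShape
        {
            double Area { get; }
        }

        public static class ShapeExtensions
        {
            // Uses only the public contract, yet reads like an instance member:
            // someShape.IsLargerThan(otherShape)
            public static bool IsLargerThan(this IShape shape, IShape other)
            {
                return shape.Area > other.Area;
            }
        }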

    Read the article

  • Publishing a WCF Server and client and their endpoints

    - by Ahmadreza
    Imagine developing a WCF solution with two projects (a WCF service and a web application as the WCF client). As long as I'm developing these two projects in Visual Studio and referencing the service in the client (web application) as a service reference, there is no problem. Visual Studio automatically assigns a port for the WCF server and configures everything needed, including the server and client bindings, to something like this on the server:

        <service behaviorConfiguration="DefaultServiceBehavior" name="MYWCFProject.MyService">
          <endpoint address="" binding="wsHttpBinding" contract="MYWCFProject.IMyService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
          <host>
            <baseAddresses>
              <add baseAddress="http://localhost:8731/MyService.svc" />
            </baseAddresses>
          </host>
        </service>

    and on the client:

        <client>
          <endpoint address="http://localhost:8731/MyService.svc" binding="wsHttpBinding"
                    bindingConfiguration="WSHttpBinding_IMyService" contract="MyWCFProject.IMyService"
                    name="WSHttpBinding_IMyService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
        </client>

    The problem is that I frequently want to publish these two projects to two different production servers, where the service URL will be "http://mywcfdomain/MyService.svc". I don't want to change the config file every time I publish my server project. The question is: is there any feature in Visual Studio 2008 to automatically change the URLs, or do I have to define two different endpoints and set them within my code (based on a parameter in my configuration, for example Development/Published)?
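
    Since Visual Studio 2008 has no built-in config transforms (those arrived with VS 2010's web deployment), one common approach (a sketch; the endpoint names and appSettings key are invented) is to declare both endpoints and select one by name at runtime:

        <client>
          <endpoint name="DevEndpoint" address="http://localhost:8731/MyService.svc"
                    binding="wsHttpBinding" contract="MyWCFProject.IMyService" />
          <endpoint name="ProdEndpoint" address="http://mywcfdomain/MyService.svc"
                    binding="wsHttpBinding" contract="MyWCFProject.IMyService" />
        </client>

        // Generated proxies expose a constructor taking the endpoint configuration name.
        string endpointName = ConfigurationManager.AppSettings["ServiceEndpoint"];
        var client = new MyServiceClient(endpointName);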

    Read the article

  • How can I track the last location of a shipment efficiently using the latest date of reporting?

    - by hash
    I need to find the latest location of each cargo item in a consignment. We mostly do this by looking at the route selected for a consignment and then finding the latest (max) time entered against the nodes of this route. For example, if a route has 5 nodes and we have entered timings against the first 3 nodes, then the latest timing (max time) will tell us its location among the 3 nodes.

    I am really stuck on this query because of performance issues. Even on a few hundred rows, it takes more than 2 minutes. Please suggest how I can improve this query, or an alternative approach. (Note: ATA = Actual Time of Arrival and ATD = Actual Time of Departure.)

        SELECT DISTINCT(c.id) as cid, c.ref as cons_ref, c.Name, c.CustRef
        FROM consignments c
        INNER JOIN routes r ON c.Route = r.ID
        INNER JOIN routes_nodes rn ON rn.Route = r.ID
        INNER JOIN cargo_timing ct ON c.ID = ct.ConsignmentID
        INNER JOIN (SELECT t.ConsignmentID, Max(t.firstata) as MaxDate
                    FROM cargo_timing t
                    GROUP BY t.ConsignmentID) as TMax
            ON TMax.MaxDate = ct.firstata AND TMax.ConsignmentID = c.ID
        INNER JOIN nodes an ON ct.routenodeid = an.ID
        INNER JOIN contract cor ON cor.ID = c.Contract
        WHERE c.Type = 'Road'
          AND (c.ATD = 0 AND c.ATA != 0)
          AND (cor.contract_reference in ('Generic','BP001','020-543-912'))
        ORDER BY c.ref ASC
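
    One low-cost experiment (a suggestion to test, not a guaranteed fix) is a composite index so that both the grouped MAX(firstata) and the join back on (ConsignmentID, firstata) are index-supported rather than scans:

        CREATE INDEX ix_cargo_timing_consignment
            ON cargo_timing (ConsignmentID, firstata);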

    Read the article

  • A commercial software but open and free for personal/edu. How to license?

    - by Ivan
    I am developing software to sell for business use, but I am willing to make it free and open-source for personal and educational use. These are the requirements I would like the license to set:

    - Personal and educational usage of the program and its source code is to be free.
    - In case of publishing of derivative works, the original work and author (me) must be mentioned (incl. a textual link to my website in a not-very-far-hidden place), and the derivative work must have a different name.
    - A derivative work can be closed-source.
    - In every case of commercial usage (when the end-user is a commercial body, such as a company (except non-profit organizations), an individual entrepreneur, or a government office) of my work or any derivative work made by anyone, the end-user, service provider, or derivative author must buy a commercial license from me.
    - I mean no guarantees or responsibilities, whether expressed or implied... (except the case when one explicitly purchases a support service contract from me and the particular contract specifies a responsibility).

    Is there a known common license for this case? As far as I can see now, it cannot be OSI-approved, as it does not comply with §6 of the OSI definition of open source. But there could still be a common, reusable license for this case, as it looks quite natural, I think.

    Read the article

  • Convert XML to TCL Object

    - by pws5068
    Greetings, I'm new to Tcl scripting, and I have a very, very basic XML file from which I need to import information into Tcl. Example of the XML document structure:

        <object>
          <type>Hardware</type>
          <name>System Name</name>
          <description>Basic Description of System.</description>
          <attributes>
            <vendor>Dell</vendor>
            <contract>MM/DD/YY</contract>
            <supportExpiration>MM/DD/YY</supportExpiration>
            <location>Building 123</location>
            <serial>xxx-xxx-xxxx</serial>
            <mac>some-mac-address</mac>
          </attributes>
        </object>

    Etc... I've seen something called TCLXML, but I'm not sure if this is the best route or even how to create the package to use it. Any help will be greatly appreciated.

    Read the article

  • How to specify allowed exceptions in WCF's configuration file?

    - by tucaz
    Hello! I'm building a set of WCF services for internal use across all our applications. For exception handling, I created a default fault class so I can return a treated message to the caller when appropriate, or a generic one when I have no clue what happened. Fault contract:

        [DataContract(Name = "DefaultFault", Namespace = "http://fnac.com.br/api/2010/03")]
        public class DefaultFault
        {
            public DefaultFault(DefaultFaultItem[] items)
            {
                if (items == null || items.Length == 0)
                {
                    throw new ArgumentNullException("items");
                }
                StringBuilder sbItems = new StringBuilder();
                for (int i = 0; i ... [the rest of the class was swallowed by the forum's HTML escaping]

    Specifying that my method can throw this exception, so the consuming client will be aware of it:

        [OperationContract(Name = "PlaceOrder")]
        [FaultContract(typeof(DefaultFault))]
        [WebInvoke(UriTemplate = "/orders", BodyStyle = WebMessageBodyStyle.Bare,
                   ResponseFormat = WebMessageFormat.Json, RequestFormat = WebMessageFormat.Json,
                   Method = "POST")]
        string PlaceOrder(Order newOrder);

    Most of the time we will use just .NET-to-.NET communication with the usual bindings, and everything works fine since we are talking the same language. However, as you can see in the service contract declaration, I also have a WebInvoke attribute (and a webHttp binding) in order to be able to talk JSON, since one of our apps will be built for iPhone and this guy will talk JSON.

    My problem is that whenever I throw a FaultException and have includeExceptionDetailInFaults="false" in the config file, the calling client gets a generic HTTP error instead of my custom message. I understand that this is the correct behavior when includeExceptionDetailInFaults is turned off, but I think I saw, a long time ago, some configuration to allow certain exceptions/faults to pass through the service boundaries. Is there such a thing? If not, what do you suggest for my case? Thanks a LOT!
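
    For what it's worth, a declared fault contract crosses the service boundary regardless of includeExceptionDetailInFaults, as long as the service throws the typed fault rather than a plain exception; a sketch (the reason text is illustrative):

        // Reaches the client even with includeExceptionDetailInFaults="false",
        // because DefaultFault is declared via [FaultContract(typeof(DefaultFault))].
        throw new FaultException<DefaultFault>(
            new DefaultFault(items),
            new FaultReason("Order could not be placed."));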

    Read the article

  • My project is no longer used - how should I feel?

    - by flybywire
    For the last two years I have been developing and supporting an important project for a big customer. The project included mining data from the customer's existing systems, processing it, and displaying and updating it on the customer's public home page. The project was defined as crucial by the customer, and I was paid good money and flown at the customer's expense to meet key employees. Some months ago, when the project was finished and in maintenance mode, I informed the customer that I was no longer interested in doing it, as I had a new opportunity that would not be compatible with my existing customer. I was paid to train one of their employees, flown to meet him, and made sure everything worked and that he could safely be left in charge of the project. We finished on good terms after I complied with all my obligations, and they paid me all they owed me. Some days ago, just out of curiosity, I visited their website to see how the data continues to be updated, and much to my dismay I discovered that the day after my contract finished, my system was "turned off" and ceased to feed data to the public website. Let me be clear: there is no issue of money or a broken contract here. They are within their full right to do whatever they want with my software. But it is an issue of a bruised "programmer's ego". Should I feel bad about it? (I do.) Should I care, and check with my customer whether they need some help? Or is it none of my business?

    Read the article

  • Why are we getting a WCF "Framing error" on some machines but not others

    - by Ian Ringrose
    We have just found that we are getting "framing errors" (as reported by the WCF logs) when running our system on some customer test machines. It all works OK on our development machines.

    We have an abstract base class with KnownType attributes for all its subclasses. One of its subclasses is missing its DataContract attribute. However, it all worked on our test machine! On the customer's test machine, we got "framing errors" showing up in the WCF logs; this is not the error message I have seen in the past when missing a DataContract attribute or a KnownType attribute.

    I wish to get to the bottom of this, as we can no longer have confidence in our ability to test the system before giving it to the customer until we can make our machines behave the same as the customer's machines. Code that tries to show what I am talking about (not the real code):

        [DataContract()]
        [KnownType(typeof(SubClass1))]
        [KnownType(typeof(SubClass2))]
        // other subclasses with data members
        public abstract class Base
        {
            [DataMember]
            public int LotsMoreItemsThenThisInRealLife;
        }

        /// <summary>
        /// This works on some machines (not others) when passed to Contract::DoIt,
        /// note the missing [DataContract()]
        /// </summary>
        public class SubClass1 : Base
        {
            // has no data members
        }

        /// <summary>
        /// This works in all cases when passed to Contract::DoIt
        /// </summary>
        [DataContract()]
        public class SubClass2 : Base
        {
            // has no data members
        }

        public interface IContract
        {
            void DoIt(Base[] items);
        }

        public static class MyProgram
        {
            public static IContract ConntectToServerOverWCF()
            {
                // lots of code ...
                return null;
            }

            public static void Startup()
            {
                IContract server = ConntectToServerOverWCF();
                // this works all of the time
                server.DoIt(new Base[] { new SubClass2() { LotsMoreItemsThenThisInRealLife = 2 } });
                // this works "in development", e.g. on our machines, but not on the customer's test machines!
                server.DoIt(new Base[] { new SubClass1() { LotsMoreItemsThenThisInRealLife = 2 } });
            }
        }
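
    One way to get past the opaque "framing error" (standard WCF diagnostics, shown as a sketch; the log path is illustrative) is to turn on WCF tracing on both client and server and compare the two traces:

        <system.diagnostics>
          <sources>
            <source name="System.ServiceModel"
                    switchValue="Warning, ActivityTracing"
                    propagateActivity="true">
              <listeners>
                <add name="xmlTrace"
                     type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="c:\logs\wcf-trace.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>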

    Read the article

  • Consuming a WCF Service

    - by Lijo
    Hi, I created a WCF service which is hosted in a Windows service. I created a proxy using svcutil:

        svcutil.exe http://localhost:8000/ServiceModelSamples/FreeServiceWorld?wsdl

    It generated an output.config file and a proxy class. The output.config has the following element:

        <client>
          <endpoint address="http://localhost:8000/ServiceModelSamples/FreeServiceWorld"
                    binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IWeather"
                    contract="IWeather" name="WSHttpBinding_IWeather">
            <identity>
              <servicePrincipalName value="host/D471DTRV.ustr.com" />
            </identity>
          </endpoint>
        </client>

    I created a website (as the client) and added a new C# file (MyFile.cs) into it. I copied the contents of the proxy class into MyFile.cs. [The output.config was not copied to the web site.] In the code-behind of the .aspx page, I am using the following code:

        WeatherClient client = new WeatherClient("WSHttpBinding_IWeather");

    It throws an exception: "Could not find endpoint element with name 'WSHttpBinding_IWeather' and contract 'IWeather' in the ServiceModel client configuration section." Could you please help me understand the missing link here? Thanks, Lijo
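
    The bracketed aside looks like the missing link (offered as a sketch of the usual fix, not a certainty): ClientBase looks up "WSHttpBinding_IWeather" in the consuming application's own config, so the generated <client> section has to be merged into the web site's Web.config:

        <system.serviceModel>
          <client>
            <endpoint address="http://localhost:8000/ServiceModelSamples/FreeServiceWorld"
                      binding="wsHttpBinding"
                      contract="IWeather"
                      name="WSHttpBinding_IWeather" />
          </client>
        </system.serviceModel>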

    Read the article

  • Windows service (hosting WCF service) stops immediately on start up

    - by Thr33Dii
    My question: I cannot navigate to the base address once the service is installed, because the service won't remain running (it stops immediately). Is there anything I need to do on the server or my machine to make the baseAddress valid?

    Background: I'm trying to learn how to use WCF services hosted in Windows services. I have read several tutorials on how to accomplish this, and it seems very straightforward. I've looked at this MSDN article and built it step by step. I can install the service on my machine and on a server, but when I start the service, it stops immediately. I then found this tutorial, which is essentially the same thing, but it contains some clients that consume the WCF service. I downloaded the source code, compiled it, and installed it, but when I started the service, it stopped immediately.

    Searching SO, I found a possible solution that said to define the baseAddress when instantiating the ServiceHost, but that didn't help either. My serviceHost is defined as:

        serviceHost = new ServiceHost(typeof(CalculatorService),
            new Uri("http://localhost:8000/ServiceModelSamples/service"));

    My service name, base address, and endpoints:

        <service name="Microsoft.ServiceModel.Samples.CalculatorService"
                 behaviorConfiguration="CalculatorServiceBehavior">
          <host>
            <baseAddresses>
              <add baseAddress="http://localhost:8000/ServiceModelSamples/service"/>
            </baseAddresses>
          </host>
          <endpoint address="" binding="wsHttpBinding" contract="Microsoft.ServiceModel.Samples.ICalculator"/>
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
        </service>

    I've verified the namespaces are identical. It's just getting frustrating that the tutorials seem to assume that the Windows service will start as long as all the stated steps are followed. I'm missing something, and it's probably right in front of me. Please help!
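
    When a Windows service stops the moment it starts, OnStart almost certainly threw (a reserved URL namespace, a port already in use, a config name mismatch), and the Windows Event Log usually holds the real exception. A sketch that makes the failure visible (service and event-source names illustrative):

        protected override void OnStart(string[] args)
        {
            try
            {
                serviceHost = new ServiceHost(typeof(CalculatorService));
                serviceHost.Open();
            }
            catch (Exception ex)
            {
                // Surfaces the actual startup failure instead of a silent stop.
                EventLog.WriteEntry("CalculatorWindowsService", ex.ToString(),
                                    EventLogEntryType.Error);
                throw;
            }
        }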

    Read the article

  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
    Smart Grid Background

    What Are Smart Grids?

    Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to:

    - Link customers to information that helps them manage consumption and use electricity wisely.
    - Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages.
    - Provide utilities with information that helps them improve performance and control costs.

    What Is Driving Smart Grid Development?

    Environmental Impact

    Smart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers' power needs with fewer generating plants, fewer transmission and distribution assets, and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change. Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions.

    Costs

    The ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management. Large T&D infrastructure expansion costs are not passed on to customers. Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures. Additionally, in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume—but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes.

    Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy-efficient appliances. Efficiency may or may not lower customer bills, because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it.

    Utility Operations

    Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long "wish lists" of projects and applications they would like to fund in order to improve customer service or ease staff's burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods. Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result—for instance, increased productivity and fuel savings from better routing. Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits.

    What Is Smart Grid Business Software?

    Smart Grid business software gathers data from a Smart Grid and uses it to improve a utility's business processes. Smart Grid business software also helps utilities provide relevant information to customers, who can then use it to reduce their own consumption and improve their environmental profiles.

    Smart Grid Business Software Minimizes the Impact of Peak Demand

    Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand:

    - The more assets a utility must build that are used only for brief periods—an inefficient use of capital.
    - The higher the utility's risk profile rises, given the uncertainties surrounding the time needed for permitting, building, and recouping costs.
    - The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods.

    Smart Grids enable a variety of programs that reduce peak demand, including:

    - Time-of-use pricing and critical peak pricing—programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets.
    - Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs.
    - Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts.

    Smart Grids also help utilities manage peaks with existing assets by enabling:

    - Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint the need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or "over-build".
    - Better peak demand analysis. As a result: distribution planners can better size equipment (e.g. transformers) to avoid over-building; operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks (as above, the result is a reduction in the tendency to over-build); and supply managers can more closely match procurement with delivery, fine-tuning supply portfolios, reducing the tendency to over-contract for peak supply, and reducing the need to resort to spot market purchases during high peaks.

    Smart Grids can help lower the cost of remaining peaks by:

    - Standardizing interconnections for new distributed resources (such as electricity storage devices).
    - Placing the interconnections where needed to support anticipated grid congestion.

    Smart Grid Business Software Lowers the Cost of Field Services

    By processing Smart Grid data through their business software, utilities can reduce such field costs as:

    - Vegetation management. Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming.
    - Service vehicle fuel. Many utility service calls are "false alarms." Checking meter status before dispatching crews prevents many unnecessary "truck rolls." Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it.

    Smart Grid Business Software Ensures Regulatory Compliance

    Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by:

    - Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible ("non-core") customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators.
    - Monitoring regulations imposed on customers, such as maximum use during specific time periods.
    - Using accurate time-stamped event history derived from intelligent devices distributed throughout the smart grid to monitor and report reliability statistics and risk compliance.
    - Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9).

    Smart Grid Business Software Strengthens Utilities' Connection to Customers While Reducing Customer Service Costs

    During outages, Smart Grid business software can:

    - Identify outages more quickly. Software uses sensors to pinpoint outages and nested outage locations. They also permit utilities to ensure outage resolution at every meter location.
    - Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers.
    - Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitate display of outage maps for customer and public-service use.

    Smart Grids can significantly reduce the cost to:

    - Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property, or to disconnect customers for nonpayment.
    - Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain.
    - Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption.

    Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can:

    - Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off "high bill" complaints to the contact center. Note that such "high usage" or "additional charges apply because you are out of range" notices—frequently via text messaging—are already common among mobile phone providers.
    - Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.).
    - Help customers find and reduce causes of over-consumption. There's no waiting for bills in the mail before they even understand there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them.

    Where permitted, Smart Grids can open the doors to such new utility service offerings as:

    - Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents.
    - Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement.
    - Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation.
    - Prepayment plans that do not need special meters.

    Smart Grid Business Software Helps Customers Control Energy Costs

    There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance:

    - Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites.
    - Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers.
    - Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be discovery of electrical problems that can be resolved through rewiring or maintenance—before more serious fires or accidents happen.

    Smart Grid Business Software Facilitates Use of Renewables

    Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into traditional grids, which traditionally handle only fully dispatchable generation. Smart Grid business software helps solve many of these issues by:

    - Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance response to compensate as needed.
    - Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion.
    - Facilitating modeling and monitoring of locally generated supply from renewables and thus helping to maximize its use.
    - Increasing the efficiency of "net metering" (through which utilities can use electricity generated by customers) by providing data for analysis and by integrating the production and consumption aspects of customer accounts.

    During non-peak periods, such techniques enable utilities to increase the percentage of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity.

    Conclusion

    Utility missions are changing. Yesterday, they focused on delivery of reasonably priced energy and water. Tomorrow, their missions will expand to encompass sustainable use and environmental improvement. Smart Grids are key to helping utilities achieve this expanded mission. But they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could ultimately prove an even more formidable task. Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings—benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience.

    This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy name you pass to the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; and that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file.

One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other, very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown there will result in ASP.NET's Yellow Screen Of Death being sent back as the response, which is at best unnecessarily and uselessly verbose, and at worst a security risk, as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both minimize bandwidth use and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

if (filterContext.HttpContext.Request.IsAjaxRequest())
{
    filterContext.Result = Json(new ErrorModel(...));
    filterContext.ExceptionHandled = true;
}

My $.ajax calls from the browser then get a valid 200 OK response and go into the success handler. Before assuming everything is OK, the handler checks whether the response is an ErrorModel or a model containing what it requested. If it's an ErrorModel, or null, it's passed to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, an error code, a string, etc. to differentiate them, but again, sending as little information back as possible is ideal.

Summary

As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit-test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • Solaris 10 x86 mirror: making the second disk bootable on failure

    - by Kani
    Did a mirror (RAID1) with Solaris 10 on x86. Everything OK. Now I'm trying to make the second disk bootable, that is, bootable from GRUB or in case of failure of disk1. I edited /boot/grub/menu.lst:

    #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
    title Solaris 10 9/10 s10x_u9wos_14a X86
    findroot (rootfs1,0,a)
    kernel /platform/i86pc/multiboot
    module /platform/i86pc/boot_archive
    #---------------------END BOOTADM--------------------
    #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
    title Solaris failsafe
    findroot (rootfs1,0,a)
    kernel /boot/multiboot -s
    module /boot/amd64/x86.miniroot-safe
    #---------------------END BOOTADM--------------------
    #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
    title Solaris failsafe
    findroot (rootfs1,0,a)
    kernel /boot/multiboot kernel/unix -s
    module /boot/x86.miniroot-safe
    #---------------------END BOOTADM--------------------
    #Make second disk booteable!!!!!!!
    title alternate boot
    findroot (rootfs1,1,a)
    kernel /boot/multiboot kernel/unix -s
    module /boot/x86.miniroot-safe

    But it is not working. In the BIOS, when I select "alternate boot", I get:

    Error 15: File not found

    Also, how do I configure GRUB to boot from disk2 in case of an error on disk1? Additionally, I did (though not related to GRUB):

    eeprom altbootpath=/devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

    Here is the output of some commands that may help:

    /sbin/biosdev
    0x80 /pci@0,0/pci108e,5352@1f,2/disk@0,0
    0x81 /pci@0,0/pci108e,5352@1f,2/disk@1,0

    ls -l /dev/dsk/c1t?d0s0
    lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t0d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@0,0:a
    lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t1d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

    more /boot/solaris/bootenv.rc
    setprop ata-dma-enabled '1'
    setprop atapi-cd-dma-enabled '0'
    setprop ttyb-rts-dtr-off 'false'
    setprop ttyb-ignore-cd 'true'
    setprop ttya-rts-dtr-off 'false'
    setprop ttya-ignore-cd 'true'
    setprop ttyb-mode '9600,8,n,1,-'
    setprop ttya-mode '9600,8,n,1,-'
    setprop lba-access-ok '1'
    setprop prealloc-chunk-size '0x2000'
    setprop bootpath '/pci@0,0/pci108e,5352@1f,2/disk@0,0:a'
    setprop keyboard-layout 'US-English'
    setprop console 'text'
    setprop altbootpath '/pci@0,0/pci108e,5352@1f,2/disk@1,0:a'

    cat /etc/vfstab
    #device         device          mount             FS      fsck  mount    mount
    #to mount       to fsck         point             type    pass  at boot  options
    #
    fd              -               /dev/fd           fd      -     no       -
    /proc           -               /proc             proc    -     no       -
    #/dev/dsk/c1t0d0s1 -            -                 swap    -     no       -
    /dev/md/dsk/d1  -               -                 swap    -     no       -
    /dev/md/dsk/d0  /dev/md/rdsk/d0 /                 ufs     1     no       -
    /devices        -               /devices          devfs   -     no       -
    sharefs         -               /etc/dfs/sharetab sharefs -     no       -
    ctfs            -               /system/contract  ctfs    -     no       -
    objfs           -               /system/object    objfs   -     no       -
    swap            -               /tmp              tmpfs   -     yes      -

    df -h
    Filesystem                      size  used  avail  capacity  Mounted on
    /dev/md/dsk/d0                  909G  11G   889G   2%        /
    /devices                        0K    0K    0K     0%        /devices
    ctfs                            0K    0K    0K     0%        /system/contract
    proc                            0K    0K    0K     0%        /proc
    mnttab                          0K    0K    0K     0%        /etc/mnttab
    swap                            14G   972K  14G    1%        /etc/svc/volatile
    objfs                           0K    0K    0K     0%        /system/object
    sharefs                         0K    0K    0K     0%        /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1  909G  11G   889G   2%        /lib/libc.so.1
    fd                              0K    0K    0K     0%        /dev/fd
    swap                            14G   40K   14G    1%        /tmp
    swap                            14G   28K   14G    1%        /var/run

    Read the article

  • Will Yosemite Server Backup 8.1 work on Windows Server 2008 R2 to back up Exchange 2010?

    - by best
    I've inherited the backup support of some Windows Server 2003 machines that use Yosemite Server Backup 8.1. The company has joined the BizSpark program and wants to use its license of Windows Server 2008 R2 and Exchange 2010. I've tried to email Barracuda, who took over Yosemite, to ask this, but with no success (do you need a support contract?). I don't have a spare machine, or even space for a VM, to test it on. Does anybody know if it will work?

    Read the article

  • Run Rails server on client's local machine for single tenancy [on hold]

    - by rigyt
    We are building a Rails application for a client that wants to run the app locally, close to their data center, for best performance. I had assumed we would use a standard web host. What requirements should we demand of the client's infrastructure? All I can think of so far is remote access for developers and a Linux machine. We don't have server admin expertise, so we would need to contract that out, and I doubt the client has the expertise either. Does this sound like a recipe for disaster?!

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >