Search Results

Search found 65196 results on 2608 pages for 'add service reference'.


  • xen hotplug can't add vif to eth0: operation not supported

    - by John Tate
    I am new to xen and I am trying to get a domU running, but I am having problems with it. I think my network card might not support bridging, which is bizarre. This is the error I get when trying to create the domU:

        [root@hyrba ~]# xm create sardis.secusrvr.com.cfg
        Using config file "/etc/xen/sardis.secusrvr.com.cfg".
        Error: Device 0 (vif) could not be connected. Hotplug scripts not working.

    All the xen kernel modules are loaded:

        xen_pciback           52948  0
        xen_gntalloc           6807  0
        xen_acpi_processor     5390  1
        xen_netback           27155  0 [permanent]
        xen_blkback           21827  0 [permanent]
        xen_gntdev            10849  1
        xen_evtchn             5215  1
        xenfs                  3326  1
        xen_privcmd            4854  16 xenfs

    And I get this error in /var/log/xen/xen-hotplug.log:

        RTNETLINK answers: Operation not supported
        can't add vif2.0 to bridge eth0: Operation not supported
        can't add vif2.0-emu to bridge eth0: Operation not supported
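
    The hotplug errors suggest eth0 is a plain interface rather than a bridge: "can't add vif2.0 to bridge eth0" is what you get when the script tries to enslave the vif to a device that cannot take bridge members. A sketch of the usual fix, assuming RHEL-style networking (the bridge name is illustrative):

        # create a real bridge, enslave eth0, and move eth0's IP to the bridge
        # (one-off test; make it persistent via ifcfg-xenbr0/ifcfg-eth0 afterwards)
        brctl addbr xenbr0
        brctl addif xenbr0 eth0
        ip link set xenbr0 up

        # then point the domU at the bridge in /etc/xen/sardis.secusrvr.com.cfg:
        vif = [ 'bridge=xenbr0' ]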

    Read the article

  • Can you add preexisting storage pools in server 2012

    - by Justin
    I have been looking at Windows Server 2012's storage pools, and it looks like an ideal solution for my home media center. One thing I couldn't find information on is adding a preexisting pool to a fresh server install. I ask this given the following situation:

    1. You install Windows Server 2012 and set up your storage pools.
    2. You add disks over time to your pool.
    3. A year later, the drive with the operating system fails.
    4. You replace the bad drive and reinstall Server 2012.

    Now how do you add this preexisting storage pool, full of data, to your fresh install?
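
    In principle the pool metadata travels with the disks themselves, so a fresh install should detect the old pool on its own (typically read-only at first). A hedged PowerShell sketch of reattaching it (the FriendlyName is illustrative):

        # list the non-primordial pools the new install has discovered
        Get-StoragePool -IsPrimordial $false

        # imported pools usually come up read-only; make the pool writable again
        Set-StoragePool -FriendlyName "MediaPool" -IsReadOnly $false

        # reattach the virtual disks carried by the pool
        Get-VirtualDisk | Connect-VirtualDisk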

    Read the article

  • Web service for checking out / leasing a token

    - by JP Slavinsky
    I run a web site on AWS that has a number of web servers (say 4 of them) running behind a load balancer. For this particular web site, I have one New Relic license key for instrumentation. At any one time, I only want one of the 4 web servers to be using the key. If that server goes offline, I want one of the remaining web servers to be able to begin using it. Does anyone know of a service that would let me manage this process? The service would not particularly need to store the key itself, but rather just manage the fact that only one web server can lease out the right to use the key at any time - something where the web servers have to come back every few minutes and renew their lease, and if they don't, it becomes available to someone else. I just realized I could maybe accomplish a hacked version of this using a file on S3, but that doesn't prevent race conditions and is definitely hackish. Any thoughts welcome. FWIW, this site is built on Ruby on Rails. Thanks! JP
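
    One way to get race-free lease semantics, sketched below in Ruby against a shared Redis instance (key name and TTL are illustrative; any store with an atomic set-if-absent plus expiry would work the same way):

        require "socket"
        require "redis"  # assumes the redis gem and a Redis reachable by all web servers

        redis = Redis.new(url: ENV["REDIS_URL"])
        lease_key = "newrelic:license:lease"
        ttl       = 180  # seconds; renew well inside this window
        host      = Socket.gethostname

        # SET with nx+ex is atomic: only one server can create the key, and it
        # expires on its own if the holder stops renewing
        acquired = redis.set(lease_key, host, nx: true, ex: ttl)
        holder   = acquired || redis.get(lease_key) == host
        redis.expire(lease_key, ttl) if holder  # renew the lease we hold

    Each server would run this every minute or so and only enable the New Relic key while holder is true.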

    Read the article

  • Manually Add the TortoiseSVN Registry information?

    - by Pete Michaud
    I recently installed TortoiseSVN on my Windows 7 64-bit computer. For reasons outside the scope of this question, the installer could not get appropriate permissions to add the keys that TSVN needs in the registry. I'd like to add those keys manually, with a .reg file. I tried unzipping the .msi installer to see if a .reg file was in there, but no luck, and a search around the net turned up nothing either. I also looked in the source code, figuring there must be a file somewhere with a list of all the registry changes in one place, but I haven't found any such thing. How can I get a complete list of registry changes for a fresh TortoiseSVN installation?
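
    The registry writes live in the MSI's Registry table rather than in a .reg file, which is why unzipping turned up nothing. A hedged sketch of two ways to get at that table (file names are illustrative); user-level settings generally land under HKCU\Software\TortoiseSVN, while the shell-extension registration under HKLM is the part that needs admin rights:

        :: administrative install unpacks the package without running its custom actions
        msiexec /a TortoiseSVN-1.6.16-x64.msi TARGETDIR=C:\tsvn-extract

        :: or dump the MSI contents, including the Registry table, with lessmsi (or view it in Orca)
        lessmsi x TortoiseSVN-1.6.16-x64.msi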

    Read the article

  • Can't Add Columns to an AD Taskpad except at the top level of the domain

    - by Darktux
    We are working on an Active Directory taskpads application for user management in our organization and facing a strange issue. When we create a taskpad at the top level of the domain, I can click View - Add/Remove Columns and add "Pre-Windows 2000 Name" (and lots of other properties) to the taskpad as columns. But when I go just one level down, I can only see "Operating System" and "Service Pack". Why is this happening? Isn't "Domain Admins" supposed to have god access to all the things in an AD domain, at least for objects they own? It is important to have "Pre-Windows 2000 Name" as a column because without it our "Shell Command" task won't show up in taskpads, since it's bound to parameter "Col<9" (which is the pre-Windows 2000 name). Please do let me know if any additional questions would clarify my problem.

    Read the article

  • How to start and stop a systemd unit with another?

    - by Andy Shinn
    I am using CoreOS to schedule systemd units with fleet. I have two units (firehose.service and firehose-announce.service). I am trying to get firehose-announce.service to start and stop along with firehose.service. Here is the unit file for firehose-announce.service:

        [Unit]
        Description=Firehose etcd announcer
        BindsTo=firehose@%i.service
        After=firehose@%i.service
        Requires=firehose@%i.service

        [Service]
        EnvironmentFile=/etc/environment
        TimeoutStartSec=30s
        ExecStartPre=/bin/sh -c 'sleep 1'
        ExecStart=/bin/sh -c "port=$(docker inspect -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' firehose-%i); echo -n \"Adding socket $COREOS_PRIVATE_IPV4:$port/tcp to /firehose/upstream/firehose-%i\"; while netstat -lnt | grep :$port >/dev/null; do etcdctl set /firehose/upstream/firehose-%i $COREOS_PRIVATE_IPV4:$port --ttl 300 >/dev/null; sleep 200; done"
        RestartSec=30s
        Restart=on-failure

        [X-Fleet]
        X-ConditionMachineOf=firehose@%i.service

    I am trying to use BindsTo with the notion that starting and stopping firehose.service will also start or stop firehose-announce.service, but this never happens correctly. If firehose.service is stopped, firehose-announce.service goes to a failed state. But when I start firehose.service, firehose-announce.service doesn't start up. What am I doing wrong here?
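
    BindsTo (like PartOf) only propagates stops; nothing in this unit tells systemd to start firehose-announce when firehose starts. A hedged sketch of the missing reverse dependency, added to the firehose@.service template (assuming the announcer is likewise templated so %i lines up):

        [Unit]
        # pull the announcer up whenever this instance starts, and order it after us
        Wants=firehose-announce@%i.service
        Before=firehose-announce@%i.service

    With that in place, BindsTo on the announcer handles the stop/failure side and the Wants= handles the start side.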

    Read the article

  • Service redirection on same network

    - by Unode
    I have a network on which I run multiple servers, each dedicated to a given service. Because most services run on distinct ports, I'm currently looking for a way of unifying "all" services onto a single "proxy" machine. The idea is to abstract which machine is being accessed but still allow direct connection if needed/requested. This "proxy" machine has only one network interface, which is part of the same network as all the service-providing machines. I've looked into routing and NAT, but I've so far failed to figure out how to make it work. I tried to achieve this using Shorewall but couldn't find clear examples. However, I'm not entirely sure this is the best/simplest strategy. With that said, what would be the best way of achieving this result? Example case:

        Proxy IP       Listening port   Send requests to
        192.168.0.50   80               192.168.0.1:80
        "              22               192.168.0.2:2222
        "              3306             192.168.0.3:3000
        "              5432             192.168.0.4:5432
        "              5222             192.168.0.5:5222

    PS: I'm not concerned with the single-point-of-failure nature of the proxy. Thanks
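
    Since the proxy and the backends share one subnet, plain DNAT is not enough: the backend would answer the client directly and the connection would break. Pairing each DNAT with a MASQUERADE on the proxy fixes that. A minimal iptables sketch for two of the rows (repeat per service):

        # on 192.168.0.50
        echo 1 > /proc/sys/net/ipv4/ip_forward

        iptables -t nat -A PREROUTING -p tcp -d 192.168.0.50 --dport 80 \
                 -j DNAT --to-destination 192.168.0.1:80
        iptables -t nat -A PREROUTING -p tcp -d 192.168.0.50 --dport 22 \
                 -j DNAT --to-destination 192.168.0.2:2222

        # make replies flow back through the proxy
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.1 --dport 80   -j MASQUERADE
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.2 --dport 2222 -j MASQUERADE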

    Read the article

  • IIS_IUSRS cannot access files uploaded and created by Network Service - error 401.3

    - by Max
    Let me rephrase my question as I investigated further. The problem: I have a PHP script that is used to upload images on my Windows Web Server 2008. The files are created in the correct directory. They are created and owned by the user Network Service, and Network Service has full access to the uploaded file. As soon as I try to access the uploaded file (mostly an image) via HTTP, I get a 401.3 Not Authorized error. Now, if I right-click on the inaccessible image and grant the IIS_IUSRS group read permissions via the Security tab, the image can be accessed! By default, IIS_IUSRS has NO access at all to the uploaded file. The directory containing the image files has the correct access rights set, but each file newly uploaded to the directory is missing permissions for IIS_IUSRS. The question: How can I grant IIS_IUSRS access to newly uploaded files by default? The appPool of the website has its identity set to its default; I also tried setting it to "networkIdentity" or so, but that did not work either.
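
    A common cause: files that are *moved* (renamed) into a directory on the same NTFS volume keep the ACL they were created with in the temp directory, so the upload folder's inherited IIS_IUSRS entry never applies to them. A hedged sketch of the repair (paths are illustrative):

        :: give the upload folder an inheritable read ACE for IIS_IUSRS
        icacls "C:\inetpub\wwwroot\uploads" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)RX"

        :: force already-uploaded files to re-inherit from the folder
        icacls "C:\inetpub\wwwroot\uploads" /reset /T

    For new uploads, copying the temp file instead of moving it (or resetting the file's ACL from the PHP script after the move) makes inheritance kick in.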

    Read the article

  • WCF net.tcp windows service - call duration and calls outstanding increase over time

    - by Brook
    I have a windows service which uses the ServiceHost class to host a WCF service using the net.tcp binding. I have done some tweaking to the config to throttle sessions as well as the number of connections, but it seems that every once in a while my "Calls Outstanding" and "Call Duration" counters shoot up and stay up in perfmon. It seems to me I have a leak somewhere, but the code I have is all fairly minimal; I'm relying on ServiceHost to handle the details. Here's how I start my service:

        ServiceHost host = new ServiceHost(type);
        host.Faulted += new EventHandler(Faulted);
        host.Open();

    My Faulted event just does the following (more or less; logging etc. removed):

        if (host.State == CommunicationState.Faulted)
        {
            host.Abort();
        }
        else
        {
            host.Close();
        }
        host = new ServiceHost(type);
        host.Faulted += new EventHandler(Faulted);
        host.Open();

    Here are some snippets from my app.config to show some of the things I've tried:

        <runtime>
          <gcConcurrent enabled="true" />
          <generatePublisherEvidence enabled="false" />
        </runtime>
        .........
        <behaviors>
          <serviceBehaviors>
            <behavior name="Throttled">
              <serviceThrottling maxConcurrentCalls="300"
                                 maxConcurrentSessions="300"
                                 maxConcurrentInstances="300" />
        ..........
        <services>
          <service name="MyService" behaviorConfiguration="Throttled">
            <endpoint address="net.tcp://localhost:49001/MyService"
                      binding="netTcpBinding"
                      bindingConfiguration="Tcp"
                      contract="IMyService">
            </endpoint>
          </service>
        </services>
        ..........
        <netTcpBinding>
          <binding name="Tcp" openTimeout="00:00:10" closeTimeout="00:00:10"
                   portSharingEnabled="true" receiveTimeout="00:5:00" sendTimeout="00:5:00"
                   hostNameComparisonMode="WeakWildcard" listenBacklog="1000"
                   maxConnections="1000">
            <reliableSession enabled="false"/>
            <security mode="None"/>
          </binding>
        </netTcpBinding>
        ..........
        <!-- for my diagnostics -->
        <diagnostics performanceCounters="ServiceOnly" wmiProviderEnabled="true" />

    There's obviously some resource getting tied up, but I thought I covered everything with my config. I'm only getting about ~150 clients, so I don't think I'm coming up against my "300" limit. "Calls Per Second" stays constant at anywhere from 2-5 calls per second. The service will run for hours and hours with 0-2 calls outstanding and very low call duration, and then eventually it will shoot up to 30 calls outstanding and 20s call duration. Any tips on what might be causing my "calls outstanding" and "call duration" to spike? Where am I leaking? Point me in the right direction?
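
    One thing worth ruling out: with a session-based net.tcp binding, clients that never Close() their channels hold server resources until receiveTimeout (five minutes here) cleans them up, which can look exactly like a slow buildup of outstanding calls. A hedged sketch of the close-or-abort pattern each client should follow (endpoint name is illustrative):

        var factory = new ChannelFactory<IMyService>("tcpClient");
        IMyService proxy = factory.CreateChannel();
        try
        {
            proxy.DoWork();
            ((ICommunicationObject)proxy).Close();
        }
        catch
        {
            // never leave a faulted channel half-open
            ((ICommunicationObject)proxy).Abort();
            throw;
        }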

    Read the article

  • Object hierarchy returned by WCF Service is different than expected

    - by robalot
    Good Day Everyone... My understanding may be wrong, but I thought once you applied the correct attributes, the DataContractSerializer would render fully-qualified instances back to the caller. The code runs and the objects return. But oddly enough, once I look at the returned objects I noticed the namespacing disappeared and the object hierarchy being exposed through the (web application's) service reference seems to become "flat" (somehow). Now, I expect this from a web service… but not through WCF. Of course, my understanding of what WCF can do may be wrong. ...please keep in mind I'm still experimenting with all this. So my questions are…

    Q: Can I do something within the WCF service to force the namespacing to render through the (service reference) data client proxy?
    Q: Or perhaps, am I (merely) consuming the service incorrectly?
    Q: Is this even possible?

    The service code looks like…

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
        public class DataService : IFishData
        {
            public C1FE GetC1FE(Int32 key) { /* … more stuff here … */ }
            public Project GetProject(Int32 key) { /* … more stuff here … */ }
        }

        [ServiceContract]
        [ServiceKnownType(typeof(wcfFISH.StateManagement.C1FE.New))]
        [ServiceKnownType(typeof(wcfFISH.StateManagement.Project.New))]
        public interface IFishData
        {
            [OperationContract]
            C1FE GetC1FE(Int32 key);
            [OperationContract]
            Project GetProject(Int32 key);
        }

        [DataContract]
        [KnownType(typeof(wcfFISH.StateManagement.ObjectState))]
        public class Project
        {
            [DataMember]
            public wcfFISH.StateManagement.ObjectState ObjectState { get; set; }
            //… more stuff here …
        }

        [DataContract]
        [KnownType(typeof(wcfFISH.StateManagement.ObjectState))]
        public class C1FE
        {
            [DataMember]
            public wcfFISH.StateManagement.ObjectState ObjectState { get; set; }
            //… more stuff here …
        }

        [DataContract(Namespace = "wcfFISH.StateManagement")]
        [KnownType(typeof(wcfFISH.StateManagement.C1FE.New))]
        [KnownType(typeof(wcfFISH.StateManagement.Project.New))]
        public abstract class ObjectState
        {
            //… more stuff here …
        }

        [DataContract(Namespace = "wcfFISH.StateManagement.C1FE", Name="New")]
        [KnownType(typeof(wcfFISH.StateManagement.ObjectState))]
        public class New : ObjectState
        {
            //… more stuff here …
        }

        [DataContract(Namespace = "wcfFISH.StateManagement.Project", Name = "New")]
        [KnownType(typeof(wcfFISH.StateManagement.ObjectState))]
        public class New : ObjectState
        {
            //… more stuff here …
        }

    The web application code looks like…

        public partial class Fish_Invite : BaseForm
        {
            protected void btnTest_Click(object sender, EventArgs e)
            {
                Project project = new Project();
                project.Get(base.ProjectKey, base.AsOf);
                mappers.Project mapProject = new mappers.Project();
                srFish.Project fishProject = new srFish.Project();
                srFish.FishDataClient fishService = new srFish.FishDataClient();

                mapProject.MapTo(project, fishProject);
                fishProject = fishService.AddProject(fishProject, IUser.UserName);
                project = null;
            }
        }

    In case I'm not being clear, the issue arises as the namespacing I expect to see returned is different from what is actually returned:

        fishProject.ObjectState should look like... srFish.StateManagement.Project.New
        fishC1FE.ObjectState should look like...    srFish.StateManagement.C1FE.New

        fishProject.ObjectState looks like... srFish.New1
        fishC1FE.ObjectState looks like...    srFish.New

    …"Help me Obi-Wan Kenobi, you're my only hope!"
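
    Worth noting: the Namespace on [DataContract] is an XML namespace for the wire format, not a CLR namespace, and "Add Service Reference" generates every type into the single CLR namespace of the reference (srFish), deduplicating same-named contracts as New, New1, and so on. A hedged sketch of one way out - giving each state a unique contract Name (names are illustrative):

        // in CLR namespace wcfFISH.StateManagement.C1FE
        [DataContract(Namespace = "http://wcfFISH/StateManagement/C1FE", Name = "C1FENew")]
        public class New : ObjectState { /* … */ }

        // in CLR namespace wcfFISH.StateManagement.Project
        [DataContract(Namespace = "http://wcfFISH/StateManagement/Project", Name = "ProjectNew")]
        public class New : ObjectState { /* … */ }

    Alternatively, svcutil can map each XML namespace to its own CLR namespace with /namespace:&lt;xml-namespace&gt;,&lt;clr-namespace&gt; when generating the proxy.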

    Read the article

  • jQuery AutoComplete (jQuery UI 1.8rc3) with ASP.NET web service

    - by user296640
    Currently, I have this version of the autocomplete control working when returning XML from a .ashx handler. The XML looks like this:

        <?xml version="1.0" encoding="UTF-8" standalone="no" ?>
        <States>
          <State>
            <Code>CA</Code>
            <Name>California</Name>
          </State>
          <State>
            <Code>NC</Code>
            <Name>North Carolina</Name>
          </State>
          <State>
            <Code>SC</Code>
            <Name>South Carolina</Name>
          </State>
        </States>

    The autocomplete code looks like this:

        $('.autocompleteTest').autocomplete({
            source: function(request, response) {
                var list = [];
                $.ajax({
                    url: "http://commonservices.qa.kirkland.com/StateLookup.ashx",
                    dataType: "xml",
                    async: false,
                    data: request,
                    success: function(xmlResponse) {
                        list = $("State", xmlResponse).map(function() {
                            return {
                                value: $("Code", this).text(),
                                label: $("Name", this).text()
                            };
                        }).get();
                    }
                });
                response(list);
            },
            focus: function(event, ui) {
                $('.autocompleteTest').val(ui.item.label);
                return false;
            },
            select: function(event, ui) {
                $('.autocompleteTest').val(ui.item.label);
                $('.autocompleteValue').val(ui.item.value);
                return false;
            }
        });

    For various reasons, I'd rather be calling an ASP.NET web service, but I can't get it to work. To change over to the service (I'm doing a local service to keep it simple), the start of the autocomplete code is:

        $('.autocompleteTest').autocomplete({
            source: function(request, response) {
                var list = [];
                $.ajax({
                    url: "/Services/GeneralLookup.asmx/StateList",
                    dataType: "xml",

    This code is on a page at the root of the site, and GeneralLookup.asmx is in a subfolder named Services. But a breakpoint in the web service never gets hit, and no autocomplete list is generated. In case it makes a difference, the XML that comes from the asmx is:

        <?xml version="1.0" encoding="utf-8" ?>
        <string xmlns="http://www.kirkland.com/"><State>
            <Code>CA</Code>
            <Name>California</Name>
          </State>
          <State>
            <Code>NC</Code>
            <Name>North Carolina</Name>
          </State>
          <State>
            <Code>SC</Code>
            <Name>South Carolina</Name>
          </State></string>

    Functionally equivalent, since I never use the name of the root node in the mapping code. I haven't seen anything in the jQuery docs about calling a .asmx service from this control, but a .ajax call is a .ajax call, right? I've tried various different paths to the .asmx (~/Services/, etc.), and I've even moved the service to be in the same path to eliminate these issues. No luck with either. Any ideas?
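
    ASMX methods are normally reachable from script only as POST with a JSON content type, and only when the service class carries [System.Web.Script.Services.ScriptService]; a plain GET for XML - which is what this $.ajax call produces - is rejected before the method runs, so the breakpoint never fires. A hedged sketch of the call (assumes that attribute is applied and that the returned string is the XML fragment shown):

        $.ajax({
            type: "POST",
            url: "/Services/GeneralLookup.asmx/StateList",
            data: "{}",  // parameters as a JSON string, if any
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (msg) {
                // ASP.NET wraps the return value in .d; re-wrap the fragment so it
                // has a single root before parsing (jQuery 1.5+'s $.parseXML)
                var doc = $.parseXML("<States>" + msg.d + "</States>");
                var list = $("State", doc).map(function () {
                    return { value: $("Code", this).text(), label: $("Name", this).text() };
                }).get();
                response(list);
            }
        });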

    Read the article

  • Java: how to avoid circular references when dumping object information with reflection?

    - by Tom
    I've modified an object dumping method to avoid circular references causing a StackOverflow error. This is what I ended up with:

        // returns all fields of the given object in a string
        public static String dumpFields(Object o, int callCount, ArrayList excludeList) {
            // add this object to the exclude list to avoid circular references in the future
            if (excludeList == null) excludeList = new ArrayList();
            excludeList.add(o);
            callCount++;

            StringBuffer tabs = new StringBuffer();
            for (int k = 0; k < callCount; k++) {
                tabs.append("\t");
            }

            StringBuffer buffer = new StringBuffer();
            Class oClass = o.getClass();
            if (oClass.isArray()) {
                buffer.append("\n");
                buffer.append(tabs.toString());
                buffer.append("[");
                for (int i = 0; i < Array.getLength(o); i++) {
                    if (i < 0) buffer.append(",");
                    Object value = Array.get(o, i);
                    if (value != null) {
                        if (excludeList.contains(value)) {
                            buffer.append("circular reference");
                        } else if (value.getClass().isPrimitive()
                                || value.getClass() == java.lang.Long.class
                                || value.getClass() == java.lang.String.class
                                || value.getClass() == java.lang.Integer.class
                                || value.getClass() == java.lang.Boolean.class) {
                            buffer.append(value);
                        } else {
                            buffer.append(dumpFields(value, callCount, excludeList));
                        }
                    }
                }
                buffer.append(tabs.toString());
                buffer.append("]\n");
            } else {
                buffer.append("\n");
                buffer.append(tabs.toString());
                buffer.append("{\n");
                while (oClass != null) {
                    Field[] fields = oClass.getDeclaredFields();
                    for (int i = 0; i < fields.length; i++) {
                        if (fields[i] == null) continue;
                        buffer.append(tabs.toString());
                        fields[i].setAccessible(true);
                        buffer.append(fields[i].getName());
                        buffer.append("=");
                        try {
                            Object value = fields[i].get(o);
                            if (value != null) {
                                if (excludeList.contains(value)) {
                                    buffer.append("circular reference");
                                } else if ((value.getClass().isPrimitive())
                                        || (value.getClass() == java.lang.Long.class)
                                        || (value.getClass() == java.lang.String.class)
                                        || (value.getClass() == java.lang.Integer.class)
                                        || (value.getClass() == java.lang.Boolean.class)) {
                                    buffer.append(value);
                                } else {
                                    buffer.append(dumpFields(value, callCount, excludeList));
                                }
                            }
                        } catch (IllegalAccessException e) {
                            System.out.println("IllegalAccessException: " + e.getMessage());
                        }
                        buffer.append("\n");
                    }
                    oClass = oClass.getSuperclass();
                }
                buffer.append(tabs.toString());
                buffer.append("}\n");
            }
            return buffer.toString();
        }

    The method is initially called like this:

        System.out.println(dumpFields(obj, 0, null));

    So, basically I added an excludeList which contains all the previously checked objects. Now, if an object contains another object and that object links back to the original object, it should not follow that object further down the chain. However, my logic seems to have a flaw, as I still get stuck in an infinite loop. Does anyone know why this is happening?
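
    One likely flaw: ArrayList.contains compares with equals(), so two distinct objects that happen to be equal (or a type whose equals() itself walks the object graph) defeat the cycle check. Tracking visited objects by reference identity is the safer approach; a minimal sketch, with the excludeList parameter widened to a Set:

        // identity-based visited set: an object is "seen" only if it is the
        // same instance, regardless of how equals() is implemented
        Set<Object> visited =
            Collections.newSetFromMap(new IdentityHashMap<Object, Boolean>());

        // inside dumpFields: add() returns false if this instance was already seen
        if (!visited.add(value)) {
            buffer.append("circular reference");
        } else {
            buffer.append(dumpFields(value, callCount, visited));
        }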

    Read the article

  • Referencing variables in a structure / C++

    - by user1628622
    Below I provide a minimal example of the code I created. I managed to get this code working, but I'm not sure if the practice being employed is sound. In essence, what I am trying to do is have the 'Parameter' class reference selected elements in the 'States' class, so variables in States can be changed via Parameters. Questions I have: is the approach taken OK? If not, is there a better way to achieve what I am aiming for? Example code:

        struct VAR_TYPE{
        public:
            bool is_fixed;     // If is_fixed = true, then variable is a parameter
            double value;      // Numerical value
            std::string name;  // Description of variable (to identify it by name)
        };

        struct NODE{
        public:
            VAR_TYPE X, Y, Z;  /* VAR_TYPE is a structure of primitive types */
        };

        class States{
        private:
            std::vector <NODE_ptr> node;                // shared ptr to struct NODE
            std::vector <PROP_DICTIONARY_ptr> property; // CAN NOT be part of Parameter
            std::vector <ELEMENT_ptr> element;          // CAN NOT be part of Parameter
        public:
            /* etc. */
            void set_X_reference ( Parameter &T , int i ) { T.push_var( &node[i]->X ); }
            void set_Y_reference ( Parameter &T , int i ) { T.push_var( &node[i]->Y ); }
            void set_Z_reference ( Parameter &T , int i ) { T.push_var( &node[i]->Z ); }
            bool get_node_bool_X( int i ) { return node[i]->X.is_fixed; }
            // repeat for Y and Z
        };

        class Parameter{
        private:
            std::vector <VAR_TYPE*> var;
        public:
            /* etc. */
        };

        int main(){
            States S;
            Parameter P;

            /* Here I initialize and set S, and do other stuff */

            // Now I assign components in States to Parameters
            for(int n=0 ; n<S.size_of_nodes() ; n++ ){
                if ( S.get_node_bool_X(n)==true ){
                    S.set_X_reference ( P , n );
                };
                // repeat if statement for Y and Z
            };

            /* Now P points to selected data in S, and I can
             * modify the contents of S through P */
            return 0;
        };

    Update: The reason this issue cropped up is that I am working with Fortran legacy code. To sum up this Fortran code: it's a numerical simulation of a flight vehicle. The code has a fairly rigid procedural framework one must work within, which comes with a pre-defined list of allowable Fortran types. The Fortran glue code can create an instance of a C++ object (in actuality, a reference from the perspective of Fortran), but is not aware of what is contained in it (other means are used to extract C++ data into Fortran). The problem I encountered is that when a C++ module is dynamically linked to the Fortran glue code, C++ objects have to be initialized on each call into the C++ code, by virtue of how the Fortran template is defined. To avoid this cycle of re-initializing objects, I plan to use 'States' as a container class. The Fortran code allows a 'State' object, which has an arbitrary definition, and I plan to use it to harness all relevant information about the model. The idea is to use the Parameter class (which is exposed and updated by the Fortran code) to update variables in States.
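
    The raw VAR_TYPE* pointers are only safe while the owning NODE objects stay alive and in place. Since the nodes are already held through shared_ptr, one hedged alternative is the shared_ptr aliasing constructor (C++11), which keeps the node alive for as long as a Parameter references one of its members:

        // inside States: share ownership of the NODE but point at one member
        std::shared_ptr<NODE> n = node[i];           // NODE_ptr from the node vector
        std::shared_ptr<VAR_TYPE> x_ref(n, &n->X);   // aliasing constructor

        x_ref->value = 42.0;                         // writes through to the node

    Parameter would then hold std::vector<std::shared_ptr<VAR_TYPE>> instead of raw pointers, and a dangling reference becomes impossible.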

    Read the article

  • System Center 2012 Service Manager change request status stuck at new

    - by Chuck Herrington
    The guy that built and set up this system left rather abruptly and I've taken over. My current issues: I have several change requests that are stuck at New; they do not move to Pending or In Progress. The system is also not sending emails when incidents get assigned to people, which used to work on this system. I have done a lot of searching, and the usual solution of stopping and restarting the System Center services does not help. Can anyone give me any ideas of where else to look?

    Update: From all the searching I have done, it seemed like I was at the point of re-installing. My initial installation of SCSM 2012 was on a machine that was upgraded from SCSM 2010 and also hosted SCCM 2007 and WSUS. We decided to give it a fresh start on a new server by installing a second instance of the SCSM server on a brand new 2008 R2 server, then promoting the new server to the workflow master using the procedures outlined in this article - Dealing with Multiple Management Servers. I've gotten to the point where we have both the old and the new server up and the new server has been promoted. I had hoped to get spammed by emails all of a sudden due to the workflow taking off, but no such luck. Once all the clients are reconfigured to point to the new server, we still plan to decommission the old server, but at this point it seems the problem is in the database. Short of any other input from the community, my next plan is to install a 180-day trial on a test server, complete with a separate database, so that I can do a side-by-side comparison between a completely fresh install and what I have now and see if I can find any differences. While that install is running I also plan on investigating the event logs to see if there is anything in there that can shed some light on what is happening on the new server.

    Update 2: So I've now got a test SCSM server up with a completely fresh install, including the database, and it is able to transition change requests from New to In Progress. I'm attempting to find differences between the two. Stay tuned!

    Update 3: In looking through the event log on the new SCSM machine I discovered:

        Log Name:      Operations Manager
        Source:        OpsMgr Root Connector
        Date:          10/9/2013 3:48:18 PM
        Event ID:      28000
        Task Category: None
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      scsm02
        Description:   The Root connector received an exception from the SDK Service
                       while submitting task status: Cannot set availability on a
                       health service that doesn't exist.

    This led me to "Event ID 28000 logged after installing secondary server for System Center 2012 Service Manager SP1". I contacted MS to obtain the hotfix. BIG warning here: it turns out the hotfix is not so "hot". In order to apply it, you have to uninstall and then reinstall using the files they supply. :( This is where I am at now...

    Update 4: Not much luck after the re-install. The errors in the event log have gone away on the new server, but the workflows still aren't running, and neither the event log nor the workflow status screen seems to indicate why. I've done a comparison of the Activity and the Change Request Event Workflows, and I've removed everything from the production system that is not in my fresh test system (which is everything), shut down the services, cleared out the cache folders and restarted the services, and still no joy.

    At the moment the only thing I can think to do is either a) nuke the entire system, including the database, and start over, losing all of our data in the process, or b) contact MS (which is probably going to cost us a butt load of money and time, only for them to advise us to do the same thing in the end). Maybe more ideas will come after coffee... No answers came after coffee. Attempting to contact MS. Managed to get to their first line of defense, gave them our SA number, and someone is supposed to call me back. I am trying to log into my incident on their site to update my ticket with the link to this thread, but when I click on the link in the email they sent me it goes to a "Sorry, the page you requested is not available" page... Linux is looking better and better all the time.

    Read the article

  • .Net Crystal Report printing application running over a terminal service connection errors when session disconnects

    - by MrEdmundo
    I have created a .Net application to run on an app server that gets requests for a report and prints out the requested report. The C# application uses Crystal Reports to load the report and subsequently print it out. The application is run on a server which is connected to via a Remote Desktop connection under a particular user account (required for old apps). When I disconnect from the Remote Session, the application starts raising exceptions such as:

        Message: CrystalDecisions.Shared.CrystalReportsException: Load report failed

    This type of error is never raised when the Remote Session is active. The server running the app is running Windows Server 2003; my box which creates the connection is Windows XP. I appreciate this is fairly weird; however, I cannot see any problem with the application deployment I have created. Does anyone know what could be causing this issue?

    EDIT: I bit the bullet and created the application as a windows service. Obviously this doesn't take long; I just wasn't convinced it would solve the problem. Anyway, it doesn't! I have also tried removing the multi-threaded code that was calling the print function asynchronously. I did this in order to simplify the app and narrow down the reason it could fail. Anyway, this didn't improve the situation either!

    EDIT: The two errors I get are:

        System.Runtime.InteropServices.COMException (0x80000201): Invalid printer specified.
           at CrystalDecisions.ReportAppServer.Controllers.PrintOutputControllerClass.ModifyPrinterName(String newVal)
           at CrystalDecisions.CrystalReports.Engine.PrintOptions.set_PrinterName(String value)
           at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

    The printer isn't invalid; this is confirmed when, 60 seconds later, the timer ticks and the report is printed successfully. And:

        The request could not be submitted for background processing.
           at CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.GetLastPageNumber(RequestContext pRequestContext)
           at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
        --- End of inner exception stack trace ---
           at CrystalDecisions.ReportAppServer.ConvertDotNetToErom.ThrowDotNetException(Exception e)
           at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
           at CrystalDecisions.CrystalReports.Engine.FormatEngine.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
           at CrystalDecisions.CrystalReports.Engine.ReportDocument.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
           at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

    EDIT: I ran Filemon to check if there were any access issues. At the point when the error occurs, Filemon reports:

        Request: OPEN | Path: C:\windows\assembly\gac_msil\system\2.0.0.0__b77a5c561934e089\ws2_32.dll | Result: NOT FOUND | Other: Attributes Error

    Read the article

  • Problems connecting to WCF Service via NetNamedPipeBinding

    - by John
    I'm having trouble figuring out how to get a named pipe WCF service to work. The service is in a separate assembly from the executable. The config looks like this:

        <system.serviceModel>
          <bindings>
            <netNamedPipeBinding>
              <binding name="NoSecurityIPC">
                <security mode="None" />
              </binding>
            </netNamedPipeBinding>
          </bindings>
          <client>
            <endpoint name="internal" address="channel1" binding="netNamedPipeBinding"
                      bindingConfiguration="NoSecurityIPC"
                      contract="conplement.TimeService.ICpTimeService" />
          </client>
          <services>
            <service name="cpTimeService">
              <host>
                <baseAddresses>
                  <add baseAddress="net.pipe://localhost/" />
                </baseAddresses>
              </host>
              <endpoint address="channel1" binding="netNamedPipeBinding"
                        bindingConfiguration="NoSecurityIPC"
                        contract="conplement.TimeService.ICpTimeService" />
            </service>
          </services>
        </system.serviceModel>

    I'm using a ChannelFactory to create a proxy to access the service host:

        ServiceHost h = new ServiceHost(typeof(TimeService), new Uri("net.pipe://localhost/"));
        h.AddServiceEndpoint(typeof(ITimeService), new NetNamedPipeBinding("NoSecurityIPC"), "net.pipe://localhost/");
        h.Open();

        ChannelFactory<ITimeService> factory =
            new ChannelFactory<ITimeService>("channel1", new EndpointAddress(new Uri("net.pipe://localhost/")));
        ICpTimeService proxy = factory.CreateChannel();
        using (proxy as IDisposable)
        {
            this.ds = proxy.LoadData();
        }

    I'm not sure what I'm doing wrong when I create the ChannelFactory. It can't seem to find "channel1" in the config. When I create my binding manually and pass it to the ChannelFactory constructor, the factory and the proxy are created, but the call to LoadData() fails (times out). Can anyone see what I'm doing wrong here?
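
    Two details stand out: the client endpoint's *name* is "internal" (its address, "channel1", is relative to the base address), and the &lt;service name&gt; attribute must be the fully qualified type name of the service class for that block to be applied. A hedged sketch of the client side:

        // look the endpoint up by its configuration *name*, not its relative address
        var factory = new ChannelFactory<ICpTimeService>("internal");
        ICpTimeService proxy = factory.CreateChannel();

        // or address it explicitly: base address + relative address
        var factory2 = new ChannelFactory<ICpTimeService>(
            new NetNamedPipeBinding { Security = { Mode = NetNamedPipeSecurityMode.None } },
            new EndpointAddress("net.pipe://localhost/channel1"));

    The host side should likewise register the endpoint at the relative address "channel1" (not the bare base address) so the two line up.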

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT: for an exclusive Oracle discount, enter promo code DJMXZ370.

    Early-Bird Registration is Now Open with Special Pricing! Register before July 1, 2012 to qualify for discounts. Visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    Big Data with CEP and SOA - September 25, 2012 - 14:15
    Speakers: Markus Zirn, Oracle, and Baz Kuthi, Avocent

    The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine using Hadoop/MapReduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data", complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combine CEP, distributed caching, event-driven networks, SOA Composites, the Application Development Framework, as well as Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in-memory master data for event pattern matching, event-driven user interfaces, and distributed event processing. The focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture.

    Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end-user learning solution. He is the author of "The BPEL Cookbook" (rated best book on Service-Oriented Architecture in 2007) as well as "Fusion Middleware Patterns". Previously, he was a management consultant with Booz Allen & Hamilton's High Tech practice in Duesseldorf as well as San Francisco, and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Masters of Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France.

    KEYNOTES & SPEAKERS
    More than 80 international subject matter experts will be speaking at the Symposium. Below are confirmed keynotes and speakers so far. Over 50% of the agenda has not yet been finalized; many more speakers to come. View the partial program calendars on the Conference Agenda page.

    CONFERENCE THEMES & TRACKS
      • Cloud Computing Architecture & Patterns
      • New SOA & Service-Orientation Practices & Models
      • Emerging Service Technology Innovation
      • Service Modeling & Analysis Techniques
      • Service Infrastructure & Virtualization
      • Cloud-based Enterprise Architecture
      • Business Planning for Cloud Computing Projects
      • Real World Case Studies
      • Semantic Web Technologies (with & without the Cloud)
      • Governance Frameworks for SOA and/or Cloud Computing Projects
      • Service Engineering & Service Programming Techniques
      • Interactive Services & the Human Factor
      • New REST & Web Services Tools & Techniques

    Oracle Specialized SOA & BPM Partners: Oracle Specialized partners have proven their skills by certifications and customer references. To find a local Specialized partner please visit http://solutions.oracle.com

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Mix | Forum

    Technorati Tags: Markus Zirn, SOA Symposium, Thomas Erl, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • WCF - Multiple schemes (HTTP and HTTPS) in the same service

    - by Ender
    I am trying to set up a WCF service in production. The service has two bindings with two different interfaces. One endpoint (basicHttpBinding) is set up over HTTP, and the other endpoint (wsHttpBinding) is set up securely over SSL. I can't get this scenario to work. Everything works with no problem if both endpoints are set up over HTTP. Before I even get into the specifics of the errors I get: is it possible to run a secure and an insecure endpoint on the same service? Here is a brief description of my configuration:

        <serviceBehaviors>
          <behavior name="MyServiceBehavior">
            <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
            <serviceCredentials>
              <serviceCertificate findValue="123312123123123123123399451b178"
                                  storeLocation="LocalMachine" storeName="My"
                                  x509FindType="FindByThumbprint" />
              <issuedTokenAuthentication allowUntrustedRsaIssuers="true"/>
            </serviceCredentials>
          </behavior>
        </serviceBehaviors>
        <bindings>
          <basicHttpBinding>
            <binding name="basicHttpBinding" maxReceivedMessageSize="2147483647">
            </binding>
          </basicHttpBinding>
          <wsHttpBinding>
            <binding name="wsHttpBinding" maxReceivedMessageSize="2147483647">
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName" establishSecurityContext="False"/>
              </security>
            </binding>
          </wsHttpBinding>
        </bindings>
        <services>
          <service behaviorConfiguration="MyServiceBehavior" name="MyService">
            <endpoint binding="wsHttpBinding" bindingConfiguration="wsHttpBinding"
                      contract="IMyService1">
            </endpoint>
            <endpoint address="mms" binding="basicHttpBinding"
                      bindingConfiguration="basicHttpBinding" contract="IMyService2">
            </endpoint>
            <endpoint address="mex" listenUri="" binding="mexHttpBinding"
                      contract="IMetadataExchange" />
          </service>
        </services>

    Thanks!

    Read the article

  • Using the Data Form Web Part (SharePoint 2010) Site Agnostically!

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/24/154465.aspx

    As a developer who has worked closely with web designers (power users) in a SharePoint environment, I have come across the issue of making the Data Form Web Part reusable across the site collection! In SharePoint 2007 it was very easy, and this blog pointed the way to make it happen: Josh Gaffey's Blog. In SharePoint 2010 something changed! That method failed except when using a Data Form Web Part that pointed to a list in the site collection root! I am framing this discussion around a developer who creates a solution (WSP) with all the artifacts embedded, so that the user shouldn't have any involvement in the process except to activate features.

    The Scenario:
    1. A power user creates a Data Form Web Part using SharePoint Designer 2010! It is a great web part that uses all the power of SharePoint Designer and XSLT (conditional formatting, etc.).
    2. Other users in the site collection want to use that specific web part in subsites in the site collection, pointing to a list with the same name, not at the site collection root!

    The Issues:
    1. The Data Form Web Part data source uses a list ID (GUID) to point to the specific list, which means a list in a subsite will have a new GUID, different from the one the web part was created with. Obviously, the list needs to be the same list (fields, content types, etc.) with different data.
    2. How can we make this web part site agnostic, dependent only on the list's name?

    I had this problem come up over and over and decided to put my solution forward!

    The Solution:
    1. Use the XSL of the Data Form Web Part created by the power user in SharePoint Designer!
    2. Extend the OOTB Data Form Web Part to use this XSL and point to a list by name.

    This is a hybrid solution that requires some coding (developer) and the XSL (power user) artifacts put together in a Visual Studio SharePoint solution. Here are the solution steps in summary:

    1. Create an empty SharePoint project.
    2. Create a Module and Feature and put the XSL file created by the power user into it.
       a. Scope the feature to web.
    3. Create a feature receiver to create the list - the same list from which the Data Form Web Part was created by the power user.
       a. Scope the feature to web.
    4. Create a web part extending the Data Form Web Part.
       a. Point the Data Form Web Part to the list by name.
       b. Point the Data Form Web Part XSL link to the XSL added using the Module feature.
       c. Scope the feature to site.
          i. This is because all web parts live in the site collection web part gallery.

    So, in a narrative summary: we are creating a list in code which has the same name and site columns as the list from which the power user created the Data Form Web Part using SharePoint Designer, and we are creating a web part in code which extends the OOTB Data Form Web Part to point to a list by name and use the XSL created by the power user.

    Okay! Here are the steps with images and code! At the end of this post I will provide a link to the code for a solution which works in any site! I want to toot the horn for the power of this solution! It is the mantra I use with all my clients: what is a basic skill for a SharePoint developer? Create an application that uses the data from a SharePoint list and make that data visible to the user in a manner which meets requirements!
    Create an empty SharePoint 2010 project; here I am naming my project DJ.DataFormWebPart. Create a Code folder, copy and paste the Extension and Utilities classes into it (found in the solution provided at the end of this post), and change the namespace to match this project.

    The list to which the Data Form Web Part points - the one used to make the XSL in SharePoint Designer - is now going to be created in code! (If it is already created in code, all the better!) Here I am going to create a list in the site collection root and add some data to it. For the purpose of this discussion I will actually create this list in code before using SharePoint Designer, for simplicity, so I create the list and deploy it within this solution before I do anything else. I will use a list I created before for demo purposes: "Footer List" is used within the footer of my master page.

    Add a new feature; here I name the feature FooterList and add a feature event receiver. I have a previous blog post about adding lists in code, so I will not take time to narrate this code:

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Permissions;
        using Microsoft.SharePoint;
        using DJ.DataFormWebPart.Code;

        namespace DJ.DataFormWebPart.Features.FooterList
        {
            /// <summary>
            /// This class handles events raised during feature activation, deactivation,
            /// installation, uninstallation, and upgrade.
            /// </summary>
            /// <remarks>
            /// The GUID attached to this class may be used during packaging and should not be modified.
            /// </remarks>
            [Guid("a58644fd-9209-41f4-aa16-67a53af7a9bf")]
            public class FooterListEventReceiver : SPFeatureReceiver
            {
                SPWeb currentWeb = null;
                SPSite currentSite = null;
                const string columnGroup = "DJ";
                const string ctName = "FooterContentType";

                public override void FeatureActivated(SPFeatureReceiverProperties properties)
                {
                    using (SPWeb spWeb = properties.GetWeb() as SPWeb)
                    {
                        using (SPSite site = new SPSite(spWeb.Site.ID))
                        {
                            using (SPWeb rootWeb = site.OpenWeb(site.RootWeb.ID))
                            {
                                // add the fields
                                addFields(rootWeb);

                                // add the content type; we will not create it if it already exists
                                SPContentType testCT = rootWeb.ContentTypes[ctName];
                                if (testCT == null)
                                {
                                    // the content type does not exist, add it
                                    addContentType(rootWeb, ctName);
                                }

                                if ((spWeb.Lists.TryGetList("FooterList") == null))
                                {
                                    // create the list if it doesn't exist
                                    CreateFooterList(spWeb, site);
                                }
                            }
                        }
                    }
                }

                #region ContentType
                public void addFields(SPWeb spWeb)
                {
                    Utilities.addField(spWeb, "Link", SPFieldType.URL, false, columnGroup);
                    Utilities.addField(spWeb, "Information", SPFieldType.Text, false, columnGroup);
                }

                private static void addContentType(SPWeb spWeb, string name)
                {
                    SPContentType myContentType =
                        new SPContentType(spWeb.ContentTypes["Item"], spWeb.ContentTypes, name)
                        {
                            Group = columnGroup
                        };
                    spWeb.ContentTypes.Add(myContentType);
                    addContentTypeLinkages(spWeb, myContentType);
                    myContentType.Update();
                }

                public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct)
                {
                    Utilities.addContentTypeLink(spWeb, "Link", ct);
                    Utilities.addContentTypeLink(spWeb, "Information", ct);
                }

                private void CreateFooterList(SPWeb web, SPSite site)
                {
                    Guid newListGuid = web.Lists.Add("FooterList", "Footer List", SPListTemplateType.GenericList);
                    SPList newList = web.Lists[newListGuid];
                    newList.ContentTypesEnabled = true;
                    var footer = site.RootWeb.ContentTypes[ctName];
                    newList.ContentTypes.Add(footer);
                    newList.ContentTypes.Delete(newList.ContentTypes["Item"].Id);
                    newList.Update();

                    var view = newList.DefaultView;
                    // add all view fields here
                    // view.ViewFields.Add("NewsTitle");
                    view.ViewFields.Add("Link");
                    view.ViewFields.Add("Information");
                    view.Update();
                }
                #endregion
            }
        }

    Basically this creates a content type with two site columns, Link and Information. I had to change some code since we are working at the SPWeb level but need content types at the SPSite level! I'll use a new site collection for this demo (best practice) to keep old artifacts from impinging on development.

    Next we will add this list to the root of the site collection by deploying this solution, add some data, and then use SharePoint Designer to create a Data Form Web Part. With the list added and some data in it, the SharePoint Designer steps are:

    1. Create a new web part page in the Site Pages library; I will name it TestWP.aspx and edit it in advanced mode.
    2. Add an empty Data Form Web Part to the web part zone.
    3. Click on the web part to add a data source, and choose FooterList in the Data Source menu.
    4. Choose appropriate fields and select "insert as multiple item view".
    5. Add some conditional formatting for when the Information field is not blank: choose Create (right side), apply formatting, choose the Information field, set the condition to "not null", and click Set Style.

    Okay! Not flashy, but simple enough for this demo. Remember, this is the job of the power user! All we want from this web part is the XSL style sheet out of SharePoint Designer. We are going to use it as the XSL for our own web part, which we will create next by extending the OOTB Data Form Web Part.
    Add a new item from the Visual Studio add menu and choose Web Part. Change WebPart to DataFormWebPart (oh well, my namespace needs some improvement, but it will sure make it readily identifiable as an extended web part!). Below is the code for this web part:

        using System;
        using System.ComponentModel;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.WebControls;
        using System.Text;

        namespace DJ.DataFormWebPart.DataFormWebPart
        {
            [ToolboxItemAttribute(false)]
            public class DataFormWebPart : Microsoft.SharePoint.WebPartPages.DataFormWebPart
            {
                protected override void OnInit(EventArgs e)
                {
                    base.OnInit(e);
                    this.ChromeType = PartChromeType.None;
                    this.Title = "FooterListDF";
                    try
                    {
                        //SPSite site = SPContext.Current.Site;
                        SPWeb web = SPContext.Current.Web;
                        SPList list = web.Lists.TryGetList("FooterList");
                        if (list != null)
                        {
                            string queryList1 = "<Query><Where><IsNotNull><FieldRef Name='Title' /></IsNotNull></Where><OrderBy><FieldRef Name='Title' Ascending='True' /></OrderBy></Query>";
                            uint maximumRowList1 = 10;
                            SPDataSource dataSourceList1 = GetDataSource(list.Title, web.Url, list, queryList1, maximumRowList1);
                            this.DataSources.Add(dataSourceList1);
                            this.XslLink = web.Url + "/Assests/Footer.xsl";
                            this.ParameterBindings = BuildDataFormParameters();
                            this.DataBind();
                        }
                    }
                    catch (Exception ex)
                    {
                        this.Controls.Add(new LiteralControl("ERROR: " + ex.Message));
                    }
                }

                private SPDataSource GetDataSource(string dataSourceId, string webUrl, SPList list, string query, uint maximumRow)
                {
                    SPDataSource dataSource = new SPDataSource();
                    dataSource.UseInternalName = true;
                    dataSource.ID = dataSourceId;
                    dataSource.DataSourceMode = SPDataSourceMode.List;
                    dataSource.List = list;
                    dataSource.SelectCommand = "" + query + "";

                    Parameter listIdParam = new Parameter("ListID");
                    listIdParam.DefaultValue = list.ID.ToString("B").ToUpper();
                    Parameter maximumRowsParam = new Parameter("MaximumRows");
                    maximumRowsParam.DefaultValue = maximumRow.ToString();
                    QueryStringParameter rootFolderParam = new QueryStringParameter("RootFolder", "RootFolder");

                    dataSource.SelectParameters.Add(listIdParam);
                    dataSource.SelectParameters.Add(maximumRowsParam);
                    dataSource.SelectParameters.Add(rootFolderParam);
                    dataSource.UpdateParameters.Add(listIdParam);
                    dataSource.DeleteParameters.Add(listIdParam);
                    dataSource.InsertParameters.Add(listIdParam);

                    return dataSource;
                }

                private string BuildDataFormParameters()
                {
                    StringBuilder parameters = new StringBuilder("<ParameterBindings><ParameterBinding Name=\"dvt_apos\" Location=\"Postback;Connection\"/><ParameterBinding Name=\"UserID\" Location=\"CAMLVariable\" DefaultValue=\"CurrentUserName\"/><ParameterBinding Name=\"Today\" Location=\"CAMLVariable\" DefaultValue=\"CurrentDate\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_firstrow\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_nextpagedata\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_adhocmode\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_adhocfiltermode\" Location=\"Postback;Connection\"/>");
                    parameters.Append("</ParameterBindings>");
                    return parameters.ToString();
                }
            }
        }

    We use the OnInit method to set the list name and the XSL link property of the Data Form Web Part.
    We do not have the XSL in our solution yet, so we will add it now. Add a Module from the Visual Studio add menu, rename Sample.txt in the module to footer.xsl, and then copy in the XSL from SharePoint Designer. Look at elements.xml to see where footer.xsl is being provisioned to, which is Assets/footer.xsl, and make sure the web part's XSL link points to this URL.

    Okay, we are good to go! Let's check our features and package: DataFormWebPart should be scoped to site and contain the web part; the FooterList feature should be scoped to web and contain the Assets module (okay, I see, a spelling issue, but it won't affect this demo). If everything is correct, we should be able to click a couple of subsite feature activations and have our list and web part in a subsite. (In fact, this solution can be activated anywhere.)

    With the list created at SubSite1 and filled with new data, let's add the web part on a test page and see if it works as expected: it does! So we now have a repeatable way to use a WSP to move a Data Form Web Part around our sites! Here is a link to the code: DataFormWebPart Solution

    Read the article

  • Calling a .NET web service (WSE 3.0, WS-Security) from JAXWS-RI

    - by elduff
    I'm writing a JAXWS-RI client that must call a .NET web service that is using WS-Security. The service's WSDL does not contain any WS-Security info, but I have an example SOAP message from the service's authors and know that I must include wsse:Security headers, including X.509 tokens. I've been researching, and I've seen examples of folks calling this type of web service from Axis and CXF (in conjunction with Rampart and/or WSS4J), but nothing about using plain JAXWS-RI itself. However, I'm (unfortunately) constrained to using JAXWS-RI by my gov't client. Does anyone have any examples/documentation of doing this from JAXWS-RI? I need to ultimately generate a SOAP header that looks something like the one below - this is a sample soap:Header from a .NET client written by the service's authors. (Note: I've put the 'VALUE_HERE' string in places where I need to provide my own values.)

        <soapenv:Envelope xmlns:iri="http://EOIR/IRIES"
                          xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
          <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <xenc:EncryptedKey Id="VALUE_HERE">
                <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p"/>
                <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                  <wsse:SecurityTokenReference>
                    <wsse:KeyIdentifier EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
                                        ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3">
                      VALUE_HERE
                    </wsse:KeyIdentifier>
                  </wsse:SecurityTokenReference>
                </ds:KeyInfo>
                <xenc:CipherData>
                  <xenc:CipherValue>VALUE_HERE</xenc:CipherValue>
                </xenc:CipherData>
                <xenc:ReferenceList>
                  <xenc:DataReference URI="#EncDataId-8"/>
                </xenc:ReferenceList>
              </xenc:EncryptedKey>
            </wsse:Security>
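
    Plain JAX-WS RI has no WS-Security runtime of its own (that is what Metro/WSIT layers on top of it), but the header can be built by hand in a SOAPHandler registered on the port. A minimal sketch that injects the wsse:Security element; producing the actual X.509 token, encryption, and signature content is still up to your code (class name and token value are illustrative):

        import java.util.Collections;
        import java.util.Set;
        import javax.xml.namespace.QName;
        import javax.xml.soap.*;
        import javax.xml.ws.handler.MessageContext;
        import javax.xml.ws.handler.soap.SOAPHandler;
        import javax.xml.ws.handler.soap.SOAPMessageContext;

        public class WsseHeaderHandler implements SOAPHandler<SOAPMessageContext> {
            private static final String WSSE =
                "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

            @Override
            public boolean handleMessage(SOAPMessageContext ctx) {
                // only touch outbound requests
                if (!(Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY)) {
                    return true;
                }
                try {
                    SOAPEnvelope env = ctx.getMessage().getSOAPPart().getEnvelope();
                    SOAPHeader header = env.getHeader() == null ? env.addHeader() : env.getHeader();
                    SOAPElement security = header.addChildElement("Security", "wsse", WSSE);
                    SOAPElement token = security.addChildElement("BinarySecurityToken", "wsse");
                    token.addTextNode("VALUE_HERE"); // base64-encoded X.509 token
                } catch (SOAPException e) {
                    throw new RuntimeException(e);
                }
                return true;
            }

            @Override public boolean handleFault(SOAPMessageContext ctx) { return true; }
            @Override public void close(MessageContext ctx) { }
            @Override public Set<QName> getHeaders() { return Collections.emptySet(); }
        }

    The handler is attached with ((BindingProvider) port).getBinding().setHandlerChain(...). For the full EncryptedKey/signature exchange shown above, though, Metro's WSIT policy configuration is usually a more realistic route than hand-rolling the XML encryption.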

    Read the article

  • How to make custom WCF error handler return JSON response with non-OK http code?

    - by John
    I'm implementing a RESTful web service using WCF and the WebHttpBinding. Currently I'm working on the error handling logic, implementing a custom error handler (IErrorHandler); the aim is to have it catch any uncaught exceptions thrown by operations and then return a JSON error object (including, say, an error code and error message, e.g. { "errorCode": 123, "errorMessage": "bla" }) back to the browser user, along with an HTTP code such as BadRequest, InternalServerError or whatever (anything other than 'OK', really). Here is the code I am using inside the ProvideFault method of my error handler:

        fault = Message.CreateMessage(version, "", errorObject, new DataContractJsonSerializer(typeof(ErrorMessage)));
        var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
        fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
        var rmp = new HttpResponseMessageProperty();
        rmp.StatusCode = System.Net.HttpStatusCode.InternalServerError;
        rmp.Headers.Add(HttpRequestHeader.ContentType, "application/json");
        fault.Properties.Add(HttpResponseMessageProperty.Name, rmp);

    This returns with Content-Type: application/json; however, the status code is 'OK' instead of 'InternalServerError'.

        fault = Message.CreateMessage(version, "", errorObject, new DataContractJsonSerializer(typeof(ErrorMessage)));
        var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
        fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
        var rmp = new HttpResponseMessageProperty();
        rmp.StatusCode = System.Net.HttpStatusCode.InternalServerError;
        //rmp.Headers.Add(HttpRequestHeader.ContentType, "application/json");
        fault.Properties.Add(HttpResponseMessageProperty.Name, rmp);

    This returns with the correct status code; however, the content-type is now XML.

        fault = Message.CreateMessage(version, "", errorObject, new DataContractJsonSerializer(typeof(ErrorMessage)));
        var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
        fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
        var response = WebOperationContext.Current.OutgoingResponse;
        response.ContentType = "application/json";
        response.StatusCode = HttpStatusCode.InternalServerError;

    This returns with the correct status code and the correct content-type! The problem is that the HTTP body now has the text 'Failed to load source for: http://localhost:7000/bla..' instead of the actual JSON data. Any ideas? I'm considering using the last approach and just sticking the JSON in the HTTP StatusMessage header field instead of in the body, but this doesn't seem quite as nice?

    Read the article

  • Lost in Translation – Common Mistakes Interpreting Patterns – Mark Simpson, Griffiths-Waite @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT: for an exclusive Oracle discount, enter promo code DJMXZ370. For details please visit the registration page.

    The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    Speaker: Mark Simpson, Griffiths-Waite
    Mark has been specialising in Oracle technology for 13 years, the last 10 of these with Griffiths Waite. Mark leads our SOA technology practice (covering SOA, Business Process Management and Enterprise Architecture). He is a much sought-after presenter on the Oracle and SOA conference circuits, and a respected authority on these technologies. Mark has advised a host of leading UK organisations on the deployment of BPM/SOA solutions. Working closely with Oracle US Product Development, Mark has contributed to Oracle's SOA Methodology and Oracle's SOA Maturity Model.

    Lost in Translation – Common Mistakes Interpreting Patterns - September 25, 2012 - 13:55
    Learn how small misinterpretations of high-level design patterns can have large and costly project ramifications. Good SOA design benefits from the use of a reference architecture and standardised design patterns. However, both of these concepts give an abstracted view of the intended solution, which needs to be interpreted to become realised. A reference implementation is important to demonstrate how key design guidelines can be implemented in the toolset of choice, but the main success factor is how these are used through the build and post-live phases of the project. This session will introduce practical design patterns, with supporting implementation examples, that, if used correctly, will give long-term benefit. We will highlight implementations where misinterpretations of, or misalignment from, pattern aims have led to issues post-implementation. The session will add depth to the pattern discussions you are already having, enabling confidence in proceeding to the next level of realisation whilst considering how the patterns may be implemented within your solution and chosen toolset.

    KEYNOTES & SPEAKERS
    More than 80 international subject matter experts will be speaking at the Symposium. Below are confirmed keynotes and speakers so far. Over 50% of the agenda has not yet been finalized; many more speakers to come. View the partial program calendars on the Conference Agenda page.

    CONFERENCE THEMES & TRACKS
      • Cloud Computing Architecture & Patterns
      • New SOA & Service-Orientation Practices & Models
      • Emerging Service Technology Innovation
      • Service Modeling & Analysis Techniques
      • Service Infrastructure & Virtualization
      • Cloud-based Enterprise Architecture
      • Business Planning for Cloud Computing Projects
      • Real World Case Studies
      • Semantic Web Technologies (with & without the Cloud)
      • Governance Frameworks for SOA and/or Cloud Computing Projects
      • Service Engineering & Service Programming Techniques
      • Interactive Services & the Human Factor
      • New REST & Web Services Tools & Techniques

    Oracle Specialized SOA & BPM Partners: Oracle Specialized partners have proven their skills by certifications and customer references. To find a local Specialized partner please visit http://solutions.oracle.com

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Mix | Forum

    Technorati Tags: Mark Simpson, Griffiths Waite, SOA Patterns, SOA Symposium, Thomas Erl, SOA Community, Oracle SOA, Oracle BPM, BPM, Community, OPN, Jürgen Kress

    Read the article

  • SharePoint 2010 Custom WCF Service - Windows and FBA Authentication

    - by e-rock
    I have SharePoint 2010 configured for Claims Based Authentication with both Windows and Forms Based Authentication (FBA) for external users. I also need to develop custom WCF services. The issue is that I want Windows credentials passed into the WCF service(s); however, I cannot seem to get the Windows credentials passed in. My custom WCF service appears to be using Anonymous authentication (which has to be enabled in IIS in order to display the FBA login screen). The example I tried to follow is found at http://msdn.microsoft.com/en-us/library/ff521581.aspx. The WCF service gets deployed to _vti_bin (the ISAPI folder). Here is the code for the .svc file:

        <%@ ServiceHost Language="C#" Debug="true"
            Service="MyCompany.CustomerPortal.SharePoint.UI.ISAPI.MyCompany.Services.LibraryManagers.LibraryUploader, $SharePoint.Project.AssemblyFullName$"
            Factory="Microsoft.SharePoint.Client.Services.MultipleBaseAddressBasicHttpBindingServiceHostFactory, Microsoft.SharePoint.Client.ServerRuntime, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"
            CodeBehind="LibraryUploader.svc.cs" %>

    Here is the code-behind for the .svc file:

        [ServiceContract]
        public interface ILibraryUploader
        {
            [OperationContract]
            string SiteName();
        }

        [BasicHttpBindingServiceMetadataExchangeEndpoint]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
        public class LibraryUploader : ILibraryUploader
        {
            // just try to return the site title right now…
            public string SiteName()
            {
                WindowsIdentity identity = ServiceSecurityContext.Current.WindowsIdentity;
                ClaimsIdentity claimsIdentity = new ClaimsIdentity(identity);

                return SPContext.Current.Web.Title;
            }
        }

    The WCF test client I have just to test it out (a WPF app) uses the following code to call the WCF service:

        private void Button1Click(object sender, RoutedEventArgs e)
        {
            BasicHttpBinding binding = new BasicHttpBinding();
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;

            EndpointAddress endpoint = new EndpointAddress(
                "http://dev.portal.data-image.local/_vti_bin/MyCompany.Services/LibraryManagers/LibraryUploader.svc");

            LibraryUploaderClient libraryUploader = new LibraryUploaderClient(binding, endpoint);
            libraryUploader.ClientCredentials.Windows.AllowedImpersonationLevel =
                System.Security.Principal.TokenImpersonationLevel.Impersonation;

            MessageBox.Show(libraryUploader.SiteName());
        }

    I am somewhat inexperienced with IIS security settings/configurations when it comes to Claims and to mixing Windows and FBA, and I am also inexperienced when it comes to WCF security configuration. I usually develop internal business apps and let Visual Studio decide what to use, because security is rarely a concern.

    Read the article
