Search Results

Search found 20426 results on 818 pages for 'service packs'.


  • How do you store accented characters coming from a web service into a database?

    - by Thierry Lam
    I have the following word that I fetch via a web service: André

    From Python, the value looks like: "Andr\u00c3\u00a9". The input is then decoded using json.loads:

        >>> import json
        >>> json.loads('{"name":"Andr\\u00c3\\u00a9"}')
        {u'name': u'Andr\xc3\xa9'}

    When I store the above in a utf8 MySQL database, the data is stored like the following using Django:

        SomeObject.objects.create(name=u'Andr\xc3\xa9')

    Querying the name column from a mysql shell or displaying it in a web page gives: André

    The web page displays in utf8:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    My database is configured in utf8:

        mysql> SHOW VARIABLES LIKE 'collation%';
        +----------------------+-----------------+
        | Variable_name        | Value           |
        +----------------------+-----------------+
        | collation_connection | utf8_general_ci |
        | collation_database   | utf8_unicode_ci |
        | collation_server     | utf8_unicode_ci |
        +----------------------+-----------------+
        3 rows in set (0.00 sec)

        mysql> SHOW VARIABLES LIKE 'character_set%';
        +--------------------------+----------------------------+
        | Variable_name            | Value                      |
        +--------------------------+----------------------------+
        | character_set_client     | utf8                       |
        | character_set_connection | utf8                       |
        | character_set_database   | utf8                       |
        | character_set_filesystem | binary                     |
        | character_set_results    | utf8                       |
        | character_set_server     | utf8                       |
        | character_set_system     | utf8                       |
        | character_sets_dir       | /usr/share/mysql/charsets/ |
        +--------------------------+----------------------------+
        8 rows in set (0.00 sec)

    How can I retrieve the word André from a web service, store it properly in a database with no data loss and display it on a web page in its original form?


  • Calling services from the Orchestrating layer in SOA?

    - by Martin Lee
    The Service Oriented Architecture Principles site says that Service Composition is an important thing in SOA. But Service Loose Coupling is important as well. Does that mean that the "Orchestrating layer" should be the only one that is allowed to make calls to services in the system? As I understand SOA, the "Orchestrating layer" 'glues' all the services together into one software application. I tried to depict that in Fig.A and Fig.B. The difference between the two is that in Fig.A the application is composed of services and all the logic is done in the "Orchestrating layer" (all calls to services are made from the "Orchestrating layer" only). In Fig.B the application is also composed of services, but one service calls another service. Does the architecture in Fig.B violate the "Service Loose Coupling" principle of SOA? Can a service call another service in SOA? And more generally, can the architecture in Fig.A be considered superior to the one in Fig.B in terms of service loose coupling, abstraction, reusability, autonomy, etc.? My guess is that the A architecture is much more universal, but it can add some unnecessary data transfers between the "Orchestrating layer" and all the called services.
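    For illustration, here is a minimal C# sketch of the Fig.A style (all names are hypothetical, not from the question), where only the orchestrating component calls the individual services:

        public interface IInventoryService { void Reserve(string sku, int qty); }
        public interface IBillingService { void Charge(string customerId, decimal amount); }

        // Fig.A style: the orchestration layer is the only caller of services;
        // the services never reference each other.
        public class OrderOrchestrator
        {
            private readonly IInventoryService _inventory;
            private readonly IBillingService _billing;

            public OrderOrchestrator(IInventoryService inventory, IBillingService billing)
            {
                _inventory = inventory;
                _billing = billing;
            }

            public void PlaceOrder(string customerId, string sku, int qty, decimal amount)
            {
                // All cross-service logic lives here.
                _inventory.Reserve(sku, qty);
                _billing.Charge(customerId, amount);
            }
        }

    In the Fig.B style, the implementation of IInventoryService would itself take a dependency on IBillingService and call it, which is exactly the coupling the question is asking about.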


  • How to add a service to the type descriptor context of a property grid in .Net?

    - by Jules
    I have an app that allows the user to choose an image, at design time, either as a straight image, or from an image list. All cool so far, except that this is not happening from the Visual Studio property browser; it's happening from a property grid that is part of a type editor. My problem is, both the image picker (actually resource picker) and the imagelist type converter rely on some design-time services to get the job done. In the case of the imagelist, it's the IReferenceService, and in the case of the resource picker it's a service called _DTE. In the first instance of an edit from the Visual Studio property browser, I could get a reference to these services, but (1) how can I add them to the type descriptor context of my property grid? It would be better, for future proofing, if I could just copy a reference to all of the services in the type descriptor context. (2) Where does the property browser get these services from in the first place?

    ETA: I still don't know how to do it, but I now know it is possible.
    (1) Sub-class a control and add a property whose type is an array of buttons.
    (2) Add it to a form.
    (3) Select the new control on the design surface and edit the new property in the property browser.
    (4) The collection editor dialog pops up.
    (5) Add a button.
    (6) Edit image and image list - the type editor and type converter, respectively, behave as they should.


  • How to start with entity framework and service oriented architecture?

    - by citronas
    At work I need to create a new web application that will connect to a MySQL database. (So far I only have experience with LINQ to SQL classes and MSSQL servers.) My superior tells me to use the Entity Framework (he probably means LINQ to Entities) and provide everything as a service-based architecture. Unfortunately nobody at work has experience with that framework or with a real service oriented architecture. (Till now no customer wanted to pay for architecture that he can't see. This specific project I'm leading will be long-term, meaning multiple years, so it would be best to design it in a way that multiple target platforms like ASP.NET, C# WPF, ... could use it.) For now, the main target platform is ASP.NET. So I have the following questions:

    1) Where can I best read up on what's really behind service oriented architecture (for now, beginner tutorials work fine as well) and how to do it following best practice?
    2) So far I can't see a real difference between LINQ to SQL classes and the information I've googled so far on the 'entity framework'. So, what's the difference? Where do I find good tutorials for it?
    3) Is there any difference in the Entity Framework regarding the database server (MSSQL or MySQL)? If not, does that mean that code snippets I will stumble across will work database-independent?
    4) I do use Visual Studio 2010. Is there anything specific I have to take into account?


  • What are good strategies for organizing a single-class-per-query service layer?

    - by KallDrexx
    Right now my ASP.NET MVC application is structured as Controller - Services - Repositories. The services consist of aggregate root classes that contain methods. Each method is a specific operation that gets performed, such as retrieving a list of projects, adding a new project, or searching for a project, etc. The problem with this is that my service classes are becoming really fat with a lot of methods. As of right now I am separating methods out into categories separated by #region tags, but this is quickly getting out of control. I can definitely see it becoming hard to determine what functionality already exists and where modifications need to go. Since the methods in the service classes are isolated and don't really interact with each other, they really could be more standalone. After reading some articles, such as this, I am thinking of following the single-query-per-class model, as it seems like a more organized solution. Instead of trying to figure out what class and method you need to call to perform an operation, you just have to figure out the class. My only reservation with the single-query-per-class method is that I need some way to organize the 50+ classes I will end up with. Does anyone have any suggestions for strategies to best organize this type of pattern?
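    For illustration, a minimal sketch of the one-query-per-class shape (all type names are hypothetical, not from the question), with a feature-based namespace doing the organizing:

        using System.Collections.Generic;

        public class Project { public int Id; public string Name; }

        public interface IProjectRepository { IList<Project> GetAll(); }

        // One interface for every query object; each concrete class is exactly one operation.
        public interface IQuery<TResult>
        {
            TResult Execute();
        }

        // Namespace-per-feature keeps the 50+ classes discoverable, e.g.
        // MyApp.Services.Projects.GetProjectListQuery, SearchProjectsQuery, ...
        public class GetProjectListQuery : IQuery<IList<Project>>
        {
            private readonly IProjectRepository _repository;

            public GetProjectListQuery(IProjectRepository repository)
            {
                _repository = repository;
            }

            public IList<Project> Execute()
            {
                // The whole operation lives in one small, standalone class.
                return _repository.GetAll();
            }
        }

    With this shape the folder/namespace tree takes over the organizing job that the #region tags were doing, and finding an operation reduces to finding a class name.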


  • Batch file writing to log then ending process

    - by Andrew Service
    I have a batch file that calls a process, and in that process I have:

        IF %ERRORLEVEL% NEQ 0 EXIT /B %ERRORLEVEL%

    Now I want to upgrade this a bit and write a meaningful message to an output log when a process fails. I also do not want the main batch to continue processing, because the next processes are dependent on data from the previous calls. I wonder if this would be correct, but I'm not sure:

        CALL Process 1
        IF %ERRORLEVEL% NEQ 0 GOTO ErrorInfirstProcess /B %ERRORLEVEL%
        :ErrorInfirstProcess
        ECHO Process 1 Failed on %Date% at %Time%. >>C:\Log.txt"

        CALL Process 2
        IF %ERRORLEVEL% NEQ 0 GOTO ErrorInSecondProcess /B %ERRORLEVEL%
        :ErrorInSecondProcess
        ECHO Process 2 Failed on %Date% at %Time%. >>C:\Log.txt"

    I also want to know if I still need the /B, or do I need to put an EXIT command after the echo?
    Thanks, A


  • How to send KeyEvents through an input method service to a Dialog, or a Spinner menu?

    - by shutdown11
    I'm trying to implement an input method service that receives intents sent by a remote client and, in response to those, sends an appropriate KeyEvent. In the input method service I'm using this method:

        private void keyDownUp(int keyEventCode) {
            getCurrentInputConnection().sendKeyEvent(
                    new KeyEvent(KeyEvent.ACTION_DOWN, keyEventCode));
            getCurrentInputConnection().sendKeyEvent(
                    new KeyEvent(KeyEvent.ACTION_UP, keyEventCode));
        }

    to send KeyEvents, as in the simple SoftKeyboard sample, and it works on the home screen, in Activities... but it doesn't work when a Dialog or the menu of a Spinner is in the foreground. The event is sent to the parent activity behind the Dialog. Is there any way to send keys and control the device like using the hardware keys from an input method?

    Better explanation of what I'm trying to do: I am writing an input method that allows controlling the device remotely. In a client (a Java application on my desktop PC) I write a command (for example "UP"), a server on the device sends the intent with the information via sendBroadcast(), and a receiver in the input method gets it and calls keyDownUp with the keycode of the DPAD_UP key. It generally works, but when I go to an app that shows a dialog, the keyDownUp method doesn't send the key event to the dialog (for example, to select the Yes or No buttons); it keeps controlling the activity behind the Dialog.


  • CXF code first service, WSDL generation; soap:address changes?

    - by jcalvert
    I have a simple Java interface/implementation I am exposing via CXF. I have a jaxws element in my Spring configuration file like this:

        <jaxws:endpoint id="managementServiceJaxws"
            implementor="#managementService"
            address="/jaxws/ManagementService">
        </jaxws:endpoint>

    It generates the WSDL from my annotated interface and exposes the service. Then when I hit http://myhostname/cxf/jaxws/ManagementService?wsdl I get a lovely WSDL. At the bottom, in the wsdl:service element, I'll see:

        <soap:address location="http://myhostname/cxf/jaxws/ManagementService"/>

    However, some time a day or so later, with no application restart, hitting that same URL produces a soap:address pointing at localhost instead. This causes a number of problems, but what I really want is to fix it. Right now, there's a particular client of the web service that sets the endpoint to localhost, because it runs on the same machine. Is it possible the WSDL is getting regenerated and cached, and is then exposing the 'localhost' version? In part I don't know the exact mechanism by which one goes from a ?wsdl request in CXF to the response. It seems almost certain that it's retrieving some cached version, given that it's supposed to be determining the address by asking the servlet container (Jetty). For reference, I know a stopgap solution is using the hostname on the client and making sure an alias is in place so that it goes over the loopback.

    EDIT: For reference, I confirmed that if I bring my application up and first hit it over localhost, then querying for the WSDL via the hostname shows the address as localhost. Conversely, first hitting it over the hostname causes localhost requests to show the hostname. So obviously something is getting cached here.


  • RIA Service/OData: "Requests that attempt to access a single element using key values from a result set are not supported"

    - by user327911
    I've recently started working up a sample project to play with an OData feed coming from a RIA service. I am able to view the feed and the metadata via any web browser; however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions.

    Sample OData feed:

        ProductSet
        http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/
        2010-04-28T14:02:10Z
        http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')
        2010-04-28T14:02:10Z
        b0a2b170-c6df-441f-ae2a-74dd19901128
        Product 0
        Type 1
        Active

    Sample web.config entry:

    Sample service:

        [EnableClientAccess()]
        public class ProductService : DomainService
        {
            [Query(IsDefault = true)]
            public IQueryable<Product> GetProducts()
            {
                IList<Product> products = new List<Product>();
                for (int i = 0; i < 90; i++)
                {
                    Product product = new Product
                    {
                        Id = Guid.NewGuid(),
                        Name = "Product " + i.ToString(),
                        ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"),
                        Status = i % 2 == 0 ? "Active" : "NotActive"
                    };
                    products.Add(product);
                }
                return products.AsQueryable();
            }
        }

    If I provide the URL "http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')" to my web browser, I receive the following:

        Requests that attempt to access a single element using key values from a result set are not supported.

    I'm new to RIA and OData. Could this be something as simple as my web browsers not supporting this type of querying on the result set, or something else? Thanks ahead! Corey


  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.NET Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

        using System;
        using System.Data.Services;
        using System.Linq.Expressions;
        using System.ServiceModel;
        using System.Web;

        namespace Oslo.Worker
        {
            [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
            public class AdminService : DataService<OsloEntities>
            {
                public static void InitializeService(
                    IDataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                }

                [QueryInterceptor("Pairs")]
                public Expression<Func<Pair, bool>> OnQueryPairs()
                {
                    // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
                    if (HttpContext.Current.User.Identity.Name != "ADMIN")
                        throw new Exception("Ooops!");

                    return p => true;
                }
            }
        }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

        using System;
        using System.Data.Services;

        namespace Oslo.Worker
        {
            public class AdminHost : DataServiceHost
            {
                public AdminHost(Uri baseAddress)
                    : base(typeof(AdminService), new Uri[] { baseAddress })
                {
                }
            }
        }

    And finally, here's the client code:

        using System;
        using System.Data.Services.Client;
        using System.Net;
        using Oslo.Shared;

        namespace Oslo.ClientTest
        {
            public class AdminContext : DataServiceContext
            {
                public AdminContext(Uri serviceRoot, string userName,
                    string password) : base(serviceRoot)
                {
                    Credentials = new NetworkCredential(userName, password);
                }

                public DataServiceQuery<Order> Orders
                {
                    get { return base.CreateQuery<Order>("Orders"); }
                }
            }
        }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....
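    For what it's worth, a NetworkCredential is typically only transmitted in response to an HTTP authentication challenge from the host. One workaround pattern (a sketch only, not the poster's code) uses the DataServiceContext.SendingRequest event to attach the credentials as a custom header, assuming the service runs with ASP.NET compatibility so the interceptor can read HttpContext; the "X-UserName" header name is made up for illustration:

        // Client side: attach a custom header to every outgoing request.
        // In a real system, pair this with HTTPS and a proper auth scheme.
        var context = new AdminContext(new Uri("http://localhost/AdminService.svc"),
                                       "ADMIN", "secret");
        context.SendingRequest += (sender, e) =>
        {
            e.Request.Headers["X-UserName"] = "ADMIN";
        };

        // Server side, inside the QueryInterceptor, the header could then be read:
        // string user = HttpContext.Current.Request.Headers["X-UserName"];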


  • Do you catch expected exceptions in the controller or the business service of your ASP.NET MVC application?

    - by Pascal
    I am developing an ASP.NET MVC application where user1 can delete data records that were just loaded by user2. User2 then either changes this now non-existent data record (update) or inserts data referencing it into another table, so that a foreign-key constraint is violated. Where do you catch such expected exceptions: in the controller of your ASP.NET MVC application, or in the business service? Just a sidenote: I only catch the SqlException here if it's a foreign-key constraint exception, to tell the user that another user has deleted a certain parent record and therefore he cannot create the testplan. But this code is not fully implemented yet!

    Controller:

        public JsonResult CreateTestplan(Testplan testplan)
        {
            bool success = false;
            string error = string.Empty;
            try
            {
                success = testplanService.CreateTestplan(testplan);
            }
            catch (SqlException ex)
            {
                error = ex.Message;
            }
            return Json(new { success = success, error = error }, JsonRequestBehavior.AllowGet);
        }

    OR

    Business service:

        public Result CreateTestplan(Testplan testplan)
        {
            Result result = new Result();
            try
            {
                using (var con = new SqlConnection(_connectionString))
                using (var trans = new TransactionScope())
                {
                    con.Open();
                    _testplanDataProvider.AddTestplan(testplan);
                    _testplanDataProvider.CreateTeststepsForTestplan(testplan.Id, testplan.TemplateId);
                    trans.Complete();
                    result.Success = true;
                }
            }
            catch (SqlException e)
            {
                result.Error = e.Message;
            }
            return result;
        }

    then in the controller:

        public JsonResult CreateTestplan(Testplan testplan)
        {
            Result result = testplanService.CreateTestplan(testplan);
            return Json(new { success = result.Success, error = result.Error }, JsonRequestBehavior.AllowGet);
        }


  • Identity alternative for SQL Azure Federation: are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    Like many developers, I'm looking for a way to integrate my existing app with SQL Azure Federations, and replacing the identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the debate about GUID or not, it's not my question: I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id, which is "farm-ready". I've read this SO post, "ID Generation for Sharded Database (Azure Federated Database)", and "Synchronizing Multiple Nodes in Windows Azure" from MSDN Magazine, but this solution sounds a bit complicated to me. I'm thinking about creating (automatically) one Azure queue for each SQL table, which contains a pre-loaded list of consecutive integers. When I want an Id value, I just have to get a message from the queue (which becomes invisible and is deleted on the way), which gives me the next available Id. Regarding the choice between "Windows Azure Queues" and "Windows Azure Service Bus Queues", I prefer "Windows Azure Queues", due to the "high" latency of Service Bus Queues. I don't think that the lack of an "ordering guarantee" in Azure Queues is a problem. What do you think about that idea of using Azure Queues to provide Id values? Do you see any argument to give up that idea? Do you have a better idea, or even a good practice, to provide integer Ids in SQL Azure Federation databases? Thanks.
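    A minimal sketch of that idea against the classic Azure storage SDK (Microsoft.WindowsAzure.Storage; the queue naming scheme is hypothetical, and the queue is assumed to be pre-filled with consecutive integers by a separate filler job):

        using System;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Queue;

        public class QueueIdProvider
        {
            private readonly CloudQueue _queue;

            public QueueIdProvider(string connectionString, string tableName)
            {
                var account = CloudStorageAccount.Parse(connectionString);
                var client = account.CreateCloudQueueClient();
                // One queue per table, e.g. "ids-orders", pre-loaded with "1", "2", "3", ...
                _queue = client.GetQueueReference("ids-" + tableName.ToLowerInvariant());
            }

            public int NextId()
            {
                // Dequeue one message; it becomes invisible to other consumers.
                CloudQueueMessage msg = _queue.GetMessage();
                if (msg == null)
                    throw new InvalidOperationException("Id queue is empty - refill job needed.");

                int id = int.Parse(msg.AsString);
                _queue.DeleteMessage(msg);   // consume it for good
                return id;
            }
        }

    One caveat with this design: if the process dies between GetMessage and DeleteMessage, the message becomes visible again after the invisibility timeout and the same Id could be handed out twice, so a consumer either has to tolerate that or delete the message before using the value and accept the occasional "burned" Id.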


  • How to get a nicely formatted PHP Web Service response?

    - by Bruno
    I called an API like this:

        $service = new Class_Service();
        $parameters = new GetClasses();
        $parameters->Request = new GetClassesRequest();
        $parameters->Request->SourceCredentials = new SourceCredentials();
        $parameters->Request->SourceCredentials->SourceName = "Name";
        $parameters->Request->SourceCredentials->Password = "Pass";
        $parameters->Request->SourceCredentials->SiteIDs = array( 12 );
        $classes = $service->GetClasses($parameters);
        var_dump($classes);

    And received a response like this:

        object(GetClassesResponse)#7 (1) {
          ["GetClassesResult"]=>
          object(GetClassesResult)#8 (6) {
            ["Classes"]=>
            object(stdClass)#9 (1) {
              ["Class"]=>
              array(25) {
                [0]=>
                object(Mi_Class)#10 (21) {
                  ["ClassScheduleID"]=> int(15)
                  ["Visits"]=> NULL
                  ["Clients"]=> NULL
                  ["Location"]=>
                  object(Location)#11 (30) {
                    ["BusinessID"]=> NULL
                    ["SiteID"]=> int(12)
                    ["BusinessDescription"]=> NULL
                    ["AdditionalImageURLs"]=>
                    object(stdClass)#12 (0) {
                    }
                    ["FacilitySquareFeet"]=> NULL

    Does a response normally look like this? How do I go about getting the data in a formatted manner?


  • What kind of online hosting do I need for a WCF-based service?

    - by mafutrct
    First of all, I'm not sure if SO is the right place to ask. Please migrate me if needed. I would like to host a WCF-based service so it is available for everyone. While hosting it on my personal, local servers succeeded, I would prefer to move it to an external service provider for various reasons. I'll be blunt: I have no clue about hosting providers. I know there are web hosts, virtual and root servers, and several other services. What I would like to know is what kind of hosting I need in my case. I understand that a root server would easily fulfill my requirements, but that is not exactly cheap. The program I'd like to run on the server requires .NET 4, preferably on a Windows machine. Access to a folder in the file system is much appreciated (1 GB of storage is enough by far). Communication with clients (applications written in .NET) happens via an open port on the server. Traffic is low (<<1 GB/month?). There is no website. Having the provider perform updates would be nice. My understanding is that a virtual server would be a possible solution. Prices seem to start at around 5€/month, which is OK for me. However, I read that for these cheap solutions RAM is severely limited (~400 MB), and I'm not confident that is enough to run Windows and a .NET application.


  • How to use custom attributes over a web service?

    - by gfeli
    Hi. I am currently trying to add a custom "column name" to a property in a web service. Here is my class:

        public class OrderCost
        {
            public int OrderNum { get; set; }
            public int OrderLine { get; set; }
            public int OrderRel { get; set; }
            public DateTime OrderDate { get; set; }
            public string PartNum { get; set; }
            public string Description { get; set; }
            public decimal Qty { get; set; }
            public string SalesUM { get; set; }
            public decimal Cost { get; set; }
            public decimal Price { get; set; }
            public decimal Net { get; set; }
            public decimal Margin { get; set; }
            public string EntryPerson { get; set; }
            public string CustID { get; set; }
            public string Customer { get; set; }
        }

    Basically I have another class (on the Silverlight side) that loops through all the properties and creates a column for each property. The thing is, I want to use a different name than the name of the property. For example, I would like to show "Order Number" instead of OrderNum. I have attempted to use custom attributes, but that does not seem to work. Is there a way I can provide a different name for these properties over a web service with the use of an attribute? Is there another way I can achieve what I am trying to do?
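    For reference, the attribute mechanics themselves are straightforward when the attribute class is visible to the side doing the reflection; the usual catch with web services is that attributes are not carried through WSDL, so a proxy-generated copy of the type will not have them. A minimal sketch, with all names hypothetical:

        using System;
        using System.Reflection;

        // Must be defined in (or referenced by) the assembly that reflects over it.
        [AttributeUsage(AttributeTargets.Property)]
        public class ColumnNameAttribute : Attribute
        {
            public string Name { get; private set; }
            public ColumnNameAttribute(string name) { Name = name; }
        }

        public class OrderCost
        {
            [ColumnName("Order Number")]
            public int OrderNum { get; set; }
        }

        public static class ColumnNameReader
        {
            // Returns the friendly name if present, else falls back to the property name.
            public static string HeaderFor(PropertyInfo property)
            {
                var attr = (ColumnNameAttribute)Attribute.GetCustomAttribute(
                    property, typeof(ColumnNameAttribute));
                return attr != null ? attr.Name : property.Name;
            }
        }

    If the Silverlight side only sees generated proxy types, the options are to share the type assembly between the two sides or to keep a client-side dictionary mapping property names to display names.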


  • Elegant methods for caching search results from a RESTful service?

    - by Paul
    I have a RESTful web service which I access from the browser using JavaScript. As an example, say that this web service returns a list of all the Message resources assigned to me when I send a GET request to /messages/me. For performance reasons, I'd like to cache this response so that I don't have to re-fetch it every time I visit my Manage Messages web page. The cached response would expire after 5 minutes. If a Message resource is created "behind my back", say by the system admin, it's possible that I won't know about it for up to 5 minutes, until the cached search response expires and is re-fetched. This is acceptable, because it creates no confusion for me. However if I create a new Message resource which I know should be part of the search response, it becomes confusing when it doesn't appear on my Manage Messages page immediately. In general, when I knowingly create/delete/update a resource that invalidates a cached search response, I need that cached response to be expired/flushed immediately. The core problem which I can't figure out: I see no simple way of connecting the task of creating/deleting/updating a resource with the task of expiring the appropriate cached responses. In this example it seems simple, I could manually expire the cached search response whenever I create/delete/update a(ny) Message resource. But in a more complex system, keeping track of which search responses to expire under what circumstances will get clumsy quickly. If someone could suggest a simple solution or some clarifying thoughts, I'd appreciate it.
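    One generic answer to the "clumsy bookkeeping" concern is to tag each cached response with the resource types it depends on and invalidate by tag, so a write only needs to know its resource type, not which searches it affects. A minimal sketch of the idea (in C# for brevity; the same shape works in browser JavaScript, and the 5-minute expiry is omitted):

        using System.Collections.Generic;

        // Cache entries are indexed by key and also grouped under coarse "tags"
        // (e.g. "Message"), so a write to any Message can flush every search
        // response that depends on Messages without tracking them one by one.
        public class TaggedCache
        {
            private readonly Dictionary<string, object> _entries = new Dictionary<string, object>();
            private readonly Dictionary<string, HashSet<string>> _keysByTag =
                new Dictionary<string, HashSet<string>>();

            public void Put(string key, object value, params string[] tags)
            {
                _entries[key] = value;
                foreach (var tag in tags)
                {
                    HashSet<string> keys;
                    if (!_keysByTag.TryGetValue(tag, out keys))
                        _keysByTag[tag] = keys = new HashSet<string>();
                    keys.Add(key);
                }
            }

            public bool TryGet(string key, out object value)
            {
                return _entries.TryGetValue(key, out value);
            }

            // Call this from every create/delete/update of a resource type.
            public void InvalidateTag(string tag)
            {
                HashSet<string> keys;
                if (!_keysByTag.TryGetValue(tag, out keys)) return;
                foreach (var key in keys) _entries.Remove(key);
                keys.Clear();
            }
        }

    Usage would look like cache.Put("/messages/me", response, "Message") after a fetch, and cache.InvalidateTag("Message") after any create/delete/update of a Message; the cost is coarser invalidation in exchange for much simpler bookkeeping.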


  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* web services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work; however, finding the proper interfaces to make that happen is not easy, and for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:

        <soapenv:Header>
          <wsse:Security soapenv:mustUnderstand="1"
              xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
            <wsse:UsernameToken wsu:Id="UsernameToken-8"
                xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
              <wsse:Username>TeStUsErNaMe1</wsse:Username>
              <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">TeStPaSsWoRd1</wsse:Password>
              <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce>
              <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created>
            </wsse:UsernameToken>
          </wsse:Security>
        </soapenv:Header>

    Specifically, the Nonce and Created keys are what WCF doesn't create or have built-in formatting for.

    Why is there a nonce?

    My first thought here was WTF? The username and password are there in clear text, so what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request, which the server can store and check for before running another request, thus ensuring that a request is not replayed with exactly the same values.

    Basic SvcUtil Import - not much Luck

    The first thing I did was simply import this service as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds the WS-Security to the basicHttpBinding in the config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="RealTimeOnlineSoapBinding">
                  <security mode="Transport" />
                </binding>
                <binding name="RealTimeOnlineSoapBinding1" />
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="https://notarealurl.com:443/services/RealTimeOnline"
                  binding="basicHttpBinding"
                  bindingConfiguration="RealTimeOnlineSoapBinding"
                  contract="RealTimeOnline.RealTimeOnline"
                  name="RealTimeOnline" />
            </client>
          </system.serviceModel>
        </configuration>

    If I run this as is, using code like this:

        var client = new RealTimeOnlineClient();
        client.ClientCredentials.UserName.UserName = "TheUsername";
        client.ClientCredentials.UserName.Password = "ThePassword";

    … I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport-level security to be applied, rather than message-level security.
    To fix this so that a WS-Security message header is sent, the security mode can be changed to:

        <security mode="TransportWithMessageCredential" />

    Now if I re-run, I at least get a WS-Security header, which looks like this:

        <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          <s:Header>
            <o:Security s:mustUnderstand="1"
                xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <u:Timestamp u:Id="_0">
                <u:Created>2012-11-24T02:55:18.011Z</u:Created>
                <u:Expires>2012-11-24T03:00:18.011Z</u:Expires>
              </u:Timestamp>
              <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1">
                <o:Username>TheUserName</o:Username>
                <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
              </o:UsernameToken>
            </o:Security>
          </s:Header>

    Closer! Now the WS-Security header is there, along with a timestamp field (which might not be accepted by some services expecting WS-Security), but there's no Nonce or Created timestamp as required by my original service.

    Using a CustomBinding instead

    My next try was to go with a CustomBinding instead of basicHttpBinding, as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings, here's what the config file looks like:

        <?xml version="1.0"?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <customBinding>
                <binding name="CustomSoapBinding">
                  <security includeTimestamp="false"
                            authenticationMode="UserNameOverTransport"
                            defaultAlgorithmSuite="Basic256"
                            requireDerivedKeys="false"
                            messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10">
                  </security>
                  <textMessageEncoding messageVersion="Soap11"></textMessageEncoding>
                  <httpsTransport maxReceivedMessageSize="2000000000"/>
                </binding>
              </customBinding>
            </bindings>
            <client>
              <endpoint address="https://notrealurl.com:443/services/RealTimeOnline"
                  binding="customBinding"
                  bindingConfiguration="CustomSoapBinding"
                  contract="RealTimeOnline.RealTimeOnline"
                  name="RealTimeOnline" />
            </client>
          </system.serviceModel>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
          </startup>
        </configuration>

    This ends up creating a cleaner header that's missing the timestamp field, which can cause problems for some services. The WS-Security header output generated with the above looks like this:

        <s:Header>
          <o:Security s:mustUnderstand="1"
              xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
            <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1">
              <o:Username>TheUsername</o:Username>
              <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
            </o:UsernameToken>
          </o:Security>
        </s:Header>

    This is closer, as it includes only the username and password. The key here is the protocol for WS-Security:

        messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"

    which explicitly specifies the protocol version. There are several variants of this specification, but none of them seem to support the nonce, unfortunately.
    This protocol does allow for optional omission of the Nonce and Created timestamp (which effectively makes those keys optional). With some services I tried that requested a Nonce, just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately for my target service that was not an option. The nonce has to be there.

    Creating Custom ClientCredentials

    As it turns out, WCF doesn't have support for the digest Nonce as part of WS-Security, and so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularly clear, and I ended up using bits and pieces of several of them to arrive at a working solution in the end.

        http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken
        http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee

    The latter forum message is the more useful of the two (the last message on the thread in particular), and it has most of the information required to make this work. But it took some experimentation for me to get this right, so I'll recount the process here maybe a bit more comprehensively.

    In order for this to work, a number of classes have to be overridden:

        ClientCredentials
        ClientCredentialsSecurityTokenManager
        WSSecurityTokenizer

    The idea is that we need to create a custom ClientCredentials class to hold the custom properties so they can be set from the UI or via configuration settings. The TokenManager and Tokenizer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
    Here are the three classes required and their full implementations:

        public class CustomCredentials : ClientCredentials
        {
            public CustomCredentials() { }

            protected CustomCredentials(CustomCredentials cc) : base(cc) { }

            public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager()
            {
                return new CustomSecurityTokenManager(this);
            }

            protected override ClientCredentials CloneCore()
            {
                return new CustomCredentials(this);
            }
        }

        public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager
        {
            public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { }

            public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(
                System.IdentityModel.Selectors.SecurityTokenVersion version)
            {
                return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11);
            }
        }

        public class CustomTokenSerializer : WSSecurityTokenSerializer
        {
            public CustomTokenSerializer(SecurityVersion sv) : base(sv) { }

            protected override void WriteTokenCore(System.Xml.XmlWriter writer,
                                                   System.IdentityModel.Tokens.SecurityToken token)
            {
                UserNameSecurityToken userToken = token as UserNameSecurityToken;

                string tokennamespace = "o";
                DateTime created = DateTime.Now;
                string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ");

                // unique Nonce value - encode with SHA-1 for 'randomness'
                // in theory the nonce could just be the GUID by itself
                string phrase = Guid.NewGuid().ToString();
                var nonce = GetSHA1String(phrase);

                // in this case password is plain text
                // for digest mode password needs to be encoded as:
                // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password))
                // and profile needs to change to
                //string password = GetSHA1String(nonce + createdStr + userToken.Password);

                string password = userToken.Password;

                writer.WriteRaw(string.Format(
                    "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
                    "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
                    "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" +
                    "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" +
                    "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace));
            }

            protected string GetSHA1String(string phrase)
            {
                SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider();
                byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase));
                return Convert.ToBase64String(hashedDataBytes);
            }
        }

    Realistically only the CustomTokenSerializer has any significant code in it. The code there deals with actually serializing the custom credentials using low-level XML semantics by writing output into an XML writer. I can't take credit for this code - most of the code comes from the MSDN forum post mentioned earlier - I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation.

    Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique, and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can be potentially guessed are highly exaggerated, to say the least, IMHO.
    To satisfy even that aspect, though, I added the SHA-1 encryption and binary decoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between, but that really didn't accomplish anything but extra overhead.

    The header output generated from this looks like this:

        <s:Header>
          <o:Security s:mustUnderstand="1"
              xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
            <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1"
                xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
              <o:Username>TheUsername</o:Username>
              <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
              <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce>
              <u:Created>2012-11-23T07:10:04.670Z</u:Created>
            </o:UsernameToken>
          </o:Security>
        </s:Header>

    which is exactly as it should be.

    Password Digest?

    In my case the password is passed in plain text over an SSL connection, so there's no digest required and I was done with the code above. Since I don't have a service handy that requires a password digest, I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest-encoded password, things are a little bit trickier. The password type namespace needs to change to:

        http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest

    and then the password value needs to be encoded. The format for password digest encoding is this:

        Base64(SHA-1(Nonce + Created + Password))

    and it can be handled in the code above with this line (that's commented in the snippet above):

        string password = GetSHA1String(nonce + createdStr + userToken.Password);

    The entire WriteTokenCore method for the digest code looks like this:

        protected override void WriteTokenCore(System.Xml.XmlWriter writer,
                                               System.IdentityModel.Tokens.SecurityToken token)
        {
            UserNameSecurityToken userToken = token as UserNameSecurityToken;

            string tokennamespace = "o";
            DateTime created = DateTime.Now;
            string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ");

            // unique Nonce value - encode with SHA-1 for 'randomness'
            // in theory the nonce could just be the GUID by itself
            string phrase = Guid.NewGuid().ToString();
            var nonce = GetSHA1String(phrase);

            string password = GetSHA1String(nonce + createdStr + userToken.Password);

            writer.WriteRaw(string.Format(
                "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
                "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
                "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" +
                "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" +
                "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace));
        }

    I had no service to connect to to try out digest auth - if you end up needing it and get it to work, please drop a comment…

    How to use the custom Credentials

    The easiest way to use the custom credentials is to create the client in code.
    Here's a factory method I use to create an instance of my service client:

        public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url,
                                                                     string username,
                                                                     string password)
        {
            if (string.IsNullOrEmpty(url))
                url = "https://notrealurl.com:443/cows/services/RealTimeOnline";

            CustomBinding binding = new CustomBinding();

            var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
            security.IncludeTimestamp = false;
            security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256;
            security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

            var encoding = new TextMessageEncodingBindingElement();
            encoding.MessageVersion = MessageVersion.Soap11;

            var transport = new HttpsTransportBindingElement();
            transport.MaxReceivedMessageSize = 20000000; // 20 megs

            binding.Elements.Add(security);
            binding.Elements.Add(encoding);
            binding.Elements.Add(transport);

            RealTimeOnlineClient client = new RealTimeOnlineClient(binding,
                new EndpointAddress(url));

            // to use full client credential with Nonce uncomment this code:
            // it looks like this might not be required - the service seems to work without it
            client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>();
            client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials());

            client.ClientCredentials.UserName.UserName = username;
            client.ClientCredentials.UserName.Password = password;

            return client;
        }

    This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read-only, and this is the way it has to be added.

    Summary

    It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design, as this protocol is considered insecure. While I agree that plain-text passwords are rarely a good idea even if they go over a secured SSL connection as WSE Security does, there are unfortunately quite a few services (mostly Java services, I suspect) that use this protocol. I've run into this twice now, and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. It seems there are about a dozen questions about this on StackOverflow, all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place.

    Again I marvel at WCF and the breadth of support for protocol features it has in a single tool. And even when it can't handle something, there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean, there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team, or at least being very, very intricately involved in the innards of WCF, to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online, and I was able to cobble together a solution from the online content.
    I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell…

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in WCF, Web Services


  • TDD and WCF behavior

    - by Frederic Hautecoeur
    Some weeks ago I wanted to develop a WCF behavior using TDD. I lost some time trying to use mocks. After a while I decided to just use a host and a client. I don't like this approach, but so far I haven't found a good, fast way to unit test a WCF behavior. To implement my solution I had to:

        Create a dummy service definition;
        Create the dummy service implementation;
        Create a host;
        Create a client in my test;
        Create and add the behavior.

    Dummy Service Definition

    This is just a simple service, composed of an interface and a simple implementation. The structure is aimed to be easily customizable for my future needs.

    Using clauses:

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

    The DataContract:

        [DataContract()]
        public class MyMessage
        {
            [DataMember()]
            public string MessageString;
        }

    The request MessageContract:

        [MessageContract()]
        public class RequestMessage
        {
            [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)]
            public string myHeader;

            [MessageBodyMember()]
            public MyMessage myRequest;
        }

    The response MessageContract:

        [MessageContract()]
        public class ResponseMessage
        {
            [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)]
            public string myHeader;

            [MessageBodyMember()]
            public MyMessage myResponse;
        }

    The ServiceContract:

        [ServiceContract(Name="DummyService", Namespace="http://dummyservice", SessionMode=SessionMode.Allowed)]
        interface IDummyService
        {
            [OperationContract(Action="Perform", IsOneWay=false, ProtectionLevel=System.Net.Security.ProtectionLevel.None)]
            ResponseMessage DoThis(RequestMessage request);
        }

    Dummy Service Implementation

        public class DummyService : IDummyService
        {
            #region IDummyService Members
            public ResponseMessage DoThis(RequestMessage request)
            {
                ResponseMessage response = new ResponseMessage();
                response.myHeader = "Response";
                response.myResponse = new MyMessage();
                response.myResponse.MessageString =
                    string.Format("Header:<{0}> and Request was <{1}>",
                        request.myHeader, request.myRequest.MessageString);
                return response;
            }
            #endregion
        }

    Host Creation

    This is the simplest host implementation, using a named pipe binding. The GetBinding method creates a binding for the host and can be used to create the same binding for the client.

        public static class TestHost
        {
            internal static string hostUri = "net.pipe://localhost/dummy";

            // Create Host method.
            internal static ServiceHost CreateHost()
            {
                ServiceHost host = new ServiceHost(typeof(DummyService));

                // Creating Endpoint
                Uri namedPipeAddress = new Uri(hostUri);
                host.AddServiceEndpoint(typeof(IDummyService), GetBinding(), namedPipeAddress);

                return host;
            }

            // Binding Creation method.
            internal static Binding GetBinding()
            {
                NamedPipeTransportBindingElement namedPipeTransport = new NamedPipeTransportBindingElement();
                TextMessageEncodingBindingElement textEncoding = new TextMessageEncodingBindingElement();

                return new CustomBinding(textEncoding, namedPipeTransport);
            }

            // Close Method.
            internal static void Close(ServiceHost host)
            {
                if (null != host)
                {
                    host.Close();
                    host = null;
                }
            }
        }

    Checking the service

    A simple test to check the plumbing:
        [TestMethod]
        public void TestService()
        {
            using (ServiceHost host = TestHost.CreateHost())
            {
                host.Open();

                using (ChannelFactory<IDummyService> channel =
                    new ChannelFactory<IDummyService>(TestHost.GetBinding(),
                        new EndpointAddress(TestHost.hostUri)))
                {
                    IDummyService svc = channel.CreateChannel();
                    try
                    {
                        RequestMessage request = new RequestMessage();
                        request.myHeader = Guid.NewGuid().ToString();
                        request.myRequest = new MyMessage();
                        request.myRequest.MessageString = "I want some beer.";

                        ResponseMessage response = svc.DoThis(request);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail(ex.Message);
                    }
                }
                host.Close();
            }
        }

    Running the test should show that the client and the host are running fine. So far so good.

    Adding the Behavior

    Add a reference to the behavior project and add the using entry in the test class. We just need to add the behavior to the service host:

        [TestMethod]
        public void TestService()
        {
            using (ServiceHost host = TestHost.CreateHost())
            {
                host.Description.Behaviors.Add(new MyBehavior());
                host.Open();
                …

    If you set a breakpoint in your behavior and run the test in debug mode, you will hit the breakpoint. In this case I used a ServiceBehavior. To add an endpoint behavior, you have to add it to the endpoints:

        host.Description.Endpoints[0].Behaviors.Add(new MyEndpointBehavior())

    To add a contract or an operation behavior, a custom attribute should work on the service contract definition. I haven't tried that yet.

    All the code provided in this blog and in the following files is for sample use.

    Improvements

    I don't like to instantiate a client and a service to test my behaviors, but so far I have not found an easy way around it. Today I pass a type of endpoint to the host creator and it creates the right binding type. This allows me to easily switch between bindings at will. I have used the same approach to test MEX endpoints; another post should come later for this. Enjoy!


  • Shell script issue: cron job script to restart MySQL server when it stops accidentally

    - by Straw Hat
    I have this script set up as a cron job so it can check whether the MySQL service is running and, if not, restart it:

        #!/bin/bash
        service mysql status | grep 'mysql start/running' > /dev/null 2>&1
        if [ $? != 0 ]
        then
            sudo service mysql restart
        fi

    I set up the cron job with:

        sudo crontab -e

    and then added:

        */1 * * * * /home/ubuntu/mysql-check.sh

    The problem is that it restarts MySQL on every cron job execution, even if the server is already running. What correction does the script need to fix that?


  • KnownType Not sufficient for Inclusion

    - by Kate at LittleCollie
    Why isn't the use of the KnownType attribute in C# sufficient for inclusion of a DLL? Working with Visual Studio 2012, with TFS responsible for builds, I am on a project in which a service required use of this attribute, as in the following:

        using Project.That.Contains.RequiredClassName;

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, Namespace="SomeNamespace")]
        [KnownType(typeof(RequiredClassName))]
        public class Service : IService
        {
        }

    But to get the required DLL to be included in the bin output, and therefore in the installer from our production build, I had to add the following to the constructor for Service:

        public Service()
        {
            // Exists only to force inclusion
            var ignore = new RequiredClassName();
        }

    So, given that the project that contains RequiredClassName is itself referenced by the project that contains Service, why isn't the use of the KnownType attribute sufficient for inclusion of the DLL in the output?


  • Windows Workflow Foundation (WF) and things I wish were more intuitive

    - by pjohnson
    I've started using Windows Workflow Foundation, and so far have run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow.

    Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component" style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and no code separation for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good.

    Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different.

    Designer Properties - This is about the designer, and not specific to WF, but nevertheless something that's hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over the values. But it doesn't do the same to find out what a control's type is. So maybe if I named all my activities "x1" and "x2" instead of helpful self-documenting names like "listenForStatusUpdate", then I could easily see enough of the type to determine what it is, but with any names longer than those, all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, like the debugger quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when you're done to see what you were doing. Really?

    WF Designer - This is about the designer, and I believe is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click on the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one way where WF didn't get the same attention WPF got during .NET Fx 3.0 development.

    Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.).

    ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances), one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity.
    The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than should be necessary; why not have activities just wire to a service directly?). And you need to create a custom EventArgs class that inherits from ExternalDataEventArgs; you can't create an ExternalDataEventArgs event handler directly, even if you don't need to add any more information to the event args. Despite that, ExternalDataEventArgs is not marked as an abstract class, and there is no compiler error, warning, or any other indication that you're doing something wrong; you only find out when you run it, it always times out, and you get to check every place mentioned here to see why. Your interface and service need an event that consumes your custom EventArgs class, and a method to fire that event. You need to call that method from somewhere. Then you get to hope that you did everything just right, or that you can step through code in the debugger before your Delay timeout expires. Yes, it's as much fun as it sounds (a minimal sketch of this wiring follows below).

    TransactionScopeActivity - I had the bright idea of putting one in as a placeholder, then filling in the database updates later. That caused this error:

        The workflow hosting environment does not have a persistence service as required by an operation on the workflow instance "[GUID]".

    ...which is about as helpful as "Object reference not set to an instance of an object" and even more fun to debug. Google led me to this Microsoft Forums hit, and from there I figured out it didn't like that the activity had no children. Again, a Validator on TransactionScopeActivity would have pointed this out to me at design time, rather than handing me a nearly useless error at runtime. Easily enough, I disabled the activity and that fixed it.

    I still see huge potential in my work where WF could make things easier and more flexible, but there are some seriously rough edges at the moment. Maybe I'm just spoiled by how much easier and more intuitive development elsewhere in the .NET Framework is.
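    For readers hitting the same wall, here is a minimal sketch of that wiring (names are hypothetical; it follows the WF 3.x external data exchange pattern described above, and the service instance is assumed to be registered with the runtime's ExternalDataExchangeService):

        using System;
        using System.Workflow.Activities;

        // Must derive from ExternalDataEventArgs, even with no extra payload.
        [Serializable]
        public class StatusUpdateEventArgs : ExternalDataEventArgs
        {
            public StatusUpdateEventArgs(Guid instanceId) : base(instanceId) { }
        }

        // The "service" interface the workflow listens on.
        [ExternalDataExchange]
        public interface IStatusUpdateService
        {
            event EventHandler<StatusUpdateEventArgs> StatusUpdated;
            void RaiseStatusUpdated(Guid workflowInstanceId);
        }

        public class StatusUpdateService : IStatusUpdateService
        {
            public event EventHandler<StatusUpdateEventArgs> StatusUpdated;

            // Host code calls this; the HandleExternalEventActivity bound to
            // IStatusUpdateService.StatusUpdated receives it inside the workflow.
            public void RaiseStatusUpdated(Guid workflowInstanceId)
            {
                var handler = StatusUpdated;
                if (handler != null)
                    handler(this, new StatusUpdateEventArgs(workflowInstanceId));
            }
        }

    The Guid passed into ExternalDataEventArgs is what routes the event to the right workflow instance, which is exactly the piece that silently fails (and times out) when any of the steps above are missed.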

    Read the article

  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption

News Summary

IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk.

News Facts

Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights.

Automation for Broader Cloud Services

Oracle Enterprise Manager 12c Release 4 allows for rapid enterprise-wide adoption of database, middleware, and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features "push button"-style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance, along with greater operational control through a flexible and extensible showback mechanism.

Enhanced Database Management

A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades, with the ability to roll back changes, enables faster adoption of Oracle Database 12c.

Expanded Fusion Middleware Management

A new consolidated view of Oracle Fusion Middleware 12c deployments, with a guided management capability, lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines, accelerating DevOps resolution of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease.

Superior Enterprise-Grade Management

Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access-control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. Support for the latest industry-standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. "Smart monitoring" adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision.

Supporting Quotes

"Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle's Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product," said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. "We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments."

"IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years," said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. "These organizations plan to deploy a wide range of workloads into cloud environments, including mission-critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom-configured for each application, but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly resilient cloud services for databases and applications."

"Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this 'zero-to-cloud - accelerated.' These enhancements help our customers to expedite their adoption of cloud computing and prepare them for the next generation of self-service IT," said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle.

Supporting Resources

Oracle Enterprise Manager 12c
Video: Cerner Delivers High Performance Private Cloud
Video: BIAS Achieves Outstanding Results with Private Cloud
Press Release

Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter

Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • open-sshd service without PAM support!! How can I add PAM support to sshd? Ubuntu

    - by marc.riera
Hi, I'm using AD as my user account server, with LDAP. Most of the servers run with UsePAM yes, except this one, whose sshd lacks PAM support:

root@linserv9:~# ldd /usr/sbin/sshd
linux-vdso.so.1 => (0x00007fff621fe000)
libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000)
libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000)
libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000)
libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000)
libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000)
libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000)
/lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000)

Note there is no libpam in that list. I have these packages installed:

root@linserv9:~# dpkg -l | grep -E 'pam|ssh'
ii denyhosts 2.6-2.1 an utility to help sys admins thwart ssh hac
ii libpam-modules 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules for PAM
ii libpam-runtime 0.99.7.1-5ubuntu6.1 Runtime support for the PAM library
ii libpam-ssh 1.91.0-9.2 enable SSO behavior for ssh and pam
ii libpam0g 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules library
ii libpam0g-dev 0.99.7.1-5ubuntu6.1 Development files for PAM
ii openssh-blacklist 0.1-1ubuntu0.8.04.1 list of blacklisted OpenSSH RSA and DSA keys
ii openssh-client 1:4.7p1-8ubuntu1.2 secure shell client, an rlogin/rsh/rcp repla
ii openssh-server 1:4.7p1-8ubuntu1.2 secure shell server, an rshd replacement
ii quest-openssh 5.2p1_q13-1 Secure shell

What am I doing wrong? Thanks.

Edit:

root@linserv9:~# cat /etc/pam.d/sshd
# PAM configuration for the Secure Shell service

# Read environment variables from /etc/environment and
# /etc/security/pam_env.conf.
auth required pam_env.so # [1]
# In Debian 4.0 (etch), locale-related environment variables were moved to
# /etc/default/locale, so read that as well.
auth required pam_env.so envfile=/etc/default/locale

# Standard Un*x authentication.
@include common-auth

# Disallow non-root logins when /etc/nologin exists.
account required pam_nologin.so

# Uncomment and edit /etc/security/access.conf if you need to set complex
# access limits that are hard to express in sshd_config.
# account required pam_access.so

# Standard Un*x authorization.
@include common-account

# Standard Un*x session setup and teardown.
@include common-session

# Print the message of the day upon successful login.
session optional pam_motd.so # [1]

# Print the status of the user's mailbox upon successful login.
session optional pam_mail.so standard noenv # [1]

# Set up user limits from /etc/security/limits.conf.
session required pam_limits.so

# Set up SELinux capabilities (need modified pam)
# session required pam_selinux.so multiple

# Standard Un*x password updating.
@include common-password

    Read the article

  • Access Control Management Tool ACM.exe

    - by kaleidoscope
The Access Control Management Tool (Acm.exe) is a command-line tool you can use to perform management operations (CREATE, UPDATE, GET, GET ALL, and DELETE) on the AppFabric Access Control entities (scopes, issuers, token policies, and rules).

Basic Syntax

The command line for Acm.exe follows the basic verb-noun pattern. For example:

acm.exe <command> <resource> [-option:<option value>]

The tool automatically generates random keys, which helps ensure that they can't easily be guessed by an attacker. Note that Acm.exe is a thin wrapper around a REST web service (the Access Control management service). That makes its commands easy to remember: they are the typical resource-management commands for a REST service:

· Get(All)
· Create
· Update
· Delete

The Acm.exe.config file can be used to configure the host, service, and management key for a service namespace.
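To make the verb-noun pattern concrete, here is a hedged sketch of a couple of invocations. The option names shown (-host, -service, -mgmtkey, -name, -autogeneratekey) and all values are assumptions for illustration only; check the tool's built-in help for the exact switches your SDK version accepts:

rem List all token policies in a namespace. If host, service, and key are
rem set in Acm.exe.config (see above), these options can likely be omitted.
acm.exe getall tokenpolicy -host:accesscontrol.windows.net -service:myNamespace -mgmtkey:<management key>

rem Create a token policy, letting the tool generate a random signing key.
acm.exe create tokenpolicy -name:myTokenPolicy -autogeneratekey

Geeta, G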

    Read the article

  • State of Texas delivers Private Cloud Services powered by Oracle Technology

    - by Anand Akela
The State of Texas has moved to a private cloud infrastructure and is delivering Infrastructure as a Service, Database as a Service, and other Platform as a Service offerings to its 28 state agencies. Todd Kimbriel, Director of the eGovernment Division at the State of Texas, attended Oracle OpenWorld and talked with Oracle's John Foley about their private cloud services offering. Later, Todd participated in the keynote panel of the Database as a Service Online Forum along with IDC analyst Carl Olofson, Oracle SVP Juan Loaiza, and a couple of other Oracle customers. He discussed the IT challenges of government organizations like the State of Texas and the benefits of transitioning to a private cloud, including Database as a Service.

    Read the article
