Search Results

Search found 379 results on 16 pages for 'endpoints'.


  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite Healthcare came up with a new way of monitoring: the user can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to refresh automatically at set intervals.

    A dashboard shows the following information:
    - Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    - Messages Sent: The number of messages sent by the endpoint in the specified time period.
    - Messages Received: The number of messages received by the endpoint in the specified time period.
    - Errors: The number of messages with errors for the endpoint in the given time period.
    - Last Sent: The date and time the last message was sent from the endpoint.
    - Last Received: The date and time the last message was received from the endpoint.
    - Last Error: The date and time of the last error for the endpoint.

    The detailed view of a specific endpoint also shows:
    - The document type.
    - The number of messages received per second.
    - The total number of messages processed in the specified time period.
    - The average size of each message.

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node.js application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and set all files to "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as the "azure" module, which can be installed through NPM. Once it is downloaded and installed, we need to include it in our worker role project and set it to "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this:
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function to copy the records from WASD to the table service:
    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure that we have an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" using "createTableService", specifying my storage account name and key.

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all the records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and then creating the table through the table client I created previously.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which is performed when the table has been deleted or the operation has failed. Underneath, the azure module performs the table deletion asynchronously in a POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we put the table creation code after the deletion code, the two would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I sent the response to the browser. Can you find the bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise, send the error message to the browser. To do so I need to import another module named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code of inserting table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform on each item in the array. And the third argument is invoked when all items have been processed or any error has occurred. Here we can send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally, and now we can see the response is sent after all entities have been inserted. Querying entities against the table service is simple as well: just use the "queryEntity" method from the table service client, providing the partition key and row key. We can also provide more complex query criteria, for example the code here. In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response.

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud, we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will be able to retrieve the proper values. For example, we can add role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application.

    In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings and endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    This way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we test the application right now, we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment, the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role class and Node.js was started inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, containing an element named EntryPoint which specifies the Node.js command line (see the sketch below). With that, the Node.js application has the connection to the service runtime and is able to read the configuration. Starting Node.js in the local emulator, we can see it retrieved the connection string and storage account for local.
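    A minimal sketch of what that CSDEF change might look like, following the ServiceDefinition schema of that SDK generation; the role name and the exact command line are assumptions based on the post:

        <WorkerRole name="WorkerRole1">
          <Runtime>
            <EntryPoint>
              <!-- Makes node.exe (not the worker role class) the role's entry
                   point, so the service runtime's named pipe is opened for it -->
              <ProgramEntryPoint commandLine="node.exe .\index.js" setReadyOnProcessStart="true" />
            </EntryPoint>
          </Runtime>
          <!-- endpoints, configuration settings, etc. as before -->
        </WorkerRole>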
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configuration.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

    I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js.

    Node.js is a very young network application platform. But because it is simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • TDD and WCF behavior

    - by Frederic Hautecoeur
    Some weeks ago I wanted to develop a WCF behavior using TDD. I lost some time trying to use mocks. After a while I decided to just use a host and a client. I don't like this approach, but so far I haven't found a good and fast way to unit test a WCF behavior. To implement my solution I had to:
    - Create a dummy service definition;
    - Create the dummy service implementation;
    - Create a host;
    - Create a client in my test;
    - Create and add the behavior.

    Dummy Service Definition

    This is just a simple service, composed of an interface and a simple implementation. The structure is aimed to be easily customizable for my future needs.

    The using clauses:

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

    The DataContract:

        [DataContract()]
        public class MyMessage
        {
            [DataMember()]
            public string MessageString;
        }

    The request MessageContract:

        [MessageContract()]
        public class RequestMessage
        {
            [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)]
            public string myHeader;

            [MessageBodyMember()]
            public MyMessage myRequest;
        }

    The response MessageContract:

        [MessageContract()]
        public class ResponseMessage
        {
            [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)]
            public string myHeader;

            [MessageBodyMember()]
            public MyMessage myResponse;
        }

    The ServiceContract:

        [ServiceContract(Name = "DummyService", Namespace = "http://dummyservice", SessionMode = SessionMode.Allowed)]
        interface IDummyService
        {
            [OperationContract(Action = "Perform", IsOneWay = false, ProtectionLevel = System.Net.Security.ProtectionLevel.None)]
            ResponseMessage DoThis(RequestMessage request);
        }

    Dummy Service Implementation

        public class DummyService : IDummyService
        {
            #region IDummyService Members
            public ResponseMessage DoThis(RequestMessage request)
            {
                ResponseMessage response = new ResponseMessage();
                response.myHeader = "Response";
                response.myResponse = new MyMessage();
                response.myResponse.MessageString =
                    string.Format("Header:<{0}> and Request was <{1}>",
                        request.myHeader, request.myRequest.MessageString);
                return response;
            }
            #endregion
        }

    Host Creation

    The simplest host implementation, using a named pipe binding. The GetBinding method creates a binding for the host and can be used to create the same binding for the client.

        public static class TestHost
        {
            internal static string hostUri = "net.pipe://localhost/dummy";

            // Create Host method.
            internal static ServiceHost CreateHost()
            {
                ServiceHost host = new ServiceHost(typeof(DummyService));

                // Creating Endpoint
                Uri namedPipeAddress = new Uri(hostUri);
                host.AddServiceEndpoint(typeof(IDummyService), GetBinding(), namedPipeAddress);

                return host;
            }

            // Binding Creation method.
            internal static Binding GetBinding()
            {
                NamedPipeTransportBindingElement namedPipeTransport = new NamedPipeTransportBindingElement();
                TextMessageEncodingBindingElement textEncoding = new TextMessageEncodingBindingElement();

                return new CustomBinding(textEncoding, namedPipeTransport);
            }

            // Close Method.
            internal static void Close(ServiceHost host)
            {
                if (null != host)
                {
                    host.Close();
                    host = null;
                }
            }
        }

    Checking the service

    A simple test to check the plumbing:
        [TestMethod]
        public void TestService()
        {
            using (ServiceHost host = TestHost.CreateHost())
            {
                host.Open();

                using (ChannelFactory<IDummyService> channel =
                    new ChannelFactory<IDummyService>(TestHost.GetBinding(),
                        new EndpointAddress(TestHost.hostUri)))
                {
                    IDummyService svc = channel.CreateChannel();
                    try
                    {
                        RequestMessage request = new RequestMessage();
                        request.myHeader = Guid.NewGuid().ToString();
                        request.myRequest = new MyMessage();
                        request.myRequest.MessageString = "I want some beer.";

                        ResponseMessage response = svc.DoThis(request);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail(ex.Message);
                    }
                }
                host.Close();
            }
        }

    Running the test should show that the client and the host are running fine. So far so good.

    Adding the Behavior

    Add a reference to the behavior project and add the using entry in the test class. We just need to add the behavior to the service host:

        [TestMethod]
        public void TestService()
        {
            using (ServiceHost host = TestHost.CreateHost())
            {
                host.Description.Behaviors.Add(new MyBehavior());
                host.Open();
                // ...

    If you set a breakpoint in your behavior and run the test in debug mode, you will hit the breakpoint. In this case I used a ServiceBehavior. To add an endpoint behavior, you have to add it to the endpoints:

        host.Description.Endpoints[0].Behaviors.Add(new MyEndpointBehavior());

    To add a contract or an operation behavior, a custom attribute should work on the service contract definition. I haven't tried that yet.

    All the code provided in this blog and in the following files is for sample use.

    Improvements

    I don't like having to instantiate a client and a service to test my behaviors, but so far I have not found an easier way to do it. Today I am passing a type of endpoint to the host creator and it creates the right binding type. This allows me to easily switch between bindings at will. I have used the same approach to test MEX endpoints; another post should come later on that. Enjoy!

    Read the article

  • How to support both HTTP and HTTPS channels in Flex/BlazeDS?

    - by digitalsanctum
    I've been trying to find the right configuration for supporting both HTTP and HTTPS requests in a Flex app. I've read all the docs and they allude to doing something like the following:

        <default-channels>
            <channel ref="my-secure-amf">
                <serialization>
                    <log-property-errors>true</log-property-errors>
                </serialization>
            </channel>
            <channel ref="my-amf">
                <serialization>
                    <log-property-errors>true</log-property-errors>
                </serialization>
            </channel>
        </default-channels>

    This works great when hitting the app via HTTPS, but I get intermittent communication failures when hitting the same app via HTTP. Here's an abbreviated services-config.xml:

        <channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
            <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amf" class="flex.messaging.endpoints.AMFEndpoint"/>
            <properties>
                <!-- HTTPS requests don't work on IE when pragma "no-cache" headers are set,
                     so you need to set the add-no-cache-headers property to false -->
                <add-no-cache-headers>false</add-no-cache-headers>
                <!-- Use to limit the client channel's connect attempt to the specified time interval. -->
                <connect-timeout-seconds>10</connect-timeout-seconds>
            </properties>
        </channel-definition>

        <channel-definition id="my-secure-amf" class="mx.messaging.channels.SecureAMFChannel">
            <!--<endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.SecureAMFEndpoint"/>-->
            <endpoint url="https://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure" class="flex.messaging.endpoints.AMFEndpoint"/>
            <properties>
                <add-no-cache-headers>false</add-no-cache-headers>
                <connect-timeout-seconds>10</connect-timeout-seconds>
            </properties>
        </channel-definition>

    I'm running Tomcat 5.5.17 and Java 5. The BlazeDS docs say this is the best practice. Is there a better way? With this config there seem to be 2-3 retries associated with each channel defined in the default-channels element, so it always takes ~20s before the my-amf channel connects via an HTTP request. Is there a way to override the 2-3 retries to, say, 1 retry for each channel? Thanks in advance for answers.

    Read the article

  • ISA Web Farm and WCF service hosted in a Windows Service with basicHttpBinding

    - by Ryan Pedersen
    I have created a WCF service that needs to be hosted in a Windows Service because it participates in a P2P mesh (NetPeerTcpBinding). When I tried to host the WCF service with NetPeerTcpBinding endpoints in the IIS service container, the service wouldn't run; it turns out that the P2P binding doesn't work in IIS. I have exposed an HTTP endpoint from the WCF service hosted in the Windows Service container, and I want to know if there is a way to create an ISA web farm that will route traffic to HTTP endpoints on two machines, each running the same WCF service in a Windows Service container.

    Read the article

  • How to configure MessageEndpointMapping by namespace in NServiceBus

    - by SteveBering
    I am trying to configure the message endpoint mappings in my NServiceBus configuration so that messages from different namespaces are sent to different endpoints. As such, I have configured the following in my web.config: However, when my application starts, I receive the following exception:

        Spring.Objects.PropertyAccessExceptionsException: PropertyAccessExceptionsException (1 errors); nested PropertyAccessExceptions are: [Spring.Core.TypeMismatchException: Cannot convert property value of type [System.Collections.Hashtable] to required type [System.Collections.IDictionary] for property 'MessageOwners'., Inner Exception: System.ArgumentException: Problem loading message assembly: Company.Messages.Payments --- System.IO.FileNotFoundException: Could not load file or assembly 'Company.Messages.Payments' or one of its dependencies. The system cannot find the file specified. File name: 'Company.Messages.Payments'

    What I find interesting is that it seems to have found Company.Messages.Accounts but failed on the second configured line. I thought that maybe it didn't like having them all go to the same endpoint, but changing the configuration to have them go to different endpoints didn't change the error message. What am I doing wrong? Is it not possible to segment messages by namespace (all I have seen is by type and by assembly)? Thanks, Steve
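    For reference, a mapping of the shape described above would presumably look like the sketch below (queue names are hypothetical, and the exact attribute set varies by NServiceBus version). Note that NServiceBus of this era interprets the Messages value as an assembly name (or a fully qualified type name), not a namespace, which would explain why "Company.Messages.Payments" is probed as an assembly and fails to load:

        <UnicastBusConfig>
          <MessageEndpointMappings>
            <!-- Each Messages value is resolved as an assembly or type, not a namespace -->
            <add Messages="Company.Messages.Accounts" Endpoint="accounts_queue" />
            <add Messages="Company.Messages.Payments" Endpoint="payments_queue" />
          </MessageEndpointMappings>
        </UnicastBusConfig>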

    Read the article

  • WCF 4: Fileless Activation Fails On XP (IIS 5) that has an SSL port enabled.

    - by Richard Collette
    I have a service being hosted in IIS on XP via fileless activation. The service starts fine when there is no SSL port enabled for IIS, but when the SSL port is enabled, I get the error message:

        System.ServiceModel.ServiceActivationException: The service '/SkillsPrototype.Web/services/Linkage.svc' cannot be activated due to an exception during compilation. The exception message is: A binding instance has already been associated to listen URI 'http://rcollet.hsb-corp.hsb.com/SkillsPrototype.Web/Services/Linkage.svc'. If two endpoints want to share the same ListenUri, they must also share the same binding object instance. The two conflicting endpoints were either specified in AddServiceEndpoint() calls, in a config file, or a combination of AddServiceEndpoint() and config. . ---> System.InvalidOperationException: A binding instance has already been associated to listen URI 'http://rcollet.hsb-corp.hsb.com/SkillsPrototype.Web/Services/Linkage.svc'. If two endpoints want to share the same ListenUri, they must also share the same binding object instance. The two conflicting endpoints were either specified in AddServiceEndpoint() calls, in a config file, or a combination of AddServiceEndpoint() and config.

    My service model configuration is:

        <system.serviceModel>
            <diagnostics wmiProviderEnabled="true">
                <messageLogging logEntireMessage="true" logMalformedMessages="true"
                                logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true"
                                maxMessagesToLog="3000"/>
            </diagnostics>
            <standardEndpoints>
                <webHttpEndpoint>
                    <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
                </webHttpEndpoint>
            </standardEndpoints>
            <behaviors>
                <serviceBehaviors>
                    <behavior>
                        <serviceMetadata httpGetEnabled="true"/>
                        <serviceDebug includeExceptionDetailInFaults="true" />
                    </behavior>
                </serviceBehaviors>
            </behaviors>
            <bindings>
                <webHttpBinding>
                    <binding>
                        <security mode="None">
                            <transport clientCredentialType="None"/>
                        </security>
                    </binding>
                </webHttpBinding>
            </bindings>
            <protocolMapping>
            </protocolMapping>
            <services>
            </services>
            <serviceHostingEnvironment multipleSiteBindingsEnabled="false">
                <serviceActivations>
                    <clear/>
                    <add factory="System.ServiceModel.Activation.WebScriptServiceHostFactory"
                         service="SkillsPrototype.ServiceModel.Linkage"
                         relativeAddress="~/Services/Linkage.svc"/>
                </serviceActivations>
            </serviceHostingEnvironment>
        </system.serviceModel>

    When you look in the svclog file, two base addresses are returned when SSL is enabled: one for http and one for https. I suspect that this is part of the issue, but I am not sure how to resolve it.
        <E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
            <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
                <EventID>524333</EventID>
                <Type>3</Type>
                <SubType Name="Information">0</SubType>
                <Level>8</Level>
                <TimeCreated SystemTime="2010-06-16T17:40:55.8168605Z" />
                <Source Name="System.ServiceModel" />
                <Correlation ActivityID="{95927f9a-fa90-46f4-af8b-721322a87aaa}" />
                <Execution ProcessName="aspnet_wp" ProcessID="1888" ThreadID="5" />
                <Channel/>
                <Computer>RCOLLET</Computer>
            </System>
            <ApplicationData>
                <TraceData>
                    <DataItem>
                        <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Information">
                            <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.ServiceHostBaseAddresses.aspx</TraceIdentifier>
                            <Description>ServiceHost base addresses.</Description>
                            <AppDomain>/LM/w3svc/1/ROOT/SkillsPrototype.Web-1-129211836532542949</AppDomain>
                            <Source>System.ServiceModel.WebScriptServiceHost/49153359</Source>
                            <ExtendedData xmlns="http://schemas.microsoft.com/2006/08/ServiceModel/CollectionTraceRecord">
                                <BaseAddresses>
                                    <Address>http://rcollet.hsb-corp.hsb.com/SkillsPrototype.Web/Services/Linkage.svc</Address>
                                    <Address>https://rcollet.hsb-corp.hsb.com/SkillsPrototype.Web/Services/Linkage.svc</Address>
                                </BaseAddresses>
                            </ExtendedData>
                        </TraceRecord>
                    </DataItem>
                </TraceData>
            </ApplicationData>
        </E2ETraceEvent>

    I can't post the full service log due to character limits on the post.

    Read the article

  • Compressing as GZip WCF requests (SOAP and REST)

    - by Joannes Vermorel
    I have a .NET 3.5 web app hosted on Windows Azure that exposes several WCF endpoints (both SOAP and REST). The endpoints typically receive 100x more data than they serve (a lot of data is uploaded, far less is downloaded). Hence, I am willing to take advantage of HTTP GZip compression, not from the server viewpoint but from the client viewpoint: sending compressed requests (returning compressed responses would be fine, but won't bring much gain anyway). Here is the little C# snippet used on the client side to activate WCF:

        var binding = new BasicHttpBinding();
        var address = new EndpointAddress(endPoint);
        _factory = new ChannelFactory<IMyApi>(binding, address);
        _channel = _factory.CreateChannel();

    Any idea how to adjust the behavior so that compressed HTTP requests can be made?
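    WCF of that era has no built-in request compression, so the usual route is a custom message encoder on both ends. As a minimal sketch, assuming the GZipMessageEncodingBindingElement from Microsoft's WCF compression-encoder SDK sample (not part of the framework itself; the constructor shape may differ between sample versions), the client wiring might look like:

        // Sketch: swap BasicHttpBinding for a CustomBinding whose encoder gzips
        // the text-encoded SOAP before it reaches the HTTP transport. The
        // GZipMessageEncodingBindingElement type comes from the WCF SDK sample.
        var encoder = new GZipMessageEncodingBindingElement(
            new TextMessageEncodingBindingElement());
        var transport = new HttpTransportBindingElement();
        var binding = new CustomBinding(encoder, transport);

        _factory = new ChannelFactory<IMyApi>(binding, new EndpointAddress(endPoint));
        _channel = _factory.CreateChannel();

    The service side would need the matching encoder in its binding, since the compressed messages are no longer plain text/XML on the wire.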

    Read the article

  • How to support both DataContractSerializer and XMLSerializer for the same contract on the same host?

    - by Sly
    In our production environment, our WCF services are serialized with the XmlSerializer; to do so, our service interfaces have the [XmlSerializerFormat] attribute. Now we need to change to the DataContractSerializer, but we must stay compatible with our existing clients. Therefore, we have to expose each service with both serializers. We have one constraint: we don't want to define each contract interface twice. We have 50 service contract interfaces and we don't want to end up with:

        IIncidentServiceXml   IIncidentServiceDCS
        IEmployeeServiceXml   IEmployeeServiceDCS
        IContractServiceXml   IContractServiceDCS

    How can we do that? This is a description of what we have tried so far, but I'm willing to try completely different approaches: we tried to create all the endpoints by code in our own ServiceHostFactory class. Basically, we create each endpoint twice. The problem is that at runtime WCF complains that the service has two endpoints with the same contract name but with different ContractDescription instances. The message says we should use different contract names or reuse the same ContractDescription instance.

    Read the article

  • Consuming WCF REST service in multiple ways (.Net, plain XML)

    - by Jan Jongboom
    I have become quite frustrated with WCF, as I just want this simple scenario:
    1. Provide a web service using REST, with a UriTemplate like /method/{param1}/{param2}/ and a third parameter that is sent to the service as XML in the POST data.
    2. Use just plain XML, with no SOAP overhead.
    3. Be able to generate a proxy in Visual Studio so a .NET client can easily use the service (I don't care about SOAP overhead here).

    I can get 1 and 2 working, but there is no way I can get 3 to work. I tried adding both webHttpBinding and basicHttpBinding endpoints in my service's config; I fooled around with the <services/> tag, but I just can't get this working. What am I missing here?! N.B. I checked out this article: http://stackoverflow.com/questions/186631/rest-soap-endpoints-for-a-wcf-service but nothing described there seems to work here?!
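    For reference, the standard shape for this is one service with two endpoints over the same contract: a webHttpBinding endpoint carrying the webHttp behavior for the REST/plain-XML side, plus a basicHttpBinding endpoint with metadata so Visual Studio can generate a proxy. A minimal sketch with hypothetical service and contract names:

        <system.serviceModel>
          <services>
            <service name="MyApp.MyService">
              <!-- REST, plain XML; UriTemplates on the contract's operations apply here -->
              <endpoint address="rest" binding="webHttpBinding"
                        behaviorConfiguration="restBehavior" contract="MyApp.IMyService" />
              <!-- SOAP, consumable by "Add Service Reference" in Visual Studio -->
              <endpoint address="soap" binding="basicHttpBinding" contract="MyApp.IMyService" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <behaviors>
            <endpointBehaviors>
              <behavior name="restBehavior">
                <webHttp />
              </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
              <behavior>
                <serviceMetadata httpGetEnabled="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>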

    Read the article

  • Internet Protocol Suite: Transmission Control Protocol (TCP) vs. User Datagram Protocol (UDP)

    How do we communicate over the Internet? How is data transferred from one machine to another? Today, these activities use one of two transport protocols. The Internet protocol suite's transport layer consists of the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols are used to send data between two network endpoints, but they have very distinct ways of transporting data from one endpoint to another.

    If reliability is the primary concern when transferring data between two network endpoints, then TCP is the proper choice. When a device attempts to send data to another endpoint using TCP, it establishes a connection to that endpoint and maintains it until the transmission has completed. The connection ensures the reliability of the transmission because every segment the sender transmits is acknowledged by the receiver and retransmitted if it is lost. The fact that both devices have to maintain the connection and track acknowledgments until the transmission has completed increases the resources needed to perform the transmission. An example of this type of direct communication can be seen when a teacher tells students to do their homework. The teacher is talking directly to the students in order to communicate that the homework needs to be done. Students can then ask questions about the assignment to ensure that they have received the proper instructions.

    UDP is a less resource-intensive approach to sending data between two network endpoints. When a device uses UDP to send data across a network, the data is broken up and repackaged into datagrams carrying the destination address. The sending device then releases the datagrams to the network, but cannot ensure when, or even whether, the receiving device will actually get the data. The sending device depends on other devices on the network to forward the datagrams to the destination device, and it receives no acknowledgment that the transmission arrived. As you can tell, this type of transmission is less resource-intensive because no connection state or acknowledgment tracking is needed, but it should not be used for transmitting data with reliability requirements, because the sending device cannot confirm that the transmission was received. An example of this type of communication can be seen when a teacher tells a student that they would like to speak with their parents. The teacher is relying on the student to complete the transmission to the parents, and the teacher has no guarantee that the student will actually inform the parents about the request.

    Both TCP and UDP are invaluable when sending data across a network, but depending on the situation one protocol may serve better than the other. Before deciding on a protocol, evaluate the requirements for transmission speed, reliability, latency, and overhead in order to choose the best protocol for the situation.
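    To make the contrast concrete, here is a minimal C# sketch of both styles; the host names and ports are placeholders, and error handling is omitted for brevity:

        using System.Net.Sockets;
        using System.Text;

        class ProtocolDemo
        {
            static void Main()
            {
                byte[] payload = Encoding.UTF8.GetBytes("do your homework");

                // TCP: connect first, then write to the established, acknowledged stream.
                using (var tcp = new TcpClient("teacher.example.com", 9000))
                {
                    tcp.GetStream().Write(payload, 0, payload.Length);
                }

                // UDP: no connection; the datagram is handed to the network and may
                // arrive late, out of order, or not at all.
                using (var udp = new UdpClient())
                {
                    udp.Send(payload, payload.Length, "parent.example.com", 9001);
                }
            }
        }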

    Read the article

  • Laser Beam End Points Problems

    - by user36159
    I am building a game in XNA that features colored laser beams in 3D space. The beams are defined as:
    - Segment start position
    - Segment end position
    - Line width

    For rendering, I am using 3 quads:
    - Start point billboard
    - End point billboard
    - Middle section quad, whose forward vector is the slope of the line and whose normal points to the camera

    The problem is that with additive blending, the end points and the middle section overlap, which looks quite jarring. However, I need the endpoints in case the laser is pointing towards the camera! See the blue laser in particular:

    Read the article

  • Service Stack

    - by csharp-source.net
    ServiceStack allows you to build re-usable, SOA-style web services with plain POCO DataContract classes. The same DTOs can be shared with a .NET client application, eliminating the need for any generated code. With no configuration required, web services created are immediately discoverable and callable via the following supported endpoints:
    - REST and XML
    - REST and JSON
    - SOAP 1.1 / 1.2

    Services can run on both Mono and the .NET Framework, and can be hosted in an ASP.NET web application, a Windows Service or a console application.
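    To make "plain POCO DataContract classes" concrete, a service in the early ServiceStack style looked roughly like the sketch below; the IService<T>/Execute shape matches ServiceStack of this era, but the API has changed in later versions, and the class names here are illustrative:

        // Request and response DTOs: plain classes, shareable with any .NET client.
        public class Hello
        {
            public string Name { get; set; }
        }

        public class HelloResponse
        {
            public string Result { get; set; }
        }

        // The service itself: no attributes or configuration needed for the
        // default XML/JSON/SOAP endpoints to light up.
        public class HelloService : IService<Hello>
        {
            public object Execute(Hello request)
            {
                return new HelloResponse { Result = "Hello, " + request.Name };
            }
        }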

    Read the article

  • How to: Add an HTTPS Endpoint to a Windows Azure Cloud Service

    - by kaleidoscope
    The process to add an HTTPS endpoint is a three-step process:
    1. Configure the endpoint.
    2. Upload the certificate to the cloud.
    3. Configure the SSL certificate (and then point the endpoint to that certificate).

    Reference – http://blogs.msdn.com/jnak/archive/2009/12/01/how-to-add-an-https-endpoint-to-a-windows-azure-cloud-service.aspx - Ritesh, D
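    The endpoint and certificate halves of steps 1 and 3 live in the service definition and configuration files. A minimal sketch, where the role name, endpoint name, certificate name and thumbprint are all placeholders:

        <!-- ServiceDefinition.csdef: declare the https endpoint and the certificate slot -->
        <WebRole name="MyWebRole">
          <Endpoints>
            <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
          </Endpoints>
          <Certificates>
            <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
          </Certificates>
        </WebRole>

        <!-- ServiceConfiguration.cscfg: bind the slot to the uploaded certificate -->
        <Role name="MyWebRole">
          <Certificates>
            <Certificate name="SslCert" thumbprint="YOUR_CERT_THUMBPRINT" thumbprintAlgorithm="sha1" />
          </Certificates>
        </Role>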

    Read the article

  • Application to automate Windows software installation in a test lab

    - by Marc
    I have several test environments (Hyper-V) which contain a variety of Windows servers. Each machine periodically needs rolling back to a given snapshot and then re-installing the latest version of our software to test. The software installs are quite complex MSIs with a fair few option screens. I know that the installs can be driven from the command line, passing in parameters to override the wizard options. At the simplest level I suppose I could just write a batch file to kick off each install with the required parameters; however, the values that are passed in do need to change from time to time (and environment to environment), so a tool with a config file and a simple GUI seems like a better idea. I think what makes it slightly more painful is the multiple environments. For example, one environment might contain 4 servers and need a config file with all the server names, service endpoints, etc. Another environment might be a 1-box install with all names and endpoints set to localhost. So, ideally I want to be able to store different setup configurations and use them to run all the required installers with the relevant settings against the relevant machines. Before I go off to write the thing, does anyone know of an existing, simple, free tool that will let me achieve this?
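    For the command-line part, a silent MSI install with wizard options overridden per environment looks something like the line below; the property names are hypothetical, as the real ones come from the MSI's own property table:

        msiexec /i OurProduct.msi /qn /l*v install.log SERVICE_ENDPOINT=http://app01:8080/svc DBSERVER=sql01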

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi. I have about 200 sites, each of which has 2 servers running MS SQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for ... a lot ... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copy them to the secondary server, restore the backup and log with NORECOVERY, create the endpoints and set the partners. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage sql 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love a script that ensures there are matching endpoints (same ports) on both servers, backs up the database and log, copies the backups to the second server, restores the database and log with NORECOVERY, sets the partners on both servers, and somehow confirms that the databases are linked and synchronized. Well, thanks for reading :)
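    For reference, the T-SQL behind the sequence described would look roughly like this per database; server names, paths and the port are placeholders, and the endpoint is created once per server, not per database:

        -- On the principal: back up the database and its log to a share the mirror can read.
        BACKUP DATABASE [MyDb] TO DISK = N'\\mirror\share\MyDb.bak';
        BACKUP LOG [MyDb] TO DISK = N'\\mirror\share\MyDb.trn';

        -- On the mirror: restore both, leaving the database in the restoring state.
        RESTORE DATABASE [MyDb] FROM DISK = N'\\mirror\share\MyDb.bak' WITH NORECOVERY;
        RESTORE LOG [MyDb] FROM DISK = N'\\mirror\share\MyDb.trn' WITH NORECOVERY;

        -- Once per server (both sides, same port): the mirroring endpoint.
        CREATE ENDPOINT [Mirroring]
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- Set partners: on the mirror first (pointing at the principal),
        -- then on the principal (pointing at the mirror).
        ALTER DATABASE [MyDb] SET PARTNER = N'TCP://principal.domain.local:5022';
        ALTER DATABASE [MyDb] SET PARTNER = N'TCP://mirror.domain.local:5022';

    Afterwards, the mirroring state can be checked by querying sys.database_mirroring on either server.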

    Read the article

  • Using Windows as a gateway to the internet

    - by James Wright
    My customer currently blocks outbound RDP and SSH, which means that none of their employees can get access to external Windows and Linux boxes (at the console level). However, a need has recently arisen to give access to an assortment of RDP and SSH endpoints scattered throughout the internet. The endpoint IP addresses are a moving target, and an access list exists to define what those IP addresses are. So now my customer wants a single Windows server that they control as the sole outbound point for RDP/SSH to the internet. Consider it a jump box to the internet. If one of our admins has access to this Windows box, they can log on and, from there, bounce around to RDP/SSH endpoints on the internet. Is a standard Windows 2008 box going to work as a jump box? For example, I seem to recall that Windows 2008 limits the number of users that can log on simultaneously, which means that the jump box may not be accessible if lots of users are on it. Advice as to how to make this work..?

    Read the article

  • How to design a high-level application protocol for metadata syncing between devices and server?

    - by Jaanus
    I am looking for guidance on how to best think about designing a high-level application protocol to sync metadata between end-user devices and a server.

    My goal: the user can interact with the application data on any device, or on the web. The purpose of this protocol is to communicate changes made on one endpoint to other endpoints through the server, and to ensure all devices maintain a consistent picture of the application data. If the user makes changes on one device or on the web, the protocol will push data to the central repository, from where other devices can pull it.

    Some other design thoughts:
    - I call it "metadata syncing" because the payloads will be quite small, in the form of object IDs and small metadata about those IDs. When client endpoints retrieve new metadata over this protocol, they will fetch the actual object data from an external source based on that metadata. Fetching the "real" object data is out of scope; I'm only talking about metadata syncing here.
    - Using HTTP for transport and JSON as the payload container. The question is basically about how to best design the JSON payload schema.
    - I want this to be easy to implement and maintain on the web and across desktop and mobile devices. The best approach feels to be simple timer- or event-based HTTP request/response without any persistent channels. Also, you should not need a PhD to read it, and I want my spec to fit on 2 pages, not 200.
    - Authentication and security are out of scope for this question: assume that the requests are secure and authenticated.
    - The goal is eventual consistency of data on devices; it is not entirely realtime. For example, the user can make changes on one device while being offline. When going online again, the user would perform a "sync" operation to push local changes and retrieve remote changes.

    Having said that, the protocol should support both of these modes of operation:
    - Starting from scratch on a device, it should be able to pull the whole metadata picture.
    - "Sync as you go": when looking at the data on two devices side by side and making changes, it should be easy to push those changes as short individual messages which the other device can receive near-realtime (subject to when it decides to contact the server for a sync).

    As a concrete example, you can think of Dropbox (it is not what I'm working on, but it helps to understand the model): on a range of devices, the user can manage files and folders: move them around, create new ones, remove old ones, etc. In my context the "metadata" would be the file and folder structure, but not the actual file contents. The metadata fields would be something like the file/folder name and the time of modification (all devices should see the same time of modification). Another example is IMAP. I have not read the protocol, but my goals (minus actual message bodies) are the same.

    It feels like there are two grand approaches to how this is done:
    - Transactional messages. Each change in the system is expressed as a delta, and endpoints communicate with those deltas. Example: DVCS changesets.
    - REST: communicating the object graph as a whole or in part, without worrying so much about the individual atomic changes.

    What I would like in the answers:
    - Is there anything important I left out above? Constraints, goals?
    - What is some good background reading on this? (I realize this is what many computer science courses talk about at great length and detail... I am hoping to short-circuit it by looking at some crash course or nuggets.)
    - What are some good examples of such protocols that I could model after, or even use out of the box? (I mention Dropbox and IMAP above... I should probably read the IMAP RFC.)
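    Since the question is about the JSON payload shape, here is a sketch of what a delta-style sync response might look like under the transactional approach; every field name is hypothetical, purely to make the token-plus-deltas idea concrete (a fresh device would send no token and receive the full picture):

        {
          "since": "sync-token-41",
          "changes": [
            { "op": "upsert", "id": "f81c", "kind": "folder",
              "name": "Reports", "modified": "2010-05-12T19:03:22Z" },
            { "op": "delete", "id": "9acd", "modified": "2010-05-12T19:04:01Z" }
          ],
          "next": "sync-token-43"
        }

    The client stores "next" and presents it as "since" on its following sync, which covers both modes: an empty token yields the whole graph, a recent token yields a short delta.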

    Read the article

  • Silverlight Cream for May 13, 2010 -- #861

    - by Dave Campbell
    In this Issue: Sigurd Snørteland, Jeff Prosise, DaveDev, Joe Zhou, Chris Eargle, John Papa(-2-, -3-), and David Anson(-2-).

    Shoutouts: In with the links I've listed below, Sigurd Snørteland also sent a link to this app he's working on, which is actually pretty cool to see: ZuneLight. The code is not yet available. He also has a no-code demo of a Silverlight Media Center. Pieter Voloshyn, Luiz Thadeu, and Jhun Iti have a very nice Silverlight image editor up: Thumba.

    From SilverlightCream.com:
    - WP7 - Silverlight on mobile: Sigurd Snørteland submitted some links for me that have been translated to English from his blog. I hope the pages come out good, because he's got a lot of good stuff on there. This one has a link to a presentation he did, and 4 projects you can load up in the emulator that he's converted to the phone: weather, world clock, coverflow, and solitaire... pretty cool. Thanks for the links, Sigurd!
    - Understanding Page Orientation in Silverlight for Windows Phone: Jeff Prosise has a really nice post up on page orientation in WP7: what it means to your app, how to detect it, and example code for what to do then. Also love a quote by Jeff: "Silverlight for Windows Phone is the hottest thing since color TV".
    - Why you should check out Expression Blend Behaviors when using Silverlight: DaveDev has a post up describing Behaviors and why we should use them, plus tons of external links to resources, blogs, videos... all good stuff.
    - Fiddler inspector for WCF Silverlight Polling Duplex and WCF RIA: Joe Zhou announces and provides a link to a new Fiddler inspector that understands the framing in Polling Duplex and also raw binary XML and binary SOAP.
    - Windows Phone Controls v0.7: Chris Eargle reports the release of version 0.7 of the Windows Phone Controls project on CodePlex; this includes a Pivot control and a Panorama control, both very nicely done.
    - Binding to Silverlight ComboBox and Using SelectedValue, SelectedValuePath and DisplayMemberPath: John Papa responds to a user question and put up a nice post about binding to a ComboBox and then going from the selected item to some other property... code included.
    - No More Boxes! Exploring the PathListBox (Silverlight TV #25): Silverlight TV 25 went up on Tuesday... thought it was going to be Thursday?? Anyway, John Papa and Adam Kinney are discussing the PathListBox and looking at some cool demos thereof.
    - Exposing SOAP, OData, and JSON Endpoints for RIA Services (Silverlight TV 26): Since today IS Thursday, we have a new Silverlight TV, number 26, and John Papa is chatting with Deepesh Mohnani of the WCF RIA Services team about exposing all sorts of endpoints... should be something in there for everybody :)
    - Workaround for a Silverlight data binding bug affecting various scenarios - including DataGrid+ContextMenu: David Anson details the rabbit trail he and others on the team followed in response to a problem reported via Twitter where the binding on a DataGrid seemed off by a row(!)... weird but true, validated, and SL3/4 are bug-for-bug compatible with this too! But David wouldn't leave us there; he also has a workaround.
    - Sharing the code for a simple Silverlight 4 REST-based cloud-oriented file management app for Azure and S3: David Anson had an opportunity to build an app he's wanted to build for a while and shares it with us: Blobstore, a small, lightweight Silverlight 4 application that acts as a basic front-end for the Windows Azure Simple Data Storage and the Amazon Simple Storage Service (S3). And remember, I said he shared the source :)

    Read the article

  • Tyrus 1.8

    - by Pavel Bucek
    Another version of Tyrus, the reference implementation of JSR 356 (Java API for WebSocket), is out! The complete list of fixes and features is below, but let me describe some of the new features in more detail. All information presented here is also available in the Tyrus documentation.

    What's new?

    First: the JSR 356 Maintenance Review Ballot is over, and the change proposed for the 1.1 release was accepted. More details about the changes in the API can be found in this article. The important part is that Tyrus 1.8 implements this API, meaning you can use lambda expressions and some features of Nashorn without the need for any workarounds.

    Almost all other features are related to client-side support, which was significantly improved in this release.

    Firstly, I have to admit that the Tyrus client contained a security issue: SSL hostname verification was not performed when connecting to "wss" endpoints. This was fixed as part of TYRUS-339 and resulted in some changes in the client configuration API. Now you can control whether hostname verification should be performed (SslEngineConfigurator#setHostnameVerificationEnabled(boolean)) or even set your own HostnameVerifier (please use carefully): #setHostnameVerifier(…). A detailed description can be found in the Host verification chapter.

    Another related enhancement is support for the HTTP Basic and Digest authentication schemes. The Tyrus client now enables users to provide credentials, and the underlying implementation takes care of everything else. Our implementation is strictly non-pre-emptive, so the login information is always sent as a response to a 401 HTTP status code. If Basic and Digest are not good enough and there is a need to use a custom scheme or something not yet supported in Tyrus, a custom Authenticator can be registered, and the authentication part of the handshake process will be handled by it. Please see the Client HTTP Authentication chapter in the user guide for more details.

    There are other features, like fine-grained thread pool configuration for the JDK client container, built-in HTTP redirect support, and some reshuffling related to unifying the location of client configuration classes and properties; every property should now be part of the ClientProperties class. All new features are described in the user guide, in the chapter Tyrus proprietary configuration.

    Update – Tyrus 1.8.1

    There was another, slightly late-reported issue related to running in environments with a SecurityManager enabled, so this version fixes that. Other noteworthy fixes are TYRUS-355 and TYRUS-361; the first is about an incorrect thread factory used for the shared container timeout, which resulted in the JVM waiting for that thread and not exiting as it should. The other issue enables relative URIs in the Location header when using the redirect feature.
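
    As a rough illustration of the new client-side configuration (a sketch only: the endpoint class, URI, and credentials are placeholders, and the exact signatures should be checked against the Tyrus 1.8 javadoc):

        import java.net.URI;
        import javax.websocket.ClientEndpoint;
        import org.glassfish.tyrus.client.ClientManager;
        import org.glassfish.tyrus.client.ClientProperties;
        import org.glassfish.tyrus.client.SslContextConfigurator;
        import org.glassfish.tyrus.client.SslEngineConfigurator;
        import org.glassfish.tyrus.client.auth.Credentials;

        public class SecureClientExample {

            @ClientEndpoint
            public static class MyClientEndpoint {
                // message handlers omitted; a bare annotated endpoint suffices to connect
            }

            public static void main(String[] args) throws Exception {
                ClientManager client = ClientManager.createClient();

                // Hostname verification for "wss" connections is now performed by default
                // and is controlled through SslEngineConfigurator.
                SslEngineConfigurator ssl = new SslEngineConfigurator(new SslContextConfigurator());
                ssl.setHostnameVerificationEnabled(true);
                client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, ssl);

                // HTTP Basic/Digest support: credentials are sent only in response
                // to a 401 challenge (strictly non-pre-emptive).
                client.getProperties().put(ClientProperties.CREDENTIALS, new Credentials("user", "password"));

                client.connectToServer(MyClientEndpoint.class, URI.create("wss://example.com/endpoint"));
            }
        }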
    Links: Tyrus homepage | mailing list | JIRA

    Complete list of changes:

    Bug
    - [TYRUS-333] – Multiple endpoints on one client
    - [TYRUS-334] – When connection is closed by a peer, periodic heartbeat pong is not stopped
    - [TYRUS-336] – ReaderBuffer.getNextChars() keeps blocking a server thread after client has closed the session
    - [TYRUS-338] – JDK client SSL filter needs better synchronization during handshake phase
    - [TYRUS-339] – SSL hostname verification is missing
    - [TYRUS-340] – Test PathParamTest are not stable with JDK client
    - [TYRUS-341] – A control frame inside a stream of continuation frames is treated as the part of the stream
    - [TYRUS-343] – ControlFrameInDataStreamTest does not pass on GF
    - [TYRUS-345] – NPE is thrown, when shared container timeout property in JDK client is not set
    - [TYRUS-346] – IllegalStateException is thrown, when using proxy in JDK client
    - [TYRUS-347] – Introduce better synchronization in JDK client thread pool
    - [TYRUS-348] – When a client and server close connection simultaneously, JDK client throws NPE
    - [TYRUS-356] – Tyrus cannot determine the connection port for a wss URL
    - [TYRUS-357] – Exception thrown in MessageHandler#OnMessage is not caught in @OnError method
    - [TYRUS-359] – Client based on Java 7 Asynchronous IO makes application unexitable

    Improvement
    - [TYRUS-328] – JDK 1.7 AIO Client container – threads – (setting threadpool, limits, …)
    - [TYRUS-332] – Consolidate shared client properties into one file.
    - [TYRUS-337] – Create an SSL version of Basic Servlet test

    New Feature
    - [TYRUS-228] – Add client support for HTTP Basic/Digest

    Task
    - [TYRUS-330] – create/run tests/servlet/basic via wss
    - [TYRUS-335] – [clustering] – introduce RemoteSession and expose them via separate method (not include remote sessions in the getOpenSessions())
    - [TYRUS-344] – Introduce Client support for HTTP Redirect

    Read the article

  • How to access WCF RIA service from Windows Service?

    - by Duncan Bayne
    I have a functioning SL4 application; inside the ClientBin directory I have an .svc file that describes my service:

        <%@ ServiceHost Service="MyApp.Services.MyService" Factory="System.ServiceModel.DomainServices.Hosting.DomainServiceHostFactory" %>

    When I browse to http://localhost:52878/ClientBin/MyApp-Services-MyService.svc I see the following:

    "You have created a service. To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax: svcutil.exe http://localhost:52878/ClientBin/MyApp-Services-MyService.svc?wsdl"

    I want to access that service from a Windows Service application. My understanding is that I need to enable SOAP endpoints in order to make this happen. So I add the following to my web.config file:

        <domainServices>
          <endpoints>
            <add name="soap" type="System.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory, System.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </endpoints>
        </domainServices>

    Firstly, IntelliSense complains about the presence of the tag, saying "The element system.ServiceModel has invalid child element domainServices." Secondly, the aforementioned Silverlight application stops working, presumably because this change breaks the underlying web services. Thirdly, it appears that the System.ServiceModel.DomainServices.Hosting assembly doesn't actually contain the SoapXmlEndpointFactory type; if I try to browse to the service after adding the above to web.config, I see:

    "Could not load type 'System.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory' from assembly 'System.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'."

    If I inspect the assembly using Reflector, I see that it contains the DomainServiceEndpointFactory and PoxBinaryEndpointFactory types, but no SoapXmlEndpointFactory. Could someone please let me know what I'm doing wrong? I can't believe it should be this hard to simply consume a WCF RIA service in something other than a Silverlight application!

    Yours,
    Duncan Bayne
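
    For what it's worth, the type-load error itself points at the likely cause: to the best of my knowledge, SoapXmlEndpointFactory does not ship in the core System.ServiceModel.DomainServices.Hosting assembly at all, but in the separate Microsoft.ServiceModel.DomainServices.Hosting assembly installed by the WCF RIA Services Toolkit. Under that assumption, a sketch of the registration would look like the following (verify the exact version and public key token against the toolkit actually installed; the IntelliSense complaint about the domainServices element is typically just a schema-validation gap rather than a runtime problem):

        <configSections>
          <sectionGroup name="system.serviceModel">
            <section name="domainServices"
                     type="System.ServiceModel.DomainServices.Hosting.DomainServicesSection, System.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                     allowDefinition="MachineToApplication" requirePermission="false" />
          </sectionGroup>
        </configSections>

        <system.serviceModel>
          <domainServices>
            <endpoints>
              <!-- Assumption: the SOAP endpoint factory lives in the RIA Services
                   Toolkit assembly, not the core hosting assembly. -->
              <add name="soap"
                   type="Microsoft.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory, Microsoft.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            </endpoints>
          </domainServices>
        </system.serviceModel>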

    Read the article

  • How to achieve "Blendability" when using DataServiceCollection in my ViewModel

    - by andlju
    I'm looking at using OData endpoints in my Silverlight client. Naturally, I'm doing MVVM, and I want the project to be nice and "Blendable" (i.e., I must be able to cleanly use static data instead of the OData endpoints when in design mode).

    Now to the problem. I'd like to use the DataServiceCollection in my ViewModels, since it allows for nice bindable collections without having to worry too much about BeginExecute/EndExecute etc.

    Now, let's look at some code. My Model interface looks like this:

        public interface ITasksModel
        {
            IQueryable<Task> Tasks { get; }
        }

    The OData endpoint implementation of that interface:

        public class TasksModel : ITasksModel
        {
            Uri svcUri = new Uri("http://localhost:2404/Services/TasksDataService.svc");
            TaskModelContainer _container;

            public TasksModel()
            {
                _container = new TaskModelContainer(svcUri);
            }

            public IQueryable<Task> Tasks
            {
                get { return _container.TaskSet; }
            }
        }

    And the "Blendable" design-time implementation:

        public class DesignModeTasksModel : ITasksModel
        {
            private List<Task> _taskCollection = new List<Task>();

            public DesignModeTasksModel()
            {
                _taskCollection.Add(new Task() { Id = 1, Title = "Task 1" });
                _taskCollection.Add(new Task() { Id = 2, Title = "Task 2" });
                _taskCollection.Add(new Task() { Id = 3, Title = "Task 3" });
            }

            public IQueryable<Task> Tasks
            {
                get { return _taskCollection.AsQueryable(); }
            }
        }

    However, when I try to use this last one in my ViewModel constructor:

        public TaskListViewModel(ITasksModel tasksModel)
        {
            _tasksModel = tasksModel;
            _tasks = new DataServiceCollection<Task>();
            _tasks.LoadAsync(_tasksModel.Tasks);
        }

    I get an exception: "Only a typed DataServiceQuery object can be supplied when calling the LoadAsync method on DataServiceCollection."

    First of all, if this is the case, why not type the input parameter of LoadAsync as DataServiceQuery? Second, what is the "proper" way of doing what I'm trying to accomplish?
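
    One plausible way to reconcile the two (a sketch, not the canonical answer: the callback-style interface and all member names are invented, and it assumes the generated TaskSet property is a DataServiceQuery<Task>) is to move the loading behind the model interface, so only the runtime implementation ever hands DataServiceCollection.LoadAsync the typed DataServiceQuery it demands:

        using System;
        using System.Collections.Generic;
        using System.Data.Services.Client;

        // Sketch: the interface hides *how* tasks are loaded.
        public interface ITasksModel
        {
            void LoadTasks(Action<IEnumerable<Task>> onLoaded);
        }

        // Runtime implementation: supplies the typed DataServiceQuery
        // that DataServiceCollection.LoadAsync requires.
        public class TasksModel : ITasksModel
        {
            private readonly TaskModelContainer _container =
                new TaskModelContainer(new Uri("http://localhost:2404/Services/TasksDataService.svc"));

            public void LoadTasks(Action<IEnumerable<Task>> onLoaded)
            {
                var tasks = new DataServiceCollection<Task>();
                tasks.LoadCompleted += (s, e) =>
                {
                    if (e.Error == null)
                        onLoaded(tasks);
                };
                tasks.LoadAsync((DataServiceQuery<Task>)_container.TaskSet);
            }
        }

        // Design-time implementation: static data, no network, Blend-friendly.
        public class DesignModeTasksModel : ITasksModel
        {
            public void LoadTasks(Action<IEnumerable<Task>> onLoaded)
            {
                onLoaded(new List<Task>
                {
                    new Task { Id = 1, Title = "Task 1" },
                    new Task { Id = 2, Title = "Task 2" },
                    new Task { Id = 3, Title = "Task 3" }
                });
            }
        }

    The ViewModel then calls LoadTasks and copies the results into whatever ObservableCollection it binds to; the design-time path never constructs a DataServiceCollection at all, so the "typed DataServiceQuery" restriction no longer leaks into the ViewModel.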

    Read the article

  • Run WCF ServiceHost with multiple contracts

    - by Sam
    Running a ServiceHost with a single contract works fine, like this:

        servicehost = new ServiceHost(typeof(MyService1));
        servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(), "net.tcp://127.0.0.1:800/MyApp/MyService1");
        servicehost.Open();

    Now I'd like to add a second (3rd, 4th, ...) contract. My first guess would be to just add more endpoints like this:

        servicehost = new ServiceHost(typeof(MyService1));
        servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(), "net.tcp://127.0.0.1:800/MyApp/MyService1");
        servicehost.AddServiceEndpoint(typeof(IMyService2), new NetTcpBinding(), "net.tcp://127.0.0.1:800/MyApp/MyService2");
        servicehost.Open();

    But of course this does not work, since when creating the ServiceHost I can pass either MyService1 or MyService2 as the parameter. So I can add a lot of endpoints to my service, but they all have to use the same contract, since I can only provide one implementation? I get the feeling I'm missing the point here. Surely there must be some way to provide an implementation for every endpoint contract I add, or not?
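
    One standard way to do this (a sketch assuming you control both contracts; the names mirror the question's) is to have a single service class implement all of the contract interfaces. ServiceHost needs only one implementation type, and each AddServiceEndpoint call then maps one of its contracts to its own address:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IMyService1
        {
            [OperationContract]
            string Ping1();
        }

        [ServiceContract]
        public interface IMyService2
        {
            [OperationContract]
            string Ping2();
        }

        // One class implements both contracts, so one ServiceHost can expose both.
        public class CombinedService : IMyService1, IMyService2
        {
            public string Ping1() { return "service 1"; }
            public string Ping2() { return "service 2"; }
        }

        class Program
        {
            static void Main()
            {
                var host = new ServiceHost(typeof(CombinedService));
                host.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(), "net.tcp://127.0.0.1:800/MyApp/MyService1");
                host.AddServiceEndpoint(typeof(IMyService2), new NetTcpBinding(), "net.tcp://127.0.0.1:800/MyApp/MyService2");
                host.Open();
                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }

    If the implementations genuinely have to live in separate classes, the usual alternative is to run one ServiceHost per implementation type.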

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >