Search Results

Search found 69987 results on 2800 pages for 'wcf data services'.

  • WCF Service and Properties

    - by Karnalta
    Hi all, Here is my question. I have a solution with 4 projects in it for a WCF Service:

    - DLL Library: Service Interface.
    - DLL Library: Service Code.
    - Form Application: Service hosting application.
    - Form Application: Service client application.

    I'd like certain properties of the service to be accessible to the hosting application but not to the client one. If I declare a property in the client interface, they will both have access to it. In fact, my service manages user identity login and keeps a list of all users currently logged in. I'd like to be able to show this list in the hosting application, like a debugging tool, but I don't want the service client to be able to access this list. How can I do this? Thanks in advance.
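    One common way to get this separation (a sketch added for illustration; the interface and member names are invented, not from the post) is to put only the client-facing operations on the [ServiceContract] interface and expose host-only state through a second, plain interface that never appears in the service metadata:

        using System.Collections.Generic;
        using System.ServiceModel;

        // Visible to clients: only this interface is a WCF contract.
        [ServiceContract]
        public interface IUserService
        {
            [OperationContract]
            void Login(string userName);   // illustrative operation
        }

        // No [ServiceContract] attribute, so this never appears in the WSDL
        // or in generated client proxies.
        public interface IUserServiceHost
        {
            IEnumerable<string> LoggedInUsers { get; }
        }

        public class UserService : IUserService, IUserServiceHost
        {
            private readonly List<string> loggedIn = new List<string>();

            public void Login(string userName)
            {
                lock (loggedIn) loggedIn.Add(userName);
            }

            // The hosting Form keeps a direct reference to the singleton
            // instance it handed to ServiceHost and reads this property;
            // clients have no way to reach it.
            public IEnumerable<string> LoggedInUsers
            {
                get { lock (loggedIn) return loggedIn.ToArray(); }
            }
        }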

    Read the article

  • WCF Host as windows service faults

    - by pdiddy
    I have this WCF service running as a Windows service. My code restarts the host every time it faults. Now I'm having an issue where the host faults, tries to restart, then faults again, and at some point it just stops the service. Why does it stop the service? Is this something handled by the OS, i.e. does it detect that the service has faulted a number of times within a certain period and stop it because it faulted too many times?
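    The restart logic described usually looks something like the sketch below (my illustration; MyService is a placeholder for the actual service type). Worth noting: if Open() itself throws during the restart, the exception escapes the Faulted handler, which can take the whole Windows service process down and would be consistent with the behavior described.

        using System;
        using System.ServiceModel;

        class HostRunner
        {
            static ServiceHost host;

            public static void StartHost()
            {
                host = new ServiceHost(typeof(MyService)); // placeholder type
                host.Faulted += (sender, e) =>
                {
                    host.Abort();   // a faulted host cannot be reopened
                    StartHost();    // build and open a fresh host
                };
                host.Open();
            }
        }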

    Read the article

  • WCF Windows service permissions problem

    - by Elad
    I have created a WCF service and hosted it in a Windows service. To install the project I created an installation project (as described here). The tutorial says to set the serviceProcessInstaller1 Account property in ProjectInstaller.cs to Network Service. With this setting the service did not start on the server. When I tried to start the process manually, it immediately returned to the stopped state. When I changed the Account to LocalSystem, the service worked properly. My questions are: Any ideas why it won't work with the Network Service account? What are the security implications of running a server with the LocalSystem account? This server is used locally in the intranet as a reporting server for other servers.
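    For reference, the installer setting in question typically looks like this (a sketch of the designer-backed ProjectInstaller the tutorial describes; serviceProcessInstaller1 is the designer-generated field):

        using System.ComponentModel;
        using System.Configuration.Install;
        using System.ServiceProcess;

        [RunInstaller(true)]
        public partial class ProjectInstaller : Installer
        {
            public ProjectInstaller()
            {
                InitializeComponent();
                // The identity the Windows service process runs under.
                // NetworkService is low-privilege; LocalSystem is highly
                // privileged, with full access to the local machine.
                serviceProcessInstaller1.Account = ServiceAccount.NetworkService;
            }
        }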

    Read the article

  • Creating a WCF Restful service, concurrency issues

    - by pmillio
    Hi, I am in the process of creating a RESTful service with WCF; the service is likely to be consumed by at least 500 people at any given time. What settings would I need to set in order to deal with this? Please give me any pointers and tips, thanks. Here is a sample of what I have so far:

        [ServiceBehavior(IncludeExceptionDetailInFaults = true,
            InstanceContextMode = InstanceContextMode.Single,
            ConcurrencyMode = ConcurrencyMode.Multiple)]

    And this is an example of a method being called:

        public UsersAPI getUserInfo(string UserID)
        {
            UsersAPI users = new UsersAPI(int.Parse(UserID));
            return users;
        }

        [OperationContract]
        [WebGet(BodyStyle = WebMessageBodyStyle.Bare,
            ResponseFormat = WebMessageFormat.Json,
            UriTemplate = "User/{UserID}")]
        [WebHelp(Comment = "This returns a user's info.")]
        UsersAPI getUserInfo(string UserID);
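    For raw concurrency limits on a singleton service like this, the serviceThrottling behavior is usually the first knob to look at; a hedged sketch with illustrative numbers (not tuned recommendations):

        <serviceBehaviors>
          <behavior>
            <!-- Defaults are much lower; raise them if ~500 concurrent
                 callers are expected. Numbers here are illustrative. -->
            <serviceThrottling maxConcurrentCalls="500"
                               maxConcurrentSessions="500"
                               maxConcurrentInstances="500" />
          </behavior>
        </serviceBehaviors>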

    Read the article

  • WCF methods sharing a dictionary

    - by YeomansLeo
    I'm creating a WCF Service Library and I have a question regarding thread safety when consuming a method inside this library; here is the full implementation that I have so far.

        namespace WCFConfiguration
        {
            [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                ConcurrencyMode = ConcurrencyMode.Single)]
            public class ConfigurationService : IConfigurationService
            {
                ConcurrentDictionary<Tuple<string,string>, string> configurationDictionary =
                    new ConcurrentDictionary<Tuple<string,string>, string>();

                public void Configuration(IEnumerable<Configuration> configurationSet)
                {
                    Tuple<string, string> lookupStrings;
                    foreach (var config in configurationSet)
                    {
                        lookupStrings = new Tuple<string, string>(config.BoxType, config.Size);
                        configurationDictionary.TryAdd(lookupStrings, config.RowNumber);
                    }
                }

                public void ScanReceived(string boxType, string size, string packerId = null)
                {
                }
            }
        }

    Imagine that I have 10 values in my configurationDictionary and many clients query this dictionary by consuming the ScanReceived method: are those 10 values shared for each of the clients that call ScanReceived? Do I need to change my ServiceBehavior? The Configuration method is only consumed by one client, by the way.
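    Worth noting when reading the question: with InstanceContextMode.PerCall, every request gets a brand-new ConfigurationService instance, so an instance field starts out empty on each call. One common way to share the dictionary across calls (a sketch, not the poster's code) is to make the field static:

        // Shared by all PerCall instances; ConcurrentDictionary keeps the
        // individual TryAdd/TryGetValue operations thread-safe.
        static readonly ConcurrentDictionary<Tuple<string, string>, string>
            configurationDictionary =
                new ConcurrentDictionary<Tuple<string, string>, string>();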

    Read the article

  • Big Data Appliance X4-2 Release Announcement

    - by Jean-Pierre Dijcks
    Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.

    Software Focus

    The focus for this 3rd generation of Big Data Appliance is:

    - Comprehensive and Open - Big Data Appliance now includes all Cloudera software, including Back-up and Disaster Recovery (BDR), Search, Impala, and Navigator, as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE).
    - Lower TCO than DIY Hadoop systems
    - Simplified operations while providing an open platform for the organization
    - Comprehensive security, including the new Audit Vault and Database Firewall software, Apache Sentry, and Kerberos configured out-of-the-box

    Hardware Update

    A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, the following is a comparison between the old (X3-2) and new (X4-2) hardware:

                     Big Data Appliance X3-2                       Big Data Appliance X4-2
        CPU          2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz)     2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)
        Memory       64GB                                          64GB
        Disk         12 x 3TB High Capacity SAS                    12 x 4TB High Capacity SAS
        InfiniBand   40Gb/sec                                      40Gb/sec
        Ethernet     10Gb/sec                                      10Gb/sec

    For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity than the previous generation, while adding faster CPUs. Memory for BDA is expandable to 512 GB per node and can be upgraded on a per-node basis, for example for NameNodes, for HBase region servers, or for NoSQL Database nodes.

    Software Details

    More details in terms of software and the current versions (note BDA follows a three-monthly update cycle for Cloudera and other software):

                          Big Data Appliance 2.2 Software Stack   Big Data Appliance 2.3 Software Stack
        Linux             Oracle Linux 5.8 with UEK 1             Oracle Linux 6.4 with UEK 2
        JDK               JDK 6                                   JDK 7
        Cloudera CDH      CDH 4.3                                 CDH 4.4
        Cloudera Manager  CM 4.6                                  CM 4.7

    And as we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available to all BDA customers. For more information:

    - Big Data Appliance Data Sheet
    - Big Data Connectors Data Sheet
    - Oracle NoSQL Database Data Sheet (CE | EE)
    - Oracle Advanced Analytics Data Sheet

    Read the article

  • Review: Data Modeling 101

    I just recently read "Data Modeling 101" by Scott W. Ambler, in which he gives an overview of fundamental data modeling skills. I think this article is excellent for anyone who is just starting to learn, or refreshing their skills in, data modeling. Scott defines data modeling as the act of exploring data-oriented structures. He goes on to explain how data models are actually used by defining three different types of models:

    - Conceptual Data Models
    - Logical Data Models (LDMs)
    - Physical Data Models (PDMs)

    He further expands on modeling by exploring common data modeling notations, because there are no industry standards for the practice of data modeling. Scott then explains how to actually model data by expanding on entities, attributes, identities, and relationships, which are the basic building blocks of data models. In addition, he discusses the value of normalization for reducing redundancy and of denormalization for performance. Finally, he discusses ways in which developers and DBAs can become better data modelers through practice and by seeking guidance from more experienced data modelers.

    Read the article

  • HTML5 data-* (custom data attribute)

    - by Renso
    Goal: Store custom data with the data attribute on any DOM element and retrieve it. Previously, under HTML4, we used classes to store custom data, something to the effect of

        <input class="account void limit-5000 over-4999" />

    and then had to parse the data out of the class. In a book published by Peter-Paul Koch in 2007, ppk on JavaScript, he explains why and how to use custom attributes to make data more accessible to JavaScript, using name-value pairs. Accessing a custom attribute account-limit=5000 is much easier and more intuitive than trying to parse it out of a class. Plus, what if a class name, for example "color-5", has a representative class definition in a CSS stylesheet that hides it away, or worse, some JavaScript plugin that automatically adds 5000 to it, or something crazy like that, just because it is a valid class name? As you can see, there are quite a few reasons why using classes is a bad design, and why it was important to define custom data attributes in HTML5.

    Syntax: You define the data attribute by simply prefixing any data item you want to store on any HTML element with "data-". For example, to store our customer's account data with a hidden input element:

        <input type="hidden" data-account="void" data-limit="5000" data-over="4999" />

    How to access the data:

        account - element.dataset.account
        limit   - element.dataset.limit

    You can also access it by using the more traditional get/setAttribute methods, or, if using jQuery, $('#element').attr('data-account','void').

    Browser support: All except for IE. There is an IE hack around this at http://gist.github.com/362081.

    Special Note: Be AWARE, do not use upper-case when defining your data elements, as it is all converted to lower-case when reading it. So:

        data-myAccount="A1234"

    will not be found when you read it with:

        element.dataset.myAccount

    Use only lowercase when reading, so this will work:

        element.dataset.myaccount

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of master data, by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise like social profiles, will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like retail, communications, and financial services that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success from Big Data, in other words ROI from Big Data investments, businesses need to acquire, organize and analyze the deluge of data to make better decisions. There will need to be a coexistence of structured and unstructured data, and a tight link must be maintained between the two to extract maximum insights. MDM is the catalyst that helps maintain that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as on integration or architectural patterns.

    Read the article

  • How to Assure an Effective Data Model

    As a general rule, in my opinion, the effectiveness of a data model can be directly related to the accuracy and completeness of a project's requirements. For example, there is no need to work on very detailed data models when the details surrounding a specific data model have not been defined or even clarified. Developing data models when the clarity of project requirements is limited tends to introduce design issues, because the proper details to create an effective data model are not yet known. One way to avoid this issue is to create data models that correspond to the maturity of the existing project requirements, so that when requirements are updated, new data models can be created based on any new discoveries regarding requirements at a fine-grained level. This allows data models to be composed of general entities initially, when a project's requirements are very vague, and then the entities are refined as new and more substantial requirements are defined or redefined. This promotes communication amongst all stakeholders within a project as they go through the process of defining and finalizing project requirements.

    In addition, here are some general tips that can be applied to projects in regards to data modeling:

    - Initially model all data generally, and slowly refactor the data model as new requirements and business constraints are applied to a project.
    - Ensure that data modelers have the proper tools and training they need to design a data model accurately.
    - Create a common location for all project documents so that everyone will be able to review a project's data models along with any other project documentation.
    - All data models should follow a clear naming schema that tells readers the intended purpose of the data and how it is going to be applied within a project.

    Read the article

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool: "Snapshot Replication can also be useful in BI environments. If you don't need a near real-time copy of the database, you can choose to use this form of replication. Next to being an alternative for Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!"

    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would cost me to do the same in SSIS. What I did was the following:

    - Configure snapshot replication for some Adventure Works tables; this was quite simple and straightforward.
    - Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can't do with out-of-the-box functionality.

    While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that when you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.

    Configure snapshot replication

    The first step is to create a publication on the database you want to replicate:

    - Connect to SQL Server Management Studio, right-click Replication and choose New.. Publication...
    - The New Publication Wizard appears; click Next.
    - Choose your "source" database and click Next.
    - Choose Snapshot publication and click Next.
    - You can now select tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
    - In the next screen you can add filters on the selected tables, which can be very useful; think about selecting only the last x days of data, for example. It's possible to filter out rows and/or columns. In this example I did not apply any filters.
    - Schedule the Snapshot Agent to run at a desired time; by doing this, a SQL Agent job is created which we need to execute from an SSIS package later on.
    - Next you need to set the security settings for the Snapshot Agent. Click on the Security Settings button. In this example I ran the Agent under the SQL Server Agent service account. This is not recommended as a security best practice. Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for Replication Services. Read it here and make sure you follow the guidelines!
    - On the next screen, choose to create the publication at the end of the wizard.
    - Give the publication a name (SnapshotTest) and complete the wizard. The publication is created and the articles (tables in this case) are added.

    Now that the publication is created successfully, it's time to create a new subscription for it:

    - Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions.
    - The New Subscription Wizard appears. Select the publisher on which you just created your publication, then select the database and publication (SnapshotTest).
    - You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscriptions), it causes extra processing overhead. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscriptions) to reduce the processing overhead at the distributor.
    - Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database, give the database the desired name, set the desired options and click OK.
    - You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
    - You now need to set the security settings for the Distribution Agent. Click on the .... button. Again, in this example I ran the Agent under the SQL Server Agent service account. Read the security best practices here.
    - Click Next.
    - Make sure you create a synchronization job schedule again. This job is also needed in the SSIS package later on.
    - Initialize the subscription at first synchronization. Select the first box to create the subscription when finishing this wizard.
    - Complete the wizard by clicking Finish. The subscription will be created.

    In SSMS you see a new database is created: the subscriber. There are no tables or other objects in the database available yet, because the replication jobs have not run yet. Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot; rename this job to "CreateSnapshot". Then search for the job that distributes the snapshot and rename it to "DistributeSnapshot".

    Create an SSIS package that executes the snapshot replication

    We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs. After the DistributeSnapshot job has started, the package needs to wait until it is finished before the package execution finishes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure. This is because these jobs are asynchronous. The SSIS package I've created does the following:

    - It runs the CreateSnapshot job.
    - It checks every 5 seconds, with a For Loop, whether the job has completed.
    - When the CreateSnapshot job is completed, it starts the DistributeSnapshot job.
    - And again it waits until the snapshot is delivered before the package finishes successfully.

    Quite simple, and the package is ready to use as a standalone extract mechanism.
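    For readers who want the same start-then-poll logic outside SSIS, a minimal ADO.NET sketch (my own illustration, not the downloadable package; the connection string and job name are placeholders) could look like this:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        static void RunJobAndWait(string connectionString, string jobName)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // Start the SQL Agent job; sp_start_job is asynchronous and
                // returns as soon as the job has been queued.
                using (var start = new SqlCommand("msdb.dbo.sp_start_job", conn))
                {
                    start.CommandType = CommandType.StoredProcedure;
                    start.Parameters.AddWithValue("@job_name", jobName);
                    start.ExecuteNonQuery();
                }

                // Poll every 5 seconds until the job reports a stop time.
                var poll = new SqlCommand(
                    @"SELECT COUNT(*)
                      FROM msdb.dbo.sysjobactivity a
                      JOIN msdb.dbo.sysjobs j ON j.job_id = a.job_id
                      WHERE j.name = @job_name
                        AND a.start_execution_date IS NOT NULL
                        AND a.stop_execution_date IS NULL", conn);
                poll.Parameters.AddWithValue("@job_name", jobName);

                while ((int)poll.ExecuteScalar() > 0)
                    Thread.Sleep(5000);
            }
        }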
    After executing the package, the replicated tables are added to the subscriber database and are filled with data. Download the SSIS package here (SSIS 2008).

    Conclusion

    In this example I only replicated 5 tables, so I could create an SSIS package that does the same in approximately the same amount of time. But if I replicated all the 70+ AdventureWorks tables, I would save a lot of time and boring work! With Replication Services you also benefit from the feature that schema changes are applied automatically, which means your entire extract phase won't break. Because a snapshot is created using the bcp utility (bulk copy), it's also quite fast, so the performance will be quite good. A disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that will involve a lot of tables, think about Replication Services; it will save you a lot of time, and thanks to the Extract SSIS package I've created you can fit it perfectly into your usual SSIS ETL process.

    Read the article

  • Implementing User-Defined Hierarchies in SQL Server Analysis Services

    To be able to drill into multidimensional cube data at several levels, you must implement all of the hierarchies on the database dimensions. Then you'll create the attribute relationships necessary to optimize performance. Analysis Services hierarchies offer plenty of possibilities for displaying the data that your business requires. Rob Sheldon continues his series on SQL Server Analysis Services 2008.

    Read the article

  • Populate a WCF syndication podcast using MP3 ID3 metadata tags

    - by brian_ritchie
    In the last post, I showed how to create a podcast using WCF syndication. A podcast is an RSS feed containing a list of audio files to which users can subscribe. The podcast not only contains links to the audio files, but also metadata about each episode. A cool approach to building the feed is reading this metadata from the ID3 tags on the MP3 files used for the podcast. One library to do this is TagLib-Sharp. Here is some sample code:

        var taggedFile = TagLib.File.Create(f);
        var fileInfo = new FileInfo(f);
        var item = new iTunesPodcastItem()
        {
            title = taggedFile.Tag.Title,
            size = fileInfo.Length,
            url = feed.baseUrl + fileInfo.Name,
            duration = taggedFile.Properties.Duration,
            mediaType = feed.mediaType,
            summary = taggedFile.Tag.Comment,
            subTitle = taggedFile.Tag.FirstAlbumArtist,
            id = fileInfo.Name
        };
        if (!string.IsNullOrEmpty(taggedFile.Tag.Album))
            item.publishedDate = DateTimeOffset.Parse(taggedFile.Tag.Album);

    This reads the ID3 tags into an object for later use in creating the syndication feed. When the MP3 is created, these tags are set... or they can be set after the fact using the Properties dialog in Windows Explorer. The only "hack" is that there isn't an easily accessible tag for "subtitle" or "published date", so I used other tags in this example. Feel free to change this to meet your purposes. You could remove the subtitle and use the file-modified date, for example.

    That takes care of the episodes; for the feed-level settings, we'll load those from an XML file:

        <?xml version="1.0" encoding="utf-8" ?>
        <iTunesPodcastFeed
            baseUrl=""
            title=""
            subTitle=""
            description=""
            copyright=""
            category=""
            ownerName=""
            ownerEmail=""
            mediaType="audio/mp3"
            mediaFiles="*.mp3"
            imageUrl=""
            link=""
        />

    Here is the full code put together. It reads the feed XML file and deserializes it into an iTunesPodcastFeed class, then loops over the files in a directory, reading the ID3 tags from the audio files:

        public static iTunesPodcastFeed CreateFeedFromFiles(string podcastDirectory, string podcastFeedFile)
        {
            XmlSerializer serializer = new XmlSerializer(typeof(iTunesPodcastFeed));
            iTunesPodcastFeed feed;
            using (var fs = File.OpenRead(Path.Combine(podcastDirectory, podcastFeedFile)))
            {
                feed = (iTunesPodcastFeed)serializer.Deserialize(fs);
            }
            foreach (var f in Directory.GetFiles(podcastDirectory, feed.mediaFiles))
            {
                try
                {
                    var taggedFile = TagLib.File.Create(f);
                    var fileInfo = new FileInfo(f);
                    var item = new iTunesPodcastItem()
                    {
                        title = taggedFile.Tag.Title,
                        size = fileInfo.Length,
                        url = feed.baseUrl + fileInfo.Name,
                        duration = taggedFile.Properties.Duration,
                        mediaType = feed.mediaType,
                        summary = taggedFile.Tag.Comment,
                        subTitle = taggedFile.Tag.FirstAlbumArtist,
                        id = fileInfo.Name
                    };
                    if (!string.IsNullOrEmpty(taggedFile.Tag.Album))
                        item.publishedDate = DateTimeOffset.Parse(taggedFile.Tag.Album);
                    feed.Items.Add(item);
                }
                catch
                {
                    // ignore files that can't be accessed successfully
                }
            }
            return feed;
        }

    Usually putting a "try...catch" like this is bad, but in this case I'm just skipping over files that are locked while they are being uploaded to the web site. Here is the code from the last couple of posts.

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Exposing WCF (SOAP\WSDL) Services

    Continuing in our series, I wanted to touch on how a RIA Service can be exposed as a SOAP\WSDL service. This is very useful if you want to make the exact same business logic and data access logic available to clients other than Silverlight, for example a WinForms application, WPF, or even a console application. SOAP is a particularly good model for interop with the Java\JEE world as well. First you need to add a reference to Microsoft.ServiceModel.DomainServices.Hosting.EndPoints...

    Read the article

  • WCF data services - Limiting related objects returned based on critera

    - by Mike Morley
    I have an object graph consisting of a base employee object and a set of related message objects. I am able to return the employee objects based on search criteria on the employee properties (e.g. team). However, if I expand on the messages, I get the full collection of messages back. I would like to be able to either take the top n messages (i.e. restrict to the 10 most recent) or, ideally, use a date range on the message objects to limit how many are brought back. So far I have not been able to figure out a way of doing this: I get an error if I attempt to filter on properties of the message (&$filter=employee/message/StartDate gives the error "No property 'StartDate' exists in type 'System.Data.Objects.DataClasses.EntityCollection`1'"). Attempting to use Top on the message-related object doesn't work either. I have also tried using a WebGet extension that takes a string list of employee IDs. That works until the list gets too long, and then fails due to the URL getting too long (it might be possible to set up a paging mechanism with this approach)... Unfortunately the UI control I am using requires the data to be in a fairly specific hierarchical shape, so I can't easily come at this from starting on the message side and working backwards. Outside of making multiple calls, does anyone know of a method to accomplish this with WCF Data Services? Thanks! M.
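    For what it's worth, the WebGet workaround mentioned in the question is normally written as a service operation on the DataService class; a hedged sketch (the entity, property, and operation names are invented to match the question, not taken from the poster's code):

        // A service operation can do the top-N / date-range filtering that
        // $filter can't express on an expanded collection. Remember to enable
        // it in InitializeService with:
        //   config.SetServiceOperationAccessRule("RecentMessages",
        //       ServiceOperationRights.AllRead);
        [WebGet]
        public IQueryable<Message> RecentMessages(int employeeId, int top)
        {
            return CurrentDataSource.Messages
                .Where(m => m.EmployeeId == employeeId)
                .OrderByDescending(m => m.StartDate)
                .Take(top);
        }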

    Read the article

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call a repository in terms of DDD, but it is an abstraction layer that becomes handy at the moment of unit testing the code around this repository. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two-way data binding, which is very helpful for developing WPF- or Silverlight-based applications, but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services client library introduced a new collection, DataServiceCollection<T>, that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means that it is no longer necessary to manually set or remove the links in the data context when an item is added or removed from a collection. Before having this new collection, you basically used the following code to add a new item to a collection.

        Order order = new Order
        {
            Name = "Foo"
        };
        OrderItem item = new OrderItem
        {
            Name = "bar",
            UnitPrice = 10,
            Qty = 1
        };
        var context = new OrderContext();
        context.AddToOrders(order);
        context.AddToOrderItems(item);
        context.SetLink(item, "Order", order);
        context.SaveChanges();

    Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or L2S.

        Order order = new Order
        {
            Name = "Foo"
        };
        OrderItem item = new OrderItem
        {
            Name = "bar",
            UnitPrice = 10,
            Qty = 1
        };
        order.Items.Add(item);
        var context = new OrderContext();
        context.AddToOrders(order);
        context.SaveChanges();

    In order to use this new feature, you first need to enable V2 in the data service, and then use some specific arguments in the datasvcutil tool (you can find more information about this new feature and how to use it in this post).

        DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0

    Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than a simple ObjectCollection<T>, which was the default collection in V1. There are some aspects that you need to know to use this feature correctly.

    1. All the entities retrieved directly from the data context with a query track changes and report those to the data context automatically.

    2. An entity created with "new" does not track any change in its properties or associations. In order to enable change tracking on this entity, you need to do the following trick.

        public Order CreateOrder()
        {
            var collection = new DataServiceCollection<Order>(this.context);
            var order = new Order();
            collection.Add(order);
            return order;
        }

    You basically need to create a collection and add the entity to that collection with the "Add" method to enable change tracking on that entity.

    3. If you need to attach an existing entity (for example, if you created the entity with the "new" operator rather than retrieving it from the data context with a query) to a data context for tracking changes, you can use the "Load" method on the DataServiceCollection.

        var order = new Order
        {
            Id = 1
        };
        var collection = new DataServiceCollection<Order>(this.context);
        collection.Load(order);

    In this case, the order with Id = 1 must exist in the data source exposed by the data service. Otherwise, you will get an error because the entity does not exist.

    The cool extension methods discussed by Stuart Leeks in this post, which replace all the magic strings in the "Expand" operation with expression trees, are another feature I am going to use to implement this generic repository. Thanks to these extension methods, you could replace the following query with magic strings by a piece of code that only uses expressions.

    Magic strings:

        var customers = dataContext.Customers
            .Expand("Orders")
            .Expand("Orders/Items")

    Expressions:

        var customers = dataContext.Customers
            .Expand(c => c.Orders.SubExpand(o => o.Items))

    That query basically returns all the customers with their orders and order items. Ok, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this:

        public interface IRepository
        {
            T Create<T>() where T : new();
            void Update<T>(T entity);
            void Delete<T>(T entity);
            IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties);
            IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties);
            void Attach<T>(T entity);
            void SaveChanges();
        }

    The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions to load associations explicitly, only the Retrieve method receives a predicate representing the "where" clause. The following code represents the final implementation of this repository.

        public class DataServiceRepository : IRepository
        {
            ResourceRepositoryContext context;

            public DataServiceRepository()
                : this(new DataServiceContext())
            {
            }

            public DataServiceRepository(DataServiceContext context)
            {
                this.context = context;
            }

            private static string ResolveEntitySet(Type type)
            {
                var entitySetAttribute = (EntitySetAttribute)type.GetCustomAttributes(typeof(EntitySetAttribute), true).FirstOrDefault();
                if (entitySetAttribute != null)
                    return entitySetAttribute.EntitySet;
                return null;
            }

            public T Create<T>() where T : new()
            {
                var collection = new DataServiceCollection<T>(this.context);
                var entity = new T();
                collection.Add(entity);
                return entity;
            }

            public void Update<T>(T entity)
            {
                this.context.UpdateObject(entity);
            }

            public void Delete<T>(T entity)
            {
                this.context.DeleteObject(entity);
            }

            public void Attach<T>(T entity)
            {
                var collection = new DataServiceCollection<T>(this.context);
                collection.Load(entity);
            }

            public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties)
            {
                var entitySet = ResolveEntitySet(typeof(T));
                var query = context.CreateQuery<T>(entitySet);
                foreach (var e in eagerProperties)
                {
                    query = query.Expand(e);
                }
                return query.Where(predicate);
            }

            public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
            {
                var entitySet = ResolveEntitySet(typeof(T));
                var query = context.CreateQuery<T>(entitySet);
                foreach (var e in eagerProperties)
                {
                    query = query.Expand(e);
                }
                return query;
            }

            public void SaveChanges()
            {
                this.context.SaveChanges(SaveChangesOptions.Batch);
            }
        }

    For instance, you can use the following code to retrieve customers with first name equal to "John", and all their orders, in a single call.

        repository.Retrieve<Customer>(
            c => c.FirstName == "John", // Where
            c => c.Orders.SubExpand(o => o.Items));

    In case you want to have some pre-defined queries that you are going to use in several places, you can put them in a specific class.

        public static class CustomerQueries
        {
            public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)
            {
                return c => c.LastName == lastName;
            }
        }

    And then use it with the repository.

        repository.Retrieve<Customer>(
            CustomerQueries.LastNameEqualsTo("foo"),
            c => c.Orders.SubExpand(o => o.Items));

    Read the article

  • Building services with .Net Part 1

    - by Allan Rwakatungu
    On the 26th of May 2010, I made a presentation to the .NET user group meeting (thanks to Malisa Ncube for organizing this event every month ...). If you missed my presentation, we talked about why we should all be building services ... better still, using the .NET framework. This blog post is an introduction to services, why you would want to build services, and how you can build services using the .NET framework.

    What is a service?

    OASIS defines a service as "a mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description." [1]. If the above definition sounds too academic, you can also define services as loosely coupled units of functionality that have no calls to each other embedded in them. Instead of embedding calls to each other in their service code, services use defined protocols that describe how they pass and parse messages. This is a good way to think about services if you're from an object-oriented background: while in object-oriented programming functions make calls to each other, in service-oriented programming, functions pass messages between each other.

    Why would you want to use services?

    1. If your enterprise architecture looks like this

    Services are the building blocks for SOA. With SOA you can move away from the spaghetti infrastructure that is common in most enterprises. The complexity or lack of visibility of the integration points in your enterprise makes it difficult and costly to implement new initiatives and changes into the business - and even impossible in some cases - as it is not possible to identify the impact a change in one system might have on other systems. With services you can move to an architecture like this

    Your building blocks for moving from spaghetti infrastructure to something more well-defined and manageable - to achieve cost efficiency and, not least, business agility, enabling you to react to changes in the market with speed and achieve operational efficiency and control - are services.

    2. If you want to become the next Gates or Zuckerberg.

    Have you heard about Web 2.0? Mashups? Software as a service (SaaS)? Cloud computing? They all offer you the opportunity to have scalable but low-cost business models, and they are built using services. Some of my favorite companies that leverage services for their business models include:

    - https://www.salesforce.com/ (cloud CRM)
    - http://www.twitter.com (more people use Twitter clients built by 3rd parties than their official clients)
    - http://www.kayak.com/ (compares data from other travel sites to give information to users in one location)

    Services with the .NET framework

    If you are a .NET developer and you want to develop services, Windows Communication Foundation (WCF) is the tool for you. WCF is Microsoft's unified programming model (service model) for building service-oriented applications. (Before .NET 3.0 you had several models for programming services in .NET, including .NET Remoting, web services (ASMX), COM+, Microsoft Message Queuing (MSMQ), etc.; after .NET 3.0 the programming model was unified into one, i.e. WCF.) Windows Communication Foundation (WCF) provides you:

    1. A Software Development Kit (SDK) for creating SOA applications
    2. A runtime for running services on the Windows platform

    Why should you use Windows Communication Foundation if you're programming services?

    1. It supports interoperable and open standards, e.g. WS-* protocols for programming SOAP services.
    2. It has a unified programming model. Whether you use TCP or HTTP or pipes or transmit using message queues, programmers need to learn just one way to program. Previously you had .NET Remoting, MSMQ, web services, COM+, and they were all done differently.
    3. It has a productive programming model. You don't have to worry about all the plumbing involved in writing services. You have a rich declarative programming model to add things like logging, transactions, and reliable messaging, built into the Windows Communication Foundation.

    Understanding services in WCF

    The basic principles of WCF are as easy as ABC:

    - A - Address: where the service is located.
    - B - Binding: how you communicate with the service, e.g. use TCP, HTTP or both; how to exchange security information with the service; etc.
    - C - Contract: what the service can do, e.g. pay a water bill, make a phone call.

    A - Addresses

    In WCF, an address is a combination of transport, server name, port and path. Example addresses may include:

    - http://localhost:8001
    - net.tcp://localhost:8002/MyService
    - net.pipe://localhost/MyPipe
    - net.msmq://localhost/private/MyService
    - net.msmq://localhost/MyService

    B - Bindings

    There are numerous ways to communicate with services - different ways that a message can be formatted/sent/secured - which allow you to tailor your service for the compatibility/performance you require for your solution:

    - Transport: HTTP, TCP, MSMQ, named pipes, your own custom transport, etc.
    - Message: plain text, binary, or a Message Transmission Optimization Mechanism (MTOM) message
    - Communication security: no security, transport security, message security, authenticating and authorizing callers, etc.
    - Behaviour: your service can support transactions, be reliable, use queues, support AJAX, etc.

    C - Contracts

    You define what your service can do using:

    - Service contracts: define the operations your service can perform, its communications and behaviours
    - Data contracts: define the messages that are passed from and into your service and how they are formatted
    - Fault contracts: define error types in your service

    As an example, suppose your service shows money. You define your service contract using an interface:

        [ServiceContract]
        public interface IShowMeTheMoney
        {
            [OperationContract]
            Money Show();
        }

    You define the data contract by annotating a class with the DataContract attribute and the fields you want to pass in the message as DataMembers. (Note: in the latest versions of WCF you don't have to use attributes if you are passing all the object's properties in the message.)

        [DataContract]
        public class Money
        {
            [DataMember]
            public string Currency { get; set; }

            [DataMember]
            public decimal Amount { get; set; }

            public string Comment { get; set; }
        }

    Features of Windows Communication Foundation

    Windows Communication Foundation is not only simple but feature-rich, offering you several options to tweak your service to fit your business requirements. Some of the features of WCF include:

    1. Workflow services: you can combine WCF with Windows Workflow Foundation (WF) to write workflow-type services.
    2. Control over how your data (messages) is transferred and serialized, e.g. you can serialize your business objects as XML or binary.
    3. Control over session management, instance creation and concurrency management, without writing code if you like.
    4. Queues and reliable sessions: you can store messages from the sending client and later forward them to the receiving application. You can also guarantee that messages will arrive at their destination.
    5. Transactions: you can have different services participate in transactional operations that can be rolled back if needed.
    6. Security: WCF has rich features for authorization and authentication, as well as keeping audit trails.
    7. Web programming model: WCF allows developers to expose services as non-SOAP endpoints.
    8. Built-in features that you can use to write JSON services and services that support AJAX applications.

    And lots more. In my next blog I will show you how you can use WCF features to write a real-world business service.
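    To tie A, B and C together, here is a minimal self-hosting sketch (my own illustration, not from the original talk; MoneyService stands in for a class implementing IShowMeTheMoney, and the address and binding choices are arbitrary):

        using System;
        using System.ServiceModel;

        class Program
        {
            static void Main()
            {
                // A - Address: where the service lives.
                var baseAddress = new Uri("http://localhost:8001");

                using (var host = new ServiceHost(typeof(MoneyService), baseAddress))
                {
                    // B - Binding: how callers talk to it (SOAP over HTTP here).
                    // C - Contract: what it can do (IShowMeTheMoney).
                    host.AddServiceEndpoint(typeof(IShowMeTheMoney),
                                            new BasicHttpBinding(),
                                            "money");
                    host.Open();
                    Console.WriteLine("Service listening at {0}money", baseAddress);
                    Console.ReadLine();
                }
            }
        }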

    Read the article

  • Creating a podcast feed for iTunes & BlackBerry users using WCF Syndication

    - by brian_ritchie
    In my previous post, I showed how to create an RSS feed using WCF Syndication. Next, I'll show how to add the additional tags needed to turn an RSS feed into an iTunes podcast. A podcast is merely an RSS feed with some special characteristics:

    - iTunes RSS tags. These are additional tags beyond the standard RSS spec. Apple has a good page on the requirements.
    - Audio file enclosure. This is a link to the audio file (such as mp3) hosted by your site. Apple doesn't host the audio; they just read the metadata from the RSS feed into their system.

    The SyndicationFeed class supports both AttributeExtensions and ElementExtensions to add custom tags to the RSS feed. A couple of points of interest in the code below:

    - The imageUrl below provides the album cover for iTunes (170px × 170px)
    - Each SyndicationItem corresponds to an audio episode in your podcast

    So, here's the code:

        XNamespace itunesNS = "http://www.itunes.com/dtds/podcast-1.0.dtd";
        string prefix = "itunes";

        var feed = new SyndicationFeed(title, description, new Uri(link));
        feed.Categories.Add(new SyndicationCategory(category));
        feed.AttributeExtensions.Add(new XmlQualifiedName(prefix,
            "http://www.w3.org/2000/xmlns/"), itunesNS.NamespaceName);
        feed.Copyright = new TextSyndicationContent(copyright);
        feed.Language = "en-us";
        feed.Copyright = new TextSyndicationContent(DateTime.Now.Year + " " + ownerName);
        feed.ImageUrl = new Uri(imageUrl);
        feed.LastUpdatedTime = DateTime.Now;
        feed.Authors.Add(new SyndicationPerson() { Name = ownerName, Email = ownerEmail });
        var extensions = feed.ElementExtensions;
        extensions.Add(new XElement(itunesNS + "subtitle", subTitle).CreateReader());
        extensions.Add(new XElement(itunesNS + "image",
            new XAttribute("href", imageUrl)).CreateReader());
        extensions.Add(new XElement(itunesNS + "author", ownerName).CreateReader());
        extensions.Add(new XElement(itunesNS + "summary", description).CreateReader());
        extensions.Add(new XElement(itunesNS + "category",
            new XAttribute("text", category),
            new XElement(itunesNS + "category",
                new XAttribute("text", subCategory))).CreateReader());
        extensions.Add(new XElement(itunesNS + "explicit", "no").CreateReader());
        extensions.Add(new XDocument(
            new XElement(itunesNS + "owner",
                new XElement(itunesNS + "name", ownerName),
                new XElement(itunesNS + "email", ownerEmail))).CreateReader());

        var feedItems = new List<SyndicationItem>();
        foreach (var i in Items)
        {
            var item = new SyndicationItem(i.title, null, new Uri(link));
            item.Summary = new TextSyndicationContent(i.summary);
            item.Id = i.id;
            if (i.publishedDate != null)
                item.PublishDate = (DateTimeOffset)i.publishedDate;
            item.Links.Add(new SyndicationLink() {
                Title = i.title, Uri = new Uri(link),
                Length = i.size, MediaType = i.mediaType });
            var itemExt = item.ElementExtensions;
            itemExt.Add(new XElement(itunesNS + "subtitle", i.subTitle).CreateReader());
            itemExt.Add(new XElement(itunesNS + "summary", i.summary).CreateReader());
            itemExt.Add(new XElement(itunesNS + "duration",
                string.Format("{0}:{1:00}:{2:00}",
                    i.duration.Hours, i.duration.Minutes, i.duration.Seconds)
                ).CreateReader());
            itemExt.Add(new XElement(itunesNS + "keywords", i.keywords).CreateReader());
            itemExt.Add(new XElement(itunesNS + "explicit", "no").CreateReader());
            itemExt.Add(new XElement("enclosure", new XAttribute("url", i.url),
                new XAttribute("length", i.size), new XAttribute("type", i.mediaType)));
            feedItems.Add(item);
        }

        feed.Items = feedItems;

    If you're hosting your podcast feed within an MVC project, you can use the code from my previous post to stream it. Once you have created your feed, you can use the Feed Validator tool to make sure it is up to spec. Or you can use iTunes:

    1. Launch iTunes.
    2. In the Advanced menu, select Subscribe to Podcast.
    3. Enter your feed URL in the text box and click OK.

    After you've verified your feed is solid and good to go, you can submit it to iTunes:

    1. Launch iTunes.
    2. In the left navigation column, click on iTunes Store to open the store.
    3. Once the store loads, click on Podcasts along the top navigation bar to go to the Podcasts page.
    4. In the right column of the Podcasts page, click on the Submit a Podcast link.
    5. Follow the instructions on the Submit a Podcast page.

    Here are the full instructions. Once they have approved your podcast, it will be available within iTunes. RIM has also gotten into the podcasting business... which is great for BlackBerry users. They accept the same enhanced RSS feed that iTunes uses, so just create an account with them and submit the feed's URL. It goes through a similar approval process to iTunes. BlackBerry users must be on BlackBerry 6 OS or download the Podcast App from App World. In my next post, I'll show how to build the podcast feed dynamically from the ID3 tags within the MP3 files.

    Read the article

  • Welcome Oracle Data Integration 12c: Simplified, Future-Ready Solutions with Extreme Performance

    - by Irem Radzik
    The big day for the Oracle Data Integration team has finally arrived! It is my honor to introduce you to Oracle Data Integration 12c. Today we announced the general availability of the 12c release for Oracle's key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. The Oracle Data Integration 12c release addresses data-driven organizations' critical and evolving data integration requirements under 3 key themes:

    - Future-Ready Solutions
    - Extreme Performance
    - Fast Time-to-Value

    There are many new features that support these key differentiators for Oracle Data Integrator 12c and for Oracle GoldenGate 12c. In this first 12c blog post, I will highlight only a few:

    Future-Ready Solutions to Support Current and Emerging Initiatives: Oracle Data Integration offers robust and reliable solutions for key technology trends including cloud computing, big data analytics, real-time business intelligence and continuous data availability. Via the tight integration with Oracle's database, middleware, and application offerings, Oracle Data Integration will continue to support new features and capabilities right away as these products evolve and provide advanced features.

    Extreme Performance: Both GoldenGate and Data Integrator are known for their high performance. The new release widens the gap even further against the competition. Oracle GoldenGate 12c's Integrated Delivery feature enables higher throughput via a special application programming interface into Oracle Database. As mentioned in the press release, customers already report up to 5X higher performance compared to earlier versions of GoldenGate. Oracle Data Integrator 12c introduces parallelism that significantly increases its performance as well.

    Fast Time-to-Value via Higher IT Productivity and Simplified Solutions: Oracle Data Integrator 12c's new flow-based declarative UI brings superior developer productivity, ease of use, and ultimately fast time to market for end users. The ability to seamlessly reuse mapping logic also speeds development. Oracle GoldenGate 12c's Integrated Delivery feature automatically and optimally tunes the process, saving time while improving performance.

    This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. On November 12th we will reveal much more about the new release in our video webcast "Introducing 12c for Oracle Data Integration". Our customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release. Please join us at this free event to learn more from our executives about the 12c release, hear our customers' perspectives on the new features, and ask your questions to our experts in the live Q&A. Also, please continue to follow our blogs, tweets, and Facebook updates as we unveil more about the new features of the latest release.

    Read the article

  • WCF Maximum Read Depth Exception

    - by Burt
    I get the following exception when trying to pass a DTO over WCF services:

    System.Xml.XmlException: The maximum read depth (32) has been exceeded because XML data being read has more levels of nesting than is allowed by the quota. This quota may be increased by changing the MaxDepth property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 5230.
       at System.Xml.XmlExceptionHelper.ThrowXmlException

    The app.config binding looks like this:

    <binding name="WSHttpBinding_IProjectWcfService" closeTimeout="00:10:00" openTimeout="00:10:00"
             receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
             transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
             maxReceivedMessageSize="10240000" messageEncoding="Text" textEncoding="utf-8"
             useDefaultWebProxy="true" allowCookies="false">
      <readerQuotas maxDepth="200" maxStringContentLength="8192" maxArrayLength="16384"
                    maxBytesPerRead="4096" maxNameTableCharCount="16384" />
      <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
      <security mode="Message">
        <transport clientCredentialType="Windows" proxyCredentialType="None" realm="">
          <extendedProtectionPolicy policyEnforcement="Never" />
        </transport>
        <message clientCredentialType="UserName" negotiateServiceCredential="true"
                 algorithmSuite="Default" establishSecurityContext="true" />
      </security>
    </binding>

    Web.config service behavior:

    <behavior name="BehaviorConfiguration">
      <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
      <dataContractSerializer maxItemsInObjectGraph="6553600" />
      <serviceMetadata httpGetEnabled="true" />
      <serviceDebug includeExceptionDetailInFaults="true" />
      <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="SqlRoleProvider" />
      <serviceCredentials>
        <serviceCertificate findValue="localhost" x509FindType="FindBySubjectName"
                            storeLocation="LocalMachine" storeName="My" />
        <userNameAuthentication userNamePasswordValidationMode="MembershipProvider"
                                membershipProviderName="SqlMembershipProvider"/>
      </serviceCredentials>
    </behavior>

    And the DTO looks like this:

    [Serializable]
    [DataContract(IsReference=true)]
    public class MyDto {

    Any help would be appreciated as I am pulling my hair out with it.
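    Since the error still reports the default quota (32) even though the app.config sets maxDepth="200", a likely cause is that the raised quota is only applied on one side of the wire, or that the endpoint does not actually reference that binding configuration. Below is a minimal, hedged sketch of raising the quota programmatically on the service host; the service type, base address, and two mirrored values are assumptions for illustration, not taken from the original post:

    // Hedged sketch: raise MaxDepth on the *service-side* binding as well as the client's.
    // If only the client config sets readerQuotas, the server still enforces the default of 32.
    var binding = new WSHttpBinding(SecurityMode.Message);
    binding.ReaderQuotas.MaxDepth = 200;          // match the client's quota
    binding.MaxReceivedMessageSize = 10240000;    // mirror the client config shown above

    var host = new ServiceHost(typeof(ProjectWcfService),            // assumed service type
                               new Uri("http://localhost:8000/Project")); // assumed base address
    host.AddServiceEndpoint(typeof(IProjectWcfService), binding, "");
    host.Open();

    If the server binding is configured in Web.config instead, make sure the service endpoint carries bindingConfiguration="WSHttpBinding_IProjectWcfService"; without that attribute the readerQuotas element is never picked up and the defaults apply.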

    Read the article

  • WCF Service Exception

    - by Maciek
    Hiya, I'm currently working on a Silverlight 3 project, and I'm using 2 machines to test it. "harbinger" is the web server, running Win7 + IIS. I've deployed the webpage and the WCF web service to that machine. I've entered the following URLs in my browser:

    http://harbinger:43011/UserService.svc
    http://harbinger:43011/UserService.svc?wsdl

    and both pages loaded the expected contents. Next I decided to check whether I could call the web service from my machine. I added the ServiceReference, executed a call to one of the methods and .... BOOM:

    System.ServiceModel.CommunicationException was unhandled by user code
    Message="An error occurred while trying to make a request to URI 'http://harbinger:43011/UserService.svc'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details."
    StackTrace:
       at System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result)
       at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result)
       at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
       at System.ServiceModel.ClientBase`1.ChannelBase`1.EndInvoke(String methodName, Object[] args, IAsyncResult result)
       at Energy.USR.UserServiceClient.UserServiceClientChannel.EndGetAllUsers(IAsyncResult result)
       at Energy.USR.UserServiceClient.Energy.USR.UserService.EndGetAllUsers(IAsyncResult result)
       at Energy.USR.UserServiceClient.OnEndGetAllUsers(IAsyncResult result)
       at System.ServiceModel.ClientBase`1.OnAsyncCallCompleted(IAsyncResult result)
    InnerException: System.Security.SecurityException
       Message=""
       StackTrace:
          at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
          at System.Net.Browser.BrowserHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
          at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)
       InnerException: System.Security.SecurityException
          Message="Security error."
          StackTrace:
             at System.Net.Browser.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
             at System.Net.Browser.BrowserHttpWebRequest.<>c__DisplayClass5.<EndGetResponse>b__4(Object sendState)
             at System.Net.Browser.AsyncHelper.<>c__DisplayClass2.<BeginOnUI>b__0(Object sendState)
          InnerException:

    Can someone explain what just happened? What do I need to do to avoid this?
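    This is the standard Silverlight cross-domain restriction: the Silverlight runtime will only call a service on another host:port if that host publishes a cross-domain policy file. A minimal sketch of a permissive clientaccesspolicy.xml follows; it would be served from the root of the service's domain (here, http://harbinger:43011/clientaccesspolicy.xml), and the wildcard domain is for testing only and should be tightened for production:

    <?xml version="1.0" encoding="utf-8"?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <!-- SOAPAction must be allowed so Silverlight can send SOAP headers -->
          <allow-from http-request-headers="SOAPAction">
            <domain uri="*"/>
          </allow-from>
          <grant-to>
            <resource path="/" include-subpaths="true"/>
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>

    Note that the port matters: Silverlight treats http://harbinger:43011 as a different origin from the page that hosts the XAP, so the policy file must be reachable on port 43011 itself.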

    Read the article

  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up to the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items) I exceed the default 64k. I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side -- this seems like a lot of code. The other option is to jack up maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues. For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data processing applications -- .NET console apps, and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and pushing it back up to the service.
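    The paging wrapper the question describes need not be much code. Below is a hedged sketch of a generic paged-result data contract and a matching operation; all names (PagedResult, Item, IItemService, GetItemsPaged) are illustrative assumptions, not taken from the original post:

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Generic data contract; WCF exports each closed form under a name like "PagedResultOfItem".
    [DataContract]
    public class PagedResult<T>
    {
        [DataMember] public List<T> Items { get; set; }
        [DataMember] public int PageIndex { get; set; }
        [DataMember] public int PageSize { get; set; }
        [DataMember] public int TotalCount { get; set; }
        [DataMember] public bool HasPrevious { get; set; }
        [DataMember] public bool HasNext { get; set; }
    }

    [DataContract]
    public class Item
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface IItemService
    {
        // Clients page through results, keeping each message well under the 64k default.
        [OperationContract]
        PagedResult<Item> GetItemsPaged(int pageIndex, int pageSize);
    }

    With this shape, the console-app consumers can simply loop while HasNext is true, and the UI consumers fetch one page at a time, so neither client type needs a raised maxReceivedMessageSize.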

    Read the article

  • WCF Self-hosted service, client clean-up on service stop

    - by Sentax
    Hi everyone, I'm curious to know how I would go about setting up my service to stop cleanly on the server it is installed on. For example, when I have many clients connecting and doing operations every minute and I want to shut down the service for maintenance, how can I do this in the "OnStop" event of the service? The service host should deny any new client connections and let the current connections finish before it actually shuts down its services to the clients; this will ensure data isn't corrupted on the server as it shuts down. Right now the service is not set up as a singleton, because I need scalability, so I would have to somehow get my service host to do this independently of knowing how many instances of the service class have been created. I hope I explained myself well enough; if not, let me know and I'll try to explain better. Thanks, Scott
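    ServiceHost.Close() already has the semantics the question asks for: it stops accepting new connections and waits, up to the close timeout, for in-flight operations on existing channels to complete. A minimal, hedged sketch of a graceful OnStop in the Windows-service wrapper follows; the serviceHost field and the two-minute timeout are assumptions for illustration:

    using System;
    using System.ServiceModel;
    using System.ServiceProcess;

    public partial class WcfWindowsService : ServiceBase
    {
        private ServiceHost serviceHost; // assumed to be opened in OnStart

        protected override void OnStop()
        {
            if (serviceHost == null) return;
            try
            {
                // Graceful: rejects new calls, drains in-flight ones up to the timeout.
                serviceHost.Close(TimeSpan.FromMinutes(2));
            }
            catch (TimeoutException)
            {
                // Clients didn't finish in time; tear the host down hard.
                serviceHost.Abort();
            }
            catch (CommunicationException)
            {
                serviceHost.Abort();
            }
            finally
            {
                serviceHost = null;
            }
        }
    }

    Because Close operates on the host rather than on individual instances, this works regardless of InstanceContextMode, so the service does not have to be a singleton. If draining could exceed the Windows service manager's stop timeout, ServiceBase.RequestAdditionalTime can be called before the blocking Close.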

    Read the article

  • How to write a generic service in WCF

    - by rezaxp
    In one of my recent projects I needed a generic service as a facade to handle general activities such as CRUD. I searched as much as I could, but there was nothing on generic services, so I tried to figure it out by myself. Finally, I found a way. Create a generic contract as below:

    [ServiceContract]
    public interface IEntityReadService<TEntity>
        where TEntity : EntityBase, new()
    {
        [OperationContract(Name = "Get")]
        TEntity Get(Int64 Id);

        [OperationContract(Name = "GetAll")]
        List<TEntity> GetAll();

        [OperationContract(Name = "GetAllPaged")]
        List<TEntity> GetAll(int pageSize, int currentPageIndex, ref int totalRecords);

        List<TEntity> GetAll(string whereClause, string orderBy, int pageSize, int currentPageIndex, ref int totalRecords);
    }

    Then create your service class:

    public class GenericService<TEntity> : IEntityReadService<TEntity>
        where TEntity : EntityBase, new()
    {
        #region Implementation of IEntityReadService<TEntity>

        public TEntity Get(long Id)
        {
            return BusinessController.Get(Id);
        }

        public List<TEntity> GetAll()
        {
            return BusinessController.GetAll().ToList();
        }

        public List<TEntity> GetAll(int pageSize, int currentPageIndex, ref int totalRecords)
        {
            return BusinessController.GetAll(pageSize, currentPageIndex, ref totalRecords).ToList();
        }

        public List<TEntity> GetAll(string whereClause, string orderBy, int pageSize, int currentPageIndex, ref int totalRecords)
        {
            return BusinessController.GetAll(pageSize, currentPageIndex, ref totalRecords, whereClause, orderBy).ToList();
        }

        #endregion
    }

    Then set your endpoint configuration this way:

    <endpoint address="myAddress" binding="basicHttpBinding" bindingConfiguration="myBindingConfiguration1"
              contract="Contracts.IEntityReadService`1[[Entities.mySampleEntity, Entities]], Service.Contracts" />
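    For completeness, a hedged usage sketch of hosting one closed form of the generic service; the entity type mirrors the mySampleEntity named in the endpoint above, while the base address is an assumption for illustration:

    using System;
    using System.ServiceModel;

    class Program
    {
        static void Main()
        {
            // Host the generic service closed over a concrete entity type.
            // The endpoint config above names the matching closed interface via
            // the CLR generic-type syntax (IEntityReadService`1[[...]]).
            using (var host = new ServiceHost(
                typeof(GenericService<mySampleEntity>),
                new Uri("http://localhost:8080/EntityRead"))) // assumed base address
            {
                host.Open();
                Console.WriteLine("Service running. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }

    Each closed generic, one per entity type, is hosted as its own service with its own endpoint; the generic contract itself is never exposed in open form.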

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >