Search Results

Search found 17222 results on 689 pages for 'integration services'.

Page 35/689 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • Limiting calls to WCF services from BizTalk

    - by IntegrationOverload
    ** WORK IN PROGRESS ** This is just a placeholder for the full article that is in progress.

    The problem: My BTS solution was receiving thousands of messages at once. After processing by BTS, I needed to send them on via one of several WCF services, depending on the message content. The problem is that due to the asynchronous nature of BizTalk, the WCF services were getting hammered and could not cope with the load. Note: it is possible to limit the SOAP calls in the BtsNtSvc.exe.config file, but that does not have the desired result for Net-TCP WCF services.

    The solution: I created a new MessageType for the messages in question and posted them to the BTS message box. This schema included the URL they were being sent to as a promoted property. I then subscribed to the message type from a new orchestration (that does just the WCF send), using the URL as a correlation ID. This created a singleton orchestration that was instantiated when the first message hit the message box. It then waits for further messages with the same correlation ID and type and processes them one at a time, using a loop shape with a timer (a pretty standard pattern for processing related messages). Image to go here. This limits the number of calls to each individual WCF service to 1. That is a good start, but the service can handle more than that, and I didn't want to create a bottleneck. So I then constructed the correlation ID from the URL concatenated with a random number between 1 and 10. This makes 10 possible correlation IDs per URL, and so 10 instances of the singleton orchestration per WCF service. Just what I needed, and the upper random number is a configuration value in SSO, so I can change the maximum connections without touching the code.
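
    A minimal sketch of how such a correlation key could be built (e.g., in a helper invoked from an expression shape); the class, method, and separator here are illustrative assumptions, not the author's actual code:

        using System;

        public static class ThrottleKeyBuilder
        {
            private static readonly Random Rng = new Random();

            // Correlation key = target URL + a random bucket in [1, maxConnections].
            // Each distinct key drives at most one singleton orchestration instance,
            // so maxConnections caps the concurrent calls per WCF endpoint.
            public static string Build(string targetUrl, int maxConnections)
            {
                int bucket = Rng.Next(1, maxConnections + 1); // upper bound is exclusive
                return targetUrl + "|" + bucket;
            }
        }

    Because maxConnections would be read from SSO at runtime (as the post describes), the ceiling can be changed in configuration without redeploying the orchestration.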

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I've loved MSBuild and didn't quite understand the reasons for this change. In fact, given that I'd been exclusively using CruiseControl for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I'm starting to become a believer. I'm also starting to see some of the benefits of Workflow Foundation for the overall process, because it gives you constructs not available in MSBuild, such as parallel tasks, better control flow constructs, and a slightly better customization story.

    The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I'd recommend reading Mike Fourie's well-known post on Versioning Code in TFS before you get started. That post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don't use source control operations for your version file, 2) use a scheme like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync.

    To do this in TFS 2010, the best post I've found has been Jim Lamb's post on building a custom TFS 2010 workflow activity. Overall, that post is excellent, but the primary issue I have with it is that the assembly version numbers produced are based on a date and look like this: "2010.5.15.1". This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the "1.1 release" or "1.2 release" – which would have an assembly version number of "1.1.317.0", for example. In this post, I'll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb's post to suit my needs. I'll also be combining this with the concepts in Fourie's post – particularly with regard to the standards around how to version the assemblies.

    The first thing I'll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this:

        using System;
        using System.Reflection;

        [assembly: AssemblyVersion("1.1.0.0")]
        [assembly: AssemblyFileVersion("1.1.0.0")]

    I'll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, choosing "Add – Existing Item…", and then, when I click the SolutionAssemblyVersionInfo.cs file, making sure I "Add As Link". Now the Solution Explorer will show our file. We can see that it's a "link" file because of the black arrow in the icon within all our projects. Of course, you'll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid duplicate attributes, since they now live in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique, so that all the projects in a solution can be versioned as a unit.

    At this point, we're ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be in easy control of the Major.Minor, and then I want the CI process to add the third number as a unique incremental number. We'll leave the fourth position always "0" for now – it's held in reserve in case the day ever comes when we need to do an emergency patch to Production based on a branched version.
    Writing the Custom Workflow Activity

    Similar to Lamb's post, I'm going to write two custom workflow activities. The "outer" activity (a XAML activity) is pretty straightforward. It will check whether the solution version file exists in the solution root and, if so, delegate the replacement of the version to the AssemblyVersionInfo activity, which is a CodeActivity highlighted in red below. Notice that the arguments of this activity are "solutionVersionFile" and "tfsBuildNumber", which will be passed in. The tfsBuildNumber passed in will look something like this: "CI_MyApplication.4", and we'll need to grab the "4" (i.e., the incremental revision number) and put that in the third position. Then we'll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had "1.1.0.0" for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want the resulting file to have "1.1.4.0".

    Before we do anything, let's put together a unit test for all this so we can know if we get it right:

        [TestMethod]
        public void Assembly_version_should_be_parsed_correctly_from_build_name()
        {
            // arrange
            const string versionFile = "SolutionAssemblyVersionInfo.cs";
            WriteTestVersionFile(versionFile);
            var activity = new VersionAssemblies();
            var arguments = new Dictionary<string, object> {
                { "tfsBuildNumber", "CI_MyApplication.4" },
                { "solutionVersionFile", versionFile }
            };

            // act
            var result = WorkflowInvoker.Invoke(activity, arguments);

            // assert
            Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]);
            var lines = File.ReadAllLines(versionFile);
            Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]"));
            Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]"));
        }

        private void WriteTestVersionFile(string versionFile)
        {
            var fileContents = "using System.Reflection;\n" +
                "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" +
                "[assembly: AssemblyFileVersion(\"1.2.0.0\")]";
            File.WriteAllText(versionFile, fileContents);
        }

    At this point, the code for our AssemblyVersionInfo activity is pretty straightforward:

        [BuildActivity(HostEnvironmentOption.Agent)]
        public class AssemblyVersionInfo : CodeActivity
        {
            [RequiredArgument]
            public InArgument<string> FileName { get; set; }

            [RequiredArgument]
            public InArgument<string> TfsBuildNumber { get; set; }

            public OutArgument<string> NewAssemblyFileVersion { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                var solutionVersionFile = this.FileName.Get(context);

                // Ensure that the file is writeable
                var fileAttributes = File.GetAttributes(solutionVersionFile);
                File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly);

                // Prepare assembly versions
                var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile);
                var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context));
                var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2);
                var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber);
                this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion);

                // Perform the actual replacement
                var contents = this.GetFileContents(newAssemblyVersion, newAssemblyFileVersion);
                File.WriteAllText(solutionVersionFile, contents);

                // Restore the file's original attributes
                File.SetAttributes(solutionVersionFile, fileAttributes);
            }

            #region Private Methods

            private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion)
            {
                var cs = new StringBuilder();
                cs.AppendLine("using System.Reflection;");
                cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion);
                cs.AppendLine();
                cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion);
                return cs.ToString();
            }

            private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath)
            {
                var lines = File.ReadAllLines(filePath);
                var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault();

                if (versionLine == null)
                {
                    throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute");
                }

                return ExtractMajorMinor(versionLine);
            }

            private static Tuple<string, string> ExtractMajorMinor(string versionLine)
            {
                var firstQuote = versionLine.IndexOf('"') + 1;
                var secondQuote = versionLine.IndexOf('"', firstQuote);
                var version = versionLine.Substring(firstQuote, secondQuote - firstQuote);
                var versionParts = version.Split('.');
                return new Tuple<string, string>(versionParts[0], versionParts[1]);
            }

            private string GetNewBuildNumber(string buildName)
            {
                return buildName.Substring(buildName.LastIndexOf(".") + 1);
            }

            #endregion
        }

    At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTemplate.xaml – we'll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happen, drag the VersionAssemblies activity in. Then set the LabelName variable to BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion, since the newAssemblyFileVersion was produced by our activity.

    Configuring CI

    Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we'll change the Process tab to reflect a different build number format and choose our custom build process file. When the build completes, we'll see the name of our project with the unique revision number. If we look at the detailed build log for the latest build, we'll see the label being created with our custom task. We can now look at the history of labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow). Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties.

    Full Traceability

    We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years in CI – in fact, I've done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern.
    The new features in TFS 2010 make these types of customizations to your build process quite easy once you get over the initial learning curve.

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this: I will pass in the userid, and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test that illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

    First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I could also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I name the individual BPEL process simpleBPEL as well (you can use a different name, but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process, as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact, I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as the default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas, in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas, the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used the name RetrieveUser. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the "Find existing WSDLs" button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you intend to work on the BPEL process offline. If you do not check this, your target application must be accessible whenever you work on the BPEL process (and that is not always convenient). Note: the perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that for these demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.
    To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported, not connected to the BPEL process yet), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway, and to name them appropriately, so that you can trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create variables to hold the input and output for the call. To do this, click Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet, though. You need to tell the BPEL process how to pass data from the input to the invoke step, and how to take the output from the service call and pass it back to the caller. You now need to add an Assign node to assign the input to the Web Service. To do this, select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double-clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable (inputVariable/payload/process/input string) and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target).

    Almost there. You now need to pass the output from the Web Service call back to the outputVariable of the client call. I have decided to pass back one piece of data: the name associated with the user, built by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action – it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process.
    In this case we want to transform the output from the Web Service call, so we want it after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle extensions to the BPEL specification. Double-clicking the Transform node will allow you to name the node. In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In this example the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the Assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I then attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space. To make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project, as my BPEL process is now complete. As you can see, the following happens:

    - We accept input from the client (the userid for the call) in the receiveInput step.
    - We assign that value to the input parameters for the Web Service call in the AssignUser step.
    - We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
    - We take the output from the InvokeUser step and concatenate the names in the TransformName step.
    - We pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect, as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see, this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I have just shown you the basic principles.

    Read the article

  • xVelocity engines compared: VertiPaq vs ColumnStore #ssas #vertipaq #xvelocity #sql #tabular

    - by Marco Russo (SQLBI)
    During the last few months Alberto and I worked on several projects using Analysis Services Tabular, and we had to face real-world issues such as complex queries, large data volumes, frequent data updates and so on. Sometimes we faced the challenge of comparing Tabular performance with SQL Server. It seemed to make no sense, because even though the same core xVelocity technology is implemented in both products (SQL Server 2012 uses ColumnStore indexes, whereas Analysis Services 2012 uses VertiPaq), we initially assumed that the better optimization of the in-memory engine used by Analysis Services would always beat SQL Server. However, we discovered several important things:

    - Processing time might be different, and having data on SQL Server could make ColumnStore way faster for processing.
    - Partitioning in SQL Server might be much more effective for query performance than in Analysis Services.
    - A single query can scale easily to more processors on SQL Server, whereas in Analysis Services the formula engine is single-threaded and could be a bottleneck for certain queries.
    - In the case of a large workload with many concurrent users, the storage engine cache in Analysis Services could be a big advantage over SQL Server, especially for scalability.

    As you can see, these considerations are not always obvious, and you might be tempted to make other assumptions based on this information. Well, don't do that. Before anything else, read the whitepaper VertiPaq vs ColumnStore Comparison written by Alberto Ferrari. Then, measure your workload. Finally, draw some conclusions. But don't make too many assumptions. You might be wrong, as we were at the beginning of this journey.

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job (I'm fresh out of my Masters in software engineering). The company mainly consists of ERP consultants, but I was hired in their fairly small web department (6 developers); our main task is ERP/ecom integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team). Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which – I suppose – is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line on the budget, and circulating tasks amongst members of the team. But as I see it we have three main challenges:

    - general website testing (JavaScript, C#, ASP.NET and CMS integration tests)
    - (live) ERP integration testing (customers rarely want to pay for test environments)
    - adopting a method in the team

    I like the responsibility, but I am afraid that I'm in a little bit over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me to get smarter and to improve the current method of my team. Thanks!! (TL;DR: read the bold parts)
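
    As an illustration of the kind of small automated check that could replace the click-through testing described above, here is a minimal smoke-test sketch (NUnit assumed; the URL and product name are placeholders, not anything from the question):

        using System.Net;
        using NUnit.Framework;

        [TestFixture]
        public class WebShopSmokeTests
        {
            [Test]
            public void Product_page_loads_and_shows_the_product()
            {
                using (var client = new WebClient())
                {
                    // Hypothetical test-environment URL
                    string html = client.DownloadString("http://test.exampleshop.local/product/42");

                    // A coarse but automated version of "is the product there?"
                    StringAssert.Contains("Example Product Name", html);
                }
            }
        }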

    Read the article

  • Silverlight 4 WCF RIA Services and MVVM is not as simple

    - by Thomas Jaskula
    [Disclaimer: I'm an ASP.NET MVC developer] Hi, I'm looking for some best practices for implementing the MVVM pattern with WCF RIA Services in Silverlight 4. I'm not looking to use MEF or an IoC container for locating my ViewModels. What I would like to know is how to apply the MVVM pattern with Silverlight 4 and WCF RIA. I don't want to use other frameworks like Prism or the MVVM Light Toolkit. I found many examples on the Internet showing how wonderful it is to drag and drop a data source onto the view and the job is done (it reminds me of my first VB6 developments). I tried to implement MVVM with WCF RIA and it's not straightforward at all. If I understand correctly, the ViewModel should contain all the logic so that it can be unit tested in isolation, but when it comes to combining it with WCF RIA it's another story. I have the following questions:

    - Can I use the generated metadata as my model? It would be easier than writing it all from scratch.
    - From what I saw, the only way I could get data is through the DomainContext or through direct binding in the view (local resource). I don't want the direct binding in the view; it's not testable at all. On the other hand, I can't use the DomainContext; it doesn't expose any single entity!!! All I have is the EntitySet, which I can bind to a datagrid.
    - How do I bind a single entity to the DataForm from the ViewModel?
    - How do I update the model to the database?
    - How do I navigate from one entity to a collection of its items? For example, if I have a Company entity, I would like to show a DataForm to update the entity's information and a datagrid to show the company's addresses. When saving the form, I would like to save the information to Company and, for example, information about which address was selected as active.

    Please help me understand how to do this well. Or maybe I should drop WCF RIA and do it with WCF from scratch? What do you think?
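
    For the single-entity part of the question, one commonly shown pattern is to load through the DomainContext in the ViewModel and surface the first entity from the load callback as a bindable property; a minimal sketch, where the context type, query name, and property are assumptions standing in for the generated RIA Services types:

        // Inside the ViewModel; CompanyDomainContext and GetCompanyByIdQuery
        // are hypothetical names for the generated context and query.
        var context = new CompanyDomainContext();
        context.Load(context.GetCompanyByIdQuery(companyId), loadOperation =>
        {
            if (!loadOperation.HasError)
            {
                // Raise INotifyPropertyChanged from this property's setter so a
                // DataForm bound to CurrentCompany picks up the loaded entity.
                this.CurrentCompany = loadOperation.Entities.FirstOrDefault();
            }
        }, null);

    This keeps the view bound only to the ViewModel, while the ViewModel owns the DomainContext and can be exercised in tests.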

    Read the article

  • Reporting Services 2010 RDLC: Passing Querystring Parameters from an RDLC

    - by Brian MacKay
    I'm trying to build a simple RDLC report that shows some data and has a 'select' link that sends the browser off to a certain URL with some data in the querystring (a key). In the VS2010 report designer, I can double-click on the column, then select Action, and there are a bunch of things that seem like they might work. But none of them do. Under 'Enable as a hyperlink' I can pick 'Go to URL', but there aren't any parameter options to pass. I also tried 'Go to report' on the off chance that I could trick it into doing what I want. Here there are parameter options, but it knows that my URL is not a report, and the "select" link renders as text (not clickable). Any ideas? I'm pretty sure this used to work in VS2008, and it seems like something that must be doable. But I've been pulling out my hair for several hours on this one.

    Read the article

  • Problem finding office DCOM in Component Services in Windows 7

    - by Tomas I
    I have a problem getting Word and Excel to work in ASP.NET. I get the error message:

        System.UnauthorizedAccessException: Retrieving the COM class factory for component with CLSID {000209FF-0000-0000-C000-000000000046} failed due to the following error: 80070005.
        at xxx.Utility.WordDocument..ctor(String filePath, HttpServerUtility util)
        at customer_communication.BuCreate_click(Object sender, EventArgs e) in

    This means I have an access problem with the DCOM components. In Vista this isn't a problem; all I have to do there is run "dcomcnfg" and find the Microsoft Excel DCOM entry. In Windows 7 I can't find it, and I have no idea what to do now... If anyone could help me, that would be great!

    Read the article

  • How to eager load in WCF Ria Services/Linq2SQLDomainModel

    - by Aggelos Mpimpoudis
    I have a data-bound grid in my view (XAML) and the ItemsSource points to a ReportsCollection. The Reports entity has three primitives and some complex types. The three primitives are shown as expected in the datagrid. Additionally, the Reports entity has a property of type Store. When loading Reports via the GetReports domain method, I quickly figured out that only primitives are returned, not the whole graph of some depth. So, as I wanted to load the Store property too, I made this alteration to my domain service:

        public IQueryable<Report> GetReports()
        {
            return this.ObjectContext.Reports.Include("Store");
        }

    From what I see in the immediate window, Store is loaded as expected, but when returned to the client it is still pruned. How can this be fixed? Thank you!
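
    For what it's worth, WCF RIA Services generally also requires the association to be marked with [Include] in the server-side metadata before it will serialize the graph to the client, in addition to the .Include("Store") in the query. A sketch of what that could look like (the buddy-class layout and names are assumptions based on the entities in the question):

        using System.ComponentModel.DataAnnotations;
        using System.ServiceModel.DomainServices.Server;

        // Buddy/metadata class for the Report entity; without [Include] on the
        // association, RIA Services prunes Store from the serialized payload
        // even when the server-side query eagerly loaded it.
        [MetadataType(typeof(Report.ReportMetadata))]
        public partial class Report
        {
            internal sealed class ReportMetadata
            {
                private ReportMetadata() { }

                [Include]
                public Store Store { get; set; }
            }
        }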

    Read the article

  • RIA Services Filter descriptor

    - by Mohit
    I have a FilterDescriptor as shown below. The PropertyPath is of type 'char?'. I get the following InvalidOperationException when I filter by entering the value Y:

        System.InvalidOperationException: A FilterDescriptor with its PropertyPath equal to 'Valid' cannot be evaluated.
        ---> System.ArgumentException: Operator 'StartsWith' incompatible with operand types 'Char?' and 'Char?'
        ---> System.ArgumentNullException: Value cannot be null.
        Parameter name: method
        at System.Linq.Expressions.Expression.ValidateCallArgs(Expression instance, MethodInfo method, ReadOnlyCollection`1& arguments)
        at System.Linq.Expressions.Expression.Call(Expression instance, MethodInfo method, IEnumerable`1 arguments)
        at System.Linq.Expressions.Expression.Call(Expression instance, MethodInfo method, Expression[] arguments)
        at System.Windows.Controls.LinqHelper.GenerateMethodCall(String methodName, Expression left, Expression right)
        at System.Windows.Controls.LinqHelper.GenerateStartsWith(Expression left, Expression right)
        at System.Windows.Controls.LinqHelper.BuildFilterExpression(Expression propertyExpression, FilterOperator filterOperator, Expression valueExpression, Boolean isCaseSensitive, Expression& filterExpression)
        --- End of inner exception stack trace ---
        --- End of inner exception stack trace ---
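
    The stack trace points at the operator rather than the value: StartsWith is a string operator and has no meaning for Char?. A sketch of a possible workaround, assuming the filter can be switched to an equality comparison and that the three-argument FilterDescriptor constructor is available (the DomainDataSource name is a placeholder):

        // StartsWith/Contains/EndsWith apply to string properties; for a Char?
        // property an equality test avoids the incompatible-operand error.
        var filter = new FilterDescriptor("Valid", FilterOperator.IsEqualTo, 'Y');
        employeeDomainDataSource.FilterDescriptors.Add(filter);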

    Read the article

  • Using RIA Services FilterDescriptor from code behind

    - by Fermin
    Hi, I was wondering if it's possible to use the FilterDescriptor control from code-behind? On the page load of my form I set the data source of a grid in the code-behind, not using a DomainDataSource control, like:

        TestDomainContext context = new TestDomainContext();
        dataGridEmployees.ItemsSource = context.EmployeePositions;
        context.Load(context.GetEmployeesWithPositionQuery());

    I have a textbox on my page that the user can type into to filter on employee position. Is it now possible to add a FilterDescriptor to the source of the DataGrid in code-behind? Or would I need to manually filter the results of context.GetEmployeesWithPositionQuery, for example on the KeyUp event of the filter TextBox?
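
    Failing a code-behind FilterDescriptor (which is designed around DomainDataSource), the manual route mentioned at the end could look roughly like this sketch; "PositionName" is a hypothetical property, and the control names come from the snippet above:

        // Re-filter the already-loaded entities on each keystroke.
        private void filterTextBox_KeyUp(object sender, KeyEventArgs e)
        {
            string term = filterTextBox.Text ?? string.Empty;
            dataGridEmployees.ItemsSource = context.EmployeePositions
                .Where(p => p.PositionName != null
                         && p.PositionName.Contains(term)) // hypothetical property
                .ToList();
        }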

    Read the article

  • Reporting Services: Two Tables One Sum

    - by Neomoon
    My report is as follows: one table provides financial information with sums at the group footer (the grouping is called "StockTable_Shipped"). The group is controlled by a boolean value (1 = shows shipped data, 0 = shows received data). The second table is a variance report for data that has been shipped (boolean value of 1) and has a sum at the bottom of the table. My ultimate goal is to take the sum from table1 where shipped = 1 and subtract the variance sum from table2 from it. The result will be placed in a textbox at the bottom of the report. I understand if this sounds confusing, but I would be more than happy to provide more information.

    Read the article

  • Hitting a ADO.NET Data Services from WPF client, forms authentication

    - by Soulhuntre
    Hey all! There are a number of questions on StackOverflow that ALMOST hit this topic head on, but they are either for other technologies, reference obsolete information, or don't supply an answer that I can suss out. So pardon the almost-duplication :) I have a working ADO.NET Data Service and a WPF client that hits it. Now that they are working fine, I want to add authentication/security to the system. My understanding of the steps so far is...

    1. Turn on forms authentication and configure it on the server (I have an existing ASP.NET membership service DB for other aspects of this app, so that isn't a problem) so that it is required for the service URL
    2. In the WPF client, apply for and receive a forms authentication "ticket" as part of a login routine
    3. Add that "ticket" to the headers of the ADO.NET service calls in WPF
    4. Profit!

    All well and good – but does anyone have a line on a soup-to-nuts code sample, using the modern releases of these technologies? Thanks!
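
    For step 3, one approach that works against ADO.NET Data Services is to capture the forms-auth cookie from the login response and attach it to every service call via the client context's SendingRequest event; a sketch, where the context type and URI are placeholders:

        // cookieJar holds the .ASPXAUTH cookie captured from the login response.
        var cookieJar = new System.Net.CookieContainer();

        var context = new MyDataServiceContext(new Uri("http://example.com/Data.svc")); // hypothetical
        context.SendingRequest += (sender, e) =>
        {
            // Route every request through the shared cookie container so the
            // forms-authentication ticket accompanies each call.
            ((System.Net.HttpWebRequest)e.Request).CookieContainer = cookieJar;
        };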

    Read the article

  • Exchange Web Services, try to use ExchangeImpersonationType ...

    - by howmanytimes
    I am trying to use EWS; this is my first time trying to use the ExchangeServiceBinding. The code I am using is below:

        _service = new ExchangeServiceBinding();
        //_service.Credentials = new NetworkCredential(userName, userPassword, this.Domain);
        _service.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
        _service.Url = this.ServiceURL;

        ExchangeImpersonationType ei = new ExchangeImpersonationType();
        ConnectingSIDType sid = new ConnectingSIDType();
        sid.PrimarySmtpAddress = this.ExchangeAccount;
        ei.ConnectingSID = sid;
        _service.ExchangeImpersonation = ei;

    The application is an ASP.NET 3.5 app trying to create a task using EWS. I have tried to use impersonation because I will not know the logged-on user's domain password, so I thought impersonation would be the best fit. Any thoughts on how I can utilize impersonation? Am I setting this up correctly? I get an error while trying to run my application. I also tried without impersonation, just to see if I can create a task – no luck there either. Any help would be appreciated. Thanks.

    Read the article

  • Unable to list owned images and running instances from Amazon Web Services using Zend Framework

    - by Marcel Tjandraatmadja
    I am using Zend Framework's library to manage EC2 instances and AMIs. However, I can't list the AMIs I own and can't list existing EC2 instances.

        $ec2Instance = new Zend_Service_Amazon_Ec2_Instance($awsAccessKey, $awsSecretKey);
        $instances = $ec2Instance->describe();

    $ec2Instance->describe() should list all instances, but it is returning no instances even though I have three of them running at this time.

        $ami = new Zend_Service_Amazon_Ec2_Image($awsAccessKey, $awsSecretKey);
        $images = $ami->describe();

    $ami->describe() returns all the public images, but none of them are the ones I created, even though I have two AMIs. Does anyone know what I am missing here?

    Read the article

  • silverlight master-detail with two listboxes in pure xaml with ria services throwing exception

    - by Sam
    Hi, I was trying to achieve master-detail with 2 ListBoxes, 2 DomainDataSources and an IValueConverter. When entering the page it throws the random error it does when your XAML is invalid: "AG_E_PARSER_BAD_PROPERTY_VALUE [Line: 24 Position: 61]", which is in fact the start position of where I am binding the ListBox's selected item, with converter, to the parameter value of my DomainDataSource. I would love to achieve this in pure XAML; I did it in code-behind and that works, but I don't like it :p When the parameter is a hard-coded integer 1, it works, so I assume it's the value binding. My code is below; thanks in advance for at least looking :) (taking into account that all the xmlns's & usings are correct)

    Xaml:

        <Grid x:Name="LayoutRoot">
            <Grid.Resources>
                <helpers:ListItemtoIdListValueConverter x:Key="mListConverter" />
            </Grid.Resources>
            <riacontrols:DomainDataSource x:Name="GetLists" DomainContext="{StaticResource DbContext}" LoadSize="20" QueryName="GetLists" AutoLoad="True" />
            <riacontrols:DomainDataSource x:Name="GetListItems" DomainContext="{StaticResource DbContext}" LoadSize="20" QueryName="GetListItemsById" AutoLoad="True">
                <riacontrols:DomainDataSource.QueryParameters>
                    <riadata:Parameter ParameterName="id" Value="{Binding ElementName=ListBoxLists, Path=SelectedItem, Converter={StaticResource mListConverter}}" />
                </riacontrols:DomainDataSource.QueryParameters>
            </riacontrols:DomainDataSource>
            <activity:Activity IsActive="{Binding IsBusy, ElementName=ListBoxListItems}">
                <StackPanel Orientation="Horizontal">
                    <ListBox x:Name="ListBoxLists" ItemsSource="{Binding Data, ElementName=GetLists, Mode=OneWay}" Width="150" Margin="0,0,10,10" />
                    <ListBox x:Name="ListBoxListItems" ItemsSource="{Binding Data, ElementName=GetListItems, Mode=OneWay}" Width="150" Margin="0,0,10,10" />
                </StackPanel>
            </activity:Activity>
        </Grid>

    IValueConverter:

        public class ListItemtoIdListValueConverter : IValueConverter
        {
            #region IValueConverter Members

            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                list mList = (list)value;
                if (mList != null)
                    return mList.id;
                else
                    return null;
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new NotImplementedException();
            }

            #endregion
        }

    Read the article

  • Amazon Web services - retrieving a wishlist

    - by izb
    I've been tinkering with Yahoo Pipes and the Amazon E-Commerce Service (ECS) SDK to retrieve my wishlist. The problem is that although I can get all the items on my wishlist just fine, it seems to include items that I've deleted too. Has anyone else used this API and noticed this? Is there a way around it?

    UPDATE: Requested additional information in comments... Here is the URL I use to fetch the wishlist XML:

        http://webservices.amazon.co.uk/onca/xml?SubscriptionId=[my subs id]&Service=AWSECommerceService&ResponseGroup=ListItems&ProductPage=1&ProductGroup=Book&Operation=ListLookup&ListType=WishList&ListId=[my list id]

    And here is the relevant part of the XML response:

        <ListId>[my list id]</ListId>
        <ListName>Wishlist</ListName>
        <TotalItems>132</TotalItems>
        <TotalPages>14</TotalPages>
        <ListItem>
            <ListItemId>EPIE5559HKT391</ListItemId>
            <DateAdded>2003-11-17</DateAdded>
            <QuantityDesired>1</QuantityDesired>
            <QuantityReceived>0</QuantityReceived>
            <Item>
                <ASIN>5557205521</ASIN>
                <ItemAttributes>
                    <Title>Horton hears a who</Title>
                </ItemAttributes>
            </Item>
        </ListItem>
        ...

    The rest of the XML is just either more list items like that, or information about the request at the top of the response.

    Read the article

  • reporting services: use a custom assembly with a local (RDLC) report

    - by JMarsch
    Hello: I am designing a report that will be used in local mode (an RDLC file) in a WinForms app. I have a custom assembly with a static class that has some functions that I want to use inside the report (as expressions). I have found all sorts of help for doing this with RDL reports, but I'm running into a permissions problem with my RDLC report. I get the following error at runtime: "The report references the code module (my module), which is not a trusted assembly". I know that this is some kind of code security issue, but I'm not sure what to do to fix it. The documentation that I have seen online is aimed at RDL reports, and it instructs me to edit a SQL Server-specific policy file. I'm using RDLC, so there is no SQL Server involved. What do I need to do to acquire the appropriate permissions?
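
    For local reports the trust settings live on LocalReport rather than in SQL Server policy files. A sketch of the approach documented for the 2005/2008-era ReportViewer (these members were later deprecated in favor of the sandbox APIs; the viewer and assembly names are placeholders):

        // Run the RDLC in the current AppDomain and register the custom assembly
        // as trusted so its static functions can be called from expressions.
        reportViewer.LocalReport.ExecuteReportInCurrentAppDomain(
            System.AppDomain.CurrentDomain.Evidence);
        reportViewer.LocalReport.AddTrustedCodeModuleInCurrentAppDomain(
            "MyReportUtilities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"); // hypothetical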

    Read the article

  • MDX: Problem filtering results in MDX query used in Reporting Services query

    - by wgpubs
    Why aren't my results being filtered by the members from my [Group Hierarchy] returned via the filter() statement below?

        SELECT
            NON EMPTY { [Measures].[Group Count], [Measures].[Overall Group Count] } ON COLUMNS,
            NON EMPTY { [Survey].[Surveys By Year].[Survey Year].ALLMEMBERS
                        * [Response Status].[Response Status].[Response Status].ALLMEMBERS }
                DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
        FROM (
            SELECT ( { [Survey Type].[Survey Type Hierarchy].&[9] } ) ON COLUMNS
            FROM (
                SELECT ( { [Response Status].[Response Status].[All] } ) ON COLUMNS
                FROM (
                    SELECT ( STRTOSET(@SurveySurveysByYear, CONSTRAINED) ) ON COLUMNS
                    FROM (
                        SELECT ( filter([Group].[Group Hierarchy].members,
                                        instr(@GroupGroupFullName,
                                              [Group].[Group Hierarchy].Properties("Group Full Name"))) ) ON COLUMNS
                        FROM [SysSurveyDW] ))))
        CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS

    Read the article

  • Java Web Service - Faulty Services - ClassNotFound Exception

    - by Epitaph
    My project has two Java files (A.java and B.java, in the same package). A.java uses methods in B.java, and an external jar has been added to the project build path. In order to create a web service (bottom-up) from the class, I created a new Dynamic Web Project in Eclipse with Axis2 as the runtime platform, and imported the A.java and B.java source files. Next, since all my methods that need to be exposed are contained in A.java, I right-clicked on it and created the web service using the standard settings. When I deploy the web service on my Apache server, I get a "Faulty Service" and a few ClassNotFoundExceptions for some of the classes in my external jar file (I have already imported it as an external jar). Does the external jar need to be imported in another way?

    Read the article

  • Update a single field from a single entity with ria-services

    - by TimothyP
    There are situations where I only want to update a specific field of a single entity in the database. I loaded the entities of that type into my Silverlight application, and I know they are constantly changing on the server... but there is one field which has to be set by the Silverlight client... the server will only read it. How can I send just the new data for that field to the server? For example, an entity called "TextField": I have a list of TextFields loaded in the Silverlight application, and every now and then the user will update the Preload (string) property of an entity, and that has to go back to the server without changing anything else on the server. I tried adding a simple SetPreloadText(...) method to the DomainService, but that just makes Silverlight crash with some odd error code. Is there a way to do this? Am I working against the idea of Silverlight here? I really don't want to send the entire object back, because I know that at any given time the version on the client will most likely be out of date (which is OK for this specific application).
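
    One RIA Services feature that may fit here is a named update ("custom") method on the domain service: the entity still travels to the server, but the server-side code touches only the one field. A sketch, where the TextFields set, Id key, and EF-style ObjectContext are assumptions based on the question:

        // Server side (domain service). The framework calls SaveChanges for us
        // when the change set is persisted.
        [Update(UsingCustomMethod = true)]
        public void SetPreloadText(TextField textField, string preloadText)
        {
            // Re-read the current row so server-owned fields are not overwritten
            // by the possibly stale client copy; only Preload is changed.
            var current = this.ObjectContext.TextFields
                .First(t => t.Id == textField.Id); // hypothetical key property
            current.Preload = preloadText;
        }

    On the client, the generated entity then exposes a SetPreloadText(...) method; calling it and then DomainContext.SubmitChanges() sends this operation without clobbering the rest of the record.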

    Read the article

  • Checkbox on Sql Server Reporting Services Report

    - by George Handlin
    I'm working on a report in SSRS 2005 that is a questionnaire with yes/no answers, and I'm trying to get a checkbox onto the report. I have tried using Wingdings for the font and an IIf statement to set the character, but that doesn't come out correctly when exporting to PDF. I'm using local reports, not reports from a report server.

    Read the article

  • RIA Services Repository Save does not work!?

    - by Savvas Sopiadis
    Hello everybody! Doing my first SL4 MVVM RIA-based application, I ran into the following situation: updating a record (EF4, no POCOs!) in the SL client seems to take place, but the values in the DBMS are unchanged. Debugging with Fiddler, the message on save is (amongst others):

        EntityActions.nil? b9http://schemas.microsoft.com/2003/10/Serialization/Arrays^HasMemberChanges?^Id?^ Operation?Update

    I assume that this says only: hey! the DBMS should do an update on this record, AND nothing more! Is that right?! I'm using a generic repository like this:

        public class Repository<T> : IRepository<T> where T : class
        {
            IObjectSet<T> _objectSet;
            IObjectContext _objectContext;

            public Repository(IObjectContext objectContext)
            {
                this._objectContext = objectContext;
                _objectSet = objectContext.CreateObjectSet<T>();
            }

            public IQueryable<T> AsQueryable()
            {
                return _objectSet;
            }

            public IEnumerable<T> GetAll()
            {
                return _objectSet.ToList();
            }

            public IEnumerable<T> Find(Expression<Func<T, bool>> where)
            {
                return _objectSet.Where(where);
            }

            public T Single(Expression<Func<T, bool>> where)
            {
                return _objectSet.Single(where);
            }

            public T First(Expression<Func<T, bool>> where)
            {
                return _objectSet.First(where);
            }

            public void Delete(T entity)
            {
                _objectSet.DeleteObject(entity);
            }

            public void Add(T entity)
            {
                _objectSet.AddObject(entity);
            }

            public void Attach(T entity)
            {
                _objectSet.Attach(entity);
            }

            public void Save()
            {
                _objectContext.SaveChanges();
            }
        }

    The DomainService Update method is the following:

        [Update]
        public void UpdateCulture(Culture currentCulture)
        {
            if (currentCulture.EntityState == System.Data.EntityState.Detached)
            {
                this.cultureRepository.Attach(currentCulture);
            }
            this.cultureRepository.Save();
        }

    I know that the currentCulture entity is detached. What confuses me (amongst other things) is this: is the _objectContext still alive? (Which would mean it is aware of the changes made to the record, so simply calling Attach() and then Save() should be enough!?) What am I missing? Development environment: VS2010 RC – Entity Framework 4 (no POCOs). Thanks in advance.
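
    One detail worth checking (a sketch, not a confirmed diagnosis): in EF4, Attach() brings an entity in as Unchanged, so the subsequent SaveChanges() emits no UPDATE unless the entity is explicitly marked Modified. Assuming the IObjectContext wrapper exposes the underlying ObjectStateManager, the repository could offer something like:

        // Requires: using System.Data;  (for EntityState)
        public void Update(T entity)
        {
            _objectSet.Attach(entity); // attached in the Unchanged state
            _objectContext.ObjectStateManager
                .ChangeObjectState(entity, EntityState.Modified); // now SaveChanges() issues an UPDATE
        }

    The RIA Services-generated domain services achieve the same effect with AttachAsModified, which also replays the original values from the change set.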

    Read the article
