Search Results

Search found 74550 results on 2982 pages for 'wcf data service'.

Page 31/2982 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • What data structure to use / data persistence

    - by Dave
    I have an app where I need one table of information with the following fields:

        field 1 - int or char
        field 2 - string (max 10 char)
        field 3 - string (max 20 char)
        field 4 - float

    I need the program to filter on field 1 based upon a segmented control, and to select a field 2 value from a picker. From that data I need to look up field 4 to use in a calculation. Total records will be about 200; I never see it going above 400-500. I am going to use a singleton, which I am able to do; I just need help with the structure and its data persistence. What type of data structure should I use for this, and should I use NSNumber, NSString, etc., or old data types like float, char, etc.? I thought about a struct put into an array, but there is probably a better way. This is new to me, so any help or reference to examples would be great. I also thought about a plist or dictionary, but it looks like that is just a lookup key and a single field, which obviously won't work. Core Data looked like overkill to me. Also, with any recommendation, how should I get the initial data into it? I want the user to be able to edit and add to the database. Sorry for the old terms, you can see what generation I am from... Thanks in advance!!!!

    Read the article

  • Role of Microsoft certifications ADO.Net, ASP.Net, WPF, WCF and Career?

    - by Steve Johnson
    I am a Microsoft fan and .Net enthusiast. I want to align my career with current and future .Net technologies. I have an MCTS in ASP.Net 3.5. The question is about the continuation of certifications, my career growth, and maybe a different job! I want to keep pace with future Microsoft .Net technologies; my current job, however, doesn't allow for that, so I plan to do .Net-based certifications to stay abreast of the latest .Net technologies. My questions:

        1. What certifications should I pursue next? I have MCTS .Net 3.5 WPF (Exam 70-502) and MCTS .Net 3.5 WCF (Exam 70-504) in mind, so that I can go into Silverlight development and seek jobs related to it.
        2. What other steps do I need to take to develop professional expertise in technologies such as WPF, WCF and Silverlight when my current employer is reluctant to shift to the latest .Net technologies?

    I am sure there are a lot of people around here who work with .Net technologies and have industry experience. Being a newcomer at the start of my career, I need to make the right decision, so I am seeking help from this community in guiding me to the right path. Expert replies are much appreciated, and thanks in advance. Best Regards, Steve.

    Read the article

  • How to access WCF RIA service from Windows Service?

    - by Duncan Bayne
    I have a functioning SL4 application; inside the ClientBin directory I have an .svc file that describes my service:

        <%@ ServiceHost Service="MyApp.Services.MyService"
            Factory="System.ServiceModel.DomainServices.Hosting.DomainServiceHostFactory" %>

    When I browse to http://localhost:52878/ClientBin/MyApp-Services-MyService.svc I see the following: "You have created a service. To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax: svcutil.exe http://localhost:52878/ClientBin/MyApp-Services-MyService.svc?wsdl". I want to access that service from a Windows Service application. My understanding is that I need to enable SOAP endpoints in order to make this happen. So, I add the following to my web.config file:

        <domainServices>
          <endpoints>
            <add name="soap"
                 type="System.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory, System.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </endpoints>
        </domainServices>

    Firstly, IntelliSense complains about the presence of the tag, saying "The element system.ServiceModel has invalid child element domainServices." Secondly, the aforementioned Silverlight application stops working, presumably because this change breaks the underlying web services. Thirdly, it appears that the System.ServiceModel.DomainServices.Hosting assembly doesn't actually contain the SoapXmlEndpointFactory type; if I try to browse to the service after adding the above to web.config I see: "Could not load type 'System.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory' from assembly 'System.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'." If I inspect the assembly using Reflector, I see that it contains the DomainServiceEndpointFactory and PoxBinaryEndpointFactory types, but no SoapXmlEndpointFactory. Could someone please let me know what I'm doing wrong? I can't believe it should be this hard to simply consume a WCF RIA service in something other than a Silverlight application! Yours, Duncan Bayne
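    One hedged sketch of a fix, following exactly what the Reflector inspection suggests: the SOAP endpoint factory historically shipped not in the core System.ServiceModel.DomainServices.Hosting assembly but in the separate Microsoft.ServiceModel.DomainServices.Hosting assembly installed by the WCF RIA Services Toolkit. Under that assumption, the registration would reference the toolkit assembly instead:

        <!-- Sketch only: assumes the WCF RIA Services Toolkit is installed,
             which supplies Microsoft.ServiceModel.DomainServices.Hosting.dll. -->
        <domainServices>
          <endpoints>
            <add name="soap"
                 type="Microsoft.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory, Microsoft.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </endpoints>
        </domainServices>

    The IntelliSense warning about the domainServices element is expected either way, since the element belongs to a custom configuration section that the default web.config schema files do not know about.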

    Read the article

  • How would you implement API key in WCF Data Service?

    - by rushonerok
    Is there a way to require an API key in the URL, or some other way of passing the service a private key, in order to grant access to the data? I have this right now:

        using System;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel.Web;
        using Numina.Framework;
        using System.Web;
        using System.Configuration;

        [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
        public class odata : DataService<Entities> // the generic type argument was eaten by the original post's HTML; "Entities" is a placeholder
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                //config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }

            protected override void OnStartProcessingRequest(ProcessRequestArgs args)
            {
                HttpRequest Request = HttpContext.Current.Request;
                if (Request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
                    throw new DataServiceException("ApiKey needed");
                base.OnStartProcessingRequest(args);
            }
        }

    This works, but it's not perfect, because you cannot get at the metadata and discover the service through the Add Service Reference explorer. I could check whether $metadata is in the URL, but it seems like a hack. Is there a better way?
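    For what it's worth, the $metadata check the poster dismisses as a hack can at least be written narrowly. This is a minimal sketch under the post's own setup (ASP.NET-hosted, key in appSettings); the exemption test is an assumption, not an established pattern:

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            HttpRequest request = HttpContext.Current.Request;

            // Let metadata requests through so "Add Service Reference" can
            // discover the service; everything else still needs the key.
            bool isMetadataRequest =
                request.Url.AbsolutePath.EndsWith("/$metadata", StringComparison.OrdinalIgnoreCase);

            if (!isMetadataRequest &&
                request["apikey"] != ConfigurationManager.AppSettings["ApiKey"])
            {
                throw new DataServiceException(401, "ApiKey needed");
            }

            base.OnStartProcessingRequest(args);
        }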

    Read the article

  • Long running stateful service in .NET

    - by Asaf R
    Hi, I need to create a service in .NET that maintains (inner) state in-memory, spawns multiple threads and is generally long-running. There are a lot of options:

        - good old Windows Services
        - Windows Communication Foundation
        - Windows Workflow Foundation

    I really don't know which to choose. Most of the functionality is in a library used by this service, so the service itself is rather simple. On one hand, it's important that the service host is as close to "simply working" as possible, which excludes Windows Services. On the other hand, it's important that the service is not taken down by the host just because there's no external activity, which makes WCF kind of "scary". As for WF, its strongest selling point is the ability to create processes as, um..., workflows, which is something I don't need nor want. To sum it up, the plethora of Microsoft technologies has got me a bit confused. I'd appreciate help regarding the pros and cons of each solution (or others I've failed to mention) for the problem of a stateful, long-running service in .NET. Thanks, Asaf. P.S., I'm using .NET 4. EDIT: What I mean by the host "simply working" is, for example, that the service I create be reactivated if it crashes. I guess the reason for this question is that I've created Windows Services in the past (I think it was in plain C++ with the Win32 API), and I don't want to miss out on something simpler if there is such a thing. Thanks for all the replies thus far! Asaf.
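    To make the "good old Windows Service" option concrete, the skeleton is genuinely small. This is a hedged sketch, not from the question: StatefulWorker is a hypothetical class standing in for the poster's library, and the crash reactivation asked about in the EDIT is handled not in code but by the service's recovery options:

        using System;
        using System.ServiceProcess;
        using System.Threading;

        public class StatefulHostService : ServiceBase
        {
            private StatefulWorker worker;   // hypothetical class owning the in-memory state
            private Thread workerThread;

            protected override void OnStart(string[] args)
            {
                worker = new StatefulWorker();
                workerThread = new Thread(worker.Run) { IsBackground = true };
                workerThread.Start();        // the long-running, stateful loop lives here
            }

            protected override void OnStop()
            {
                worker.RequestStop();        // cooperative shutdown
                workerThread.Join(TimeSpan.FromSeconds(30));
            }

            public static void Main()
            {
                ServiceBase.Run(new StatefulHostService());
            }
        }

    Automatic restart after a crash is then configured on the service itself, e.g. sc.exe failure MyService reset= 86400 actions= restart/60000, or via the Recovery tab in services.msc.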

    Read the article

  • WCF ServiceContract and svcutil issue

    - by Valko
    Hi, I have a public interface auto-generated by svcutil:

        [System.ServiceModel.ServiceContractAttribute(Namespace="...", ConfigurationName="...")]
        public interface MyInterface

    Then I have an asmx web service implementing it, working fine. I am trying to convert it to WCF, but when I instrument the service (in the asmx.cs code-behind) with ServiceContract:

        [ServiceContract(Namespace = "...")]
        public class MyService : MyInterface

    Also I have created a .svc file and added the system.serviceModel settings in the config file. The goal is to migrate the asmx service to a WCF service. I've got this error:

        The service class of type MyService both defines a ServiceContract and inherits a ServiceContract from type MyInterface. Contract inheritance can only be used among interface types. If a class is marked with ServiceContractAttribute, it must be the only type in the hierarchy with ServiceContractAttribute.

    The asmx service is still working fine; only the .svc is giving me issues. My question is how to fix that. MyInterface is an interface, so I do not see what the problem is and why I get the error anyway. Note I do not want to change MyInterface, because it is auto-generated by svcutil from my WSDL schema, and I do not want this interface to be edited manually. The whole idea is to have the service types automatically generated from the WSDL and to save my team the effort of manual editing. Any help is appreciated.
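    Reading the quoted error literally points at one fix that also leaves the generated interface untouched: only one type in the hierarchy may carry ServiceContractAttribute, and the svcutil-generated MyInterface already does, so the implementing class must not repeat it. A minimal sketch (the "..." values stay elided, as in the post):

        // No [ServiceContract] on the class: the contract lives solely on the
        // svcutil-generated interface, which remains unedited.
        public class MyService : MyInterface
        {
            // implementations of the interface operations
        }

    The .svc file then names MyService as the service, while the endpoint in web.config names MyInterface (via its generated ConfigurationName) as the contract.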

    Read the article

  • Dynamic WCF base addresses in SharePoint

    - by Paul Bevis
    I'm attempting to host a WCF service in SharePoint. I have configured the service to be compatible with ASP.NET to allow me access to HttpContext and session information:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
        public class MISDataService : IMISDataService { ... }

    And my configuration looks like this:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <services>
            <service name="MISDataService">
              <endpoint address="" binding="webHttpBinding" contract="MISDataViews.IMISDataService" />
            </service>
          </services>
        </system.serviceModel>

    Whilst this gives me access to the current HTTP context, the service is always hosted under the root domain, i.e. http://www.mydomain.com/_layouts/MISDataService.svc. In SharePoint, the URL being accessed gives you specific context information about the current site via the SPContext class. So, with the service hosted in a virtual directory, I would like it to be available on multiple addresses, e.g.:

        http://www.mydomain.com/_layouts/MISDataService.svc
        http://www.mydomain.com/sites/site1/_layouts/MISDataService.svc
        http://www.mydomain.com/sites/site2/_layouts/MISDataService.svc

    so that the service can figure out what data to return based upon the current context. Is it possible to configure the endpoint address dynamically? Or is the only alternative to host the service in one location and then pass the "context" to it somehow?
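    One hedged sketch, building only on what the post already has: because ASP.NET compatibility is required, each operation can see the URL it was invoked on and recover the site context itself, rather than relying on a per-site endpoint address. It assumes SharePoint's _layouts mapping serves all three URLs from the one .svc; the operation and helper names are hypothetical:

        // Sketch of an operation body; GetSiteData, MISData and BuildDataFor are placeholders.
        public MISData GetSiteData()
        {
            // AspNetCompatibility exposes the URL the call arrived on, e.g.
            // http://www.mydomain.com/sites/site1/_layouts/MISDataService.svc
            string requestUrl = HttpContext.Current.Request.Url.AbsoluteUri;

            // Microsoft.SharePoint: open the site collection and web the URL points into.
            using (SPSite site = new SPSite(requestUrl))
            using (SPWeb web = site.OpenWeb())
            {
                return BuildDataFor(web);   // decide what data to return from this site's context
            }
        }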

    Read the article

  • WCF Multiple contracts with duplicate method names

    - by haxelit
    Hello, I have a service with multiple contracts, like so:

        [ServiceContract]
        public partial interface IBusinessFunctionDAO
        {
            [OperationContract]
            BusinessFunction GetBusinessFunction(Int32 businessFunctionRefID);

            [OperationContract]
            IEnumerable<Project> GetProjects(Int32 businessFunctionRefID);
        }

        [ServiceContract]
        public partial interface IBusinessUnitDAO
        {
            [OperationContract]
            BusinessUnit GetBusinessUnit(Int32 businessUnitRefID);

            [OperationContract]
            IEnumerable<Project> GetProjects(Int32 businessUnitRefID);
        }

    I then explicitly implemented each one of the interfaces, like so:

        public class TrackingTool : IBusinessFunctionDAO, IBusinessUnitDAO
        {
            BusinessFunction IBusinessFunctionDAO.GetBusinessFunction(Int32 businessFunctionRefID)
            {
                // implementation
            }

            IEnumerable<Project> IBusinessFunctionDAO.GetProjects(Int32 businessFunctionRefID)
            {
                // implementation
            }

            BusinessUnit IBusinessUnitDAO.GetBusinessUnit(Int32 businessUnitRefID)
            {
                // implementation
            }

            IEnumerable<Project> IBusinessUnitDAO.GetProjects(Int32 businessUnitRefID)
            {
                // implementation
            }
        }

    As you can see, I have two GetProjects(int) methods, but each one is implemented explicitly, so this compiles just fine and is perfectly valid. The problem arises when I actually start this as a service: it gives me an error stating that TrackingTool already contains a definition for GetProjects. While that is true, each belongs to a different service contract. Does WCF not distinguish between service contracts when generating the method names? Is there a way to get it to distinguish between the service contracts? My App.Config looks like this:

        <service name="TrackingTool">
          <endpoint address="BusinessUnit" contract="IBusinessUnitDAO" />
          <endpoint address="BusinessFunction" contract="IBusinessFunctionDAO" />
        </service>

    Any help would be appreciated. Thanks, Raul
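    A hedged sketch of the usual way out of this kind of name clash: WCF lets a contract override the wire-level operation and namespace names, so the two GetProjects operations can be disambiguated in metadata without renaming the CLR methods. Whether this resolves this exact startup error is an assumption, but the attributes themselves are standard; the namespace URIs below are hypothetical:

        [ServiceContract(Namespace = "http://example.org/businessFunction")]
        public partial interface IBusinessFunctionDAO
        {
            [OperationContract(Name = "GetBusinessFunctionProjects")] // unique wire name
            IEnumerable<Project> GetProjects(Int32 businessFunctionRefID);
            // ...
        }

        [ServiceContract(Namespace = "http://example.org/businessUnit")]
        public partial interface IBusinessUnitDAO
        {
            [OperationContract(Name = "GetBusinessUnitProjects")] // unique wire name
            IEnumerable<Project> GetProjects(Int32 businessUnitRefID);
            // ...
        }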

    Read the article

  • WCF consumed as WebService adds a boolean parameter?

    - by Martín Marconcini
    I've created the default WCF service in VS2008. It's called "Service1":

        public class Service1 : IService1
        {
            public string GetData(int value)
            {
                return string.Format("You entered: {0}", value);
            }

            public CompositeType GetDataUsingDataContract(CompositeType composite)
            {
                if (composite.BoolValue)
                {
                    composite.StringValue += "Suffix";
                }
                return composite;
            }
        }

    It works fine; the interface is IService1:

        [ServiceContract]
        public interface IService1
        {
            [OperationContract]
            string GetData(int value);

            [OperationContract]
            CompositeType GetDataUsingDataContract(CompositeType composite);

            // TODO: Add your service operations here
        }

    This is all by default; Visual Studio 2008 created all of it. I then created a simple Winforms app to "test" this. I added the Service Reference to my above-mentioned service and it all works: I can instantiate and call myservice1.GetData(100); and I get the result. But I was told that this service will have to be consumed by a Winforms .NET 2.0 app via web services, so I proceeded to add the reference to a new Winforms .NET 2.0 application created from scratch (only one form, called Form1). This time, when adding the "web reference", it added the typical "localhost" one belonging to web services; the wizard saw the WCF service (running in the background) and added it. When I tried to consume this, I found out that the GetData(int) method was now GetData(int, bool). Here's the code:

        private void button1_Click(object sender, EventArgs e)
        {
            localhost.Service1 s1 = new WindowsFormsApplication2.localhost.Service1();
            Console.WriteLine(s1.GetData(100, false));
        }

    Notice the false in the GetData call? I don't know what that parameter is or where it came from; it is called "bool valueSpecified". Does anybody know where this is coming from? Anything else I should do to consume a WCF service as a web service from .NET 2.0? (Winforms.)
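    A likely explanation, offered as a hedged aside rather than a confirmed diagnosis: the data-contract serializer marks the value parameter as optional (minOccurs="0") in the generated schema, and the old .NET 2.0 web-reference generator renders every optional value-type element as a pair, the value plus a bool valueSpecified flag. If that is the cause, one commonly cited workaround is to make WCF expose XmlSerializer-style parameters instead:

        [ServiceContract]
        [XmlSerializerFormat] // sketch: switches schema generation away from the data-contract style
        public interface IService1
        {
            [OperationContract]
            string GetData(int value);

            [OperationContract]
            CompositeType GetDataUsingDataContract(CompositeType composite);
        }

    Otherwise the .NET 2.0 client simply has to pass true for the flag, i.e. s1.GetData(100, true), so that the value is actually serialized.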

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In that process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms. However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms.

    INTRODUCTION

    A stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this can be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as may the term "Data." The term "Mining" is more unique, but could also refer to the mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information; but if we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results of this technique are often quite valuable. As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management", as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining.

    CREATING A WHITE LIST

    We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map.
    All text has been converted to upper case, and values in the TERM column are intended to be mapped to the token in the TOKEN column.

        TERM                                TOKEN
        DATA MINING                         DATAMINING
        ORACLE DATA MINING                  ORACLEDATAMINING
        11G                                 ORACLE11G
        JAVA                                JAVA
        CRM                                 CRM
        CUSTOMER RELATIONSHIP MANAGEMENT    CRM
        ...

    Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules:

        CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE);

        CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),
                                         query_string  VARCHAR2(400));

    We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term and the corresponding query string, which specifies synonyms for the token in the thesaurus.

        DECLARE
          cursor c2 is
            select token, term
            from my_term_token_map;
        BEGIN
          for r_c2 in c2 loop
            CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);
            EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values
                               (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'
            using r_c2.token;
          end loop;
        END;

    We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:

        'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)

    At this point, we create a CTXRULE index on the my_thesaurus_rules table:

        create index my_thesaurus_rules_idx on
               my_thesaurus_rules(query_string)
               indextype is ctxsys.ctxrule;

    In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.

    Read the article

  • Exposing BL as WCF service

    - by Oren Schwartz
    I'm working on a middle-tier project which encapsulates the business logic (it uses a DAL layer, and serves a web application server [ASP.net]) of a product deployed in a LAN. The BL serves as a bunch of services and data objects that are invoked upon user action. At present, the DAL acts as a separate application, whereas the BL uses it but is consumed by the web application as a DLL. The DAL and the web application are deployed on different servers inside the organization, and since the BL DLL is consumed by the web application, it resides on the same server. The worst thing about exposing the BL as a DLL is that we lose track of what we expose. Deployment is not such a big issue, since product versions are mostly deployed together. Would you recommend migrating from a DLL to a WCF service? If so, why? Do you know anyone who has had a similar experience? Thank you!

    Read the article


  • WCF Communication Problem

    - by vincpa
    Two separate servers: one is an IIS7 web application trying to connect to a WCF service on the other server. The initial connection attempt always fails, and sometimes the second; after that, everything works normally. What could be the cause of this problem? The exception that gets thrown is an EndpointNotFoundException: "Could not connect to net.tcp://192.168.0.83/MgrService/Manager.svc. The connection attempt lasted for a time span of 00:00:21.0289348. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.0.83:808."

    Read the article

  • Oracle Insurance Gets Innovative with Insurance Business Intelligence

    - by nicole.bruns(at)oracle.com
    Oracle Insurance announced yesterday the availability of Oracle Insurance Insight 7.0, an insurance-specific data warehouse and business intelligence (BI) system that transforms the traditional approach to BI by involving business users in its creation and maintenance. "Rapid access to business intelligence is essential to compete and thrive in today's insurance industry," said Srini Venkatasantham, vice president, Product Strategy, Oracle Insurance. "The adaptive data modeling approach of Oracle Insurance Insight 7.0, combined with the insurance-specific data model, offers global insurance companies a faster, easier way to get the intelligence they need to make better-informed business decisions." New features in Oracle Insurance Insight 7.0 include:

        - "Adaptive Data Modeling" via the new warehouse palette: Gives business users the power to configure lines of business via an easy-to-use warehouse palette tool. Oracle Insurance Insight then automatically creates data warehouse elements - such as line-specific database structures and extract-transform-load (ETL) processes - speeding up time-to-value for BI initiatives.
        - Out-of-the-box insurance models or create-from-scratch option: Includes pre-built content and interfaces for six Property and Casualty (P&C) lines. Additionally, insurers can use the warehouse palette to deploy any and all P&C or General Insurance lines of business from scratch, helping insurers support operations in any country.
        - Leverages Oracle technologies: In addition to Oracle Business Intelligence Enterprise Edition, the solution includes Oracle Database 11g as well as Oracle Data Integrator Enterprise Edition 11g, which delivers an Extract, Load and Transform (E-L-T) architecture and eliminates the need for a separate transformation server. Additionally, the expanded Oracle technology infrastructure enables support for Oracle Exadata.

    Martina Conlon, a Principal with Novarica's Insurance practice and author of Business Intelligence in Insurance: Current State, Challenges, and Expectations, says, "The need for continued investment by insurers in business intelligence capabilities is widely understood, and the industry is acting. Arming the business intelligence implementation with predefined insurance specific content, and flexible and configurable technology, will get these projects up and running faster."

    Learn more: To see a demo of the Oracle Insurance Insight system, click here. To read the press announcement, click here.

    Read the article

  • SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012

    - by pinaldave
    Data Quality Services is a very interesting enhancement in SQL Server 2012. My friend and SQL Server expert Govind Kanshi has written an excellent article on this subject earlier on his blog. Yesterday I stumbled upon his blog one more time and decided to experiment with DQS myself. I have a basic understanding of DQS and MDS, so I knew I needed to start with the DQS Client. However, when I tried to find the DQS Client, I was not able to find it under the SQL Server 2012 installation. I quickly realized that I needed to install the DQS client separately. You will find the DQS installer under the SQL Server 2012 >> Data Quality Services directory. The pre-requisites for DQS are Master Data Services (MDS) and IIS. If you have not installed IIS, you can follow a few simple steps and install IIS on your machine. Once the pre-requisites are installed, click on the MDS installer once again and it will install DQS just fine. Be patient with the installer, as it can take a bit longer if your machine is low on resources. Once the installation is over, you will be able to expand the SQL Server 2012 >> Data Quality Services directory, and you will notice that it has a new item called Data Quality Client. Click on it and it will open the client. In a future blog post we will go over more details about DQS and detailed practical examples. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Data Quality Services

    Read the article

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software. But make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small tools that handle builds not for software but for data? Something that regenerates targets not only on mod times but on certain other properties (e.g. completeness). (Or alternatively some paper that describes such a tool.) As illustration, I'd like to automate the following process:

        1. get data (e.g. a tarball) from some regularly updated source
        2. copy it somewhere if it's not there (based e.g. on some filename scheme)
        3. convert the files to a different format (but only if there aren't successfully converted ones there, e.g. from a previous attempt - custom comparison routine)
        4. for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide on existence of file and file "freshness")
        5. finally compute something (e.g. a word count for something identifiable, and store it in the database, but only if the DB does not have an entry for that exact ID yet)

    Observations:

        - there are different stages
        - each stage is usually simple to compute or implement in isolation
        - each stage may be simple, but the data volume may be large
        - each stage may produce a few errors
        - each stage may have different signals on when (re)processing is needed

    Requirements (a sketch of the stage/predicate idea follows below):

        - builds should be interruptible and idempotent (== robust)
        - when interrupted, already processed objects should be reused to speed up the next run
        - data paths should be easy to adjust (simple syntax, nothing new to learn, an internal DSL would be ok)
        - some form of dependency graph that describes the process would be nice for later visualizations
        - should leverage existing programs, if possible

    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and are an ok glue for a variety of other programs written in other languages), so I wonder if worse is better?
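    For illustration only, here is a tiny sketch of the core idea in C# (not an existing tool): each stage pairs an action with a custom "is this already done?" predicate instead of an mtime comparison, which is what buys the interruptible/idempotent property the requirements ask for:

        using System;
        using System.Collections.Generic;

        // A build "stage" regenerates its target only when a custom freshness
        // predicate says the target is missing or stale - not when an mtime differs.
        sealed class Stage
        {
            public string Name;
            public Func<bool> IsUpToDate;   // e.g. "converted file exists and passes validation"
            public Action Run;              // e.g. "convert the tarball contents"
        }

        static class DataBuild
        {
            // Stages run in dependency order; re-running after an interruption
            // skips everything whose predicate already holds (idempotence).
            public static void Run(IEnumerable<Stage> pipeline)
            {
                foreach (Stage stage in pipeline)
                {
                    if (stage.IsUpToDate())
                    {
                        Console.WriteLine("skip " + stage.Name);
                        continue;
                    }
                    Console.WriteLine("run  " + stage.Name);
                    stage.Run();
                }
            }
        }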

    Read the article

  • Social Analytics in your current data

    - by Dan McGrath
    By now everyone is aware of the massive boom in social networking (Twitter, Facebook, LinkedIn), and obviously a big part of its business model revolves around being able to mine this data to create information that can be used to make money for someone. Gartner has identified 'Social Analytics' as one of the top 10 strategic technologies for 2011. Has anyone looked at their existing data structures to determine if they could extract a social graph and then perform further data mining against it? How does it fit in with your other strategic development strategies? What information are you trying to extract from the data? Take, for example, a bank. It could conceivably determine a social graph through account relationships and transactions. Obviously there would be open edges on the graph where funds enter/leave the institution, but that shouldn't detract from the usefulness of the data. I'm looking for actual examples with the answers, as well as why/how they did it. References to other sites will be greatly appreciated. Note: I'm not at all referring to mining data out of actual social networks.

    Read the article

  • The Oldest Big Data Problem: Parsing Human Language

    - by dan.mcclary
    There's a new whitepaper up on Oracle Technology Network which details the use of Digital Reasoning Systems' Synthesys software on Oracle Big Data Appliance.  Digital Reasoning's approach is inherently "big data friendly," as it leverages multiple components of the Hadoop ecosystem.  Moreover, the paper addresses the oldest big data problem of them all: extracting knowledge from human text.   You can find the paper here.   From the Executive Summary: There is a wealth of information to be extracted from natural language, but that extraction is challenging. The volume of human language we generate constitutes a natural Big Data problem, while its complexity and nuance requires a particular expertise to model and mine. In this paper we illustrate the impressive combination of Oracle Big Data Appliance and Digital Reasoning Synthesys software. The combination of Synthesys and Big Data Appliance makes it possible to analyze tens of millions of documents in a matter of hours. Moreover, this powerful combination achieves four times greater throughput than conducting the equivalent analysis on a much larger cloud-deployed Hadoop cluster.

    Read the article

  • Uses of persistent data structures in non-functional languages

    - by Ray Toal
    Languages that are purely functional or near-purely functional benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that because they are immutable, they are thread-safe. However, the reason that persistent data structures are thread-safe is that if one thread were to "add" an element to a persistent collection, the operation returns a new collection like the original but with the element added. Other threads therefore see the original collection. The two collections share a lot of internal state, of course -- that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are not in themselves sufficient to handle scenarios where one thread makes a change that is visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms. Why then, is the immutability of PDSs touted as something beneficial for "thread safety"? Are there any real examples where PDSs help in synchronization, or solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?
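    To make the structural-sharing point concrete, here is a minimal sketch (not from the question) of a persistent singly linked list in C#; "adding" returns a new head that shares the entire tail with the original, which is exactly why a reader on another thread keeps seeing the old version:

        public sealed class PersistentList<T>
        {
            public static readonly PersistentList<T> Empty =
                new PersistentList<T>(default(T), null, 0);

            private readonly T head;
            private readonly PersistentList<T> tail;
            public int Count { get; private set; }

            private PersistentList(T head, PersistentList<T> tail, int count)
            {
                this.head = head;
                this.tail = tail;
                Count = count;
            }

            // O(1): the new list shares 'this' as its tail - no copying, no locks.
            public PersistentList<T> Add(T value)
            {
                return new PersistentList<T>(value, this, Count + 1);
            }
        }

    A thread that captured the old head still sees a list of the old Count; publishing the new head to other threads is the part the structure itself does not solve, which is where the atoms, references and STM devices mentioned above come in.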

    Read the article

  • How to make custom WCF error handler return JSON response with non-OK http code?

    - by John
    I'm implementing a RESTful web service using WCF and the WebHttpBinding. Currently I'm working on the error handling logic, implementing a custom error handler (IErrorHandler); the aim is to have it catch any uncaught exceptions thrown by operations and then return a JSON error object (including, say, an error code and error message, e.g. { "errorCode": 123, "errorMessage": "bla" }) back to the browser user, along with an HTTP code such as BadRequest, InternalServerError or whatever (anything other than 'OK', really). Here is the code I am using inside the ProvideFault method of my error handler:

        fault = Message.CreateMessage(version, "", errorObject,
            new DataContractJsonSerializer(typeof(ErrorMessage)));
        var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
        fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
        var rmp = new HttpResponseMessageProperty();
        rmp.StatusCode = System.Net.HttpStatusCode.InternalServerError;
        rmp.Headers.Add(HttpRequestHeader.ContentType, "application/json");
        fault.Properties.Add(HttpResponseMessageProperty.Name, rmp);

    This returns with Content-Type: application/json; however, the status code is 'OK' instead of 'InternalServerError'.

        fault = Message.CreateMessage(version, "", errorObject,
            new DataContractJsonSerializer(typeof(ErrorMessage)));
        var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
        fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
        var rmp = new HttpResponseMessageProperty();
        rmp.StatusCode = System.Net.HttpStatusCode.InternalServerError;
        //rmp.Headers.Add(HttpRequestHeader.ContentType, "application/json");
        fault.Properties.Add(HttpResponseMessageProperty.Name, rmp);

    This returns with the correct status code; however, the content-type is now XML.

        fault = Message.CreateMessage(version, "", errorObject,
            new DataContractJsonSerializer(typeof(ErrorMessage)));
        var wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json);
        fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf);
        var response = WebOperationContext.Current.OutgoingResponse;
        response.ContentType = "application/json";
        response.StatusCode = HttpStatusCode.InternalServerError;

    This returns with the correct status code and the correct content-type! The problem is that the HTTP body now has the text 'Failed to load source for: http://localhost:7000/bla..' instead of the actual JSON data. Any ideas? I'm considering using the last approach and just sticking the JSON in the HTTP StatusMessage header field instead of in the body, but this doesn't seem quite as nice?
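    One untested combination the post doesn't try, offered purely as a hedged sketch: keep the first approach's JSON body-format property and status code, but set the content type through the response-header collection (HttpResponseHeader) rather than adding a request-header key:

        fault = Message.CreateMessage(version, "", errorObject,
            new DataContractJsonSerializer(typeof(ErrorMessage)));
        fault.Properties.Add(WebBodyFormatMessageProperty.Name,
            new WebBodyFormatMessageProperty(WebContentFormat.Json));

        var rmp = new HttpResponseMessageProperty();
        rmp.StatusCode = System.Net.HttpStatusCode.InternalServerError;
        // Response (not request) header enum; assumption: this keeps the JSON
        // content type without clobbering the status code as in the first attempt.
        rmp.Headers[System.Net.HttpResponseHeader.ContentType] = "application/json";
        fault.Properties.Add(HttpResponseMessageProperty.Name, rmp);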

    Read the article

  • Getting WCF Services in a Silverlight solution to play nice on deployment

    - by brendonpage
    I have come across 2 issues with deploying WCF services in a Silverlight solution; admittedly one is more of a hiccup, and only occurs if you take the easy way out and reference your services through Visual Studio.

    The First Issue

    This occurs when you deploy your WCF services to an IIS server. When you browse to the services using your web browser, you are greeted with "This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.". When you make a call to this service from your Silverlight application, you get the extremely helpful "NotFound" error; this error message can be found in the error property of the event arguments on the completed event handler for that call. As it did with me, this will leave most people scratching their heads, because the very same services work just fine on the ASP.NET Development Web Server and on my local IIS server.

    Now I'm no server/hosting/IIS expert, so I did a bit of searching when I first encountered this issue. I found out this happens because IIS supports multiple address bindings per protocol (http/https/ftp ... etc) per web site, but WCF only supports binding to one address per protocol. This causes a problem when the WCF service is hosted on a site with multiple address bindings, because IIS provides all of the bindings to the host factory when running the service. While this problem occurs mainly on shared hosting solutions, it is not limited to shared hosting; it just seems like all shared hosting providers set up sites on their servers with multiple address bindings. For interest's sake, I added functionality to the example project attached to this post to dump the addresses given to the WCF service by IIS into a log file. This was the output on the shared hosting solution I use:

        http://mydomain.co.za/Services/TestService.svc
        http://www.mydomain.co.za/Services/TestService.svc
        http://mydomain-co-za.win13.wadns.net/Services/TestService.svc
        http://win13/Services/TestService.svc

    As you can see, all these addresses are for the http protocol, which is where it all goes wrong for WCF.

    Fixes for the First Issue

    There are a few ways to get around this, the first being the easiest: target .NET 4! Yes, that's right, in .NET 4 WCF services support multiple addresses per protocol. This functionality is enabled by an option, which is on by default if you create a new project; you will need to turn it on if you are upgrading to .NET 4. To do this, set the multipleSiteBindingsEnabled property of the serviceHostingEnvironment tag in the web.config file to true, as shown below:

        <system.serviceModel>
            <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Beware: this ONLY works in .NET 4, so if you don't have a server with .NET 4 installed that you can deploy to, you will need to employ one of the other workarounds.

    The second option will work for .NET 3.5 & 4. For this option, all you need to do is modify the web.config file and add baseAddressPrefixFilters to the serviceHostingEnvironment tag, as shown below:

        <system.serviceModel>
            <serviceHostingEnvironment>
                <baseAddressPrefixFilters>
                    <add prefix="http://www.mydomain.co.za"/>
                </baseAddressPrefixFilters>
            </serviceHostingEnvironment>
        </system.serviceModel>

    These will be used to filter the list of base addresses that IIS provides to the host factory. When specifying these prefix filters, be sure to specify filters which will only allow 1 result through, otherwise the entire exercise will be pointless. There is, however, a problem with this workaround: you are only allowed to specify 1 prefix filter per protocol. This means you can't add filters for all your environments, which adds to the list of things to do before deploying or switching dev machines.

    The third option is the one I currently employ; it will work for .NET 3, 3.5 & 4, although it is not needed for .NET 4. For this option you create a custom host factory which inherits from the ServiceHostFactory class. In the implementation of the ServiceHostFactory, you employ logic to figure out which of the base addresses given by IIS to use when creating the service host. The logic you use to do this is completely up to you. I have seen quite a few solutions that simply statically reference an index from the list of base addresses; this works for most situations but falls short in others. For instance, if the order of the base addresses were to change, it might end up returning an address that only resolves on the server's local network, like the last one in the example I gave at the beginning. Another instance: if a request comes in on a different protocol, like https, you will be creating the service host using an address which is on the incorrect protocol, like http. To reliably find the correct address to use, I use the address that the service was requested on. To accomplish this I use the HttpContext, which requires the service to operate with AspNetCompatibilityRequirements turned on. If for some reason running your services with AspNetCompatibilityRequirements on isn't an option, you can still use this method; you will just have to come up with your own logic for selecting the correct address.

    First you will need to enable ASP.NET compatibility for your hosting environment; to do this, set it to true in the web.config file as shown below:

        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        </system.serviceModel>

    You will then need to mark any services that are going to use the custom host factory to allow AspNetCompatibilityRequirements, as shown below:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class TestService
        {
        }

    Now for the custom host factory; this is where the logic lives that selects the correct address to create the service host with. The one I use is shown below:

        public class CustomHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                //
                // Compose a prefix filter based on the requested uri
                //
                string prefixFilter = HttpContext.Current.Request.Url.Scheme + "://" +
                                      HttpContext.Current.Request.Url.DnsSafeHost;

                if (!HttpContext.Current.Request.Url.IsDefaultPort)
                {
                    prefixFilter += ":" + HttpContext.Current.Request.Url.Port.ToString() + "/";
                }

                //
                // Find a base address that matches the prefix filter
                //
                foreach (Uri baseAddress in baseAddresses)
                {
                    if (baseAddress.OriginalString.StartsWith(prefixFilter))
                    {
                        return new ServiceHost(serviceType, baseAddress);
                    }
                }

                //
                // Throw exception if no matching base address was found
                //
                throw new Exception("Custom Host Factory: No base address matching '" + prefixFilter + "' was found.");
            }
        }

    The most important line in the custom host factory is the one that returns a new service host. This has to return a service host that specifies only one base address per protocol. Since I filter by the address the request came in on, I only need to create the service host with one address, since this address will always be of the correct protocol. Now that you have a custom host factory, you have to tell your services to use it. To do this, you view the markup of the service by right-clicking on it in the solution explorer and choosing "View Markup". Then you add/set the value of the Factory property to the full namespace path of your custom host factory, as shown below. And that is it done; the service will now use the specified custom host factory.

    The Second Issue

    As I mentioned earlier, this issue is more of a hiccup, but I thought it worthy of a mention so I included it. This issue only occurs when you add a service reference to a Silverlight project. Visual Studio will generate a lot of code for you; part of that generated code is the ServiceReferences.ClientConfig file. This file stores the endpoint configuration that is used when accessing your services using the generated proxy classes. Here is what that file looks like:

        <configuration>
            <system.serviceModel>
                <bindings>
                    <customBinding>
                        <binding name="CustomBinding_TestService">
                            <binaryMessageEncoding />
                            <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                        </binding>
                        <binding name="CustomBinding_BrokenService">
                            <binaryMessageEncoding />
                            <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                        </binding>
                    </customBinding>
                </bindings>
                <client>
                    <endpoint address="http://localhost:49347/services/TestService.svc"
                        binding="customBinding" bindingConfiguration="CustomBinding_TestService"
                        contract="TestService.TestService" name="CustomBinding_TestService" />
                    <endpoint address="http://localhost:49347/Services/BrokenService.svc"
                        binding="customBinding" bindingConfiguration="CustomBinding_BrokenService"
                        contract="BrokenService.BrokenService" name="CustomBinding_BrokenService" />
                </client>
            </system.serviceModel>
        </configuration>

    As you will notice, the addresses for the endpoints are set to the addresses of the services you added the service references from, so unless you are adding the service references from your live services, you will have to change these addresses before you deploy. This is little more than an annoyance, really, but it adds to the list of things to do before you can deploy, and if left unchecked that list can get out of control.

    Fix for the Second Issue

    The way you would usually access a service added this way is to create an instance of the proxy class, like so:

        BrokenServiceClient proxy = new BrokenServiceClient();

    Closer inspection of these generated proxy classes reveals that there are a few overloaded constructors, one of which allows you to specify the endpoint address to use when creating the proxy. From here, all you have to do is come up with some logic that will provide you with the relative path to your services. Since my WCF services are usually hosted in the same project as my Silverlight app, I use the class shown below:

        public class ServiceProxyHelper
        {
            /// <summary>
            /// Create a broken service proxy
            /// </summary>
            /// <returns>A broken service proxy</returns>
            public static BrokenServiceClient CreateBrokenServiceProxy()
            {
                Uri address = new Uri(Application.Current.Host.Source, "../Services/BrokenService.svc");
                return new BrokenServiceClient("CustomBinding_BrokenService", address.AbsoluteUri);
            }
        }

    Then I create an instance of the proxy class using my service helper class, like so:

        BrokenServiceClient proxy = ServiceProxyHelper.CreateBrokenServiceProxy();

    The way this works is that "Application.Current.Host.Source" returns the URL of the ClientBin folder the Silverlight app is hosted in, and "../Services/BrokenService.svc" is then used as the relative path to the service from the ClientBin folder; combined by the Uri object, this gives me the URL to my service. The "CustomBinding_BrokenService" is a reference to the endpoint configuration in the ServiceReferences.ClientConfig file. Yes, this means you still need the ServiceReferences.ClientConfig file. All this is doing is using a different endpoint address than the one specified in the ServiceReferences.ClientConfig file; all the other settings from the ServiceReferences.ClientConfig file are still used when creating the proxy.

    I have uploaded an example project which covers the custom host factory solution from the first issue and everything from the second issue. I included the code to write a list of base addresses to a log file in my implementation of the custom host factory; this is not needed for the custom host factory to function and can safely be removed. Download (WCFServicesDeploymentExample.zip)

    Read the article

  • Translate report data export from RUEI into HTML for import into OpenOffice Calc Spreadsheets

    - by [email protected]
    A common question of users is how to import the data from the automated data export of Real User Experience Insight (RUEI) into tools for archiving, dashboarding or combination with other sets of data. XML is well-suited for such a translation via the companion Extensible Stylesheet Language Transformations (XSLT). Basically, XSLT utilizes XSL, a template describing what to read from your input XML data file and where to place it in the target document. The target document can be anything you like, i.e. XHTML, CSV, or even an OpenOffice Spreadsheet, etc., as long as it is a plain text format.

    XML 2 OpenOffice.org Spreadsheet

    For the XSLT to work as an OpenOffice.org Calc import filter, here is how to add an XML Import Filter to OpenOffice Calc:

        1. Start OpenOffice.org Calc and select Tools > XML Filter Settings > New...
        2. Fill in the details as follows:
           Filter name: RUEI Import filter
           Application: OpenOffice.org Calc (.ods)
           Name of file type: Oracle Real User Experience Insight
           File extension: xml
        3. Switch to the transformation tab and enter/select the following, leaving the rest untouched:
           XSLT for import: ruei_report_data_import_filter.xsl
           (Please see the end of this blog post for a download of the referenced file.)
        4. Select RUEI Import filter from the list and Test XSLT.
        5. Click on Browse to select the transform file: export.php.xml

    OpenOffice.org Calc will transform and load the XML file you retrieved from RUEI in a human-readable format. You can now select File > Open... and change the file type to open your RUEI exports directly in OpenOffice.org Calc, just like any other native spreadsheet format:

        Files of type: Oracle Real User Experience Insight (*.xml)
        File name: export.php.xml

    XML 2 XHTML

    Most XML-powered browsers provide inherent XSL Transformation capabilities; you only have to reference the XSLT stylesheet in the head of your XML file. Then open the file in your favourite web browser: Firefox, Opera, Safari or Internet Explorer alike.

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!-- inserted line below -->
        <?xml-stylesheet type="text/xsl" href="ruei_report_data_export_2_xhtml.xsl"?>
        <!-- inserted line above -->
        <report>

    You can find a patched example export from RUEI plus the above referenced XSL stylesheets here:

        export.php.xml - Example report data export from RUEI
        ruei_report_data_export_2_xhtml.xsl - RUEI to XHTML XSL Transformation Stylesheet
        ruei_report_data_import_filter.xsl - OpenOffice.org XML import filter for RUEI report export data

    If you would like to do things like this on the command line, you can use either Xalan or xsltproc. The basic command syntax for xsltproc is very simple:

        xsltproc -o output.file stylesheet.xslt inputfile.xml

    You can use this with the above two stylesheets to translate RUEI data exports into XHTML and/or OpenOffice.org Calc ODS format. Or you could write your own XSLT to transform into comma-separated value lists. Please let me know what you think or do with this information in the comments below. Kind regards, Stefan Thieme

    References used:
    OpenOffice XML Filter - Create XSLT filters for import and export - http://user.services.openoffice.org/en/forum/viewtopic.php?f=45&t=3490
    SUN OpenOffice.org XML File Format 1.0 - http://xml.openoffice.org/xml_specification.pdf

    Read the article

  • SQL UserGroup Events & Service Broker

    - by NeilHambly
    I'm sure you are now aware of the SQL UserGroup events (both in London) on the evenings of Wednesday 19th & Thursday 20th. If you have never been to one of these events before, then I would highly recommend attending one or both of them. Covering a wide range of subjects, these meetings are an invaluable way to gain insight into various features from SQL experts (presenters and attendees alike); frequently you will learn new things and gain different perspectives on how to use those features...(read more)

    Read the article
