Search Results

Search found 74550 results on 2982 pages for 'wcf data service'.

Page 111/2982

  • Data inaccessible from WHS drive

    - by Eakraly
    Hi all, I had a Windows Home Server machine that crashed. Before doing anything else I decided to get the data off it, so I took the hard drive and connected it to a Windows 7 PC. The result is that I cannot access almost any file! I do see the directory structure, and I can open small files like .txt and .ini, but bigger files like .iso and video are a no-go. The same goes for Ubuntu and OS X - I can see the files and even copy them, but they come out corrupted. Any ideas what the problem is?

    Read the article

  • How to recover data from my disk - I accidentally copied an ISO onto it with dd

    - by sijoune
    I wanted to create a bootable USB drive from an ISO image, and I accidentally pointed the output of dd at one of my hard disks instead of my USB drive. The ISO was 3.3 GB and my disk is 1 TB - and it was almost full. Can I at least restore the data that has not been overwritten? Right now I can't even mount it; I get this error: Error mounting /dev/sdd1 at /media/main/UDF Volume: Command-line `mount -t "udf" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,iocharset=utf8,umask=0077" "/dev/sdd1" "/media/main/UDF Volume"' exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error. In some cases useful info is found in syslog - try dmesg | tail or so. Also, since I know which filesystem my disk used, if I reformat it to that filesystem is there any chance I can mount it and retrieve the rest of the files?

    Read the article

  • WCF: Server Not Found - trace shows an empty message when run async, but the service works fine from a console app?

    - by MrTortoise
    Todays cause of hair loss has been the following scenario: I have a service that takes 2 strings and returns another. This service uses basicHttpBinding <basicHttpBinding> <binding name="basicHttpNoSec"> <security mode="None" /> </binding> </basicHttpBinding> Anyway, it works fine from a console test app. I have a silverlight app sat on top which implements another basicHttpBinding service that simply reuses the contract in the other service and the silverlight App uses this service. I have a console app that confirms that this service is working and set up with basichttpbinding. I have all the clientAccessPolicy stuff in place. when I run the silverlight app the difference is that it runs everything async ... as such the only message i directly get back rom wcf is server not found. When i enable tracing I dig down to this message - as I know the methods work and the parameteres i pass in will return a valid string i am really puzzled at to what the cause is. any help much appreciated. <E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent"> <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system"> <EventID>131075</EventID> <Type>3</Type> <SubType Name="Error">0</SubType> <Level>2</Level> <TimeCreated SystemTime="2010-06-07T14:17:40.6639249Z" /> <Source Name="System.ServiceModel" /> <Correlation ActivityID="{8ea9530e-12f4-4a82-9c26-dd2e23264c3c}" /> <Execution ProcessName="aspnet_wp" ProcessID="4616" ThreadID="6" /> <Channel /> <Computer>5JC2Y2J</Computer> </System> <ApplicationData> <TraceData> <DataItem> <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Error"> <TraceIdentifier>http://msdn.microsoft.com/en-GB/library/System.ServiceModel.Diagnostics.ThrowingException.aspx</TraceIdentifier> <Description>Throwing an exception.</Description> <AppDomain>/LM/w3svc/1/ROOT/CopSilverlight.Web-1-129203938565564172</AppDomain> <Exception> <ExceptionType>System.ServiceModel.ProtocolException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>There is a problem with the XML that was received from the network. See inner exception for more details.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString>System.ServiceModel.ProtocolException: There is a problem with the XML that was received from the network. See inner exception for more details. ---&gt; System.Xml.XmlException: The body of the message cannot be read because it is empty. 
--- End of inner exception stack trace ---</ExceptionString> <InnerException> <ExceptionType>System.Xml.XmlException, System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>The body of the message cannot be read because it is empty.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString>System.Xml.XmlException: The body of the message cannot be read because it is empty.</ExceptionString> </InnerException> </Exception> </TraceRecord> </DataItem> </TraceData> </ApplicationData> </E2ETraceEvent>

    Read the article

  • What is the difference in WCF when using KnownType and ServiceKnownType?

    - by Paul Speranza
    I have a service that returns an array of Animal, but the list can contain cats, dogs, etc., which all extend Animal. I know I need to use either the KnownType or the ServiceKnownType attribute, applied to the entity class or the service class, respectively. What is the difference between the two attributes? I prefer ServiceKnownType because it is applied on the service, exactly where it is needed and called for, as opposed to KnownType, which is applied on my entity. To me, applying it on the entity class means knowing too far ahead how my entity class is being used. For now I have it on my entity and it works like a charm, but I am looking for guidance here as to best practices and usefulness. Thanks, Paul Speranza
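    For reference, a minimal sketch contrasting the two placements. The Animal, Cat, and Dog types follow the question; the service contract name and the Name property are assumptions for illustration:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Option 1: KnownType on the data contract - the entity declares its own subtypes.
    [DataContract]
    [KnownType(typeof(Cat))]
    [KnownType(typeof(Dog))]
    public class Animal
    {
        [DataMember]
        public string Name { get; set; }
    }

    [DataContract]
    public class Cat : Animal { }

    [DataContract]
    public class Dog : Animal { }

    // Option 2: ServiceKnownType on the service contract - the service declares which
    // subtypes it may return, and the entity stays free of serialization hints.
    [ServiceContract]
    [ServiceKnownType(typeof(Cat))]
    [ServiceKnownType(typeof(Dog))]
    public interface IAnimalService
    {
        [OperationContract]
        Animal[] GetAnimals();
    }

    Either way the serializer learns about Cat and Dog before it has to write an Animal[]; the difference is purely where that knowledge lives.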

    Read the article

  • How do I modify my WCF service to work with ASP.NET pages?

    - by Scott
    I created a WCF service (.NET 3.5) that grabs data from a database and returns a list of objects. It works just fine: I tested it using the WCFTestClient application and got the desired results. Then I tried to create an ASP.NET web application and consume the service. After enabling <serviceDebug includeExceptionDetailInFaults="true"/> in the config file, the error message is "Object reference not set to an instance of an object." How do I modify the service to work with ASP.NET? Thanks!
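    For reference, a hedged sketch of the consuming side. Two things often trip up this scenario: the <client> endpoint section generated by Add Service Reference has to live in the web application's own web.config, and the proxy should be closed (or aborted) after the call. The proxy, operation, and control names below are assumptions, not the asker's actual types:

    using System;
    using System.ServiceModel;

    public partial class ProductsPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // DataServiceClient is the generated proxy; the endpoint name must match
            // the <client> entry copied into the web application's web.config.
            var client = new DataServiceClient("BasicHttpBinding_IDataService");
            try
            {
                var items = client.GetItems();   // hypothetical operation
                ItemsGrid.DataSource = items;    // hypothetical GridView on the page
                ItemsGrid.DataBind();
                client.Close();
            }
            catch (CommunicationException)
            {
                client.Abort();
                throw;
            }
        }
    }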

    Read the article

  • WCF 4: Fileless activation fails on XP (IIS 5) when an SSL port is enabled.

    - by Richard Collette
    I have a service being hosted in IIS on XP via fileless activation. The service starts fine when there is no SSL port enabled for IIS but when the SSL port is enabled, I get the error message: System.ServiceModel.ServiceActivationException: The service '/SkillsPrototype.Web/services/Linkage.svc' cannot be activated due to an exception during compilation. The exception message is: A binding instance has already been associated to listen URI 'http://rcollet.hsb-corp.hsb.com/SkillsPrototype.Web/Services/Linkage.svc'. If two endpoints want to share the same ListenUri, they must also share the same binding object instance. The two conflicting endpoints were either specified in AddServiceEndpoint() calls, in a config file, or a combination of AddServiceEndpoint() and config. . ---> System.InvalidOperationException: A binding instance has already been associated to listen URI 'http://rcollet.hsb-corp.hsb.com/SkillsPrototype.Web/Services/Linkage.svc'. If two endpoints want to share the same ListenUri, they must also share the same binding object instance. The two conflicting endpoints were either specified in AddServiceEndpoint() calls, in a config file, or a combination of AddServiceEndpoint() and config. My service model configuration is <system.serviceModel> <diagnostics wmiProviderEnabled="true"> <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="3000"/> </diagnostics> <standardEndpoints> <webHttpEndpoint> <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" /> </webHttpEndpoint> </standardEndpoints> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <webHttpBinding> <binding> <security mode="None"> <transport clientCredentialType="None"/> </security> </binding> </webHttpBinding> </bindings> <protocolMapping> </protocolMapping> <services> </services> <serviceHostingEnvironment multipleSiteBindingsEnabled="false"> <serviceActivations> <clear/> <add factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" service="SkillsPrototype.ServiceModel.Linkage" relativeAddress="~/Services/Linkage.svc"/> </serviceActivations> </serviceHostingEnvironment> </system.serviceModel> When you look in the svclog file, there two base addresses that are returned when SSL is enabled, one for http and one for https. I suspect that this is part of the issue but I am not sure how to resolve it. 
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent"> <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system"> <EventID>524333</EventID> <Type>3</Type> <SubType Name="Information">0</SubType> <Level>8</Level> <TimeCreated SystemTime="2010-06-16T17:40:55.8168605Z" /> <Source Name="System.ServiceModel" /> <Correlation ActivityID="{95927f9a-fa90-46f4-af8b-721322a87aaa}" /> <Execution ProcessName="aspnet_wp" ProcessID="1888" ThreadID="5" /> <Channel/> <Computer>RCOLLET</Computer> </System> <ApplicationData> <TraceData> <DataItem> <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Information"> <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.ServiceHostBaseAddresses.aspx</TraceIdentifier> <Description>ServiceHost base addresses.</Description> <AppDomain>/LM/w3svc/1/ROOT/SkillsPrototype.Web-1-129211836532542949</AppDomain> <Source>System.ServiceModel.WebScriptServiceHost/49153359</Source> <ExtendedData xmlns="http://schemas.microsoft.com/2006/08/ServiceModel/CollectionTraceRecord"> <BaseAddresses> <Address>http://rcollet.hsb-corp.hsb.com/SkillsPrototype.Web/Services/Linkage.svc</Address> <Address>https://rcollet.hsb-corp.hsb.com/SkillsPrototype.Web/Services/Linkage.svc</Address> </BaseAddresses> </ExtendedData> </TraceRecord> </DataItem> </TraceData> </ApplicationData> </E2ETraceEvent> I can't post the full service log due to character limits on the post.

    Read the article

  • How do I construct a request for a WCF HTTP POST call?

    - by James Hay
    I have a really simple service that I'm messing about with, defined by: [OperationContract] [WebInvoke(UriTemplate = "Review/{val}", RequestFormat = WebMessageFormat.Xml, Method = "POST", BodyStyle = WebMessageBodyStyle.Bare)] void SubmitReview(string val, UserReview review); UserReview is, at the moment, a class with no properties. All very basic. When I try to test this in Fiddler I get a Bad Request (400) status. I'm trying to call the service with the following details:

    POST http://127.0.0.1:85/Service.svc/Review/hello
    Headers:
    User-Agent: Fiddler
    Content-Type: application/xml
    Host: 127.0.0.1:85
    Content-Length: 25
    Body:
    <UserReview></UserReview>

    I would think I'm missing something fairly obvious. Any pointers?
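    A frequent cause of a 400 with a bare body like this is the data contract namespace: by default the serializer expects the root element to carry the contract's namespace (for example <UserReview xmlns="http://schemas.datacontract.org/2004/07/YourNamespace" />), so a plain <UserReview></UserReview> does not deserialize. A hedged sketch of one way to make the bare element acceptable - the Text property is an assumption added only so the contract has something to carry:

    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    // An empty Namespace lets the plain <UserReview/> element in the Fiddler request
    // match what DataContractSerializer expects; otherwise the body would need the
    // contract's default namespace on the root element.
    [DataContract(Namespace = "")]
    public class UserReview
    {
        [DataMember]
        public string Text { get; set; }   // hypothetical property for illustration
    }

    [ServiceContract]
    public interface IReviewService
    {
        [OperationContract]
        [WebInvoke(UriTemplate = "Review/{val}",
                   RequestFormat = WebMessageFormat.Xml,
                   Method = "POST",
                   BodyStyle = WebMessageBodyStyle.Bare)]
        void SubmitReview(string val, UserReview review);
    }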

    Read the article

  • Where best to instantiate and close a Silverlight-enabled WCF Service from the Silverlight app?

    - by Yttrium
    When using a Silverlight-enabled WCF service, where is the best place to instantiate the service client and to call the CloseAsync() method? Should you, say, instantiate an instance each time you need to make a call to the service, or is it better to keep a single instance as a field of the UserControl that will be making the calls? Then, where is it better to call the CloseAsync method? Should you call it in each of the "someServiceCall_Completed" event handlers? Or, if the client is a field of the UserControl class, is there a single place to call it - something like a Dispose method, or whatever the equivalent is for a UserControl? Thanks, Jeff
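    For reference, a minimal sketch of the per-call pattern, closing the channel inside the Completed handler. The MyServiceClient proxy and GetData operation names are assumptions:

    using System.ServiceModel;
    using System.Windows.Controls;

    public partial class MainPage : UserControl
    {
        private void LoadData()
        {
            var client = new MyServiceClient();   // generated proxy (assumed name)
            client.GetDataCompleted += (s, e) =>
            {
                if (e.Error == null)
                {
                    // use e.Result here
                }

                // Close once this call's response has arrived; Abort if the channel faulted.
                if (client.State == CommunicationState.Faulted)
                    client.Abort();
                else
                    client.CloseAsync();
            };
            client.GetDataAsync();
        }
    }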

    Read the article

  • SQLAuthority News – Download SQL Azure Labs Codename “Data Explorer” Client

    - by pinaldave
    Microsoft SQL Azure Labs has recently released the Data Explorer client. I had been looking forward to a visualization tool for quite a while, and I am delighted to see this one. I will be trying out the tool in the coming week and will post my experience here. I have listed a few of the resources related to Data Explorer at the end; please let me know if I have missed any and I will add them. With “Data Explorer” you can: identify the data you care about from the sources you work with (e.g. Excel spreadsheets, files, SQL Server databases); discover relevant data and services via automatic recommendations from the Windows Azure Marketplace; enrich your data by combining it and visualizing the results; collaborate with your colleagues to refine the data; and publish the results to share them with others or power solutions. The Data Explorer client package contains the Data Explorer workspace as well as an Office plugin that integrates Data Explorer into Excel. Resources: Download Data Explorer Data Explorer Blog Desktop Client Video of Contoso Bikes and Frozen Yogurt (Data Explorer). Please note that this is not the final release of the product; please do not attempt this on a production server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Azure, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1 Shaping the feed The Product feed we created earlier returns too much information about our products. Let’s change this so that only the following properties are returned – ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.  Splitting the Entity To shape our data according to the requirements above, we are going to split our Product Entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other Entity in our Query Interceptor to pre-filter the data so that discontinued products are not returned. Go to the design surface for the Entity Model and make a copy of the Product entity. A “Product1” Entity gets created.   Rename Product1 to ProductDetail. Right click on the Product entity and select “Add Association” Make a one to one association between Product and ProductDetails.   Keep only the properties we wish to expose on the Product entity and delete all other properties on it (see diagram below). You delete a property on an Entity by right clicking on the property and selecting “delete”. Keep the ProductID on the ProductDetail. Delete any other property on the ProductDetail entity that is already present in the Product entity. Your design surface should look like below:    Mapping Entity to Database Tables Right click on “ProductDetail” and go to “Table Mapping”   Add a mapping to the “Products” table in the Mapping Details.   After mapping ProductDetail, you should see the following.   Add a referential constraint. Lets add a referential constraint which is similar to a referential integrity constraint in SQL. Double click on the Association between the Entities and add the constraint with “Principal” set to “Product”. Let us review what we did so far. We made a copy of the Product entity and called it ProductDetail We created a one to one association between these entities Excluding the ProductID, we made sure properties were not duplicated between these entities  We added a ProductDetail entity to Products table mapping (Entity to Database). We added a referential constraint between the entities. Lets build our project. We get the following error: ”'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …" The reason for this error is because our Product Entity no longer has a “Discontinued” property. We “moved” it to the ProductDetail entity since we want our Product Entity to contain only properties that will be exposed by our feed. Since we have a one to one association between the entities, we can easily rewrite our Query Interceptor like so: [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.ProductDetail.Discontinued == false; } Similarly, all “hidden” properties of the Product table are available to us internally (through the ProductDetail Entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.   
To see the data in JSON format, you have to create a request with the following request header Accept: application/json, text/javascript, */* (easy to do in jQuery) The result should look like this: { "d" : { "results": [ { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" }, "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags", "UnitPrice": "18.0000", "UnitsInStock": 39 }, { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" }, "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles", "UnitPrice": "19.0000", "UnitsInStock": 17 }, { ... ... If anyone has the $format operation working, please post a comment. It was not working for me at the time of writing this.  We have successfully pre-filtered our data to expose only products that have not been discontinued and shaped our data so that only certain properties of the Entity are exposed. Note that there are several other ways you could implement this like creating a QueryView, Stored Procedure or DefiningQuery. You have seen how easy it is to create an OData feed, shape the data and pre-filter it by hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out the one of the most passionate persons I have ever met, Pablo Castro – the Architect of Aristoria WCF Data Services. Watch his MIX 2010 presentation titled “OData: There's a Feed for That” here. Download Sample Project for VS 2010 RTM NortwindODataFeed.zip
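    For anyone not using jQuery, a hedged C# equivalent of that JSON request - the same Accept header does the work; the port number follows the sample project and will differ per machine:

    using System;
    using System.IO;
    using System.Net;

    class FeedJsonDemo
    {
        static void Main()
        {
            // Ask the OData feed for JSON by setting the Accept header, just like the
            // jQuery call described above.
            var request = (HttpWebRequest)WebRequest.Create(
                "http://localhost:2576/DataService.svc/Products");
            request.Accept = "application/json, text/javascript, */*";

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }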

    Read the article

  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    SQL Server (2012 Enterprise) Browser service failing I have a problem as described below: I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. I try to start SQL server Browser Service from SQL Server Configuration Manager and it takes a long time to fail, then fails with: The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details. I checked event logs and found these errors in this order (all within the same 1-second time frame): The SQL Server Browser service port is unavailable for listening, or invalid. The SQL Server Browser service was unable to establish SQL instance and connectivity discovery. The SQL Server Browser is enabling SQL instance and connectivity discovery support. The SQL Server Browser service was unable to establish Analysis Services discovery. The SQL Server Browser service has started. The SQL Server Browser service has shutdown. I checked firewall rules and both port 1433 (TCP) and 1434 (UDP) are wide open, just as well - the programs and service binary had been "allowed through windows firewall". I started the "Analysis Services" service by hand and it works fine. Browser still won't start. Some History: Installed SQL 2008 R2 express advanced Installed SQL2012 Express advanced Uninstalled SQL 2008 R2 express advanced Installed 2012 SSDT and lots of features with Express install Installed a unique instance of SQL 2012 Enterprise with all features Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem) Uninstalled SQL 2012 Express Uninstalled SQL 2012 Enterprise Removed anything with "SQL" in the name from Control panel "Programs and features" Installed SQL 2012 Enterprise without Analysis services (This is where I noticed SQL Browser service was failing to start even on the install) Added the feature of Analysis Services (and everything else) via the installer (Browser continued to fail to start on the install) ======================== Other interesting facts: opening a command window with administrator and trying to run sqlbrowser.exe manually yielded: Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\Windows\system32cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared C:\Program Files (x86)\Microsoft SQL Server\90\Sharedsqlbrowser.exe -c SQLBrowser: starting up in console mode SQLBrowser: starting up SSRP redirection service SQLBrowser: failed starting SSRP redirection services -- shutting down. SQLBrowser: starting up OLAP redirection service SQLBrowser: Stopping the OLAP redirector C:\Program Files (x86)\Microsoft SQL Server\90\Shared As I try to repair the install it errors out saying The following error has occurred: Service 'SQLBrowser' start request failed. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401 Clicking retry fails every time. When clicking cancel I get: The following error has occurred: SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete. . 
For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401 When I go to uninstall the SQL Browser from "Programs and Features", it complains: Error opening installation log file. Verify that the specified log file location exists and is writable. Is there any way I can fix this short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how do I do that?

    Read the article

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up lots in the press these days.  Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly.  I first mentioned this in a posting back in March 2009.  Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database.  The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results.  Look for more from Oracle at OpenWorld as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies.  I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail specific. Determine the Right Analytic Database: A Survey of New Data Technologies So when a retailer asks me about big data, here's what I say:  Big data refers to a set of technologies for processing large volumes of structured and unstructured data.  Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data.  Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments.  It's really not that far off.

    Read the article

  • Data and Secularism

    - by kaleidoscope
    Ever since we've been using data we've been religious: religious about the way we represent it and equally religious about the way we access it. Be it plain old SQL, DAO, ADO, or ADO.NET - and I am just referring to religions in the MSFT world. A peek outside and I'd need a separate book to list out the data faiths. Various application areas in networked computing are converging under the HTTP umbrella, with a plausible transition to purist HTTP and in turn REST, fuelled by the Web 2.0 storm. It was time the data access faiths also gave up the religious silos wrapped around our long-worshipped data publishing and access methods. OData is the secular solution we have at hand today. It is an open protocol for sharing data. It can be exposed via REST. It is open as in the Microsoft Open Specification Promise, which allows virtually everyone to build data services for any runtime. OData is one of the key standards for data publishing/subscribing on Microsoft Codename Dallas. For us .NETters, OData data sources can be exposed and consumed via WCF Data Services, and the process is very simple, elegant and intuitive. Applications exposing OData services: SharePoint 2010, IBM WebSphere, Microsoft SQL Azure, Windows Azure Table Storage, SQL Server Reporting Services. Live OData services: Netflix, Open Science Data Initiative, Open Government Data Initiatives, the Northwind database exposed as an OData service, and many others. Some may prefer to call it commoditization of data, unification of data access strategies, or any other sweet name. I for one will stick to my secular definition. :) Technorati Tags: Sarang,OData,MOSP
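    As an illustration of how little code the WCF Data Services route needs, a minimal read-only sketch over an Entity Framework model. The NorthwindEntities context name is an assumption standing in for whatever model the project already has:

    using System.Data.Services;
    using System.Data.Services.Common;

    // Exposes every entity set of the model as a read-only OData feed.
    public class NorthwindDataService : DataService<NorthwindEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }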

    Read the article

  • Silverlight TV 18: WCF RIA Services Validation

    Just prior to MIX10, Nikhil Kothari appears on the show to demonstrate some of the key advantages around validation when using WCF RIA Services. He demonstrates how to use a Domain Service to expose your domain model and how to create a custom service method to further filter your data server side. Nikhil also shows how the Domain Service generates validation rules using database attributes such as required fields or maximum string lengths. Other topics Nikhil covers: Domain service generated...
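    A hedged sketch of the kind of metadata that drives those generated rules: data annotation attributes on a "buddy" metadata class, which RIA Services reproduces as client-side validation in the Silverlight project. The Customer entity and its properties are illustrative assumptions, not taken from the show:

    using System.ComponentModel.DataAnnotations;

    [MetadataType(typeof(Customer.CustomerMetadata))]
    public partial class Customer
    {
        internal sealed class CustomerMetadata
        {
            [Required]
            [StringLength(40)]
            public string CompanyName { get; set; }

            [Range(0, 120)]
            public int Age { get; set; }
        }
    }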

    Read the article

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation Earlier this evening I gave an answer to a question on StackOverflow. The question: Editing of an existing object should be done in repository layer or in service? For example if I have a User that has debt. I want to change his debt. Should I do it in UserRepository or in service for example BuyingService by getting an object, editing it and saving it ? My answer: You should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation: class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } A comment I received: Business logic should really be in a service. Not in a model. What does the internet say? So, this got me searching since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit Of Work pattern but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model: There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it. To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is more considered as a façade that delegates work to the underlying model, than an actual work-intensive layer. Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program. Which is reinforced here: Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers. And here: The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. 
It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client. This is a serious contrast to other resources that talk about the Service Layer: The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. Or the second answer to a question I've already linked: At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer. "Solution"? Following the guidelines in this answer, I came up with the following approach that uses a Service Layer: class UserController : Controller { private UserService _userService; public UserController(UserService userService){ _userService = userService; } public ActionResult MakeHimPay(string username, int amount) { _userService.MakeHimPay(username, amount); return RedirectToAction("ShowUserOverview"); } public ActionResult ShowUserOverview() { return View(); } } class UserService { private IUserRepository _userRepository; public UserService(IUserRepository userRepository) { _userRepository = userRepository; } public void MakeHimPay(username, amount) { _userRepository.GetUserByName(username).makePayment(amount); } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } Conclusion All together not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However this doesn't look like it had anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least: following the original comment Business logic should really be in a service. Not in a model. Is this correct? How would I introduce my business logic in a service instead of the model?
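    To make the closing question concrete, here is a hedged sketch of what "logic in a service" would look like for the same example: the entity degenerates into getters and setters and the payment rule moves into UserService, which is exactly the trade-off the Anemic Domain Model article criticizes. The Save method is a hypothetical persistence call added so the sketch compiles:

    interface IUserRepository
    {
        User GetUserByName(string name);
        void Save(User user);            // hypothetical persistence method
    }

    class User
    {
        public string Name { get; set; }
        public int Debt { get; set; }    // debt in cents, now mutated from outside
    }

    class UserService
    {
        private readonly IUserRepository _userRepository;

        public UserService(IUserRepository userRepository)
        {
            _userRepository = userRepository;
        }

        public void MakeHimPay(string username, int amount)
        {
            var user = _userRepository.GetUserByName(username);
            user.Debt -= amount;          // the business rule now lives in the service
            _userRepository.Save(user);
        }
    }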

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast, know the app they use so well, and have such a mechanical quality to their work that they can "type ahead", i.e. continue typing, tabbing and entering faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then, when this next entry form appears, their buffered keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, that speed is desirable, since these people are really productive. I think this "typing ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or is it impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean "keyboard type-ahead" (typing faster than the next entry form can load), not suggest-as-you-type-like-Google type-ahead. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation - the user may type at whatever speed he or she desires, and if the receiving software is busy at the time, the input will be handled later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input is known as a keyboard buffer.

    Read the article

  • WIF, ADFS 2 and WCF – Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward. You first request a token from the STS and then use that token to create a channel to the relying party service. I’d recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2: private static SecurityToken GetToken() {     // Windows authentication over transport security     var factory = new WSTrustChannelFactory(         new WindowsWSTrustBinding(SecurityMode.Transport),         stsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcEndpoint),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy: private static void CallService(SecurityToken token) {     // create binding and turn off sessions     var binding = new WS2007FederationHttpBinding(         WSFederationHttpSecurityMode.TransportWithMessageCredential);     binding.Security.Message.EstablishSecurityContext = false;       // create factory and enable WIF plumbing     var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));     factory.ConfigureChannelFactory<IService>();       // turn off CardSpace - we already have the token     factory.Credentials.SupportInteractive = false;       var channel = factory.CreateChannelWithIssuedToken<IService>(token);       channel.GetClaims().ForEach(c =>         Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",             c.ClaimType,             c.Value,             c.Issuer,             c.OriginalIssuer)); } Why is this approach more flexible? Well – some don’t like the configuration voodoo. That’s a valid reason for using the manual approach. You also get more control over the token request itself since you have full control over the RST message that gets send to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token. Well – this in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced you have some degree of freedom here. In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and have to send that on to a federation gateway or Resource STS. Stay tuned.

    Read the article

  • How to migrate user settings and data to new machine?

    - by torbengb
    I'm new to Ubuntu and recently started using it on my PC. I'm going to replace that PC with a new machine. I want to transfer my data and settings to the nettop. What aspects should I consider? Obviously I want to move my data over. What things am I missing if I only copy the entire home folder? This is a home pc (not corporate) so user rights and other security issues are not a concern, except that the files should be accessible on the new machine! Please take into account that the new machine is a nettop that doesn't have an optical drive and doesn't allow me to hook the old SATA disk into it, so any data transfer must be handled via home network (I can have both the old and the new machine turned on and connected to the home LAN) and I have an USB thumbdrive with limited capacity (2GB). This sounds like it might limit the general applicability, but it would in fact make it more general. I'll make this a wiki topic because there could be several "right" answers. Update: Or so I thought. I don't see a choice for that.

    Read the article

  • WCF web service with Neural Network

    - by Gary Frank
    I am developing a web service that performs object recognition. It will be available for testing as soon as enough code has been developed, and then officially when it is finished. It is based on a radically new type of artificial neural network that I designed. Its goal is to recognize any type of object within an image. Besides the WCF web service, the project will also create a website to test and demonstrate the web service. Here is a link with more information. http://www.indiegogo.com/VOR

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area that is related to simulation and trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this let me say I have the following matrix: [a b; c d] I want to find a data structure that will allow for a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2 but let d be a normally distributed random variable with mean 20 and SD 40. The data structure that I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate an uniform(0,1), obtain a value for d using an inverse CDF and then spit out an actual matrix. I have several ideas to do this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible since I will be working with very very large matrices and memory will be an issue.

    Read the article

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Has anyone some idea on how to do it? Anyway, I am trying to retrieve data from Data.fs and store it in a representation easier to work, such as JSON. To do it, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything from it. I would like to get the PloneSite instance and do something like this: >>> import ZODB >>> from ZODB import FileStorage, DB >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs" >>> storage = FileStorage.FileStorage(path) >>> db = DB(storage) >>> conn = db.open() >>> root = conn.root() >>> app = root['Application'] >>> plone_site = app.getChildNodes()[13] # 13 would be index of PloneSite object >>> a = plone_site.get_articles() >>> for article in a: ... print "Title:", a.title ... print "Content:", a.content Title: <some title> Conent: <some content> Title: <some title> Conent: <some content> Of course, it did not need to be so straightforward. I just want some information about the structure of PloneSite and how to recover its data. Has anyone some idea? Thank you in advance!

    Read the article

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simply scheme. Essentially, Run <-- Data (where a Run holds a data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity). So, a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.) If I create a new mapping model, then things generally work - except that no new Samples are created, no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get mapping model to migrate my model but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no Source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately) then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "DataToData", $source.dataSet) Following this example, I set the run property to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToRun", $source) Similarly, I set the 'sample' property mapping in RunToRun to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source) and the 'sample' property in DataToData to FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:" , "RunToSample", $source.run) So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified. But I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance: but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications and have a question regarding how people normally deal with situations where you have a static data set and have an application where that data is needed in memory as one of the standard java collections or as an array. In my current specific issue i have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns. 1 Column is a string, 1 column is a float, 1 column is an integer. I need access to this data as an array in java. It seems like i could: 1) Encode in XML - This would be cpu intensive to decode in my experience. 2) build into SQLite database - seems like a lot of overhead for static access to data i only need array style access to in ram. 3) Build into binary blob and read in. (never done this in java, i miss void *) 4) Build a python script to take the CSV version of my data and spit out a java function that adds the values to my desired structure with hard coded values. 5) Store a string array via androids resource mechanism and compute the other 2 columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor which i'd rather not have to do for load time and battery usage reasons. I mostly work in low power embedded applications in C and as such #4 is what i'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in java/android. Perhaps I'm just being too battery usage conscious and in my single case i imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?

    Read the article

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is we run a Coldfusion process once a day that retrieves data provided from an IDX vendor via FTP. They push the data to us. Whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data, which is already proven to be much better than what we had. When it comes to 'updating' existing data, my initial thought was to query only for data that was updated. There is a field for 'Modified' that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (give myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating 'batch' processes that run through all listings regardless of updated status that is constantly running. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

    Read the article
