Search Results

Search found 18142 results on 726 pages for 'wcf configuration'.

Page 78/726

  • Serious problem with WCF, GridViews, Callbacks and ExecuteReader exceptions

    - by barjed
    Hi, I have this problem that is driving me insane. I have a project to deliver before Thursday. Basically it is an app consisting of three components that communicate with each other over WCF. I have one console app and one Windows Forms app. The console app is a server that's connected to the database. You can add records to it via the Windows Forms client, which connects to the server through WCF.

    The code for the client:

        namespace BankAdministratorClient {
            [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Single, UseSynchronizationContext = false)]
            public partial class Form1 : Form, BankServverReference.BankServerCallback {
                private BankServverReference.BankServerClient server = null;
                private SynchronizationContext interfaceContext = null;

                public Form1() {
                    InitializeComponent();
                    interfaceContext = SynchronizationContext.Current;
                    server = new BankServverReference.BankServerClient(new InstanceContext(this), "TcpBinding");
                    server.Open();
                    server.Subscribe();
                    refreshGridView("");
                }

                public void refreshClients(string s) {
                    SendOrPostCallback callback = delegate(object state) { refreshGridView(s); };
                    interfaceContext.Post(callback, s);
                }

                public void refreshGridView(string s) {
                    try {
                        userGrid.DataSource = server.refreshDatabaseConnection().Tables[0];
                    } catch (Exception ex) {
                        MessageBox.Show(ex.ToString());
                    }
                }

                private void buttonAdd_Click(object sender, EventArgs e) {
                    server.addNewAccount(Int32.Parse(inputPIN.Text), Int32.Parse(inputBalance.Text));
                }

                private void Form1_FormClosing(object sender, FormClosingEventArgs e) {
                    try { server.Unsubscribe(); server.Close(); } catch {}
                }
            }
        }

    The code for the server:

        namespace SSRfinal_tcp {
            class Program {
                static void Main(string[] args) {
                    Console.WriteLine(MessageHandler.dataStamp("The server is starting up"));
                    using (ServiceHost server = new ServiceHost(typeof(BankServer))) {
                        server.Open();
                        Console.WriteLine(MessageHandler.dataStamp("The server is running"));
                        Console.ReadKey();
                    }
                }
            }

            [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, InstanceContextMode = InstanceContextMode.PerCall, IncludeExceptionDetailInFaults = true)]
            public class BankServer : IBankServerService {
                private static DatabaseLINQConnectionDataContext database = new DatabaseLINQConnectionDataContext();
                private static List<IBankServerServiceCallback> subscribers = new List<IBankServerServiceCallback>();

                public void Subscribe() {
                    try {
                        IBankServerServiceCallback callback = OperationContext.Current.GetCallbackChannel<IBankServerServiceCallback>();
                        if (!subscribers.Contains(callback)) subscribers.Add(callback);
                        Console.WriteLine(MessageHandler.dataStamp("A new Bank Administrator has connected"));
                    } catch {
                        Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has failed to connect"));
                    }
                }

                public void Unsubscribe() {
                    try {
                        IBankServerServiceCallback callback = OperationContext.Current.GetCallbackChannel<IBankServerServiceCallback>();
                        if (subscribers.Contains(callback)) subscribers.Remove(callback);
                        Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has been signed out from the connection list"));
                    } catch {
                        Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has failed to sign out from the connection list"));
                    }
                }

                public DataSet refreshDatabaseConnection() {
                    var q = from a in database.GetTable<Account>() select a;
                    DataTable dt = q.toTable(rec => new object[] { q });
                    DataSet data = new DataSet();
                    data.Tables.Add(dt);
                    Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has requested a database data listing refresh"));
                    return data;
                }

                public void addNewAccount(int pin, int balance) {
                    Account acc = new Account() { PIN = pin, Balance = balance, IsApproved = false };
                    database.Accounts.InsertOnSubmit(acc);
                    database.SubmitChanges();
                    database.addNewAccount(pin, balance, false);
                    subscribers.ForEach(delegate(IBankServerServiceCallback callback) {
                        callback.refreshClients("New operation is pending approval.");
                    });
                }
            }
        }

    This is really simple and it works for a single window. However, when you open multiple instances of the client window and try to add a new record, the window that is performing the insert operation crashes with the ExecuteReader error and the "requires an open and available connection. The connection's current state is connecting" message. I have no idea what's going on. Please advise.
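
    A likely culprit, and this is a reading of the code above rather than anything confirmed in the post: the service shares a single static DatabaseLINQConnectionDataContext across all PerCall instances, and a LINQ to SQL DataContext (and its underlying connection) is not safe for concurrent use, so two clients calling refreshDatabaseConnection or addNewAccount at the same time collide on the same connection and produce exactly the "requires an open and available connection" error. A minimal, hedged sketch of the usual workaround is to create a short-lived context per operation:

        // Sketch only: a fresh DataContext per operation instead of the shared static one,
        // because a LINQ to SQL DataContext is not safe for concurrent service calls.
        public void addNewAccount(int pin, int balance) {
            using (var db = new DatabaseLINQConnectionDataContext()) {
                Account acc = new Account() { PIN = pin, Balance = balance, IsApproved = false };
                db.Accounts.InsertOnSubmit(acc);
                db.SubmitChanges();
            }
            subscribers.ForEach(callback => callback.refreshClients("New operation is pending approval."));
        }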

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn’t want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite.  Today’s post could fall into the category of, “Hey, as long as you have that fancy tool…”  Unfortunately, it also falls into the category of an overloaded worker taking someone else’s word for the truth, not verifying it with independent fact-checking, and then making decisions based on that.  Here’s my story…

    I’m not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it).  I came to this company five years ago as a lead application developer with extensive experience in database design and development.  I worked my way into management, and along the way, took over the DBA responsibilities.  Fortunately, our systems run pretty smoothly most of the time, but I’m always looking for ways to make them better and to fit into my understanding of best practices.  When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances.  Both of these servers were configured with the Operating System and Application files on the C drive, data files on a different drive letter, and log files on a third drive letter.  Even before I took over as DBA, I verified that this was true with a previous server administrator, and that these represented actual separate disks.  He stated that they did, and I thought that all was well.

    Then one day, I’m poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section.  The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic.  But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS.  I figured that I must be reading it wrong so I scanned the Help file, but that just seemed to confirm my interpretation.  Then I thought, “there must be something wrong with the demo version of the software!  This can’t be right!”  But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth!

    I was stunned!  I quickly went through the grieving process…denial…anger…reconciliation.  Here was something that I thought was such a basic truth that was turned upside down.  OK, granted, this wasn’t disastrous.  Our databases didn’t suddenly grind to a halt.  I didn’t get calls late at night inquiring about the sudden downturn in performance.  But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify!

    Yes, before someone else points it out, I know that there are “free” disk management tools built-in to Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that, but the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there.

    So, what things do you believe to be true but you actually never verified?

    Read the article

  • How can I generate a client proxy for a WCF service with an HTTPS endpoint?

    - by ng5000
    Might be the same issue as this previous question: WCF Proxy, but I'm not sure... I have an HTTPS service configured to use transport security and, I hope, Windows credentials. The service is only accessed internally (i.e. within the intranet). The configuration is as follows:

        <configuration>
          <system.serviceModel>
            <services>
              <service name="WCFTest.CalculatorService" behaviorConfiguration="WCFTest.CalculatorBehavior">
                <host>
                  <baseAddresses>
                    <add baseAddress="https://localhost:8000/WCFTest/CalculatorService/" />
                  </baseAddresses>
                </host>
                <endpoint address="basicHttpEP" binding="basicHttpBinding" contract="WCFTest.ICalculatorService" bindingConfiguration="basicHttpBindingConfig"/>
                <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/>
              </service>
            </services>
            <bindings>
              <basicHttpBinding>
                <binding name="basicHttpBindingConfig">
                  <security mode="Transport">
                    <transport clientCredentialType="Windows"/>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WCFTest.CalculatorBehavior">
                  <serviceAuthorization impersonateCallerForAllOperations="false" principalPermissionMode="UseWindowsGroups" />
                  <serviceCredentials>
                    <windowsAuthentication allowAnonymousLogons="false" includeWindowsGroups="true" />
                  </serviceCredentials>
                  <serviceMetadata httpsGetEnabled="True"/>
                  <serviceDebug includeExceptionDetailInFaults="False" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    When I run the service I can't see it in IE; I get a "this page cannot be displayed" error. If I try to create a client in VS2008 via the "Add Service Reference" wizard I get this error:

        There was an error downloading 'https://localhost:8000/WCFTest/CalculatorService/'. There was an error downloading 'https://localhost:8000/WCFTest/CalculatorService/'. The underlying connection was closed: An unexpected error occurred on a send. Authentication failed because the remote party has closed the transport stream. Metadata contains a reference that cannot be resolved: 'https://localhost:8000/WCFTest/CalculatorService/'. An error occurred while making the HTTP request to https://localhost:8000/WCFTest/CalculatorService/. This could be due to the fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding between the client and the server. The underlying connection was closed: An unexpected error occurred on a send. Authentication failed because the remote party has closed the transport stream. If the service is defined in the current solution, try building the solution and adding the service reference again.

    I think I'm missing some fundamental basics here. Do I need to set up some certificates? Or should it all just work as it seems to do when I use NetTcpBinding? Thanks
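
    One hedged way to rule out the proxy-generation step while testing, assuming the service certificate is already bound to port 8000 in HTTP.SYS (for example with netsh http add sslcert) and trusted by the client machine, is to skip the metadata download entirely and build the channel directly from the shared contract with a binding that mirrors the service's Transport/Windows settings. The contract and its Add operation below are placeholders for whatever WCFTest.ICalculatorService actually defines:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        interface ICalculatorService
        {
            [OperationContract]
            int Add(int a, int b); // placeholder operation, not from the original post
        }

        class CalculatorClientSketch
        {
            static void Main()
            {
                // Must match the service: Transport security with Windows client credentials.
                var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Windows;

                var factory = new ChannelFactory<ICalculatorService>(
                    binding,
                    new EndpointAddress("https://localhost:8000/WCFTest/CalculatorService/basicHttpEP"));

                ICalculatorService proxy = factory.CreateChannel();
                Console.WriteLine(proxy.Add(1, 2));
                ((IClientChannel)proxy).Close();
                factory.Close();
            }
        }

    If a call like this succeeds but "Add Service Reference" still fails, the quoted error suggests the certificate/HTTP.SYS binding or certificate trust, rather than the WCF configuration itself, is the thing to look at.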

    Read the article

  • Fix: WCF - The type provided as the Service attribute value in the ServiceHost directive could not

    I wanted to expose some raw data to users in my current ASP.NET 3.5 web site project. I created a subdirectory called datafeeds and added a WCF Data Service. I wired the dataservice up to the Entity Framework class and, on running the ItemDataService...(read more)

    Read the article

  • Use WCF Data Services 1.5 with Silverlight, by Benjamin Roux

    Quote: This article will show you how to use Silverlight with WCF Data Services 1.5. First of all, why use Data Services 1.5? Quite simply because the integration with Silverlight is greatly improved (INotifyPropertyChanged and ObservableCollection, two-way binding...). It's over here. Feel free to leave your comments right here.

    Read the article

  • Unit testing a WCF RIA DomainService: IDomainServiceFactory, a tutorial by Kyle McClellan, translated by Deepin Prayag

    Quote: Since WCF RIA Services uses a "pipeline pattern" to invoke your DomainService operations, it is not always obvious how to test them. In this series of articles we will look at a small DomainService and how to test it. Among other things, we will see how to implement a custom IDomainServiceFactory, how to implement the Repository pattern, and how to use the DomainServiceTestHos...

    Read the article

  • BenkoTips Live and On Demand: Visual Studio 2010, Silverlight 4, and WCF (Level 200)

    In this webcast, we explore what's new and possible with Windows Communication Foundation (WCF) RIA Services and your Microsoft Silverlight application. We show how you can create an entity model and then expose it to your client application and how to build a compelling interface using the data-binding features built into Microsoft Visual Studio 2010.

    Read the article

  • Unit testing a WCF RIA DomainService: DomainServiceTestHost, a tutorial by Kyle McClellan, translated by Deepin Prayag

    Quote: Since WCF RIA Services uses a "pipeline pattern" to invoke your DomainService operations, it is not always obvious how to test them. In this series of articles we will look at a small DomainService and how to test it. Among other things, we will see how to implement a custom IDomainServiceFactory, how to implement the Repository pattern, and how to use the DomainServiceTestHos...

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high-level overview. Now it is time for the details on what was done to the existing CMMI project, based on CMMI v4.2. The first step was to go into Visual Studio, then from the Team Project Collection Settings to the Process Template Manager.  Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>; the actual path to reference in a 64-bit environment is “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE”. As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs.  If you did not apply any changes to the names, you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse.

    Now, on to the work needed with the witadmin tool. That includes uploading the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of “Area ID”, whereas TFS 2008 would have had it as “AreaID”.  So, this will need to be corrected.  Some posts will have you go through this after the errors pop up; I would recommend doing this process prior to executing the importwitd process.

        witadmin listfields /collection:<path to collection> > c:\ListFields.txt

    Review the following fields: AreaID (review the Name property and, if it states “AreaID”, rename it to “Area ID”). ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID are the other fields to check. To correct the issue, execute the following:

        witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID.  Once this is done, proceed with the commands below.

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
        witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

    Modifications to the Bug definition. The first step is to export the existing definition:

        witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Make the modifications to the recently exported MyBug.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Repeat the process for the Scenario or Requirement type definition:

        witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Make the modifications to the recently exported MyRequirement.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

        witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Finally, provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

        tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • Silverlight 4 Tools + WCF RIA Services Released

    Microsoft has released the final versions of the Silverlight 4 Tools along with WCF RIA Services and the Silverlight Toolkit. Check Tim Heuer's blog for all the info.

    Read the article

  • WCF HttpClient Error calling a RESTful WCF Service - Cannot write more bytes to the buffer than the configured maximum buffer size: 65536

    - by Justin Hoffman
    Using the HttpClient API from wcf.codeplex.com, you may encounter this error if responses are too large: "Cannot write more bytes to the buffer than the configured maximum buffer size: 65536". In order to increase the size of the response buffer, just increase MaxResponseContentBufferSize as shown below. Set it to something larger than the default of 65536, depending on your response sizes.

        var client = new HttpClient
        {
            MaxResponseContentBufferSize = 196608,
            BaseAddress = new Uri("http://myservice/service1/")
        };
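
    For what it's worth, the same property survived onto the HttpClient that later shipped in System.Net.Http, so an equivalent sketch against the released API looks roughly like the following; the address and resource name are placeholders, not anything from the original post:

        using System;
        using System.Net.Http;

        class BufferSizeSketch
        {
            static void Main()
            {
                // Configure the buffer limit before sending any requests.
                var client = new HttpClient
                {
                    MaxResponseContentBufferSize = 196608,
                    BaseAddress = new Uri("http://myservice/service1/") // placeholder address
                };

                // GetStringAsync buffers the response body, so the limit above applies here.
                string body = client.GetStringAsync("orders").Result; // "orders" is a placeholder resource
                Console.WriteLine(body.Length);
            }
        }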

    Read the article

  • WIF, ADFS 2 and WCF – Part 1: Overview

    - by Your DisplayName here!
    A lot has been written already about passive federation and integration of WIF and ADFS 2 into web apps. The whole active/WS-Trust feature area is much less documented or covered in articles and blogs. Over the next few posts I will try to compile all relevant information about the above topics – but let’s start with an overview.

    ADFS 2 has a number of endpoints under the /services/trust base address that implement the WS-Trust protocol. They are grouped by the WS-Trust version they support (/13 and /2005), the client credential type (/windows*, /username*, /certificate*) and the security mode (*transport, *mixed and message). You can see the endpoints in the MMC console under the Service/Endpoints page. So in other words, you use one of these endpoints (which one exactly depends on your configuration / system setup) to request tokens from ADFS 2.

    The bindings behind the endpoints are more or less standard WCF bindings, but with SecureConversation (establishSecurityContext) disabled. That means that whenever you need to programmatically talk to these endpoints, you can (easily) create client bindings that are compatible. Another option is to use the special bindings that come with WIF (in the Microsoft.IdentityModel.Protocols.WSTrust.Bindings namespace). They are already pre-configured to be compatible with the ADFS endpoints. The downside of these bindings is that you can’t use them in configuration. That’s definitely a feature request of mine for the next version of WIF.

    The next important piece of information is the so-called Federation Service Identifier. This is the value that you (at least by default) have to use as a realm/appliesTo whenever you are requesting a token for ADFS (e.g. in an IdP –> R-STS scenario). Or (even more) technically speaking, ADFS 2 checks for this value in the audience URI restriction in SAML tokens. You can get to this value by clicking “Edit Federation Service Properties” in the MMC when the Service tree node is selected.

    OK – I will come back to this basic information in the following posts. Basically I want to go through the following scenarios:

        - ADFS in the IdP role
        - ADFS in the R-STS role (with a chained claims provider)
        - Using the WCF bindings for automatic token issuance
        - Using WSTrustChannelFactory for manual token handling

    Stay tuned…
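
    As a small preview of that last scenario, here is a hedged sketch of manual token issuance against the /13/usernamemixed endpoint using the WIF bindings mentioned above; the ADFS host, credentials and relying-party realm are placeholders, and the WS-Trust Issue URI is written out as a string to keep the sketch self-contained:

        using System.ServiceModel;
        using System.ServiceModel.Security;
        using Microsoft.IdentityModel.Protocols.WSTrust;
        using Microsoft.IdentityModel.Protocols.WSTrust.Bindings;

        class TokenRequestSketch
        {
            static void Main()
            {
                // Pre-configured WIF binding matching the ADFS 2 username/mixed endpoint.
                var factory = new WSTrustChannelFactory(
                    new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
                    new EndpointAddress("https://adfs.example.com/adfs/services/trust/13/usernamemixed"));
                factory.TrustVersion = TrustVersion.WSTrust13;
                factory.Credentials.UserName.UserName = "bob";       // placeholder
                factory.Credentials.UserName.Password = "secret";    // placeholder

                var rst = new RequestSecurityToken
                {
                    // WS-Trust 1.3 Issue request type.
                    RequestType = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue",
                    // Realm of the relying party; in an IdP -> R-STS hop this would be the
                    // Federation Service Identifier described above.
                    AppliesTo = new EndpointAddress("https://relyingparty.example.com/")
                };

                var channel = factory.CreateChannel();
                var token = channel.Issue(rst); // the issued SecurityToken
            }
        }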

    Read the article

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard. This screen is one of the few that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions

    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hard-code a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND/INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script - this requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.

    2. NOT NULL columns

    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually 3 separate scenarios for this problem that have separate solutions within the engine:

    Adding a NOT NULL column to a table without a rebuild. Here, the workaround is to add a column default with an appropriate value to the column you're adding: ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL; Note, however, there is something to bear in mind about this solution; once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.

    Adding a NOT NULL column to a table, where a separate change forced a table rebuild. Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause.

    Changing an existing NULL to a NOT NULL column. To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value.

    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle, as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.

    Read the article

  • Silverlight and WCF caching

    - by subodhnpushpak
    There are scenarios where a Silverlight client calls a WCF (or REST) service for data. If that data is not cached on the WCF layer, the calls can consume considerable resources at the server. Keeping that in mind, along with the fact that caching is a cross-cutting aspect, it should be as easy as possible to add caching wherever required. The good thing about the solution is that it caches based on the inputs. The input can be a basic type or any complex type. If the input changes, the data is fetched and then cached for further use. If the same input is provided again, the data is fetched from the cache. The cache logic itself is implemented as a PostSharp aspect, and it is as easy as putting an attribute on a service operation to switch on caching. Notice how clean the code is:

        [OperationContract]
        [CacheOnArgs(typeof(int))] // cache based on the actual value of the argument
        public string DoWork(int value)
        {
            return string.Format("You entered: {0} @ cached time {1}", value, System.DateTime.Now);
        }

    The cache is implemented as a PostSharp aspect as below:

        public override void OnInvocation(MethodInvocationEventArgs eventArgs)
        {
            try
            {
                object value = new object();
                object[] args = eventArgs.GetArgumentArray();
                if (args != null || args.Count() > 0)
                {
                    // Compute the cache key (details omitted).
                    string key = string.Format("{0}_{1}", eventArgs.Method.Name, XMLUtility<object>.GetDataContractXml(args[0], null));

                    value = GetFromCache(key);
                    if (value == null)
                    {
                        eventArgs.Proceed();
                        value = XMLUtility<object>.GetDataContractXml(eventArgs.ReturnValue, null);
                        value = eventArgs.ReturnValue;
                        AddToCache(key, value);
                        return;
                    }

                    Log(string.Format("Data returned from Cache {0}", value));
                    eventArgs.ReturnValue = value;
                }
            }
            catch (Exception ex)
            {
                //ApplicationLogger.LogException(ex.Message, Source.UtilityService);
            }
        }

        private object GetFromCache(string inputKey)
        {
            if (ServerConfig.CachingEnabled)
            {
                return WCFCache.Current[inputKey];
            }
            return null;
        }

        private void AddToCache(string inputKey, object outputValue)
        {
            if (ServerConfig.CachingEnabled)
            {
                if (WCFCache.Current.CachedItemsNumber < ServerConfig.NumberOfCachedItems)
                {
                    if (ServerConfig.SlidingExpirationTime <= 0 || ServerConfig.SlidingExpirationTime == int.MaxValue)
                    {
                        WCFCache.Current[inputKey] = outputValue;
                    }
                    else
                    {
                        WCFCache.Current.Insert(inputKey, outputValue, new TimeSpan(0, 0, ServerConfig.SlidingExpirationTime), true);

                        // _bw.DoWork += bw_DoWork;
                        // string arg = string.Format("{0}|{1}", inputKey, outputValue);
                        // _bw.RunWorkerAsync(inputKey);
                    }
                }
            }
        }

    The cache class can be extended to support Velocity / memcached / NCache. The attribute can be used over REST services as well. Hope the above helps. Here is the code base for the same. Please do provide your inputs / comments.

    Read the article

  • ResolveUrl() from WCF service

    - by Michael Freidgeim
    I wanted to ResolveUrl() from a WCF service and found http://www.west-wind.com/weblog/posts/2007/Sep/18/ResolveUrl-without-Page. However, the function assumes that the call is synchronous; in an asynchronous call (e.g. one invoked from a TPL task) HttpContext.Current == null. I had to split my asynchronous method into two: a long asynchronous one, invoked as a task and generating a relative URL, and a post-task one that calls wwWebUtils.ResolveServerUrl(relativeUrl). The http://www.codeproject.com/Articles/205425/ASP-NET-ResolveUrl-Without-Page article suggests using System.Web.VirtualPathUtility.ToAbsolute("~/default.aspx"); but I expect it wouldn't work from an asynchronous thread either, for the same reason.
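
    A hedged sketch of that split, assuming a TPL task started from the service operation; the method names and the URL are illustrative only, and wwWebUtils.ResolveServerUrl is the helper from the West Wind post linked above:

        using System.Threading.Tasks;

        public class ReportLinkBuilder
        {
            // Long-running part: runs on a task thread where HttpContext.Current is null,
            // so it only produces an application-relative URL and touches no request state.
            private static Task<string> BuildRelativeUrlAsync(int reportId)
            {
                return Task.Factory.StartNew(() => "~/reports/view.aspx?id=" + reportId); // placeholder URL
            }

            // Post-task part: runs back on the original request thread, where the
            // HttpContext-dependent resolution is safe to call.
            public static string BuildAbsoluteUrl(int reportId)
            {
                string relativeUrl = BuildRelativeUrlAsync(reportId).Result;
                return wwWebUtils.ResolveServerUrl(relativeUrl);
            }
        }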

    Read the article

  • How to call a WCF service using ksoap2 on Android?

    - by Qing
    Hi all, here is my code:

        import org.ksoap2.*;
        import org.ksoap2.serialization.*;
        import org.ksoap2.transport.*;
        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.TextView;

        public class ksop2test extends Activity {
            /** Called when the activity is first created. */
            private static final String METHOD_NAME = "SayHello";
            // private static final String METHOD_NAME = "HelloWorld";
            private static final String NAMESPACE = "http://tempuri.org";
            // private static final String NAMESPACE = "http://tempuri.org";
            private static final String URL = "http://192.168.0.2:8080/HelloWCF/Service1.svc";
            // private static final String URL = "http://192.168.0.2:8080/webservice1/Service1.asmx";
            final String SOAP_ACTION = "http://tempuri.org/IService1/SayHello";
            // final String SOAP_ACTION = "http://tempuri.org/HelloWorld";

            TextView tv;
            StringBuilder sb;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                tv = new TextView(this);
                sb = new StringBuilder();
                call();
                tv.setText(sb.toString());
                setContentView(tv);
            }

            public void call() {
                try {
                    SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME);
                    request.addProperty("name", "Qing");
                    SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
                    envelope.dotNet = true;
                    envelope.setOutputSoapObject(request);
                    HttpTransportSE androidHttpTransport = new HttpTransportSE(URL);
                    androidHttpTransport.call(SOAP_ACTION, envelope);
                    sb.append(envelope.toString() + "\n"); // cannot get the xml request sent
                    SoapPrimitive result = (SoapPrimitive) envelope.getResponse(); // to get the data
                    String resultData = result.toString(); // 0 is the first object of data
                    sb.append(resultData + "\n");
                } catch (Exception e) {
                    sb.append("Error:\n" + e.getMessage() + "\n");
                }
            }
        }

    I can successfully access the .asmx service, but when I try to call the WCF service the virtual machine says:

        Error: expected:END_TAG{http://schemas.xmlsoap.org/soap/envelope/}Body(position:END_TAG@1:712 in java.io.InputStreamReader@43ba6798

    How can I print the request that gets sent?

    Here is the WCF WSDL:

        <wsdl:definitions name="Service1" targetNamespace="http://tempuri.org/">
          <wsdl:types>
            <xsd:schema targetNamespace="http://tempuri.org/Imports">
              <xsd:import schemaLocation="http://para-bj.para.local:8080/HelloWCF/Service1.svc?xsd=xsd0" namespace="http://tempuri.org/"/>
              <xsd:import schemaLocation="http://para-bj.para.local:8080/HelloWCF/Service1.svc?xsd=xsd1" namespace="http://schemas.microsoft.com/2003/10/Serialization/"/>
            </xsd:schema>
          </wsdl:types>
          <wsdl:message name="IService1_SayHello_InputMessage">
            <wsdl:part name="parameters" element="tns:SayHello"/>
          </wsdl:message>
          <wsdl:message name="IService1_SayHello_OutputMessage">
            <wsdl:part name="parameters" element="tns:SayHelloResponse"/>
          </wsdl:message>
          <wsdl:portType name="IService1">
            <wsdl:operation name="SayHello">
              <wsdl:input wsaw:Action="http://tempuri.org/IService1/SayHello" message="tns:IService1_SayHello_InputMessage"/>
              <wsdl:output wsaw:Action="http://tempuri.org/IService1/SayHelloResponse" message="tns:IService1_SayHello_OutputMessage"/>
            </wsdl:operation>
          </wsdl:portType>
          <wsdl:binding name="BasicHttpBinding_IService1" type="tns:IService1">
            <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
            <wsdl:operation name="SayHello">
              <soap:operation soapAction="http://tempuri.org/IService1/SayHello" style="document"/>
              <wsdl:input>
                <soap:body use="literal"/>
              </wsdl:input>
              <wsdl:output>
                <soap:body use="literal"/>
              </wsdl:output>
            </wsdl:operation>
          </wsdl:binding>
          <wsdl:service name="Service1">
            <wsdl:port name="BasicHttpBinding_IService1" binding="tns:BasicHttpBinding_IService1">
              <soap:address location="http://para-bj.para.local:8080/HelloWCF/Service1.svc"/>
            </wsdl:port>
          </wsdl:service>
        </wsdl:definitions>

    It uses in tag and the asmx uses in tag; what's the difference? Thanks. -Qing

    Read the article
