Search Results

Search found 27858 results on 1115 pages for 'managed service provider'.


  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async, event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to calling the WCF services directly.
    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.
    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.
    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:
    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.
    If the proxy does not have a local cached file for the service response, it calls the service, writes the WCF response to the local cache file, and writes it to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.
    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:
    Key: /WcfSampleService/PersonService.svc/rest/fetch/3 -> Value: "28cd4796-76b8-451b-adfd-75cb50a50fa6"
    Key: "28cd4796-76b8-451b-adfd-75cb50a50fa6" -> Value: /WcfSampleService/PersonService.svc/rest/fetch/3
    In Node we read the cache using the incoming URL path as the key and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache – which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.
    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second and an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second and a mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.
    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.
    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") – which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
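    To make the shared tag cache concrete, here is a rough C# sketch of the WCF-side update described above. The memcached client (Enyim) and the method names are my assumptions for illustration only; the sample's actual code lives in the NodeWcfAccelerator repository.
    // Illustrative sketch, not the sample's real code. Assumes the Enyim memcached client.
    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    public class EtagCache
    {
        private readonly MemcachedClient _cache = new MemcachedClient();

        // Called by the WCF update service after the entity changes.
        public void UpdateTag(string readUrl, string oldEtag, string newEtag)
        {
            // forward lookup: URL -> current ETag (used by the Node proxy)
            _cache.Store(StoreMode.Set, readUrl, newEtag);

            // reverse lookup: ETag -> URL (lets the update service find the read URL)
            _cache.Store(StoreMode.Set, newEtag, readUrl);

            // drop the stale reverse entry so the proxy never serves the old cached body
            if (!string.IsNullOrEmpty(oldEtag))
                _cache.Remove(oldEtag);
        }
    }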

    Read the article

  • SOA Implementation Challenges

    Why do companies think that if they put up a web service they are doing Service-Oriented Architecture (SOA)? Unfortunately, the IT and business world love to run on the latest hype or buzz words, of which very few even understand the meaning. One of the largest issues companies have today as they consider going down the path of SOA is the lack of knowledge regarding the architectural style and the overuse of the term SOA. So how do we solve this issue?
    I am sure most of you are thinking by now that you know what SOA is because you developed a few web services. Isn't that SOA, right? No, that is not SOA, but instead Just Another Web Service (JAWS). For us to better understand what SOA is, let's look at a few definitions.
    Douglas K. Barry defines service-oriented architecture as a collection of services. These services are enabled to communicate with each other in order to pass data or to coordinate some activity with other services.
    If you look at this definition closely you will notice that Barry states that services communicate with each other. Let us compare this statement with my first statement regarding companies that claim to be doing SOA when they have just a collection of web services. In order for these web services to form an SOA application, they need to be interdependent on one another, forming some sort of architectural hierarchy. Just because a company has a few web services does not mean that they are all interconnected.
    SearchSOA from TechTarget.com states that SOA defines how two computing entities work collectively to enable one entity to perform a unit of work on behalf of another. Once again, just because a company has a few web services does not guarantee that they are even working together, let alone performing work for each other. SearchSOA also points out that service interactions should be self-contained and loosely coupled, so that all interactions operate independently of each other.
    Of all the definitions regarding SOA, Thomas Erl's seems to shed the most light on this concept. He states that “SOA establishes an architectural model that aims to enhance the efficiency, agility, and productivity of an enterprise by positioning services as the primary means through which solution logic is represented in support of the realization of the strategic goals associated with service-oriented computing.” (Erl, 2011) Once again this definition proves that a collection of web services does not mean that a company is doing SOA. However, it does mean that a company has a collection of web services, and that is it.
    In order for a company to start down the path of SOA, they must take a hard look at their existing business processes while abstracting away any technology, so that they can define what it is they really want to accomplish. Once a company has done this, they can begin to factor out common sub-processes, like credit card processing, user authentication or system notifications, into small components that can be built independently of each other and then reassembled to form new and dynamic services that are loosely coupled and agile, in that they can change as a business grows.
    Another key pitfall of companies doing SOA is the fact that they let vendors drive their architecture. Why do companies do this? Vendors do not hold your company's success as their top priority; in fact they hold their own success as their top priority, by selling you as much stuff as you are willing to buy.
    In my experience companies tend to strive for the maximum amount of benefit with a minimal amount of cost. Does anyone else see any conflict between this and the driving force behind vendors?
    Mike Kavis recommends, in an article written on CIO.com, that companies figure out what they need before they talk to a vendor, or at least have some idea of what they need. It is important to thoroughly evaluate each vendor and watch them perform a live demo of their system, so that you as the company fully understand what kind of product or service the vendor is actually offering. In addition, do research on each vendor that you are considering: check out blog posts, online reviews, and any information you can find on the vendor through various search engines. Finally, he recommends that companies verify any recommendations supplied by a vendor. From personal experience this is very important. I can remember when the company I worked for purchased a $200,000 add-on to their phone system that never actually worked as it was intended. In fact, just after my departure the company started the process of attempting to get their money back from the vendor. This potentially could have been avoided if the company had done the research before selecting this vendor, to ensure that the product and the vendor would live up to their claims.
    I know that some SOA vendors offer free training regarding SOA because they know that there are a lot of misconceptions about the topic. On the surface this is a great thing for companies to take part in, especially if the company is starting to implement an SOA architecture and is still unsure about some topics or is looking for some guidance. However, beware that some vendors will focus the training only on their own product line. As an example, InfoWorld.com claims that some companies provide deep seminars disguised as training, focusing more on ESBs and SOA governance technology and less on how to approach and solve the architectural issues of the attendees.
    In short, it is important to remember that we as software professionals, who are responsible for guiding a business's technology decisions, should be well informed and fully understand any new concepts that may be considered for implementation. As I have already demonstrated, a company having a few web services does not mean that it is doing SOA. Additionally, we must not let the buzz word of the day drive our technology; instead, our technology decisions should be driven by research and proven experience. Finally, it is important to rely on vendors when necessary; however, always take what they say with a grain of salt and cross-check any claims they make, because we have to live with the aftermath of the system after the vendors are gone.
    References:
    Barry, D. K. (2011). Service-oriented architecture (SOA) definition. Retrieved 12 12, 2011, from Service-Architecture.com: http://www.service-architecture.com/web-services/articles/service-oriented_architecture_soa_definition.html
    Connell, B. (2003, 9). Service-oriented architecture (SOA). Retrieved 12 12, 2011, from SearchSOA: http://searchsoa.techtarget.com/definition/service-oriented-architecture
    Erl, T. (2011, 12 12). Service-Oriented Architecture. Retrieved 12 12, 2011, from WhatIsSOA: http://www.whatissoa.com/p10.php
    InfoWorld. (2008, 6 1). Should you get your SOA knowledge from SOA vendors? Retrieved 12 12, 2011, from InfoWorld.com: http://www.infoworld.com/d/architecture/should-you-get-your-soa-knowledge-soa-vendors-453
    Kavis, M. (2008, 6 18). Top 10 Reasons Why People are Making SOA Fail. Retrieved 12 13, 2011, from CIO.com: http://www.cio.com/article/438413/Top_10_Reasons_Why_People_are_Making_SOA_Fail?page=5&taxonomyId=3016

    Read the article

  • .NET Oracle Provider: Why will my stored proc not work?

    - by Matt
    I am using the Oracle .NET Provider and am calling a stored procedure in a package. The message I get back is "Wrong number or types in call". I have ensured that the parameters are being added in the correct order, and I have gone over the OracleDbTypes thoroughly, though I suspect that is where my problem is. Here is the code-behind:
    //setup initial stuff, connection and command
    string msg = string.Empty;
    string oraConnString = ConfigurationManager.ConnectionStrings["OracleServer"].ConnectionString;
    OracleConnection oraConn = new OracleConnection(oraConnString);
    OracleCommand oraCmd = new OracleCommand("PK_MOVEMENT.INSERT_REC", oraConn);
    oraCmd.CommandType = CommandType.StoredProcedure;
    try
    {
        //iterate the array
        //grab 3 items at a time and do db insert, continue until all items are gone. Will always be divisible by 3.
        for (int i = 0; i < theData.Length; i += 3)
        {
            //3 items hardcoded for now
            string millCenter = "0010260510";
            string movementType = "RECEIPT";
            string feedCode = null;
            string userID = "GRIMMETTM";
            string inventoryType = "INGREDIENT"; //set to FINISHED for feed stuff
            string movementDate = theData[i + 0];
            string ingCode = System.Text.RegularExpressions.Regex.Match(theData[i + 1], @"^([0-9]*)").ToString();
            string pounds = theData[i + 2].Replace(",", "");

            //setup parameters
            OracleParameter p1 = new OracleParameter("A_MILL_CENTER", OracleDbType.NVarchar2, 10);
            p1.Direction = ParameterDirection.Input;
            p1.Value = millCenter;
            oraCmd.Parameters.Add(p1);
            OracleParameter p2 = new OracleParameter("A_INGREDIENT_CODE", OracleDbType.NVarchar2, 50);
            p2.Direction = ParameterDirection.Input;
            p2.Value = ingCode;
            oraCmd.Parameters.Add(p2);
            OracleParameter p3 = new OracleParameter("A_FEED_CODE", OracleDbType.NVarchar2, 30);
            p3.Direction = ParameterDirection.Input;
            p3.Value = feedCode;
            oraCmd.Parameters.Add(p3);
            OracleParameter p4 = new OracleParameter("A_MOVEMENT_TYPE", OracleDbType.NVarchar2, 10);
            p4.Direction = ParameterDirection.Input;
            p4.Value = movementType;
            oraCmd.Parameters.Add(p4);
            OracleParameter p5 = new OracleParameter("A_MOVEMENT_DATE", OracleDbType.NVarchar2, 10);
            p5.Direction = ParameterDirection.Input;
            p5.Value = movementDate;
            oraCmd.Parameters.Add(p5);
            OracleParameter p6 = new OracleParameter("A_MOVEMENT_QTY", OracleDbType.Int64, 12);
            p6.Direction = ParameterDirection.Input;
            p6.Value = pounds;
            oraCmd.Parameters.Add(p6);
            OracleParameter p7 = new OracleParameter("INVENTORY_TYPE", OracleDbType.NVarchar2, 10);
            p7.Direction = ParameterDirection.Input;
            p7.Value = inventoryType;
            oraCmd.Parameters.Add(p7);
            OracleParameter p8 = new OracleParameter("A_CREATE_USERID", OracleDbType.NVarchar2, 20);
            p8.Direction = ParameterDirection.Input;
            p8.Value = userID;
            oraCmd.Parameters.Add(p8);
            OracleParameter p9 = new OracleParameter("A_RETURN_VALUE", OracleDbType.Int32, 10);
            p9.Direction = ParameterDirection.Output;
            oraCmd.Parameters.Add(p9);

            //open and execute
            oraConn.Open();
            oraCmd.ExecuteNonQuery();
            oraConn.Close();
        }
    }
    catch (OracleException oraEx)
    {
        msg = "An error has occured in the database: " + oraEx.ToString();
    }
    catch (Exception ex)
    {
        msg = "An error has occured: " + ex.ToString();
    }
    finally
    {
        //close connection
        oraConn.Close();
    }
    return msg;
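    Two things may be worth checking here, offered as possibilities rather than a confirmed fix: with ODP.NET, parameters bind by position unless OracleCommand.BindByName is set, and because oraCmd is reused across loop iterations without Parameters.Clear(), the parameter count grows on every pass; either can produce a "wrong number or types of arguments" error. A sketch of the adjusted pattern, assuming ODP.NET and illustrative method and variable names:
    // Hedged sketch of one iteration's insert, not a verified fix for this package.
    using System.Data;
    using Oracle.DataAccess.Client; // ODP.NET (assumed provider)

    static void InsertMovement(OracleConnection conn, string millCenter, string ingCode, long qty)
    {
        using (OracleCommand cmd = new OracleCommand("PK_MOVEMENT.INSERT_REC", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.BindByName = true;      // otherwise ODP.NET binds strictly by position
            cmd.Parameters.Clear();     // essential if a command object is reused in a loop

            cmd.Parameters.Add("A_MILL_CENTER", OracleDbType.NVarchar2, millCenter, ParameterDirection.Input);
            cmd.Parameters.Add("A_INGREDIENT_CODE", OracleDbType.NVarchar2, ingCode, ParameterDirection.Input);
            // pass numbers as numeric types, not strings, so they match the PL/SQL signature
            cmd.Parameters.Add("A_MOVEMENT_QTY", OracleDbType.Int64, qty, ParameterDirection.Input);

            cmd.ExecuteNonQuery();
        }
    }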

    Read the article

  • How do I create a C# .NET Web Service that posts items to a user's Facebook Wall?

    - by Jourdan
    I'm currently toying around with the Clarity .NET Facebook API but am finding certain situations with authentication to be kind of limiting. I keep going through the tutorials but always end up hitting a brick wall with what I want to do. Perhaps I just cannot do it? I want to make a Web Service that takes in the required credentials (APIKey, SecretKey, UsersId (or Session Key?) and whatever else I would need), and then does various tasks: post to a user's wall, add events, etc. The problem I am having is this: the current documentation, examples and support provide a way to do this within the context of a Web site. Within this context, the required "connect" popup can be initiated and allows the user to authenticate and connect the application. From that point on the Web site can go on with its business to do what it needs to do. If I close the browser and come back to the page, I have to push the connect button again - except this time, since I was already logged into Facebook, I don't have to go through the whole connection process. But still... How do applications like TweetDeck get around this? They seemingly have you connect once, when you install their application, and you don't have to do it again. I would assume that this same idea would have to be applied to making a web service, because: you don't know what context the user is in when making the Web service call, and the web service methods being called could be coming from a Windows Forms app, or code-behind in a workflow.
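    For what it's worth, desktop clients like TweetDeck avoided the repeated connect dialog at the time by requesting the offline_access extended permission and storing the resulting non-expiring session key; a web service can then accept that stored key on every call. The sketch below only shows the shape of such a service - FacebookRestClient and its methods are hypothetical placeholders, not the Clarity API's actual types.
    // Sketch only; the Facebook wrapper used here is a made-up placeholder.
    using System.Web.Services;

    [WebService(Namespace = "http://tempuri.org/")]
    public class WallService : WebService
    {
        [WebMethod]
        public string PostToWall(string apiKey, string appSecret, string sessionKey, long userId, string message)
        {
            // sessionKey is assumed to have been obtained once, with the offline_access
            // extended permission, and stored by the calling application.
            var client = new FacebookRestClient(apiKey, appSecret, sessionKey); // hypothetical wrapper
            return client.StreamPublish(userId, message);                       // hypothetical call
        }
    }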

    Read the article

  • What should I do when the managed object context fails to save?

    - by dontWatchMyProfile
    Example: I have a Cat entity with a catAge attribute. In the data modeler, I configured catAge as int with a max of 100. Then I do this: [newManagedObject setValue:[NSNumber numberWithInt:125] forKey:@"catAge"]; // Save the context. NSError *error = nil; if (![context save:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); }
    I'm getting an error in the console, like this: 2010-06-12 11:40:41.947 CatTest[2250:207] Unresolved error Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x10164d0 "Operation could not be completed. (Cocoa error 1610.)", { NSLocalizedDescription = "Operation could not be completed. (Cocoa error 1610.)"; NSValidationErrorKey = catAge; NSValidationErrorObject = <NSManagedObject: 0x10099f0> (entity: Cat; id: 0x1006a90 <x-coredata:///Cat/t3BCBC34B-8405-4F16-B591-BE804B6811562> ; data: { catAge = 125; catName = "No Name"; }); NSValidationErrorPredicate = SELF <= 100; NSValidationErrorValue = 125; }
    Well, so I have a validation error. But the odd thing is that the MOC seems to be broken after this. If I just tap "add" to add another invalid Cat object and save that, I'm getting this: 2010-06-12 11:45:13.857 CatTest[2250:207] Unresolved error Error Domain=NSCocoaErrorDomain Code=1560 UserInfo=0x1232170 "Operation could not be completed. (Cocoa error 1560.)", { NSDetailedErrors = ( Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x1215f00 "Operation could not be completed. (Cocoa error 1610.)", Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x1209fc0 "Operation could not be completed. (Cocoa error 1610.)" ); }
    That seems to report two errors now. BUT: when I now try to delete a valid, existing object from the table view (using the default Core Data template in a navigation-based app), the app crashes! All I get in the console is: 2010-06-12 11:47:18.931 CatTest[2250:207] Unresolved error Error Domain=NSCocoaErrorDomain Code=1560 UserInfo=0x123eb30 "Operation could not be completed. (Cocoa error 1560.)", { NSDetailedErrors = ( Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x1217010 "Operation could not be completed. (Cocoa error 1610.)", Error Domain=NSCocoaErrorDomain Code=1610 UserInfo=0x123ea80 "Operation could not be completed. (Cocoa error 1610.)" ); }
    ...so I have no idea where or why it crashes, but it does. So the question is, what are the necessary steps to take when there's a validation error?

    Read the article

  • How do I see my managed objects on the stack?

    - by smwikipedia
    I use SOS.dll in Visual Studio to debug my C# program. The program is as below; the debug command is !DumpStackObjects.
    class Program
    {
        static void Main()
        {
            Int32 result = f(1);
        }
        static Int32 f(Int32 i)
        {
            Int32 j = i + 1;
            return j; // <=========== breakpoint is here
        }
    }
    After I input the "!dso" command in the Immediate window of Visual Studio, the result is as below:
    OS Thread Id: 0xf6c (3948)
    ESP/REG Object Name
    Why is there nothing? I thought the argument i and the local variable j should be there. Thanks for answering my naive questions...

    Read the article

  • How to test IO code in JUnit?

    - by add
    I want to test two services:
    1. a service which builds a file name
    2. a service which writes some data into the file provided by the 1st service
    In the first I'm building some complex file structure (just for example {user}/{date}/{time}/{generatedId}.bin). In the second I'm writing data to the file passed by the first service (the 1st service calls the 2nd service). How can I test both services using mocks without making any real IO interactions? Just for example:
    1st service:
    public class DefaultLogService implements LogService {
        public void log(SomeComplexData data) {
            serializer.write(new FileOutputStream(buildComplexFileStructure()), data);
            // or
            serializer.write(buildComplexFileStructure(), data);
            // or
            serializer.write(new GenericInputEntity(buildComplexFileStructure()), data);
        }
        private ComplexDataSerializer serializer; // mocked in tests
    }
    2nd service:
    public class DefaultComplexDataSerializer implements ComplexDataSerializer {
        void write(InputStream stream, SomeComplexData data) {...}
        // or
        void write(File file, SomeComplexData data) {...}
        // or
        void write(GenericInputEntity entity, SomeComplexData data) {...}
    }
    In the first case I need to pass a FileOutputStream which will create a file (i.e. I can't test the 1st service). In the second case I need to pass a File. What can I do in the 2nd service test if I need to check the data that will be written to the specified file? (I can't test the 2nd service.) In the third case I think I need some generic IO object which will wrap File. Maybe there is some ready-to-use solution for this purpose?

    Read the article

  • Can a WCF Service provide publish/subscribe activity to a Linux-based C++ client application?

    - by Jeremy Roddingham
    I have a WCF service written to provide certain functionality to intranet-based clients. This is easy when a client is running Windows. I want to make the same functionality that is available to my Windows clients available to my Linux clients. My questions are: How can I communicate with a Linux C++ based client (supporting callback operations for a publish/subscribe type situation)? I am aware of using SOAP over the HTTPBinding, but is that the only way (it does not support callbacks, I believe)? Would the same apply if I were using TCPBinding on the service side? Currently the service is set up using TCP, but what are my options for the Linux client communication? I read somewhere that messages can also be sent (via web services, I believe) in plain XML rather than SOAP? Which would be a better approach, or how do I determine which is a better approach? I am trying to understand the options I would have for a WCF data service if I wanted to communicate with it from a Linux client. I appreciate all your help. Thank you, Jeremy
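    One option, sketched below with made-up contract names, is to keep the netTcp endpoint for Windows clients and add a basicHttpBinding (SOAP 1.1) endpoint on the same service, which a Linux C++ client generated with something like gSOAP can call. WCF duplex callbacks over netTcp are not realistically reachable from a generic Linux client, so publish/subscribe is usually reworked as polling or long-polling on the HTTP side.
    // Sketch assuming a self-hosted service; IIntranetService/IntranetService are illustrative.
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class Host
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(IntranetService), new Uri("http://localhost:8000/intranet"));

            // Existing endpoint for Windows/WCF clients (fast, supports duplex callbacks).
            host.AddServiceEndpoint(typeof(IIntranetService), new NetTcpBinding(), "net.tcp://localhost:8001/intranet");

            // Interoperable SOAP 1.1 endpoint that a Linux C++ client can consume.
            host.AddServiceEndpoint(typeof(IIntranetService), new BasicHttpBinding(), "soap");

            // Publish WSDL so the C++ tooling can generate a proxy.
            host.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpGetEnabled = true });

            host.Open();
            Console.ReadLine();
            host.Close();
        }
    }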

    Read the article

  • Does JEE 5 define the handling of commit errors when using bean-managed transactions?

    - by marabol
    I'm using glassfish 2.1 and 2.1.1. If I've a bean method annotated by @TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW). After doing some JPA stuff the commit fails in the afterCompletion-Phase of JTS. GlassFish logs this failure only. And the caller of this bean method has no chance to know something goes wrong. So I wonder, if there is any definition how a jee 5 server has to handle exceptions while commiting. I would expect any runtime exception. I'm using stateless beans. With SessionSynchronisation I could get the commit failue, if I use statefull beans. Is it possible to intercept, so I can throw an exception, that I've declared in my interface? This is the whole exception stacktrace: [#|2010-05-06T12:15:54.840+0000|WARNING|sun-appserver2.1|oracle.toplink.essentials.session.file:/C:/glassfish/domains/domain1/applications/j2ee-apps/my-ear-1.0.0-SNAPSHOT/my-jar-1.1.8_jar/-myPu.transaction|_ThreadID=25;_ThreadName=p: thread-pool-1; w: 15;_RequestID=67a475a1-25c3-4416-abea-0d159f715373;| java.lang.RuntimeException: Got exception during XAResource.end: oracle.jdbc.xa.OracleXAException at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.delistResource(J2EETransactionManagerOpt.java:224) at com.sun.enterprise.resource.ResourceManagerImpl.unregisterResource(ResourceManagerImpl.java:265) at com.sun.enterprise.resource.ResourceManagerImpl.delistResource(ResourceManagerImpl.java:223) at com.sun.enterprise.resource.PoolManagerImpl.resourceClosed(PoolManagerImpl.java:400) at com.sun.enterprise.resource.ConnectorAllocator$ConnectionListenerImpl.connectionClosed(ConnectorAllocator.java:72) at com.sun.gjc.spi.ManagedConnection.connectionClosed(ManagedConnection.java:639) at com.sun.gjc.spi.base.ConnectionHolder.close(ConnectionHolder.java:201) at com.sun.gjc.spi.jdbc40.ConnectionHolder40.close(ConnectionHolder40.java:519) at oracle.toplink.essentials.internal.databaseaccess.DatabaseAccessor.closeDatasourceConnection(DatabaseAccessor.java:394) at oracle.toplink.essentials.internal.databaseaccess.DatasourceAccessor.closeConnection(DatasourceAccessor.java:382) at oracle.toplink.essentials.internal.databaseaccess.DatabaseAccessor.closeConnection(DatabaseAccessor.java:417) at oracle.toplink.essentials.internal.databaseaccess.DatasourceAccessor.afterJTSTransaction(DatasourceAccessor.java:115) at oracle.toplink.essentials.threetier.ClientSession.afterTransaction(ClientSession.java:119) at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.afterTransaction(UnitOfWorkImpl.java:1841) at oracle.toplink.essentials.transaction.AbstractSynchronizationListener.afterCompletion(AbstractSynchronizationListener.java:170) at oracle.toplink.essentials.transaction.JTASynchronizationListener.afterCompletion(JTASynchronizationListener.java:102) at com.sun.jts.jta.SynchronizationImpl.after_completion(SynchronizationImpl.java:154) at com.sun.jts.CosTransactions.RegisteredSyncs.distributeAfter(RegisteredSyncs.java:210) at com.sun.jts.CosTransactions.TopCoordinator.afterCompletion(TopCoordinator.java:2585) at com.sun.jts.CosTransactions.CoordinatorTerm.commit(CoordinatorTerm.java:433) at com.sun.jts.CosTransactions.TerminatorImpl.commit(TerminatorImpl.java:250) at com.sun.jts.CosTransactions.CurrentImpl.commit(CurrentImpl.java:623) at com.sun.jts.jta.TransactionManagerImpl.commit(TransactionManagerImpl.java:309) at com.sun.enterprise.distributedtx.J2EETransactionManagerImpl.commit(J2EETransactionManagerImpl.java:1029) at 
com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.commit(J2EETransactionManagerOpt.java:398) at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:3817) at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:3610) at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1379) at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1316) at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:205) at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:127) at $Proxy127.myNewTxMethod(Unknown Source) at mypackage.MyBean2.myMethod(MyBean2.java:197) at mypackage.MyBean2.myMethod2(MyBean2.java:166) at mypackage.MyBean2.myMethod3(MyBean2.java:105) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.enterprise.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1011) at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:175) at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2920) at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4011) at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:197) at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:127) at $Proxy158.myMethod3(Unknown Source) at mypackage.MyBean3.myMethod4(MyBean3.java:94) at mypackage.MyBean3.onMessage(MyBean3.java:85) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.enterprise.security.SecurityUtil$2.run(SecurityUtil.java:181) at java.security.AccessController.doPrivileged(Native Method) at com.sun.enterprise.security.application.EJBSecurityManager.doAsPrivileged(EJBSecurityManager.java:985) at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:186) at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2920) at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4011) at com.sun.ejb.containers.MessageBeanContainer.deliverMessage(MessageBeanContainer.java:1111) at com.sun.ejb.containers.MessageBeanListenerImpl.deliverMessage(MessageBeanListenerImpl.java:74) at com.sun.enterprise.connectors.inflow.MessageEndpointInvocationHandler.invoke(MessageEndpointInvocationHandler.java:179) at $Proxy192.onMessage(Unknown Source) at com.sun.messaging.jms.ra.OnMessageRunner.run(OnMessageRunner.java:258) at com.sun.enterprise.connectors.work.OneWork.doWork(OneWork.java:76) at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:555) |#]

    Read the article

  • How to properly deal with KVO notifications when a managed object turns into a fault?

    - by dontWatchMyProfile
    From the docs: "When Core Data turns an object into a fault, key-value observing (KVO) change notifications (see Key-Value Observing Programming Guide) are sent for the object's properties. If you are observing properties of an object that is turned into a fault and the fault is subsequently realized, you receive change notifications for properties whose values have not in fact changed." So if an object turns into a fault, Core Data does send KVO notifications for changed properties? So I must always check for isFault == NO before being happy about the notification?

    Read the article

  • Is there a way to get an ASMX Web Service created in VS 2005 to receive and return JSON?

    - by Ben McCormack
    I'm using .NET 2.0 and Visual Studio 2005 to try to create a web service that can be consumed both as SOAP/XML and JSON. I read Dave Ward's answer to the question "How to return JSON from a 2.0 asmx web service" (in addition to reading other articles at Encosia.com), but I can't figure out how I need to set up the code of my asmx file in order to work with JSON using jQuery. Two questions:
    1. How do I enable JSON in my .NET 2.0 ASMX file?
    2. What's a simple jQuery call that could consume the service using JSON?
    Also, I notice that since I'm using .NET 2.0, I'm not able to implement using System.Web.Script.Services.ScriptService. Here's my C# code for the demo ASMX service:
    using System;
    using System.Web;
    using System.Collections;
    using System.Web.Services;
    using System.Web.Services.Protocols;

    /// <summary>
    /// Summary description for StockQuote
    /// </summary>
    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class StockQuote : System.Web.Services.WebService
    {
        public StockQuote()
        {
            //Uncomment the following line if using designed components
            //InitializeComponent();
        }

        [WebMethod]
        public decimal GetStockQuote(string ticker)
        {
            //perform database lookup here
            return 8;
        }

        [WebMethod]
        public string HelloWorld()
        {
            return "Hello World";
        }
    }
    Here's a snippet of jQuery I found on the internet and tried to modify:
    $(document).ready(function(){
        $("#btnSubmit").click(function(event){
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                url: "http://bmccorm-xp/WebServices/HelloWorld.asmx",
                data: "",
                dataType: "json"
            });
            event.preventDefault();
        });
    });
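    If I remember correctly, [ScriptService] is not part of the base .NET 2.0 framework but ships with the separately installed ASP.NET AJAX Extensions 1.0 (System.Web.Extensions.dll), which targets 2.0; with that assembly referenced and its script handler registered for *.asmx in web.config, an ASMX service can answer JSON to requests posted with a JSON content type. A sketch under that assumption:
    // Assumes ASP.NET AJAX Extensions 1.0 is installed and its ScriptHandlerFactory
    // is registered for *.asmx in web.config.
    using System.Web.Services;
    using System.Web.Script.Services;   // from System.Web.Extensions, not the base 2.0 framework

    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [ScriptService]                      // enables JSON responses for script callers
    public class StockQuote : WebService
    {
        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public decimal GetStockQuote(string ticker)
        {
            return 8; // placeholder value, as in the question
        }
    }
    On the jQuery side the url then needs to point at a specific method (e.g. StockQuote.asmx/GetStockQuote) and data should be a JSON object string such as "{}" or "{\"ticker\":\"MSFT\"}".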

    Read the article

  • How can we call an activity from a background service in Android?

    - by Shalini Singh
    Hi friends, I am an Android developer and I want to know: is it possible to start an activity from a background service in Android? For example:
    import android.app.Service;
    import android.content.Intent;
    import android.content.SharedPreferences;
    import android.media.MediaPlayer;
    import android.os.Handler;
    import android.os.IBinder;
    import android.os.Message;

    public class background extends Service {
        private int timer1;

        @Override
        public void onCreate() {
            // TODO Auto-generated method stub
            super.onCreate();
            SharedPreferences preferences = getSharedPreferences("SaveTime", MODE_PRIVATE);
            timer1 = preferences.getInt("time", 0);
            startservice();
        }

        @Override
        public IBinder onBind(Intent arg0) {
            // TODO Auto-generated method stub
            return null;
        }

        private void startservice() {
            Handler handler = new Handler();
            handler.postDelayed(new Runnable() {
                public void run() {
                    mediaPlayerPlay.sendEmptyMessage(0);
                }
            }, timer1 * 60 * 1000);
        }

        private Handler mediaPlayerPlay = new Handler() {
            @Override
            public void handleMessage(Message msg) {
                try {
                    getApplication();
                    MediaPlayer mp = new MediaPlayer();
                    mp = MediaPlayer.create(background.this, R.raw.alarm);
                    mp.start();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                super.handleMessage(msg);
            }
        };

        /*
         * (non-Javadoc)
         *
         * @see android.app.Service#onDestroy()
         */
        @Override
        public void onDestroy() {
            // TODO Auto-generated method stub
            super.onDestroy();
        }
    }
    I want to call my activity from this service...

    Read the article

  • Can a stateless WCF service benefit from built-in database connection pooling?

    - by vladimir
    I understand that a typical .NET application that accesses a(n SQL Server) database doesn't have to do anything in particular in order to benefit from the connection pooling. Even if an application repeatedly opens and closes database connections, they do get pooled by the framework (assuming that things such as credentials do not change from call to call). My usage scenario seems to be a bit different. When my service gets instantiated, it opens a database connection once, does some work, closes the connection and returns the result. Then it gets torn down by the WCF, and the next incoming call creates a new instance of the service. In other words, my service gets instantiated per client call, as in [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]. The service accesses an SQL Server 2008 database. I'm using .NET framework 3.5 SP1. Does the connection pooling still work in this scenario, or I need to roll my own connection pool in form of a singleton or by some other means (IInstanceContextProvider?). I would rather avoid reinventing the wheel, if possible.
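    For what it's worth, the pooling happens in ADO.NET underneath WCF and is keyed on the connection string, so a PerCall service that opens and closes a connection inside each operation still reuses pooled physical connections; no custom pool or singleton should be needed. A sketch of the usual pattern, with contract and table names made up for illustration:
    // The pool is managed by ADO.NET and keyed on the connection string,
    // independent of WCF instancing.
    using System.Data.SqlClient;
    using System.ServiceModel;

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class OrderService : IOrderService   // IOrderService is assumed
    {
        private const string ConnString =
            "Data Source=.;Initial Catalog=Orders;Integrated Security=SSPI;Max Pool Size=100";

        public int CountOrders()
        {
            // Close/Dispose returns the physical connection to the pool rather than closing it,
            // so the next PerCall instance reuses it as long as the connection string matches.
            using (var conn = new SqlConnection(ConnString))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn))
            {
                conn.Open();
                return (int)cmd.ExecuteScalar();
            }
        }
    }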

    Read the article

  • How to call a service operation at a REST style WCF endpoint uri?

    - by Dieter Domanski
    Hi, is it possible to call a service operation at a wcf endpoint uri with a self hosted service? I want to call some default service operation when the client enters the endpoint uri of the service. In the following sample these uris correctly call the declared operations (SayHello, SayHi): - http://localhost:4711/clerk/hello - http://localhost:4711/clerk/hi But the uri - http://localhost:4711/clerk does not call the declared SayWelcome operation. Instead it leads to the well known 'Metadata publishing disabled' page. Enabling mex does not help, in this case the mex page is shown at the endpoint uri. private void StartSampleServiceHost() { ServiceHost serviceHost = new ServiceHost(typeof(Clerk), new Uri( "http://localhost:4711/clerk/")); ServiceEndpoint endpoint = serviceHost.AddServiceEndpoint(typeof(IClerk), new WebHttpBinding(), ""); endpoint.Behaviors.Add(new WebHttpBehavior()); serviceHost.Open(); } [ServiceContract] public interface IClerk { [OperationContract, WebGet(UriTemplate = "")] Stream SayWelcome(); [OperationContract, WebGet(UriTemplate = "/hello/")] Stream SayHello(); [OperationContract, WebGet(UriTemplate = "/hi/")] Stream SayHi(); } public class Clerk : IClerk { public Stream SayWelcome() { return Say("welcome"); } public Stream SayHello() { return Say("hello"); } public Stream SayHi() { return Say("hi"); } private Stream Say(string what) { string page = @"<html><body>" + what + "</body></html>"; return new MemoryStream(Encoding.UTF8.GetBytes(page)); } } Is there any way to disable the mex handling and to enable a declared operation instead? Thanks in advance, Dieter

    Read the article

  • A new session is created between consecutive servlet requests from an applet and a managed bean

    - by khue
    Hi, I want to pass parameters between an applet and JSF components. When the value of an input textbox changes, its backing bean makes a connection to a servlet. The servlet creates an attribute and saves it to the HttpSession using (request.getSession(true)).setAttribute(name, value); Then, at some event, the applet will access another servlet. This servlet will try to retrieve the attribute saved to the session previously. However, every time, the attribute returned is null, as a new session is created instead. My question is: should the session persist? (I checked allowcookies and the session timeout for WebLogic.) If yes, what might be going wrong with my app? Thanks a lot for your help. Regards, K.

    Read the article

  • What type of web service should I put together?

    - by Jake
    I want to write a web service using Visual Studio. The service needs to support some type of authentication, and should be able to receive commands via simple HTTP GET requests. The input would only be a method call with some parameters, and the responses will be simple status/error codes. My instinct would be to go with an ASP.NET Web Service, but this isn't an option in C# 4.0 and it makes me wonder if I should be using something that's more up-to-date. I've looked into WCF, but it seems like this requires a running application on the client-side - is there a way to query a WCF host by just accessing a URL? The authentication is also an important piece. Developing my own little authentication system seems like a bad idea - I've read that it's too easy to mess up. What would be the standard way of authenticating with a web service like this? I'd love to look up all of the specifics on this and learn it myself, but I really don't even know where to begin. Some direction would be greatly appreciated!
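    On the "query a WCF host by just accessing a URL" question: yes, a webHttpBinding endpoint with [WebGet] operations responds to plain HTTP GETs, and authentication can ride on the transport (HTTPS plus Basic or Windows auth) rather than a home-grown scheme. A rough sketch with illustrative names:
    // Minimal sketch of a WCF REST endpoint callable with a plain HTTP GET.
    // Service and operation names are illustrative, not a full design.
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface ICommandService
    {
        // e.g. GET http://host/commands/run?name=rebuild returns a status string
        [OperationContract]
        [WebGet(UriTemplate = "run?name={name}")]
        string Run(string name);
    }

    public class CommandService : ICommandService
    {
        public string Run(string name)
        {
            // Authentication would normally be configured on the binding/transport
            // (e.g. webHttpBinding over HTTPS with Basic or Windows credentials),
            // not hand-rolled inside the operation.
            return "OK";
        }
    }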

    Read the article

  • How to edit a text message (SMS) as it arrives in Windows Mobile 6 using managed code

    - by x86shadow
    I want to make an application to be installed on two Pocket PCs that sends encrypted text messages and receives and decrypts them on the other device. I already made an application that catches special text messages (starting with !farenc!) and I know how to encrypt/decrypt the messages as well, but I don't know how to edit a text message as it arrives (for decryption). Please help. Thanks in advance, and sorry for my bad English.
    using System;
    using System.Linq;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Text;
    using System.Windows.Forms;
    using Microsoft.WindowsMobile;
    using Microsoft.WindowsMobile.PocketOutlook;
    using Microsoft.WindowsMobile.PocketOutlook.MessageInterception;

    namespace SMSDECRYPT
    {
        public partial class Form1 : Form
        {
            MessageInterceptor _SMSCatcher = new MessageInterceptor(InterceptionAction.Notify, true);
            MessageCondition _SMSFilter = new MessageCondition();

            public Form1()
            {
                InitializeComponent();
                _SMSFilter.Property = MessageProperty.Body;
                _SMSFilter.ComparisonType = MessagePropertyComparisonType.StartsWith;
                _SMSFilter.CaseSensitive = true;
                _SMSFilter.ComparisonValue = "!farenc!";
                _SMSCatcher.MessageCondition = _SMSFilter;
                _SMSCatcher.MessageReceived += new MessageInterceptorEventHandler(_SMSCatcher_MessageReceived);
            }

            private void Form1_Load(object sender, EventArgs e)
            {
                //...
            }

            void _SMSCatcher_MessageReceived(object sender, MessageInterceptorEventArgs e)
            {
                SmsMessage mySMS = (SmsMessage)e.Message;
                string sms = mySMS.Body.ToString();
                sms = sms.Substring(8);
                //Decryption
                //...
                //Update the received message and replace it with the decrypted text
                //!!!HELP!!!
            }
        }
    }
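    As far as I know the managed PocketOutlook API does not expose a way to rewrite an SMS that has already landed in the inbox; a common workaround is to intercept with InterceptionAction.NotifyAndDelete so the ciphertext never reaches the inbox, decrypt in the handler, and then present or store the plaintext yourself. A sketch along those lines, to be placed inside Form1; Decrypt() is a placeholder for the existing decryption routine:
    // Sketch: intercept-and-delete instead of editing in place.
    MessageInterceptor catcher = new MessageInterceptor(InterceptionAction.NotifyAndDelete, true);

    MessageCondition filter = new MessageCondition();
    filter.Property = MessageProperty.Body;
    filter.ComparisonType = MessagePropertyComparisonType.StartsWith;
    filter.CaseSensitive = true;
    filter.ComparisonValue = "!farenc!";
    catcher.MessageCondition = filter;

    catcher.MessageReceived += new MessageInterceptorEventHandler(OnSmsReceived);

    void OnSmsReceived(object sender, MessageInterceptorEventArgs e)
    {
        SmsMessage sms = (SmsMessage)e.Message;
        string plaintext = Decrypt(sms.Body.Substring(8)); // placeholder for the existing routine

        // NotifyAndDelete removed the ciphertext from the inbox, so just present or
        // store the decrypted text in the application's own UI/storage.
        MessageBox.Show(plaintext, "Decrypted message");
    }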

    Read the article

  • Has anyone managed to get Visual Studio 2003 running on Windows 7?

    - by Jeremy White
    Yes, I know... I could set up a virtual machine running XP. Unfortunately our build environment is such that we need to be running VC2003, 2005 and 2008 concurrently and it would be much more convenient if I could run 2003 natively on Windows 7 for the few projects we have that require it. I realize some things may not be available in the IDE, but I was able to run 2003 under windows Vista and if I could get the same base level of functionality under Windows 7 I would be extremely happy. Right now I get an error opening the *.pdb file when I compile after switching vc2003 to run as Administrator under compatibility mode for XP SP 2. Thanks!

    Read the article

  • Installing a Windows Service from a separate GUI - how to install .config file along with it?

    - by Shaul
    I have written a GUI (call it MyGUI) for ClickOnce deployment on any given client site. That GUI installs and configures a Windows Service (MyService), using the method described here by @Marc Gravell. Here's my code, run from inside MyGUI, which contains a reference to MyService: using (var inst = new AssemblyInstaller(typeof(MyService.Program).Assembly, new string[] { })) { IDictionary state = new Hashtable(); inst.UseNewContext = true; try { if (uninstall) { inst.Uninstall(state); } else { inst.Install(state); inst.Commit(state); } } catch { try { inst.Rollback(state); } catch { } throw; } } Take note of that first line: I'm grabbing the assembly for MyService, and installing that. Now, trouble is, the way I've done the deployment, I'm effectively referencing the service's EXE file from the GUI's app folder. So now the service fires up and starts looking for stuff in the MyService.config file, and can't find it, because it's living in someone else's app folder, with only the GUI's MyGUI.config file present. So, how do I get MyService.config to be available to the service?
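    Since a Windows service resolves its configuration from the .config file sitting next to its own executable (not from the installing GUI's folder), one approach is to have MyGUI drop a MyService.exe.config beside the service exe before installing, or to have the service load its settings from an explicit path. A sketch of both options; the paths and key names are assumptions:
    // Sketch, assuming the GUI knows (or chooses) the folder the service exe lives in.
    using System.Configuration;
    using System.IO;

    static class ServiceConfigHelper
    {
        // Option 1: make sure the service finds MyService.exe.config next to MyService.exe.
        public static void CopyConfigNextToService(string serviceExePath, string sourceConfigPath)
        {
            string targetConfig = serviceExePath + ".config";   // e.g. C:\Services\MyService.exe.config
            File.Copy(sourceConfigPath, targetConfig, true);
        }

        // Option 2: inside the service, read a config file from an explicit path instead of
        // relying on the default AppDomain configuration.
        public static string ReadSettingFrom(string exePath, string key)
        {
            Configuration config = ConfigurationManager.OpenExeConfiguration(exePath);
            KeyValueConfigurationElement element = config.AppSettings.Settings[key];
            return element == null ? null : element.Value;
        }
    }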

    Read the article
