Search Results

Search found 20702 results on 829 pages for 'service rec'.


  • ASP.NET Web Service - how to handle special characters in strings?

    - by Vlorg
    To show this fundamental issue in .NET and the reason for this question, I have written a simple test web service with one method (EditString), and a consumer console app that calls it. They are both standard web service/console applications created via File/New Project, etc., so I won't list the whole code - just the methods in question: Web method: [WebMethod] public string EditString(string s, bool useSpecial) { return s + (useSpecial ? ((char)19).ToString() : ""); } [You can see it simply returns the string s if useSpecial is false. If useSpecial is true, it returns s + char 19.] Console app: TestService.Service1 service = new SCTestConsumer.TestService.Service1(); string response1 = service.EditString("hello", false); Console.WriteLine(response1); string response2 = service.EditString("hello", true); // fails! Console.WriteLine(response2); [The second response fails, because the method returns hello + a special character (ascii code 19 for argument's sake).] The error is: There is an error in XML document (1, 287) Inner exception: "'', hexadecimal value 0x13, is an invalid character. Line 1, position 287." A few points worth mentioning: The web method itself WORKS FINE when browsing directly to the ASMX file (e.g. http://localhost:2065/service1.asmx), and running the method through this (with the same parameters as in the console application) - i.e. displays XML with the string hello + char 19. Checking the serialized XML in other ways shows the special character is being encoded properly (the SERVER SIDE seems to be ok which is GOOD) So it seems the CLIENT SIDE has the issue - i.e. the .NET generated proxy class code doesn't handle special characters This is part of a bigger project where objects are passed in and out of the web methods - that contain string attributes - these are what need to work properly. i.e. we're de/serializing classes. Any suggestions for a workaround and how to implement it? Or have I completely missed something really obvious!!? Thanks in advance... PS. I've not had much luck with getting it to use CDATA tags (does .NET support these out of the box?).
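
    One workaround often suggested for this situation (not part of the original post) is to escape XML-illegal control characters on the server and decode them again after the proxy call returns, so neither serializer ever sees the raw character. The helper below is a minimal C# sketch; the class and method names are invented for illustration.

        // Hypothetical helper: replace control characters that are illegal in XML 1.0
        // with a "\uXXXX" token before returning, and turn the tokens back into
        // characters on the client after deserialization.
        using System;
        using System.Text;
        using System.Text.RegularExpressions;

        public static class XmlSafeText
        {
            public static string Encode(string s)
            {
                var sb = new StringBuilder(s.Length);
                foreach (char c in s)
                {
                    // XML 1.0 only allows 0x09, 0x0A and 0x0D below 0x20.
                    if (c < 0x20 && c != '\t' && c != '\n' && c != '\r')
                        sb.AppendFormat(@"\u{0:x4}", (int)c);
                    else
                        sb.Append(c);
                }
                return sb.ToString();
            }

            public static string Decode(string s)
            {
                return Regex.Replace(s, @"\\u([0-9a-fA-F]{4})",
                    m => ((char)Convert.ToInt32(m.Groups[1].Value, 16)).ToString());
            }
        }

    The web method would return Encode(s) and the console client would call Decode on the response; Base64-encoding the whole string is an equally valid variant if readability of the serialized XML does not matter.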

  • Streaming binary data to WCF rest service gives Bad Request (400) when content length is greater than 64k

    - by Mikey Cee
    I have a WCF service that takes a stream: [ServiceContract] public class UploadService : BaseService { [OperationContract] [WebInvoke(BodyStyle=WebMessageBodyStyle.Bare, Method=WebRequestMethods.Http.Post)] public void Upload(Stream data) { // etc. } } This method is to allow my Silverlight application to upload large binary files, the easiest way being to craft the HTTP request by hand from the client. Here is the code in the Silverlight client that does this: const int contentLength = 64 * 1024; // 64 Kb var request = (HttpWebRequest)WebRequest.Create("http://localhost:8732/UploadService/"); request.AllowWriteStreamBuffering = false; request.Method = WebRequestMethods.Http.Post; request.ContentType = "application/octet-stream"; request.ContentLength = contentLength; using (var outputStream = request.GetRequestStream()) { outputStream.Write(new byte[contentLength], 0, contentLength); outputStream.Flush(); using (var response = request.GetResponse()); } Now, in the case above, where I am streaming 64 kB of data (or less), this works OK and if I set a breakpoint in my WCF method, and I can examine the stream and see 64 kB worth of zeros - yay! The problem arises if I send anything more than 64 kB of data, for instance by changing the first line of my client code to the following: const int contentLength = 64 * 1024 + 1; // 64 kB + 1 B This now throws an exception when I call request.GetResponse(): The remote server returned an error: (400) Bad Request. In my WCF configuration I have set maxReceivedMessageSize, maxBufferSize and maxBufferPoolSize to 2147483647, but to no avail. Here are the relevant sections from my service's app.config: <service name="UploadService"> <endpoint address="" binding="webHttpBinding" bindingName="StreamedRequestWebBinding" contract="UploadService" behaviorConfiguration="webBehavior"> <identity> <dns value="localhost" /> </identity> </endpoint> <host> <baseAddresses> <add baseAddress="http://localhost:8732/UploadService/" /> </baseAddresses> </host> </service> <bindings> <webHttpBinding> <binding name="StreamedRequestWebBinding" bypassProxyOnLocal="true" useDefaultWebProxy="false" hostNameComparisonMode="WeakWildcard" sendTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:05:00" maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" transferMode="StreamedRequest"> <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" /> </binding> </webHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="webBehavior"> <webHttp /> </behavior> <endpointBehaviors> </behaviors> How do I make my service accept more than 64 kB of streamed post data?
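
    The 64 kB boundary is suspicious: 65536 bytes is the default maxReceivedMessageSize, which suggests the custom binding is not actually being applied to the endpoint. Two things worth checking in the posted config: the endpoint uses bindingName rather than bindingConfiguration (bindingConfiguration is the attribute that applies a named binding element), and the final endpointBehaviors tag should be the closing </endpointBehaviors>. As a sanity check, the same limits can be applied in code when self-hosting; this is a hedged sketch using the UploadService type from the question, not a drop-in fix.

        // Self-hosting sketch: configure the binding programmatically so the quotas
        // are guaranteed to reach the endpoint, bypassing any config-lookup issues.
        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;
        using System.ServiceModel.Web;

        class Program
        {
            static void Main()
            {
                var binding = new WebHttpBinding
                {
                    TransferMode = TransferMode.StreamedRequest,
                    MaxReceivedMessageSize = int.MaxValue,
                    MaxBufferSize = int.MaxValue
                };
                binding.ReaderQuotas.MaxArrayLength = int.MaxValue;

                var host = new ServiceHost(typeof(UploadService),
                    new Uri("http://localhost:8732/UploadService/"));
                var endpoint = host.AddServiceEndpoint(typeof(UploadService), binding, "");
                endpoint.Behaviors.Add(new WebHttpBehavior());

                host.Open();
                Console.WriteLine("Listening...");
                Console.ReadLine();
                host.Close();
            }
        }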

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.   Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, then all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.   We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and to get the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the servcie to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.   I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:     The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header. 
If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file, and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.   The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g:   Key Value /WcfSampleService/PersonService.svc/rest/fetch/3 "28cd4796-76b8-451b-adfd-75cb50a50fa6" "28cd4796-76b8-451b-adfd-75cb50a50fa6" /WcfSampleService/PersonService.svc/rest/fetch/3    In Node we read the cache using the incoming URL path as the key and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache – which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.   Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second, average response time of 975 milliseconds with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, mean response time of 1853 milliseconds, with 90% of response served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.   Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the  service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.     * - actually, the accelerator will work nicely for any HTTP request, where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") – which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
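
    For reference, the WCF half of the handshake the article depends on (issue an ETag, answer If-None-Match with an empty 304) can be reduced to something like the sketch below. This is not the code from the linked sample; the entity type and the tag and database lookups are placeholders.

        // Hedged sketch of the server-side ETag handling described above.
        using System;
        using System.Net;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        public class Person
        {
            public string Id { get; set; }
            public string Name { get; set; }
        }

        [ServiceContract]
        public class PersonService
        {
            [OperationContract]
            [WebGet(UriTemplate = "rest/fetch/{id}")]
            public Person Fetch(string id)
            {
                var ctx = WebOperationContext.Current;
                string currentTag = "\"" + GetCurrentTag(id) + "\""; // quoted GUID, as assumed in the article

                string clientTag = ctx.IncomingRequest.Headers["If-None-Match"];
                if (clientTag == currentTag)
                {
                    // Client cache is current: empty 304, nothing beyond the tag lookup is done.
                    ctx.OutgoingResponse.StatusCode = HttpStatusCode.NotModified;
                    return null;
                }

                ctx.OutgoingResponse.Headers["ETag"] = currentTag;
                return LoadPerson(id);
            }

            // Placeholders for the shared tag cache (memcached in the sample) and the database read.
            private static string GetCurrentTag(string id) { return Guid.NewGuid().ToString(); }
            private static Person LoadPerson(string id) { return new Person { Id = id, Name = "sample" }; }
        }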

  • SOA Implementation Challenges

    Why do companies think that if they put up a web service that they are doing Service-Oriented Architecture (SOA)? Unfortunately, the IT and business world love to run on the latest hype or buzz words of which very few even understand the meaning. One of the largest issues companies have today as they consider going down the path of SOA, is the lack of knowledge regarding the architectural style and the over usage of the term SOA. So how do we solve this issue?I am sure most of you are thinking by now that you know what SOA is because you developed a few web services.  Isn’t that SOA, right? No, that is not SOA, but instead Just Another Web Service (JAWS). For us to better understand what SOA is let’s look at a few definitions.Douglas K. Bary defines service-oriented architecture as a collection of services. These services are enabled to communicate with each other in order to pass data or coordinating some activity with other services.If you look at this definition closely you will notice that Bary states that services communicate with each other. Let us compare this statement with my first statement regarding companies that claim to be doing SOA when they have just a collection of web services. In order for these web services to for an SOA application they need to be interdependent on one another forming some sort of architectural hierarchy. Just because a company has a few web services does not mean that they are all interconnected.SearchSOA from TechTarget.com states that SOA defines how two computing entities work collectively to enable one entity to perform a unit of work on behalf of another. Once again, just because a company has a few web services does not guarantee that they are even working together let alone if they are performing work for each other.SearchSOA also points out service interactions should be self-contained and loosely-coupled so that all interactions operate independent of each other.Of all the definitions regarding SOA Thomas Erl’s seems to shed the most light on this concept. He states that “SOA establishes an architectural model that aims to enhance the efficiency, agility, and productivity of an enterprise by positioning services as the primary means through which solution logic is represented in support of the realization of the strategic goals associated with service-oriented computing.” (Erl, 2011) Once again this definition proves that a collection of web services does not mean that a company is doing SOA. However, it does mean that a company has a collection of web services, and that is it.In order for a company to start to go down the path of SOA, they must take  a hard look at their existing business process while abstracting away any technology so that they can define what is they really want to accomplish. Once a company has done this, they can begin to factor out common sub business process like credit card process, user authentication or system notifications in to small components that can be built independent of each other and then reassembled to form new and dynamic services that are loosely coupled and agile in that they can change as a business grows.Another key pitfall of companies doing SOA is the fact that they let vendors drive their architecture. Why do companies do this? Vendors’ do not hold your company’s success as their top priority; in fact they hold their own success as their top priority by selling you as much stuff as you are willing to buy. 
In my experience companies tend to strive for the maximum amount of benefits with a minimal amount of cost. Does anyone else see any conflicts between this and the driving force behind vendors.Mike Kavis recommends in an article written in CIO.com that companies need to figure out what they need before they talk to a vendor or at least have some idea of what they need. It is important to thoroughly evaluate each vendor and watch them perform a live demo of their system so that you as the company fully understand what kind of product or service the vendor is actually offering. In addition, do research on each vendor that you are considering, check out blog posts, online reviews, and any information you can find on the vendor through various search engines.Finally he recommends companies to verify any recommendations supplied by a vendor. From personal experience this is very important. I can remember when the company I worked for purchased a $200,000 add-on to their phone system that never actually worked as it was intended. In fact, just after my departure from the company started the process of attempting to get their money back from the vendor. This potentially could have been avoided if the company had done the research before selecting this vendor to ensure that their product and vendor would live up to their claims. I know that some SOA vendor offer free training regarding SOA because they know that there are a lot of misconceptions about the topic. Superficially this is a great thing for companies to take part in especially if the company is starting to implement SOA architecture and are still unsure about some topics or are looking for some guidance regarding the topic. However beware that some companies will focus on their product line only regarding the training. As an example, InfoWorld.com claims that companies providing deep seminars disguised as training, focusing more about ESBs and SOA governance technology, and less on how to approach and solve the architectural issues of the attendees.In short, it is important to remember that we as software professionals are responsible for guiding a business’s technology sections should be well informed and fully understand any new concepts that may be considered for implementation. As I have demonstrated already a company that has a few web services does not mean that they are doing SOA.  Additionally, we must not let the new buzz word of the day drive our technology, but instead our technology decisions should be driven from research and proven experience. Finally, it is important to rely on vendors when necessary, however, always take what they say with a grain of salt while cross checking any claims that they may make because we have to live with the aftermath of a system after the vendors are gone.   References: Barry, D. K. (2011). Service-oriented architecture (SOA) definition. Retrieved 12 12, 2011, from Service-Architecture.com: http://www.service-architecture.com/web-services/articles/service-oriented_architecture_soa_definition.html Connell, B. (2003, 9). service-oriented architecture (SOA). Retrieved 12 12, 2011, from SearchSOA: http://searchsoa.techtarget.com/definition/service-oriented-architecture Erl, T. (2011, 12 12). Service-Oriented Architecture. Retrieved 12 12, 2011, from WhatIsSOA: http://www.whatissoa.com/p10.php InfoWorld. (2008, 6 1). Should you get your SOA knowledge from SOA vendors? . 
Retrieved 12 12, 2011, from InfoWorld.com: http://www.infoworld.com/d/architecture/should-you-get-your-soa-knowledge-soa-vendors-453 Kavis, M. (2008, 6 18). Top 10 Reasons Why People are Making SOA Fail. Retrieved 12 13, 2011, from CIO.com: http://www.cio.com/article/438413/Top_10_Reasons_Why_People_are_Making_SOA_Fail?page=5&taxonomyId=3016  

  • Creating generic list of instances of a class.

    - by Jim Branson
    I have several projects where I build a dictionary from a small class. I'm using C# 2008, Visual studio 2008 and .net 3.5 This is the code: namespace ReportsTest { class Junk { public static Dictionary<string, string> getPlatKeys() { Dictionary<string, string> retPlatKeys = new Dictionary<string, string>(); SqlConnection conn = new SqlConnection("Data Source=JB55LTARL;Initial Catalog=HldiReports;Integrated Security=True"); SqlDataReader Dr = null; conn.Open(); SqlCommand cmnd = new SqlCommand("SELECT Make, Series, RedesignYear, SeriesName FROM CompPlatformKeys", conn); Dr = cmnd.ExecuteReader(); while (Dr.Read()) { utypPlatKeys rec = new utypPlatKeys(Dr); retPlatKeys.Add(rec.Make + rec.Series + rec.RedesignYear, rec.SeriesName); } conn = null; Dr = null; return retPlatKeys; } } public class utypPlatKeys { public string Make { get; set; } public string Series { get; set; } public string RedesignYear { get; set; } public string SeriesName { get; set; } public utypPlatKeys(SqlDataReader dr) { this.Make = dr.GetInt16(dr.GetOrdinal("Make")).ToString("D3"); this.Series = dr.GetInt16(dr.GetOrdinal("Series")).ToString("D3"); this.RedesignYear = dr.GetInt16(dr.GetOrdinal("RedesignYear")).ToString(); this.SeriesName = dr["SeriesName"].ToString(); } } } The immediate window shows all of the entries in retPlatKeys and if you hover over retPlatKeys after loading it indicates the number of elements like this: "retPlatKeys| Count = 923 which is correct. I went to create a new project using this pattern only now the immediate window says retPlatKeys is out of scope and hovering over retPlatKeys after loading I get something like retPlatKeys|0x0000000002578900. Any help is greatly appreciated.
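
    This does not answer the watch-window question, but one thing worth flagging in the code above: the SqlConnection and SqlDataReader are only set to null, which does not close them. A sketch of the same method with deterministic disposal (reusing the question's utypPlatKeys type and the same usings for System.Collections.Generic and System.Data.SqlClient) might look like this:

        // getPlatKeys with the connection, command and reader wrapped in using blocks.
        public static Dictionary<string, string> getPlatKeys()
        {
            Dictionary<string, string> retPlatKeys = new Dictionary<string, string>();
            using (SqlConnection conn = new SqlConnection("Data Source=JB55LTARL;Initial Catalog=HldiReports;Integrated Security=True"))
            using (SqlCommand cmnd = new SqlCommand("SELECT Make, Series, RedesignYear, SeriesName FROM CompPlatformKeys", conn))
            {
                conn.Open();
                using (SqlDataReader dr = cmnd.ExecuteReader())
                {
                    while (dr.Read())
                    {
                        utypPlatKeys rec = new utypPlatKeys(dr);
                        retPlatKeys.Add(rec.Make + rec.Series + rec.RedesignYear, rec.SeriesName);
                    }
                }
            }
            return retPlatKeys;
        }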

  • How do I create a C# .NET Web Service that posts items to a user's Facebook Wall?

    - by Jourdan
    I'm currently toying around with the Clarity .NET Facebook API but am finding certain situations with authentication to be kind of limiting. I keep going through the tutorials but always end up hitting a brick wall with what I want to do. Perhaps I just cannot do it? I want to make a Web Service that takes in the required credentials (APIKey, SecretKey, UsersId (or Session Key?) and whatever else I would need), and then does various tasks: post to the user's wall, add events, etc. The problem I am having is this: the current documentation, examples and support provide a way to do this within the context of a Web site. Within this context, the required "connect" popup can be initiated and allows the user to authenticate and connect the application. From that point on the Web site can go on with its business and do what it needs to do. If I close the browser and come back to the page, I have to push the connect button again. Except this time, since I was already logged into Facebook, I don't have to go through the whole connection process. But still ... How do applications like TweetDeck get around this? They seemingly have you connect once, when you install their application, and you don't have to do it again. I would assume that this same idea would have to be applied to making a web service because: you don't know what context the user is in when making the Web service call, and the web service methods being called could be coming from a Windows Forms app, or code-behind in a workflow.

  • How to test IO code in JUnit?

    - by add
    I want to test two services: a service which builds a file name, and a service which writes some data into the file provided by the first service. In the first I'm building some complex file structure (just for example {user}/{date}/{time}/{generatedId}.bin). In the second I'm writing data to the file passed by the first service (the 1st service calls the 2nd service). How can I test both services using mocks without making any real IO interactions? Just for example: 1st service: public class DefaultLogService implements LogService { public void log(SomeComplexData data) { serializer.write(new FileOutputStream(buildComplexFileStructure()), data); or serializer.write(buildComplexFileStructure(), data); or serializer.write(new GenericInputEntity(buildComplexFileStructure()), data); } private ComplextDataSerializer serializer; // mocked in tests } 2nd service: public class DefaultComplexDataSerializer implements ComplexDataSerializer { void write(InputStream stream, SomeComplexData data) {...} or void write(File file, SomeCompexData data) {...} or void write(GenericInputEntity entity, SomeComplexData data) {...} } In the first case I need to pass a FileOutputStream which will create a file (i.e. I can't test the 1st service). In the second case I need to pass a File. What can I do in the 2nd service test if I need to test the data which will be written to the specified file? (I can't test the 2nd service.) In the third case I think I need some generic IO object which will wrap File. Maybe there is some ready-to-use solution for this purpose?

  • Can a WCF Service provide publish/subscribe activity to a Linux-based C++ client application?

    - by Jeremy Roddingham
    I have a WCF service written to provide certain functionality to intranet-based clients. This is easy when a client is running Windows. I want to make the same functionality that is available to my Windows clients available to my Linux clients. My questions are: How can I communicate with a Linux C++ based client (supporting callback operations for a publish/subscribe type situation)? I am aware of using SOAP over the HTTPBinding, but is that the only way (it does not support callbacks, I believe)? Would the same apply if I were using TCPBinding on the service side? Currently, the service is set up using TCP, but what are my options for the Linux client communication? I read somewhere that messages can also be sent (via web services, I believe) in XML rather than SOAP? Which would be a better approach, or how do I determine which is a better approach? I am trying to understand the options I would have for a WCF data service if I wanted to communicate with it from a Linux client. I appreciate all your help. Thank you, Jeremy

  • Is it possible to use OAuth starting from the service provider website?

    - by Brian Armstrong
    I want to let people create apps that use my API and authenticate them with OAuth. Normally this process starts from the consumer service website (say TwitPic), which requests an access token from the service provider (Twitter). The user is then taken to the service provider website where they have to allow/deny access to the consumer. I'm wondering if it's possible to initiate this process from the service provider website instead. So in this example you would start on Twitter's site, and maybe there is a section marked "do you want to turn on access for TwitPic?". If you click yes, it passes the access token directly to TwitPic, which now has access to your account. Basically, fewer steps. I'm looking at the OAuth docs and it looks like the request token is generated on the consumer side and later exchanged for an access token. So it's not really designed with what I described above in mind, but I thought there might be a way. http://oauth.net/core/1.0/ (Search for "steps") Thanks!

  • Extracting specific nodes from XML using XML::Twig

    - by pratz
    I was trying to extract a particular set of nodes from the following XML structure using XML::Twig, but have been stuck ever since. I need to extract the 'player' nodes from the following structure and do a string match/replace on each of these node values. <pep:record> <agency> <subrecord type="scout"> <isnum>123XXX (print)</isnum> <isnum>234YYY (mag)</isnum> </subrecord> <subrecord type="group"> </subrecord> </agency> </record> I tried using the following code, but I get pointed to a hash reference rather than the actual string. my $parser = XML::Twig->new(twig_handlers => { isnum => sub { print $_->text."::" }, }); foreach my $rec (split(/::/, $parser->parse($my_xml))) { if ($rec =~ m/print/) { ($print = $rec) =~ s/( \(print\))//; } elsif($rec =~ m/mag/) { ($mag = $rec) =~ s/( \(mag\))//; } }

  • Is there a way to get an ASMX Web Service created in VS 2005 to receive and return JSON?

    - by Ben McCormack
    I'm using .NET 2.0 and Visual Studio 2005 to try to create a web service that can be consumed both as SOAP/XML and JSON. I read Dave Ward's Answer to the question How to return JSON from a 2.0 asmx web service (in addition to reading other articles at Encosia.com), but I can't figure out how I need to set up the code of my asmx file in order to work with JSON using jQuery. Two Questions: How do I enable JSON in my .NET 2.0 ASMX file? What's a simple jQuery call that could consume the service using JSON? Also, I notice that since I'm using .NET 2.0, I i'm not able to implement using System.Web.Script.Services.ScriptService. Here's my C# code for the demo ASMX service: using System; using System.Web; using System.Collections; using System.Web.Services; using System.Web.Services.Protocols; /// <summary> /// Summary description for StockQuote /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class StockQuote : System.Web.Services.WebService { public StockQuote () { //Uncomment the following line if using designed components //InitializeComponent(); } [WebMethod] public decimal GetStockQuote(string ticker) { //perform database lookup here return 8; } [WebMethod] public string HelloWorld() { return "Hello World"; } } Here's a snippet of jQuery I found on the internet and tried to modify: $(document).ready(function(){ $("#btnSubmit").click(function(event){ $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "http://bmccorm-xp/WebServices/HelloWorld.asmx", data: "", dataType: "json" }) event.preventDefault(); }); });
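
    Dave Ward's usual recipe for this combination, given here as a hedged sketch rather than a verified drop-in: on .NET 2.0 you install the ASP.NET AJAX Extensions 1.0, which supply System.Web.Script.Services (the [ScriptService] attribute) plus the ScriptHandlerFactory registration they add to web.config; on the jQuery side the url must point at the method itself (e.g. HelloWorld.asmx/HelloWorld) and data must be a JSON string such as "{}", otherwise the service falls back to the XML response. The service class would then look roughly like this:

        // Requires the ASP.NET AJAX Extensions 1.0 (System.Web.Extensions.dll) on .NET 2.0.
        using System.Web.Script.Services;
        using System.Web.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [ScriptService]
        public class StockQuote : System.Web.Services.WebService
        {
            [WebMethod]
            [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
            public decimal GetStockQuote(string ticker)
            {
                // database lookup omitted, as in the question
                return 8;
            }

            [WebMethod]
            public string HelloWorld()
            {
                return "Hello World";
            }
        }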

  • How can we call an activity from a service in Android?

    - by Shalini Singh
    Hi! friends, i am a android developer,,, want to know is it possible to call an activity through background service in android like : import android.app.Service; import android.content.Intent; import android.content.SharedPreferences; import android.media.MediaPlayer; import android.os.Handler; import android.os.IBinder; import android.os.Message; public class background extends Service{ private int timer1; @Override public void onCreate() { // TODO Auto-generated method stub super.onCreate(); SharedPreferences preferences = getSharedPreferences("SaveTime", MODE_PRIVATE); timer1 = preferences.getInt("time", 0); startservice(); } @Override public IBinder onBind(Intent arg0) { // TODO Auto-generated method stub return null; } private void startservice() { Handler handler = new Handler(); handler.postDelayed(new Runnable(){ public void run() { mediaPlayerPlay.sendEmptyMessage(0); } }, timer1*60*1000); } private Handler mediaPlayerPlay = new Handler(){ @Override public void handleMessage(Message msg) { try { getApplication(); MediaPlayer mp = new MediaPlayer(); mp = MediaPlayer.create(background.this, R.raw.alarm); mp.start(); } catch(Exception e) { e.printStackTrace(); } super.handleMessage(msg); } }; /* * (non-Javadoc) * * @see android.app.Service#onDestroy() */ @Override public void onDestroy() { // TODO Auto-generated method stub super.onDestroy(); } } i want to call my activity......

  • Can a stateless WCF service benefit from built-in database connection pooling?

    - by vladimir
    I understand that a typical .NET application that accesses a(n SQL Server) database doesn't have to do anything in particular in order to benefit from the connection pooling. Even if an application repeatedly opens and closes database connections, they do get pooled by the framework (assuming that things such as credentials do not change from call to call). My usage scenario seems to be a bit different. When my service gets instantiated, it opens a database connection once, does some work, closes the connection and returns the result. Then it gets torn down by the WCF, and the next incoming call creates a new instance of the service. In other words, my service gets instantiated per client call, as in [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]. The service accesses an SQL Server 2008 database. I'm using .NET framework 3.5 SP1. Does the connection pooling still work in this scenario, or I need to roll my own connection pool in form of a singleton or by some other means (IInstanceContextProvider?). I would rather avoid reinventing the wheel, if possible.
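
    For what it's worth, ADO.NET pooling is held per process/AppDomain and keyed on the exact connection string, not on the object that opened the connection, so a PerCall instance that opens and closes a connection per operation still draws from the same pool. A small illustrative sketch (names and connection string invented):

        using System.Data.SqlClient;

        public class OrderData   // stand-in for data-access code called from the PerCall service
        {
            private const string ConnString =
                "Data Source=.;Initial Catalog=Shop;Integrated Security=True;" +
                "Pooling=true;Max Pool Size=100"; // these two values are the defaults, spelled out

            public int CountOrders()
            {
                using (SqlConnection conn = new SqlConnection(ConnString))
                using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
                {
                    conn.Open();                      // typically satisfied from the pool
                    return (int)cmd.ExecuteScalar();
                }                                     // Dispose returns the connection to the pool
            }
        }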

  • Combinations and Permutations in F#

    - by Noldorin
    I've recently written the following combinations and permutations functions for an F# project, but I'm quite aware they're far from optimised. /// Rotates a list by one place forward. let rotate lst = List.tail lst @ [List.head lst] /// Gets all rotations of a list. let getRotations lst = let rec getAll lst i = if i = 0 then [] else lst :: (getAll (rotate lst) (i - 1)) getAll lst (List.length lst) /// Gets all permutations (without repetition) of specified length from a list. let rec getPerms n lst = match n, lst with | 0, _ -> seq [[]] | _, [] -> seq [] | k, _ -> lst |> getRotations |> Seq.collect (fun r -> Seq.map ((@) [List.head r]) (getPerms (k - 1) (List.tail r))) /// Gets all permutations (with repetition) of specified length from a list. let rec getPermsWithRep n lst = match n, lst with | 0, _ -> seq [[]] | _, [] -> seq [] | k, _ -> lst |> Seq.collect (fun x -> Seq.map ((@) [x]) (getPermsWithRep (k - 1) lst)) // equivalent: | k, _ -> lst |> getRotations |> Seq.collect (fun r -> List.map ((@) [List.head r]) (getPermsWithRep (k - 1) r)) /// Gets all combinations (without repetition) of specified length from a list. let rec getCombs n lst = match n, lst with | 0, _ -> seq [[]] | _, [] -> seq [] | k, (x :: xs) -> Seq.append (Seq.map ((@) [x]) (getCombs (k - 1) xs)) (getCombs k xs) /// Gets all combinations (with repetition) of specified length from a list. let rec getCombsWithRep n lst = match n, lst with | 0, _ -> seq [[]] | _, [] -> seq [] | k, (x :: xs) -> Seq.append (Seq.map ((@) [x]) (getCombsWithRep (k - 1) lst)) (getCombsWithRep k xs) Does anyone have any suggestions for how these functions (algorithms) can be sped up? I'm particularly interested in how the permutation (with and without repetition) ones can be improved. The business involving rotations of lists doesn't look too efficient to me in retrospect. Update Here's my new implementation for the getPerms function, inspired by Tomas's answer. Unfortunately, it's not really any fast than the existing one. Suggestions? let getPerms n lst = let rec getPermsImpl acc n lst = seq { match n, lst with | k, x :: xs -> if k > 0 then for r in getRotations lst do yield! getPermsImpl (List.head r :: acc) (k - 1) (List.tail r) if k >= 0 then yield! getPermsImpl acc k [] | 0, [] -> yield acc | _, [] -> () } getPermsImpl List.empty n lst

  • How to call a service operation at a REST style WCF endpoint uri?

    - by Dieter Domanski
    Hi, is it possible to call a service operation at a wcf endpoint uri with a self hosted service? I want to call some default service operation when the client enters the endpoint uri of the service. In the following sample these uris correctly call the declared operations (SayHello, SayHi): - http://localhost:4711/clerk/hello - http://localhost:4711/clerk/hi But the uri - http://localhost:4711/clerk does not call the declared SayWelcome operation. Instead it leads to the well known 'Metadata publishing disabled' page. Enabling mex does not help, in this case the mex page is shown at the endpoint uri. private void StartSampleServiceHost() { ServiceHost serviceHost = new ServiceHost(typeof(Clerk), new Uri( "http://localhost:4711/clerk/")); ServiceEndpoint endpoint = serviceHost.AddServiceEndpoint(typeof(IClerk), new WebHttpBinding(), ""); endpoint.Behaviors.Add(new WebHttpBehavior()); serviceHost.Open(); } [ServiceContract] public interface IClerk { [OperationContract, WebGet(UriTemplate = "")] Stream SayWelcome(); [OperationContract, WebGet(UriTemplate = "/hello/")] Stream SayHello(); [OperationContract, WebGet(UriTemplate = "/hi/")] Stream SayHi(); } public class Clerk : IClerk { public Stream SayWelcome() { return Say("welcome"); } public Stream SayHello() { return Say("hello"); } public Stream SayHi() { return Say("hi"); } private Stream Say(string what) { string page = @"<html><body>" + what + "</body></html>"; return new MemoryStream(Encoding.UTF8.GetBytes(page)); } } Is there any way to disable the mex handling and to enable a declared operation instead? Thanks in advance, Dieter

  • What is wrong in this program using tie and an array?

    - by SCNCN2010
    File: #comment1 #comment2 #comment3 #START HERE a: [email protected] b: [email protected] My Perl program: use Data::Dumper; use Tie::File; tie my @array, 'Tie::File', 'ala.txt' or die $!; my $rec = 'p: [email protected]'; my $flag =1 ; my $add_flag = 0; for my $i (0..$#array) { next if ($array[$i] =~ /^\s*$/); if ( $flag == 1 ) { if ($array[$i] =~ /#START HERE/ ) { $flag = 0; } else { next ; } } if (($array[$i] cmp $rec) == 1) { splice @array, $i, 1, $rec; $add_flag = 1; last ; } } if ( $add_flag == 0 ) { my $index = $#array+1; $array[$index] = $rec ; } The record is always added at the end of the file. I am trying to insert it in alphabetical order (at the beginning, middle, or end, as appropriate).

  • F# - Facebook Hacker Cup - Double Squares

    - by Jacob
    I'm working on strengthening my F#-fu and decided to tackle the Facebook Hacker Cup Double Squares problem. I'm having some problems with the run-time and was wondering if anyone could help me figure out why it is so much slower than my C# equivalent. There's a good description from another post; Source: Facebook Hacker Cup Qualification Round 2011 A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3^2 + 1^2. Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3^2 + 1^2 (we don't count 1^2 + 3^2 as being different). On the other hand, 25 can be written as 5^2 + 0^2 or as 4^2 + 3^2. You need to solve this problem for 0 = X = 2,147,483,647. Examples: 10 = 1 25 = 2 3 = 0 0 = 1 1 = 1 My basic strategy (which I'm open to critique on) is to; Create a dictionary (for memoize) of the input numbers initialzed to 0 Get the largest number (LN) and pass it to count/memo function Get the LN square root as int Calculate squares for all numbers 0 to LN and store in dict Sum squares for non repeat combinations of numbers from 0 to LN If sum is in memo dict, add 1 to memo Finally, output the counts of the original numbers. Here is the F# code (See code changes at bottom) I've written that I believe corresponds to this strategy (Runtime: ~8:10); open System open System.Collections.Generic open System.IO /// Get a sequence of values let rec range min max = seq { for num in [min .. max] do yield num } /// Get a sequence starting from 0 and going to max let rec zeroRange max = range 0 max /// Find the maximum number in a list with a starting accumulator (acc) let rec maxNum acc = function | [] -> acc | p::tail when p > acc -> maxNum p tail | p::tail -> maxNum acc tail /// A helper for finding max that sets the accumulator to 0 let rec findMax nums = maxNum 0 nums /// Build a collection of combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) let rec combos range = seq { let count = ref 0 for inner in range do for outer in Seq.skip !count range do yield (inner, outer) count := !count + 1 } let rec squares nums = let dict = new Dictionary<int, int>() for s in nums do dict.[s] <- (s * s) dict /// Counts the number of possible double squares for a given number and keeps track of other counts that are provided in the memo dict. let rec countDoubleSquares (num: int) (memo: Dictionary<int, int>) = // The highest relevent square is the square root because it squared plus 0 squared is the top most possibility let maxSquare = System.Math.Sqrt((float)num) // Our relevant squares are 0 to the highest possible square; note the cast to int which shouldn't hurt. let relSquares = range 0 ((int)maxSquare) // calculate the squares up front; let calcSquares = squares relSquares // Build up our square combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) for (sq1, sq2) in combos relSquares do let v = calcSquares.[sq1] + calcSquares.[sq2] // Memoize our relevant results if memo.ContainsKey(v) then memo.[v] <- memo.[v] + 1 // return our count for the num passed in memo.[num] // Read our numbers from file. 
//let lines = File.ReadAllLines("test2.txt") //let nums = [ for line in Seq.skip 1 lines -> Int32.Parse(line) ] // Optionally, read them from straight array let nums = [1740798996; 1257431873; 2147483643; 602519112; 858320077; 1048039120; 415485223; 874566596; 1022907856; 65; 421330820; 1041493518; 5; 1328649093; 1941554117; 4225; 2082925; 0; 1; 3] // Initialize our memoize dictionary let memo = new Dictionary<int, int>() for num in nums do memo.[num] <- 0 // Get the largest number in our set, all other numbers will be memoized along the way let maxN = findMax nums // Do the memoize let maxCount = countDoubleSquares maxN memo // Output our results. for num in nums do printfn "%i" memo.[num] // Have a little pause for when we debug let line = Console.Read() And here is my version in C# (Runtime: ~1:40: using System; using System.Collections.Generic; using System.Diagnostics; using System.IO; using System.Linq; using System.Text; namespace FBHack_DoubleSquares { public class TestInput { public int NumCases { get; set; } public List<int> Nums { get; set; } public TestInput() { Nums = new List<int>(); } public int MaxNum() { return Nums.Max(); } } class Program { static void Main(string[] args) { // Read input from file. //TestInput input = ReadTestInput("live.txt"); // As example, load straight. TestInput input = new TestInput { NumCases = 20, Nums = new List<int> { 1740798996, 1257431873, 2147483643, 602519112, 858320077, 1048039120, 415485223, 874566596, 1022907856, 65, 421330820, 1041493518, 5, 1328649093, 1941554117, 4225, 2082925, 0, 1, 3, } }; var maxNum = input.MaxNum(); Dictionary<int, int> memo = new Dictionary<int, int>(); foreach (var num in input.Nums) { if (!memo.ContainsKey(num)) memo.Add(num, 0); } DoMemoize(maxNum, memo); StringBuilder sb = new StringBuilder(); foreach (var num in input.Nums) { //Console.WriteLine(memo[num]); sb.AppendLine(memo[num].ToString()); } Console.Write(sb.ToString()); var blah = Console.Read(); //File.WriteAllText("out.txt", sb.ToString()); } private static int DoMemoize(int num, Dictionary<int, int> memo) { var highSquare = (int)Math.Floor(Math.Sqrt(num)); var squares = CreateSquareLookup(highSquare); var relSquares = squares.Keys.ToList(); Debug.WriteLine("Starting - " + num.ToString()); Debug.WriteLine("RelSquares.Count = {0}", relSquares.Count); int sum = 0; var index = 0; foreach (var square in relSquares) { foreach (var inner in relSquares.Skip(index)) { sum = squares[square] + squares[inner]; if (memo.ContainsKey(sum)) memo[sum]++; } index++; } if (memo.ContainsKey(num)) return memo[num]; return 0; } private static TestInput ReadTestInput(string fileName) { var lines = File.ReadAllLines(fileName); var input = new TestInput(); input.NumCases = int.Parse(lines[0]); foreach (var lin in lines.Skip(1)) { input.Nums.Add(int.Parse(lin)); } return input; } public static Dictionary<int, int> CreateSquareLookup(int maxNum) { var dict = new Dictionary<int, int>(); int square; foreach (var num in Enumerable.Range(0, maxNum)) { square = num * num; dict[num] = square; } return dict; } } } Thanks for taking a look. UPDATE Changing the combos function slightly will result in a pretty big performance boost (from 8 min to 3:45): /// Old and Busted... let rec combosOld range = seq { let rangeCache = Seq.cache range let count = ref 0 for inner in rangeCache do for outer in Seq.skip !count rangeCache do yield (inner, outer) count := !count + 1 } /// The New Hotness... let rec combos maxNum = seq { for i in 0..maxNum do for j in i..maxNum do yield i,j }

  • What type of web service should I put together?

    - by Jake
    I want to write a web service using Visual Studio. The service needs to support some type of authentication, and should be able to receive commands via simple HTTP GET requests. The input would only be a method call with some parameters, and the responses will be simple status/error codes. My instinct would be to go with an ASP.NET Web Service, but this isn't an option in C# 4.0 and it makes me wonder if I should be using something that's more up-to-date. I've looked into WCF, but it seems like this requires a running application on the client-side - is there a way to query a WCF host by just accessing a URL? The authentication is also an important piece. Developing my own little authentication system seems like a bad idea - I've read that it's too easy to mess up. What would be the standard way of authenticating with a web service like this? I'd love to look up all of the specifics on this and learn it myself, but I really don't even know where to begin. Some direction would be greatly appreciated!
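
    On the "query a WCF host by just accessing a URL" point: a webHttpBinding endpoint answers plain HTTP GETs, so the client needs no WCF stack at all. A minimal self-hosted sketch (service name, URI template and status-code scheme are invented for illustration; authentication is a separate concern you would layer on top, e.g. HTTPS plus basic or Windows authentication):

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface ICommandService
        {
            [OperationContract]
            [WebGet(UriTemplate = "run?name={name}&arg={arg}")]
            int Run(string name, string arg);   // simple status/error code back to the caller
        }

        public class CommandService : ICommandService
        {
            public int Run(string name, string arg)
            {
                return name == "ping" ? 0 : 1;
            }
        }

        class Program
        {
            static void Main()
            {
                // WebServiceHost adds the webHttpBinding endpoint and webHttp behavior for us.
                using (var host = new WebServiceHost(typeof(CommandService),
                           new Uri("http://localhost:8085/commands/")))
                {
                    host.Open();
                    Console.WriteLine("Try: http://localhost:8085/commands/run?name=ping&arg=x");
                    Console.ReadLine();
                }
            }
        }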

  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structure 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects the record to be a data structure in a format produced by XML::Simple since it always used XML::Simple since early Jurassic era. An example simple XML is: <root> <rec><f1>v1</f1><f2>v2</f2></rec> <rec><f1>v1b</f1><f2>v2b</f2></rec> <rec><f1>v1c</f1><f2>v2c</f2></rec> </root> And example rough code is: sub process_record { my ($obj, $record_hash) = @_; # do_stuff } my $records = XML::Simple->XMLin(@args)->{root}; foreach my $record (@$records) { $obj->process_record($record) }; As everyone knows XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog—due to being a DOM parser and needing to build/store 100% of data in memory. So, it's not the best tool for parsing an XML file consisting of large amount of small records record-by-record. However, re-writing the entire code (which consist of large amount of "process_record"-like methods) to work with standard SAX parser seems like an big task not worth the resources, even at the cost of living with XML::Simple. I'm looking for an existing module which will probably be based on a SAX parser (or anything fast with small memory footprint) which can be used to produce $record hashrefs one by one based on the XML pictured above that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is; e.g whether I need to call next_record() or give it a callback coderef accepting a record.

  • Installing a Windows Service from a separate GUI - how to install .config file along with it?

    - by Shaul
    I have written a GUI (call it MyGUI) for ClickOnce deployment on any given client site. That GUI installs and configures a Windows Service (MyService), using the method described here by @Marc Gravell. Here's my code, run from inside MyGUI, which contains a reference to MyService: using (var inst = new AssemblyInstaller(typeof(MyService.Program).Assembly, new string[] { })) { IDictionary state = new Hashtable(); inst.UseNewContext = true; try { if (uninstall) { inst.Uninstall(state); } else { inst.Install(state); inst.Commit(state); } } catch { try { inst.Rollback(state); } catch { } throw; } } Take note of that first line: I'm grabbing the assembly for MyService, and installing that. Now, trouble is, the way I've done the deployment, I'm effectively referencing the service's EXE file from the GUI's app folder. So now the service fires up and starts looking for stuff in the MyService.config file, and can't find it, because it's living in someone else's app folder, with only the GUI's MyGUI.config file present. So, how do I get MyService.config to be available to the service?
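
    One detail that may matter here: a .NET executable reads its configuration from a file named after the exe, so MyService.exe needs a MyService.exe.config deployed alongside it (under ClickOnce that usually means adding the file to the GUI project as content so it gets published into the same folder). If you would rather not depend on that, the service can open its own config explicitly; this is a hedged sketch with a hypothetical helper, and it requires a reference to System.Configuration:

        using System.Configuration;
        using System.Reflection;

        public static class ServiceConfig
        {
            // Reads an appSetting from MyService.exe.config, located next to the
            // service assembly, regardless of which process loaded the assembly.
            public static string Get(string key)
            {
                string exePath = Assembly.GetExecutingAssembly().Location;
                Configuration cfg = ConfigurationManager.OpenExeConfiguration(exePath);
                KeyValueConfigurationElement setting = cfg.AppSettings.Settings[key];
                return setting == null ? null : setting.Value;
            }
        }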

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of a startExport function then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!
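
    The "page at a time" approach does not have to be stateful: if each call passes a page index and page size, the service stays stateless and no export ID or per-user bookkeeping is needed (at the cost of the data possibly changing between pages). A hedged sketch of such a method (table name, ordering column and connection string are invented; shown with SqlClient for brevity even though the backend here is Oracle):

        using System.Data;
        using System.Data.SqlClient;
        using System.Web.Services;

        public class ExportService : WebService
        {
            // The client calls this repeatedly, incrementing pageIndex until fewer
            // than pageSize rows come back.
            [WebMethod]
            public DataTable GetExportPage(int pageIndex, int pageSize)
            {
                const string sql = @"
                    SELECT * FROM
                        (SELECT ROW_NUMBER() OVER (ORDER BY Id) AS rn, *
                         FROM ExportRows) AS numbered
                    WHERE rn BETWEEN @first AND @last";

                using (SqlConnection conn = new SqlConnection("Data Source=...;Initial Catalog=...;Integrated Security=True"))
                using (SqlCommand cmd = new SqlCommand(sql, conn))
                using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
                {
                    cmd.Parameters.AddWithValue("@first", pageIndex * pageSize + 1);
                    cmd.Parameters.AddWithValue("@last", (pageIndex + 1) * pageSize);
                    DataTable page = new DataTable("ExportPage");
                    adapter.Fill(page);   // Fill opens and closes the connection itself
                    return page;
                }
            }
        }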

  • Visual Studio 2010 randomly unable to debug WCF service.

    - by rossisdead
    I'm running Visual Studio 2010 on a Windows 7 x64 machine, and occasionally VS is giving me the good old "The remote procedure could not be debugged.This usually indicates that debugging has not been enabled on the server" error that a lot of people ask about. My problem, though, is that it seems to only do this randomly(it can be anywhere from a few minutes to a few hours), and after I've made plenty of successful calls to the service already. It doesn't prevent the service from working. It still returns values and doesn't throw any errors. The only difference is that annoying dialog pops up everytime I start to debug my application. I should mention that I'm connecting the WCF service from a WPF application. If I launch the web site the service is part of, I don't get the dialog. A few of the things I've tried that do not work: Killing and restarting the server. Compiling the web server in x86 Enabling tracing, but couldn't find any problems. Is this just a bug in Visual Studio 2010, or is there something I'm missing?

  • Good case for a Null Object Pattern? (Provide some service with a mailservice)

    - by fireeyedboy
    For a website I'm working on, I made a Media Service object that I use in the front end, as well as in the backend (CMS). This Media Service object manipulates media in a local repository (DB); it provides the ability to upload/embed videos and upload images. In other words, website visitors are able to do this in the front end, but administrators of the site are also able to do this in the backend. I'd like this service to mail the administrators when a visitor has uploaded/embedded a new medium in the frontend, but refrain from mailing them when they upload/embed a medium themselves in the backend. So I started wondering whether this is a good case for passing a null object, that mimics the mail functionality, to the Media Service in the backend. I thought this might come in handy when they decide the backend needs to have mail functionality implemented as well. In simplified terms I'd like to do something like this: Frontend: $mediaService = new MediaService( new MediaRepository(), new StandardMailService() ); Backend: $mediaService = new MediaService( new MediaRepository(), new NullMailService() ); How do you feel about this? Does this make sense? Or am I setting myself up for problems down the road?
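
    To make the idea concrete (the original snippet is PHP; this is just the shape of the pattern sketched in C#, with invented names and the repository parameter omitted):

        public interface IMailService
        {
            void NotifyAdmins(string subject, string body);
        }

        public class StandardMailService : IMailService
        {
            public void NotifyAdmins(string subject, string body)
            {
                // send the mail (SMTP client, queue, ...); used by the front end
            }
        }

        public class NullMailService : IMailService
        {
            public void NotifyAdmins(string subject, string body)
            {
                // intentionally empty: the back end keeps the same code path, nothing is sent
            }
        }

        public class MediaService
        {
            private readonly IMailService mail;

            public MediaService(IMailService mail) { this.mail = mail; }

            public void UploadImage(string path)
            {
                // ...store the image in the repository...
                mail.NotifyAdmins("New media", "A new image was uploaded: " + path);
            }
        }

    Because MediaService only sees the interface, turning on real notifications in the backend later is just a different constructor argument.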

  • Flash - HTTP service bound to a button can be used only once?

    - by SAKIROGLU Koray
    I have a Flex project; in one of the screens of my UI, I create an HTTP service that calls a "doGet" J2EE servlet through the GET method, and I bind the service call to a button. The servlet prints a log to System.out so I know when it is run. The problem I have is, when I click said button the first time, the servlet does what it is supposed to do and prints to System.out, but if I click another time, nothing happens. Any idea what might be the cause? Here's the Flex code (the code and service have been generated by the Eclipse Flex plugin): <mx:Script> <![CDATA[ protected function button_clickHandler(event:MouseEvent):void { LaunchSimulResult.token = simulation.LaunchSimul(); } ]]> </mx:Script> <mx:Canvas> ... <mx:Button label="Simulation" id="button" click="button_clickHandler(event)"/> .. </mx:Canvas> <mx:CallResponder id="LaunchSimulResult"/> <simulation:Simulation id="simulation" fault="Alert.show(event.fault.faultString + '\n' + event.fault.faultDetail)" showBusyCursor="true"/>
