Search Results

Search found 28312 results on 1133 pages for 'client side'.


  • Detecting file upload size on the client side?

    - by DisgruntledGoat
    I'm using PHP for file uploads. In the PHP manual it shows an example using a MAX_FILE_SIZE hidden field, saying that it will detect on the client side (i.e. the browser) whether the file is too large or not. I've just tried the example in Firefox, Chrome and IE and it doesn't work. The file is always uploaded, even if it is way larger than the specified hidden field. Incidentally, if the file is larger than MAX_FILE_SIZE then calling move_uploaded_file doesn't work, so it seems the variable is having an effect server-side, but not client-side.
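
    For reference, the manual's client-side hint is nothing more than a hidden field placed before the file input; a minimal sketch of such a form (field names, action URL and limit are illustrative) is below. In practice browsers ignore MAX_FILE_SIZE, so a genuine client-side check has to be scripted separately, while PHP itself only enforces the value after the upload has reached the server.

        <!-- Minimal upload form in the style of the PHP manual example; values are illustrative -->
        <form action="upload.php" method="post" enctype="multipart/form-data">
            <!-- MAX_FILE_SIZE (in bytes) must come before the file input; PHP checks it server-side only -->
            <input type="hidden" name="MAX_FILE_SIZE" value="1048576" />
            <input type="file" name="userfile" />
            <input type="submit" value="Upload" />
        </form>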

    Read the article

  • Running isolated Internet Explorer instances side by side? (separate cookie sets)

    - by GJ
    I'm using PAMIE (http://pamie.sourceforge.net/) to automate some testing routines on a client's web site via IE8, and would like to be able to run multiple tests under different user credentials. The site which I'm testing is using cookies to remember the user (without a "remember me" option I can deselect). Therefore, when I run a second instance of IE8 the cookies get shared and I can't log in as a different user. Is there any way to get IE8 to use isolated sets of cookies in each window?
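
    One thing worth trying (this is my suggestion, not something PAMIE documents): IE8 accepts a -nomerge command-line switch that starts an instance which does not merge session state, so session cookies are kept per process (the same thing the File > New Session menu item does); persistent cookies may still be shared. A rough Python sketch, with the iexplore.exe path hard-coded for illustration:

        import subprocess

        # Launch two IE8 instances that do not merge session state, so each keeps
        # its own session cookies and can be logged in as a different user.
        IE_PATH = r"C:\Program Files\Internet Explorer\iexplore.exe"  # adjust for the test machine
        for url in ("http://example.com/login", "http://example.com/login"):
            subprocess.Popen([IE_PATH, "-nomerge", url])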

    Read the article

  • Limit NFS block size from server side?

    - by paulw1128
    Is it possible to enforce a maximum rsize/wsize in nfsd? I'm having issues related to IP fragmentation (yes, I'm stuck with NFS-over-UDP, contrary to the warnings in the manpage), and have no practical access to the client mount command (buried in one of many TFTP boot images). http://nfs.sourceforge.net/nfs-howto/ar01s05.html lists a kernel source parameter limiting the maximum block size, but I'm not going to get away with recompiling the nfsd kernel module, so that's not really an option either :-(

    Read the article

  • WCF Stream.Read always returns 0 in client

    - by G_M
    I've spent most of my day trying to figure out why this isn't working. I have a WCF service that streams an object to the client. The client is then supposed to write the file to its disk. But when I call stream.Read(buffer, 0, bufferLength) it always returns 0. Here's my code:

        namespace StreamServiceNS
        {
            [ServiceContract]
            public interface IStreamService
            {
                [OperationContract]
                Stream downloadStreamFile();
            }
        }

        class StreamService : IStreamService
        {
            public Stream downloadStreamFile()
            {
                ISSSteamFile sFile = getStreamFile();
                BinaryFormatter bf = new BinaryFormatter();
                MemoryStream stream = new MemoryStream();
                bf.Serialize(stream, sFile);
                return stream;
            }
        }

    Service config file:

        <system.serviceModel>
          <services>
            <service name="StreamServiceNS.StreamService">
              <endpoint address="stream" binding="basicHttpBinding"
                        bindingConfiguration="BasicHttpBinding_IStreamService"
                        name="BasicHttpEndpoint_IStreamService"
                        contract="SWUpdaterService.ISWUService" />
            </service>
          </services>
          <bindings>
            <basicHttpBinding>
              <binding name="BasicHttpBinding_IStreamService" transferMode="StreamedResponse" maxReceivedMessageSize="209715200"></binding>
            </basicHttpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior>
                <serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="400"/>
                <serviceMetadata httpGetEnabled="true"/>
                <serviceDebug includeExceptionDetailInFaults="false"/>
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Client:

        TestApp.StreamServiceRef.StreamServiceClient client = new StreamServiceRef.StreamServiceClient();
        try
        {
            Stream stream = client.downloadStreamFile();
            int bufferLength = 8 * 1024;
            byte[] buffer = new byte[bufferLength];
            FileStream fs = new FileStream(@"C:\test\testFile.exe", FileMode.Create, FileAccess.Write);
            int bytesRead;
            while ((bytesRead = stream.Read(buffer, 0, bufferLength)) > 0)
            {
                fs.Write(buffer, 0, bytesRead);
            }
            stream.Close();
            fs.Close();
        }
        catch (Exception e)
        {
            Console.WriteLine("Error: " + e.Message);
        }

    Client app.config:

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="BasicHttpEndpoint_IStreamService" maxReceivedMessageSize="209715200" transferMode="StreamedResponse"></binding>
            </basicHttpBinding>
          </bindings>
          <client>
            <endpoint address="http://[server]/StreamServices/streamservice.svc/stream"
                      binding="basicHttpBinding"
                      bindingConfiguration="BasicHttpEndpoint_IStreamService"
                      contract="StreamServiceRef.IStreamService"
                      name="BasicHttpEndpoint_IStreamService" />
          </client>
        </system.serviceModel>

    (some code clipped for brevity) I've read everything I can find on making WCF streaming services, and my code looks no different than theirs. I can replace the streaming with buffering and send an object that way fine, but when I try to stream, the client always sees the stream as "empty". The testFile.exe gets created, but its size is 0KB. What am I missing?
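
    A likely culprit, judging only from the code above (this is a guess, not a confirmed fix): BinaryFormatter.Serialize leaves the MemoryStream positioned at its end, so the client's first Read starts at end-of-stream and returns 0. A minimal sketch of the idea is to rewind the stream before returning it:

        public Stream downloadStreamFile()
        {
            ISSSteamFile sFile = getStreamFile();
            BinaryFormatter bf = new BinaryFormatter();
            MemoryStream stream = new MemoryStream();
            bf.Serialize(stream, sFile);
            stream.Position = 0;   // rewind; without this, reads start at the end of the buffer
            return stream;
        }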

    Read the article

  • Socket php server not showing messages sent from android client

    - by Mj1992
    Hi, I am a newbie at this kind of stuff, but here's what I want to do. I am trying to implement a chat application in which users send their queries from the website, and as soon as the messages are sent by the website users they appear in the Android mobile application of the site owner, who answers their queries. In short, I want to implement a live chat. Right now I am just trying to send messages from the Android app to the PHP server. But when I run my PHP script from Dreamweaver in Chrome, the browser keeps on loading and doesn't show any output when I send a message from the client. Sometimes the PHP script did show some output that I had sent from the Android client, but I don't know when it works and when it does not. So I want to show those messages in the PHP script as soon as I send them from the client, and vice versa (the vice versa is not implemented for the client yet, but help with that would be appreciated too). Here's what I've done till now. PHP script:

        <?php
        set_time_limit(0);
        $address = '127.0.0.1';
        $port = 1234;
        $sock = socket_create(AF_INET, SOCK_STREAM, 0);
        socket_bind($sock, $address, $port) or die('Could not bind to address');
        socket_listen($sock);
        $client = socket_accept($sock);
        $welcome = "Roll up, roll up, to the greatest show on earth!\n? ";
        socket_write($client, $welcome, strlen($welcome)) or die("Could not send connect string\n");
        do {
            $input = socket_read($client, 1024, 1) or die("Could not read input\n");
            echo "User Says: \n\t\t\t" . $input;
            if (trim($input) != "") {
                echo "Received input: $input\n";
                if (trim($input) == "END") {
                    socket_close($spawn);
                    break;
                }
            } else {
                $output = strrev($input) . "\n";
                socket_write($spawn, $output . "? ", strlen(($output)+2)) or die("Could not write output\n");
                echo "Sent output: " . trim($output) . "\n";
            }
        } while (true);
        socket_close($sock);
        echo "Socket Terminated";
        ?>

    Android code:

        public class ServerClientActivity extends Activity {
            private Button bt;
            private TextView tv;
            private Socket socket;
            private String serverIpAddress = "127.0.0.1";
            private static final int REDIRECTED_SERVERPORT = 1234;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                bt = (Button) findViewById(R.id.myButton);
                tv = (TextView) findViewById(R.id.myTextView);
                try {
                    InetAddress serverAddr = InetAddress.getByName(serverIpAddress);
                    socket = new Socket(serverAddr, REDIRECTED_SERVERPORT);
                } catch (UnknownHostException e1) {
                    e1.printStackTrace();
                } catch (IOException e1) {
                    e1.printStackTrace();
                }
                bt.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        try {
                            EditText et = (EditText) findViewById(R.id.EditText01);
                            String str = et.getText().toString();
                            PrintWriter out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(socket.getOutputStream())), true);
                            out.println(str);
                            Log.d("Client", "Client sent message");
                        } catch (UnknownHostException e) {
                            tv.setText(e.getMessage());
                            e.printStackTrace();
                        } catch (IOException e) {
                            tv.setText(e.getMessage());
                            e.printStackTrace();
                        } catch (Exception e) {
                            tv.setText(e.getMessage());
                            e.printStackTrace();
                        }
                    }
                });
            }
        }

    I've just pasted the onClick button event code for Android. The EditText is the textbox where I enter my text. The IP address and port are the same as in the PHP script.
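
    A few things in the script above look like likely culprits rather than confirmed fixes: $spawn is never defined (the accepted socket lives in $client), strlen(($output)+2) adds 2 to the string rather than to its length, and if the Android app runs in the emulator it has to use 10.0.2.2 to reach the host machine's 127.0.0.1. Under those assumptions, a minimal corrected echo loop might look like:

        do {
            $input = socket_read($client, 1024, 1) or die("Could not read input\n");
            echo "User Says:\n\t\t\t" . $input;
            if (trim($input) == "END") {
                socket_close($client);      // close the accepted socket, not an undefined $spawn
                break;
            }
            if (trim($input) != "") {
                $output = strrev($input) . "\n";
                // note: strlen($output) + 2, not strlen(($output) + 2)
                socket_write($client, $output . "? ", strlen($output) + 2) or die("Could not write output\n");
                echo "Sent output: " . trim($output) . "\n";
            }
        } while (true);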

    Read the article

  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches against external sources such as Google, Flickr, YouTube, etc. right from within Explorer. While we do have the Desktop Integration Suite, which offers searching within Explorer, I thought it would be interesting to look into this method, which does not require any client software to implement. Basically, federated searching hooks into Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is a .osdx file, which is an OpenSearch Description document. It describes the search provider you are hooking up to along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set. So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found the RSS Feeds component works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might look like:

        http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch%3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal&SearchProviders=server&

    Now you want to create a new text file and start out with this information:

        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/">
          <ShortName></ShortName>
          <Description></Description>
          <Url type="application/rss+xml" template=""/>
          <Url type="text/html" template=""/>
        </OpenSearchDescription>

    Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&amp;' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing, except you want to copy the UCM search results URL from the page of results. That URL will look something like:

        http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText=oracle

    Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension. The completed file should look like:

        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/">
          <ShortName>Universal Content Management</ShortName>
          <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description>
          <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/>
          <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/>
        </OpenSearchDescription>

    After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add it to the Searches folder in your user folder as well as to your Favorites. Now just click on the search icon and, in the upper-right search box, enter your term. As you are typing, it begins executing searches and the results come back in Explorer. When you double-click on an item, it will try to download the web-viewable rendition for viewing. You also have the ability to save the search, just as you would in UCM. And there is a link to Search On Website which will launch your browser and go directly to the search results page there. With some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnails of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace 'xmlns:media="http://search.yahoo.com/mrss/"' to the rss definition:

        <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:media="http://search.yahoo.com/mrss/">

    Next, in the rss_channel_item_with_thumb include, below the closing image element, add this element:

        </images>
        <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" />
        <description>

    This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started. *Note: This post also applies to Universal Records Management (URM).

    Read the article

  • What's the best subversion client to use for a working copy on a network share that's being modified by other users without a subversion client?

    - by loupai
    That probably sounds confusing. Here's my situation: I have a software project I'd like to version control with Subversion. The project files are on a network share which is modified by several users. I'd like to version control the directory, with the caveat that many of the users are not going to be using an SVN client when they add, modify, move, and rename files. I'll be doing all the version control myself along with one or two other users. When I commit the changes with an SVN client, I'd assume that all changes made to files, all deletions, renames, etc. are intentional. So how do I detect these changes if a user made them without using a client like TortoiseSVN? Can anyone recommend a client that could determine possible renames, deletions, and moved files? Thanks!

    Read the article

  • SFTP - Unable to Overwrite File - "EOF received from remote side"

    - by NateReid
    I am working with a customer to troubleshoot an error they get when trying to overwrite/"PUT" a file on our SFTP site. When the root directory is empty and they upload the file, there is no problem, but the error occurs when they try to overwrite an existing file. The error they receive when issuing a put command in Java CAPS is: Batch SFTP eWay error when doing data transfer operation in [PUT()], message=[EOF received from remote side [Unknown cause]].|#] When they use WinSCP or FileZilla to put the file, it overwrites fine with no errors. We have tried multiple different files, checking their SFTP user permissions, giving full access permissions to "everyone" on the user's root directory in Windows, recreating their user account, and ensuring no other processes are using/locking the files being overwritten. We are using Cerberus Professional FTP server software. Any ideas of what else we could try?

    Read the article

  • Listing side projects in a jr. sysadmin resume

    - by Beaming Mel-Bin
    I have many "side projects" that were not part of my past jobs. For example:
    - Configuring web site environments for professors and friends
    - Configuring a Linux box that does the routing, firewall (iptables), backup and file sharing (samba) for my apartment
    - Developing small websites for things ranging from party invites to polling friends
    - Running my own SMTP server with DomainKeys, SPF and DNSBL
    - Etc., etc.
    What would be the appropriate section in which to mention these? Should I even mention them? Perhaps it's best to just bring them up during the interview. I would especially appreciate the opinion of hiring managers.

    Read the article

  • Exchange 2007: Auto reply message to senders (server side)

    - by Mestika
    I need to create an auto-reply for some of the users in my organization, so that when a person sends an e-mail to e.g. [email protected], they receive an automatic reply with a message like "Closed during the holidays. We are back at… etc. etc. etc.". I've tried to create a Transport Rule on our Exchange server, but the only option I can find in the actions window is to reply with a "Bounce message to sender with enhanced status code", which I guess is not the precise action I'm looking for. How can I set up a server-side auto-reply, apply it to only a fixed number of users in my organization, and send a message to the senders (who are outside the organization)?

    Read the article

  • How to implement Comet in database side?

    - by Morgan Cheng
    I have been searching for an answer to this question for a long time: how do you implement Comet on the database side? To support Comet, we'd better have a web server stack that supports asynchronous operation, so Apache is not an option. There are open source web servers, such as Tornado, that can do asynchronous HTTP handling. That covers the web server level. At the database level, how do you make the web server aware that some event has happened in the database? There should be an asynchronous way to let the web server know that something was updated in the database. Polling is not an option. Is there any example available?
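
    One common pattern (my suggestion, not something from the question) is to have the database push notifications instead of being polled, e.g. PostgreSQL's LISTEN/NOTIFY; the async web server waits on the database connection's socket and pushes to Comet clients when a notification arrives. A minimal sketch with psycopg2, where the connection string and the channel name 'table_updated' are made up for illustration:

        import select
        import psycopg2
        import psycopg2.extensions

        conn = psycopg2.connect("dbname=app user=app")   # connection details are placeholders
        conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
        cur = conn.cursor()
        cur.execute("LISTEN table_updated;")             # triggers in the DB issue NOTIFY table_updated

        while True:
            # Block until the DB socket becomes readable, i.e. a NOTIFY arrived
            if select.select([conn], [], [], 60) == ([], [], []):
                continue                                 # timeout: nothing happened
            conn.poll()
            while conn.notifies:
                notify = conn.notifies.pop(0)
                # here you would hand notify.payload to the Comet/long-polling layer
                print("change signalled:", notify.channel, notify.payload)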

    Read the article

  • ssh: which side is running the SOCKS proxy?

    - by Barry Brown
    When I set up a tunnel using dynamic forwarding (ssh -D), which side is running the SOCKS proxy? That is, is the proxy running on the local end (client) or the remote end (server)? Here's the situation: I want to set up several tunnels chained together using -L. Should the -D tunnel be the last one in the chain or the first one? Edit: I found the answer to the second paragraph on Super User (the -D tunnel should be at the remotest end). But I'd still like to know where the proxy code is running.

    Read the article

  • rsync server side limit bandwidth/connection

    - by c2h2
    In a VoIP application, I have up to 3000 clients that rsync audio files from their Linux server daily. The server is placed at a data center (10 Mbps inbound/outbound) and also works as a VoIP SIP server running FreeSWITCH (so low ping latency must be ensured). Therefore I would like server-side control of rsync that can:
    - Limit total outbound bandwidth.
    - Limit the total number of connections (reject clients while at the maximum number of connections and let them retry after a specific time frame).
    - OPTIONAL: list/kill individual connections.
    Normally I would use ssh + rsync + PEM keys with some extra options, but the above requirements are not feasible with simple command lines. Can anyone point me in some direction or show some scripts/tools? I would probably also integrate them and release the result on GitHub. Thanks!
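
    If running rsync in daemon mode (rsync://) is acceptable instead of rsync-over-ssh, rsyncd.conf gives you a per-module connection cap out of the box; clients over the limit are refused and can retry. This is a sketch under that assumption (module name and paths are made up), and bandwidth would still need to be shaped externally, e.g. with tc or trickle, since I'm not aware of a reliable daemon-side bandwidth option in older rsync versions:

        # /etc/rsyncd.conf (illustrative)
        pid file = /var/run/rsyncd.pid
        lock file = /var/run/rsync.lock     # used to count connections for "max connections"

        [audio]
            path = /srv/voip/audio
            read only = yes
            max connections = 50            # clients beyond this are refused and must retry
            timeout = 300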

    Read the article

  • Accessing a storage-side snapshot of a cluster-shared volume

    - by syneticon-dj
    From time to time I am in the situation where I need to get data back from storage-side snapshots of cluster shared volumes. I suppose I just never figured out a way to do it right, so I always needed to:
    1. expose the shadow copy as a separate LUN
    2. offline the original CSV in the cluster
    3. un-expose the LUN carrying the original CSV
    4. make sure my cluster nodes have detected the new LUN and no longer list the original one
    5. add the volume to the list of cluster volumes, promote it to be a CSV
    6. copy off the data I need
    7. undo steps 5. - 1. to revert to the original configuration
    This is quite tedious and requires downtime for the original volume. Is there a better way to do this without involving a separate host outside of the cluster?

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket and created a root folder. I set the storage class to Standard and server-side encryption to AES. I started a copy job to move the files; some files copied over, and when I checked them they showed storage class "Reduced Redundancy" and encryption set to none. WTF? So I deleted all files and folders, manually created the folder structure, and again set the storage class and encryption level. I copied some files and bam, they still show (at a file level) as Reduced Redundancy with no encryption. So my question is this: (a) is it really stored redundantly and encrypted, just not showing it properly (the root folder is, so how can the file not be??), or (b) am I being a huge tool and missing something?

    Read the article

  • Problem with Google Calendar API invocation at server side

    - by Raffo
    Hi guys, I have problems with the invocation of the Google Calendar API. I downloaded the library for Java and added the following files as external JARs in Eclipse: gdata-core, gdata-calendar, gdata-calendar-meta, gdata-client-meta, gdata-client. Then I created the method as follows:

        import com.google.gdata.client.calendar.CalendarService;
        import com.google.gdata.data.calendar.CalendarEntry;
        import com.google.gdata.data.calendar.CalendarFeed;
        import com.google.gwt.user.server.rpc.RemoteServiceServlet;

        public class GCalServImpl extends RemoteServiceServlet implements GCalServ {

            @Override
            public String RetrieveCalendars() {
                // TODO Auto-generated method stub
                // Create a CalendarService and authenticate
                try {
                    CalendarService myService = new CalendarService("taskR");
                    myService.setUserCredentials(***username***, "***password***");
                    // Send the request and print the response
                    URL feedUrl = new URL("http://www.google.com/calendar/feeds/default/allcalendars/full");
                    CalendarFeed resultFeed = myService.getFeed(feedUrl, CalendarFeed.class);
                    System.out.println("Your calendars:");
                    System.out.println();
                    String s = "";
                    for (int i = 0; i < resultFeed.getEntries().size(); i++) {
                        CalendarEntry entry = resultFeed.getEntries().get(i);
                        s = entry.getTitle().getPlainText();
                        System.out.println("\t" + s);
                        return s;
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return null;
            }
        }

    I then call it from the client side with a basic async invocation. When I try to launch the program I get the following errors:

        WARNING: Error for /taskr/cal
        java.lang.NoClassDefFoundError: com/google/gdata/client/calendar/CalendarService
            at java.lang.Class.getDeclaredConstructors0(Native Method)
            at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
            at java.lang.Class.getConstructor0(Class.java:2699)
            at java.lang.Class.newInstance0(Class.java:326)
            at java.lang.Class.newInstance(Class.java:308)
            at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153)
            at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:428)
            at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
            at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
            at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:326)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
            at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938)
            at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755)
            at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
        Caused by: java.lang.ClassNotFoundException: com.google.gdata.client.calendar.CalendarService
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:315)
            at com.google.appengine.tools.development.IsolatedAppClassLoader.loadClass(IsolatedAppClassLoader.java:151)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:250)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:398)
            ... 33 more

    What can I do??

    Read the article

  • JMS have javax.jms.InvalidDestinationException when client stop working

    - by Tran
    I use JMS for sending requests from a client to a server. My client sends a request to the server. While the server is working on my request, my client stops (network problem) before the server finishes. When the server is finished, it tries to return the result to the client, but it can no longer see the client that sent the request, at which point the server logs an exception. The exception is: javax.jms.InvalidDestinationException: Cannot publish to a deleted Destination: temp-queue://ID:PC0092-49463-1344231871819-0:0:9 My question is: what do I need to do in this case? Can I catch or suppress this exception? And how can I do it? (Sorry if my English is not good.)
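
    Since the client's temporary reply queue is destroyed when its connection drops, one option (a sketch of my suggestion, not a documented recipe; producer, request and replyMessage are placeholder names for the server's existing JMS objects) is simply to catch InvalidDestinationException around the reply send and drop the reply:

        try {
            // reply to the destination the requester supplied in JMSReplyTo (a temporary queue)
            producer.send(request.getJMSReplyTo(), replyMessage);
        } catch (javax.jms.InvalidDestinationException e) {
            // The requester's temporary queue was deleted (client disconnected),
            // so there is nobody left to answer: log it and discard the reply.
            System.err.println("Requester gone, dropping reply: " + e.getMessage());
        }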

    Read the article

  • WCF client proxy initialization

    - by 123Developer
    I am consuming a WCF service and created its proxy using a VS 2008 service reference. I am looking for the best pattern for calling the WCF service methods. Should I create the client proxy instance every time I call a service method and close the client as soon as I am done with it? When I profiled my client application, I could see that it takes a lot of time to get the channel while initializing the proxy client. Should I instead use a singleton pattern for the client proxy, so that I use only one instance and get rid of the re-initialization overhead? Is there any hidden problem with that approach? I am using .NET Framework 3.5 SP1 and basicHttpBinding with little customization.
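
    For what it's worth, a common middle ground (my sketch, with a hypothetical IMyService contract and endpoint name) is to cache the expensive part, the ChannelFactory, and create a cheap channel per call, closing it with the usual Close/Abort pattern so a faulted channel never gets reused:

        using System;
        using System.ServiceModel;

        // Hypothetical contract standing in for the generated service reference.
        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            void DoWork();
        }

        static class ServiceCaller
        {
            // Cache the ChannelFactory: building it (reading config, reflecting over the contract)
            // is the expensive part; creating a channel per call is comparatively cheap.
            static readonly ChannelFactory<IMyService> Factory =
                new ChannelFactory<IMyService>("BasicHttpBinding_IMyService");

            public static void CallService()
            {
                IMyService proxy = Factory.CreateChannel();
                try
                {
                    proxy.DoWork();
                    ((IClientChannel)proxy).Close();   // graceful close on success
                }
                catch (CommunicationException)
                {
                    ((IClientChannel)proxy).Abort();   // never Close a faulted channel
                    throw;
                }
                catch (TimeoutException)
                {
                    ((IClientChannel)proxy).Abort();
                    throw;
                }
            }
        }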

    Read the article

  • Spring MVC with a rich client framework

    - by ziggy
    I have several applications that are structured as follows: DataComponent, WebComponent, ThickClientComponent and WebServices. The DataComponent has all the functionality required to access the application's data, so it contains the DAOs and the JPA entities. The other three modules are:
    - WebComponent - a Spring MVC application that uses the DataComponent for data access
    - ThickClientComponent - a Swing application that uses the DataComponent for data access
    - WebServices - a SOAP-based service that also uses the DataComponent
    All three projects have the DataComponent as a dependency in their Maven POM files. I would like to use a rich client framework like RichFaces, ICEfaces or PrimeFaces, as I need the rich components that are available in such frameworks (i.e. trees, panels, drag and drop, etc.). I have looked around and I can't seem to find an example where a Spring MVC application uses a rich client platform. Is it possible? Or are the rich client platforms frameworks in the sense that I have to use either Spring MVC or the rich client platform, but not both? The DataComponent module is Spring-based.

    Read the article

  • "The provider is not compatible with the version of Oracle client"

    - by psyb0rg
    I just put my ASP.NET web service on a remote host. The service accesses an Oracle DB on my local machine. The service worked fine when it was running on localhost, but since moving to the remote host I get:

        The provider is not compatible with the version of Oracle client
            at Oracle.DataAccess.Client.OracleInit.Initialize()
            at Oracle.DataAccess.Client.OracleConnection..cctor()
            --- End of inner exception stack trace ---
            at Oracle.DataAccess.Client.OracleConnection..ctor(String connectionString)
            at....

    I know the error is related to the data provider version running on the server/client. In my case, the only DLL I referenced in the project was Oracle.DataAccess. So how do I go about solving this? Note that I won't be able to change anything on the web host other than my own project. My local machine is running Oracle 11g. Thanks.

    Read the article

  • Jboss webservice client error

    - by user309281
    Hi, I am getting the following error when a Java web service client written using JBoss WebServices tries to invoke a web service through its WSDL URL. The program is executed in a Java 1.5 VM installed on RHEL. Any idea when this kind of exception pops up? Moreover, the IP 192.168.182.20 is not that of the system from which the client program was executed.

        javax.xml.ws.soap.SOAPFaultException: [192.168.182.20] is not authorized.
            at org.jboss.ws.core.jaxws.SOAPFaultHelperJAXWS.getSOAPFaultException(SOAPFaultHelperJAXWS.java:84)
            at org.jboss.ws.core.jaxws.binding.SOAP11BindingJAXWS.throwFaultException(SOAP11BindingJAXWS.java:107)
            at org.jboss.ws.core.CommonSOAPBinding.unbindResponseMessage(CommonSOAPBinding.java:579)
            at org.jboss.ws.core.CommonClient.invoke(CommonClient.java:381)
            at org.jboss.ws.core.jaxws.client.ClientImpl.invoke(ClientImpl.java:291)
            at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:170)
            at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:150)
            at $Proxy21.send(Unknown Source)

    Read the article

  • .NET/C# Drizzle database client

    - by FlappySocks
    I am planning to use Drizzle in my next C# Mono app. Since there is no C# client available for Drizzle, I thought I would have a stab at writing my own by converting the Java client and then making it work with DBLinq. Having seen the Java client, I realise that it's a longer job than I had anticipated, and I don't have the time. Besides, the Java client is not all that mature yet. Since there is an official Drizzle C client library (libdrizzle), writing a C# wrapper might be the best solution. Are there any tools available that can assist in generating the code for this?
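
    Tools like SWIG can generate C# bindings from C headers, which may be the quickest route. As a rough illustration of the hand-written P/Invoke alternative (the exported function name and signature below are assumptions about libdrizzle, not verified against its drizzle.h, so treat this purely as a sketch):

        using System;
        using System.Runtime.InteropServices;

        static class LibDrizzle
        {
            // Assumed export: const char *drizzle_version(void); check drizzle.h before relying on it.
            [DllImport("libdrizzle", EntryPoint = "drizzle_version")]
            private static extern IntPtr drizzle_version();

            public static string Version()
            {
                // Marshal the returned C string; we assume libdrizzle owns the buffer, so do not free it.
                return Marshal.PtrToStringAnsi(drizzle_version());
            }
        }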

    Read the article

  • Take Control Of Web Control ClientID Values in ASP.NET 4.0

    Each server-side Web control in an ASP.NET Web Forms application has an ID property that identifies the Web control and is the name by which the Web control is accessed in the code-behind class. When rendered into HTML, the Web control turns its server-side ID value into a client-side id attribute. Ideally, there would be a one-to-one correspondence between the value of the server-side ID property and the generated client-side id, but in reality things aren't so simple. By default, the rendered client-side id is formed by taking the Web control's ID property and prefixing it with the ID properties of its naming containers. In short, a Web control with an ID of txtName can get rendered into an HTML element with a client-side id like ctl00_MainContent_txtName. This default translation from the server-side ID property value to the rendered client-side id attribute can introduce challenges when trying to access an HTML element via JavaScript, which is typically done by id, as the page developer building the web page and writing the JavaScript does not know what the id value of the rendered Web control will be at design time. (The client-side id value can be determined at runtime via the Web control's ClientID property.) ASP.NET 4.0 affords page developers much greater flexibility in how Web controls render their ID property into a client-side id. This article starts with an explanation as to why and how ASP.NET translates the server-side ID value into the client-side id value and then shows how to take control of this process using ASP.NET 4.0. Read on to learn more!
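
    As a quick illustration of the ASP.NET 4.0 feature the article covers (a minimal sketch; the control name is made up), setting ClientIDMode to Static makes the rendered client-side id match the server-side ID exactly:

        <%-- With ClientIDMode="Static", ASP.NET 4.0 renders id="txtName" even inside master pages
             and other naming containers, so client-side JavaScript can rely on the id. --%>
        <asp:TextBox ID="txtName" runat="server" ClientIDMode="Static" />

        <%-- Under the default AutoID behavior the same control would render something like:
             <input name="ctl00$MainContent$txtName" type="text" id="ctl00_MainContent_txtName" /> --%>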

    Read the article
