Search Results

Search found 16794 results on 672 pages for 'memory usage'.


  • Apache Derby running in Tomcat shutdown issues

    - by Luke
    I have set up Derby Network Server to be hosted within a Tomcat environment. This works great. However, when I shut down Tomcat I get the following errors:

        04/01/2011 10:41:41 AM org.apache.catalina.core.StandardService stop
        INFO: Stopping service Catalina
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        SEVERE: The web application [/derby] registered the JDBC driver [org.apache.derby.jdbc.ClientDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        SEVERE: The web application [/derby] registered the JDBC driver [org.apache.derby.jdbc.AutoloadedDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [derby.NetworkServerStarter] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [NetworkServerThread_4] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_5] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_13] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.coyote.http11.Http11Protocol destroy
        INFO: Stopping Coyote HTTP/1.1 on http-8080

    I'm currently starting and stopping Tomcat with the following commands:

        ./catalina run
        ./catalina stop

    Is there a better way to shut down Tomcat with Derby, or can this be solved by a configuration change? (See the sketch after this entry.)

    Read the article
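
    For the Derby question above, one common way to avoid the forced driver unregistration and the leftover NetworkServerThread/DRDAConnThread threads is to stop the Network Server and deregister the Derby drivers explicitly when the web application shuts down. This is a minimal sketch using a ServletContextListener, under the assumption that the same webapp starts the Network Server; the class name, port, and web.xml registration are assumptions, not taken from the post:

        import java.net.InetAddress;
        import java.sql.Driver;
        import java.sql.DriverManager;
        import java.util.Enumeration;

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        import org.apache.derby.drda.NetworkServerControl;

        // Hypothetical listener; register it in web.xml with a <listener> element.
        public class DerbyLifecycleListener implements ServletContextListener {

            private NetworkServerControl server;

            public void contextInitialized(ServletContextEvent sce) {
                try {
                    // Start the Derby Network Server when the webapp starts.
                    server = new NetworkServerControl(InetAddress.getByName("localhost"), 1527);
                    server.start(null);
                } catch (Exception e) {
                    throw new RuntimeException("Could not start Derby Network Server", e);
                }
            }

            public void contextDestroyed(ServletContextEvent sce) {
                try {
                    if (server != null) {
                        // Stops the NetworkServerThread/DRDAConnThread threads.
                        server.shutdown();
                    }
                } catch (Exception e) {
                    sce.getServletContext().log("Derby shutdown failed", e);
                }
                // Deregister any JDBC drivers loaded by this webapp's class loader,
                // so Tomcat does not have to force-unregister them.
                Enumeration<Driver> drivers = DriverManager.getDrivers();
                while (drivers.hasMoreElements()) {
                    Driver driver = drivers.nextElement();
                    if (driver.getClass().getClassLoader() == getClass().getClassLoader()) {
                        try {
                            DriverManager.deregisterDriver(driver);
                        } catch (Exception ignored) {
                            // best effort
                        }
                    }
                }
            }
        }

    With something like this in place, a plain ./catalina stop should leave nothing behind for Tomcat's leak detection to complain about.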

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive with Windows 7 to a traditional hard drive, the average speed is between 25 and 30 MB/s. When doing the reverse, copying to the USB drive, the speed is 5 MB/s on average. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same on both FAT32 and exFAT file systems on the USB drive, with NTFS on the internal hard disk. I don't think I can be mistaken in saying that flash memory has much higher performance than a spinning hard drive in terms of both reading and writing. For both memory types, reading should be faster than writing too. Now I wonder: how can it be that copying files from a fast-read memory to a faster-write memory is actually slower than copying files from a fast-read memory to a slow-write memory? I think the files are stored in RAM before being copied over too, and there's caching as well, but I don't see how even that could tip the balance. It could only work in favour of writing to the USB drive, since the internal drive is "closer" to the SATA system than the USB port and the USB drive will receive data from the internal SATA HDD faster. Perhaps my way of thinking is all wrong, or it just depends on the manufacturer of the USB pen drive. But I am curious.

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 266
        virtual memory          (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m, and the stack to 64k and 128k (and each of these combinations), with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas? (A small diagnostic sketch follows this entry.)

    EDIT: Here is some environment information on my Ubuntu box:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)

    Read the article
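
    For the OutOfMemoryError question above, one plausible suspect is the comparatively low "max user processes" value of 266 on the Mac versus unlimited on Ubuntu, since "unable to create new native thread" means the operating system refused to give the JVM another thread. A hypothetical probe (not from the post) that counts how many native threads the JVM can create before that error appears can be run with the same -Xmx/-Xss flags on both machines and compared; it deliberately exhausts the limit, so expect the machine to be sluggish while it runs:

        import java.util.concurrent.CountDownLatch;

        // Starts parked daemon threads until the JVM can no longer create one,
        // then reports how many were created before the failure.
        public class ThreadLimitProbe {
            public static void main(String[] args) {
                final CountDownLatch hold = new CountDownLatch(1);
                int count = 0;
                try {
                    while (true) {
                        Thread t = new Thread(new Runnable() {
                            public void run() {
                                try {
                                    hold.await(); // keep the thread alive but idle
                                } catch (InterruptedException ignored) {
                                }
                            }
                        });
                        t.setDaemon(true);
                        t.start();
                        count++;
                    }
                } catch (OutOfMemoryError e) {
                    System.out.println("Created " + count + " threads before: " + e);
                } finally {
                    hold.countDown(); // release the parked threads
                }
            }
        }

    If the Mac tops out near the per-user process limit while Ubuntu goes far higher, raising that limit (rather than tuning heap or stack) would be the direction to investigate.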

  • Server Recovery from Denial of Service

    - by JMC
    I'm looking at a server that might be misconfigured to handle denial of service. The database was knocked offline during the attack and failed to restart itself once the attack subsided.

    Details of the attack: the attacker either intentionally or unintentionally sent thousands of search queries through the application's search query URL within a couple of seconds. It looks like the server was overwhelmed, which caused the database to log the messages below.

    Server specs: 1.5 GB of dedicated memory.

    Are there any obvious misconfigurations here that I'm missing?

    mysql.log:

        121118 20:28:54 mysqld_safe Number of processes running now: 0
        121118 20:28:54 mysqld_safe mysqld restarted
        121118 20:28:55 [Warning] option 'slow_query_log': boolean value '/var/log/mysqld.slow.log' wasn't recognized. Set to OFF.
        121118 20:28:55 [Note] Plugin 'FEDERATED' is disabled.
        121118 20:28:55 InnoDB: The InnoDB memory heap is disabled
        121118 20:28:55 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121118 20:28:55 InnoDB: Compressed tables use zlib 1.2.3
        121118 20:28:55 InnoDB: Using Linux native AIO
        121118 20:28:55 InnoDB: Initializing buffer pool, size = 512.0M
        InnoDB: mmap(549453824 bytes) failed; errno 12
        121118 20:28:55 InnoDB: Completed initialization of buffer pool
        121118 20:28:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121118 20:28:55 [ERROR] Plugin 'InnoDB' init function returned error.
        121118 20:28:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121118 20:28:55 [ERROR] Unknown/unsupported storage engine: InnoDB
        121118 20:28:55 [ERROR] Aborting

    ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 13089
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks              (-x) unlimited

    httpd.conf:

        StartServers        10
        MinSpareServers      8
        MaxSpareServers     12
        ServerLimit        256
        MaxClients         256
        MaxRequestsPerChild 4000

    my.cnf:

        innodb_buffer_pool_size=512M
        # Increase InnoDB thread concurrency = 2 * [number of CPUs] + 2
        innodb_thread_concurrency=4
        # Set table cache
        table_cache=512
        # Set query cache size
        query_cache_size=64M
        query_cache_limit=2M
        # A sort buffer is used for optimizing sorting
        sort_buffer_size=8M
        # Log slow queries
        slow_query_log=/var/log/mysqld.slow.log
        long_query_time=2
        # performance tweak
        join_buffer_size=2M

    php.ini:

        memory_limit = 128M
        post_max_size = 8M

    Read the article

  • Xen domU mem-set issue

    - by Casper Langemeijer
    I'm running into a problem on my Xen 4.0.1 server (Debian Squeeze). My host has 32 GB of memory, and Domain-0 has 2048 MB assigned to it (scaled down with xm mem-set Domain-0 2048); top in Domain-0 confirms this. I created a virtual machine config file (using xen-tools) with the following options:

        memory = '512'
        maxmem = '2048'

    Both host and guest machines are running the standard 2.6.32-5-xen-amd64 Debian kernel. 'xm create' creates a virtual machine with 512 MB of memory, as expected. Then 'xm mem-set domU 1024' will not expand the memory to 1024 MB. Running 'xm mem-set domU 400' does set the memory to about 400 MB, and a subsequent 'xm mem-set domU 1024' expands the memory back only to 512 MB. Based on this you would say that xm ignores maxmem and silently caps it at 512, but in the output of xm top the MAXMEM column reads 2G; the MEM column will just not go over 512M. The output of xm list tells another story: it shows 1024 when I run 'xm mem-set domU 1024'. I've googled all the way around the internet for this issue and found that most people don't scale back Domain-0. I know I've seen a bug report about the issue I'm experiencing, but I can't find it anymore. Does anyone see what I'm doing wrong here? Hmm... I just upgraded my kernel to the one provided by Debian backports, and the issue is gone.

    Read the article

  • Mysqld shutting down by itself

    - by AJ Naidas
    I'm running a WordPress blog that gets medium-high traffic. It is hosted on an Ubuntu server with 2 GB memory, a 2-core processor, a 40 GB SSD disk, and 3 TB transfer. The problem is that MySQL shuts down by itself after an hour or two, and I have had to restart it each and every time this happens. I checked the logs and this is what I found:

        140612 6:48:14 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
        140612 6:48:14 [Note] Plugin 'FEDERATED' is disabled.
        140612 6:48:14 InnoDB: The InnoDB memory heap is disabled
        140612 6:48:14 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        140612 6:48:14 InnoDB: Compressed tables use zlib 1.2.3.4
        140612 6:48:14 InnoDB: Initializing buffer pool, size = 1.4G
        InnoDB: mmap(1502412800 bytes) failed; errno 12
        140612 6:48:14 InnoDB: Completed initialization of buffer pool
        140612 6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        140612 6:48:14 [ERROR] Plugin 'InnoDB' init function returned error.
        140612 6:48:14 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        140612 6:48:14 [ERROR] Unknown/unsupported storage engine: InnoDB
        140612 6:48:14 [ERROR] Aborting
        140612 6:48:14 [Note] /usr/sbin/mysqld: Shutdown complete

    Judging by this line:

        140612 6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool

    I suspect that this is a memory problem, but I would like to hear from the experts here before I conclude. Is this a lack-of-memory problem? Do you think the value of max_connections in my.cnf (currently 100) is a potential cause and needs increasing? TIA.

    Read the article

  • Computer is dying--what should I be looking for?

    - by Will
    Okay, I'm a bit knowledgeable with pooters and such, but I'm confused. My computer is dying slowly, and I'm not sure which part is causing this.

    Computer details: Vista, Dell machine, Intel Q6600, 2.4 GHz (quad core), standard memory and drive (unknown manufacturer).

    Symptoms: I would best describe the symptoms as memory corruption. After a couple of days on, I start getting applications crashing or failing to open for a lack of "resources". Sounds are corrupted. Onscreen text gets corrupted; the characters of text are garbled, not the pixels on the screen. Video memory seems untouched, as I haven't seen any misplaced pixels. Recently I've lost files on disk. I've also experienced errors reporting a supposed lack of disk space, even though I have fifty gigs free. There was one point where I couldn't get to the POST when booting up. After I cleaned everything (see next) this hasn't happened.

    Diagnostic steps: First thing I did was clean the case. There was a lot of dust buildup on heatsinks, so I cleaned all that up. No help. Next, I disconnected and reconnected everything, from power cables to memory (did not reseat the CPU). No change. Last, I ran the standard Vista memory diagnostics and ran chkdsk. Both reported no errors found. I have not run any POST tests, now that I think about it.

    I'm at a loss at this point. Disk appears fine, memory too. I'd expect motherboard issues to result in the thing not booting up, yet it does every time. What should I be looking at? What more can I do?

    Read the article

  • Entering and retrieving data from SQLite for an android List View

    - by Infiniti Fizz
    Hi all, I started learning Android development a few weeks ago and have gone through the developer.android.com tutorials etc. But now I have a problem. I'm trying to create an app which tracks the usage of each installed app. Therefore I'm pulling the names of all installed apps using the PackageManager and then trying to put them into an SQLite database table. I am using the Notepad tutorial's SQLite implementation, but I'm running into some problems that I have tried for days to solve. I have 2 classes, the DBHelper class and the actual ListActivity class. For some reason the app force closes when I try to run my fillDatabase() function, which gets all the app names from the PackageManager and tries to put them into the database:

        private void fillDatabase() {
            PackageManager manager = this.getPackageManager();
            List<ApplicationInfo> appList = manager.getInstalledApplications(0);
            for (int i = 0; i < appList.size(); i++) {
                mDbHelper.addApp(manager.getApplicationLabel(appList.get(i)).toString(), 0);
            }
        }

    addApp() is a function defined in my AppsDbHelper class and looks as follows:

        public long createApp(String name, int usage) {
            ContentValues initialValues = new ContentValues();
            initialValues.put(KEY_NAME, name);
            initialValues.put(KEY_USAGE, usage);
            return mDb.insert(DATABASE_TABLE, null, initialValues);
        }

    The database create statement is defined as follows:

        private static final String DATABASE_CREATE =
            "create table notes (_id integer primary key autoincrement, "
            + "title text not null, usage integer not null);";

    I have commented out all statements that follow fillDatabase() in the onCreate() method of the ListActivity, so I know that it is definitely the problem, but I don't know why. I am taking the app name and putting it into the KEY_NAME field of the row and putting 0 into the KEY_USAGE field (because initially my app will default the usage of each app to 0, i.e. not used yet). If my addApp() function doesn't take the usage and just puts KEY_NAME into the ContentValues and into the database, it seems to work fine, but I want a column for usage. Any ideas why it is not working? Have I overlooked something? (See the sketch after this entry.) Thanks for your time, InfinitiFizz

    Read the article
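
    For the Android question above, one hedged way to narrow down the force close (this is not the poster's code) is to perform the inserts with insertOrThrow inside a transaction and log each failure, so logcat shows the exact SQLite error instead of an unhandled crash. The table and column names ("notes", "title", "usage") are taken from the create statement above; the class, method, and log tag are assumptions:

        import java.util.List;

        import android.content.ContentValues;
        import android.content.Context;
        import android.content.pm.ApplicationInfo;
        import android.content.pm.PackageManager;
        import android.database.SQLException;
        import android.database.sqlite.SQLiteDatabase;
        import android.util.Log;

        // Hypothetical diagnostic variant of fillDatabase(); assumes db is the open
        // SQLiteDatabase held by the helper (e.g. obtained via getWritableDatabase()).
        class AppInsertDebug {
            static void fillDatabaseWithLogging(Context context, SQLiteDatabase db) {
                PackageManager manager = context.getPackageManager();
                List<ApplicationInfo> appList = manager.getInstalledApplications(0);
                db.beginTransaction();
                try {
                    for (ApplicationInfo info : appList) {
                        ContentValues values = new ContentValues();
                        values.put("title", manager.getApplicationLabel(info).toString());
                        values.put("usage", 0); // every app starts at zero recorded usage
                        try {
                            db.insertOrThrow("notes", null, values);
                        } catch (SQLException e) {
                            // Log the exact failure instead of letting the activity force close.
                            Log.e("AppTracker", "Insert failed for " + info.packageName, e);
                        }
                    }
                    db.setTransactionSuccessful();
                } finally {
                    db.endTransaction();
                }
            }
        }

    Besides surfacing the real error, wrapping the loop in a single transaction also makes bulk inserts of a few hundred rows noticeably faster than committing each row separately.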

  • WPF MVVM UserControl Binding "Container", dispose to avoid memory leak.

    - by user178657
    For simplicity: I have a window with a bindable UserControl:

        <UserControl Content="{Binding Path=BindingControl, UpdateSourceTrigger=PropertyChanged}">

    I have two user controls, ControlA and ControlB. Both UserControls have their own DataContext view model, ControlAViewModel and ControlBViewModel, and both inherit from a ViewModelBase:

        public abstract class ViewModelBase : DependencyObject, INotifyPropertyChanged, IDisposable ...

    The main window was added to IoC. To set the property of the bindable UserControl, I do:

        ComponentRepository.Resolve<MainViewWindow>().BindingControl = new ControlA();

    ControlA, in its DataContext, creates a DispatcherTimer to do "some stuff". Later on, I need to navigate elsewhere, so the other UserControl is loaded into the container:

        ComponentRepository.Resolve<MainViewWindow>().BindingControl = new ControlB();

    If I put a breakpoint in the "some stuff" that was in ControlA's DataContext, the DispatcherTimer is still running; i.e. loading a new UserControl into the bindable UserControl on the main window does not dispose/close/GC the DispatcherTimer that was created in the DataContext view model. I've looked around, and as stated by others, Dispose doesn't get called because it's not supposed to... :) Not all my UserControls have a DispatcherTimer, just a few that need to do some sort of "read and refresh" updates. Should I track these DispatcherTimer objects in the ViewModelBase that all UserControls inherit from, and manually stop/dispose them every time a new UserControl is loaded? Is there a better way?

    Read the article

  • In Haskell, will calling length on a Lazy ByteString force the entire string into memory?

    - by me2
    I am reading a large data stream using lazy ByteStrings, and want to know if at least X more bytes are available while parsing it. That is, I want to know if the ByteString is at least X bytes long. Will calling length on it result in the entire stream getting loaded, hence defeating the purpose of using a lazy ByteString? If yes, then the follow-up would be: how do I tell whether it has at least X bytes without loading the entire stream? EDIT: Originally I asked in the context of reading files, but I understand that there are better ways to determine file size. The ultimate solution I need, however, should not depend on the lazy ByteString's source.

    Read the article

  • How to write mmap input memory to O_DIRECT output file?

    - by Friedrich
    Why doesn't the following pseudo-code work (O_DIRECT results in EFAULT)?

        in_fd = open("/dev/mem");
        in_mmap = mmap(in_fd);
        out_fd = open("/tmp/file", O_DIRECT);
        write(out_fd, in_mmap, PAGE_SIZE);

    While the following does (no O_DIRECT):

        in_fd = open("/dev/mem");
        in_mmap = mmap(in_fd);
        out_fd = open("/tmp/file");
        write(out_fd, in_mmap, PAGE_SIZE);

    I guess it's something with virtual kernel pages versus virtual user pages, which cannot be translated in the write call? Best regards, Friedrich

    Read the article

  • App or apps like NetLimiter Monitor for Linux

    - by Nathaniel
    I find NetLimiter Monitor quite handy on Windows both for the realtime per-process bandwidth monitoring and the recorded data usage. What tool or tools can I use to replicate this functionality in Linux? I am okay using two separate apps for the bandwidth and the usage monitoring, but the usage monitoring is a must have.

    Read the article

  • If I want to play the same sound 10 times per second, must I have 10 copies of that sound in memory?

    - by mystify
    I have a sound that needs to get played 10 times per second. The sound is 1 second long. So it does overlap like 10 times. However, as far as I understand the Finch sound library, I would need 10 different instances of a sound in place so that I can play it 10 times at almost the same time. When I have just one instance, the sound would stop and play from the beginning on every iteration, but not overlap with itself. How to do that?

    Read the article

  • Memory allocation in detached NSThread to load an NSDictionary in background?

    - by mobibob
    I am trying to launch a background thread to retrieve XML data from a web service. I developed it synchronously, without threads, so I know that part works. Now I am ready to have a non-blocking service by spawning a thread to wait for the response and parse it. I created an NSAutoreleasePool inside the thread and release it at the end of the parsing. The code to spawn the thread and the thread itself are as follows.

    Spawn from main-loop code:

        [NSThread detachNewThreadSelector:@selector(spawnRequestThread:) toTarget:self withObject:url];

    Thread (inside 'self'):

        - (void)spawnRequestThread:(NSURL *)url {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            parser = [[NSXMLParser alloc] initWithContentsOfURL:url];
            [self parseContentsOfResponse];
            [parser release];
            [pool release];
        }

    The method parseContentsOfResponse fills an NSMutableDictionary with the parsed document contents. I would like to avoid moving the data around a lot and allocate it back in the main loop that spawned the thread, rather than making a copy. First, is that possible, and if not, can I simply pass in an allocated pointer from the main thread and allocate with the 'dictionaryWithDictionary' method? That just seems so inefficient. Are there preferred designs?

    Read the article

  • How can JVM arguments be passed to apps started through java webstart in MacOS?

    - by siva
    We have an application which is triggered from the browser. This application consumes around 800 MB of memory. It works perfectly when invoked from any browser on Windows. The same application, when triggered from MacOS, throws an out-of-memory exception once the application runs short of memory. Is there any way to increase the memory allocated to apps running in the Mac OS environment? Also, please let me know how JVM arguments can be passed to apps started through Java Web Start on MacOS. (See the sketch after this entry.)

    Read the article
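
    For the Web Start question above: JVM arguments for a Web Start application are normally declared in the JNLP file rather than on a command line, via the <j2se> element's max-heap-size, initial-heap-size and java-vm-args attributes. Whichever route is used, a small check like the one below, run from the launched application itself, shows whether the setting actually reached the JVM on the Mac. The class name is hypothetical; the calls are standard JDK APIs:

        import java.lang.management.ManagementFactory;

        // Reports the heap ceiling and the JVM arguments the launcher applied,
        // so you can tell whether a max-heap-size setting took effect.
        public class JvmSettingsCheck {
            public static void main(String[] args) {
                long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
                System.out.println("Max heap: " + maxHeapMb + " MB");
                System.out.println("JVM args: "
                        + ManagementFactory.getRuntimeMXBean().getInputArguments());
            }
        }

    If the reported max heap stays at the platform default despite the JNLP settings, the problem is in how the arguments are being delivered rather than in the application itself.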

  • C# How to perform a live xslt transformation on an in memory object?

    - by JL
    I have a function that takes 2 parameters: 1 = XML file, 2 = XSLT file. It performs a transformation and returns the resulting HTML. Here is the function:

        /// <summary>
        /// Will apply an XSLT style to any XML file and return the rendered HTML.
        /// </summary>
        /// <param name="xmlFileName">The file name of the XML document.</param>
        /// <param name="xslFileName">The file name of the XSL document.</param>
        /// <returns>The rendered HTML.</returns>
        public string TransformXml(string xmlFileName, string xslFileName)
        {
            var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None };
            var xd = new XmlDocument();
            xd.Load(xtr);
            var xslt = new System.Xml.Xsl.XslCompiledTransform();
            xslt.Load(xslFileName);
            var stm = new MemoryStream();
            xslt.Transform(xd, null, stm);
            stm.Position = 1;
            var sr = new StreamReader(stm);
            xtr.Close();
            return sr.ReadToEnd();
        }

    I want to change the function so that it does not accept a file for the XML, but instead just an object. The object is exactly compatible with the XSLT, if it were serialized to a file. But I don't want to have to serialize it to a file first. So to recap: keep the XSLT coming from a file, but the XML input should be an object I pass, and I would like to generate the XML from it without any file system interaction.

    Read the article

  • Update image in ASP.NET page from memory without refreshing the page.

    - by ZeeMan
    I have a command-line C# server and an ASP.NET client; they communicate via .NET Remoting. The server is sending the client still images grabbed from a webcam at, say, 2 frames a second. I can pass the images down to the client via a remote method call, and the image ends up in this client method:

        public void Update(System.Drawing.Image currentFrame)
        {
            // I need to display the currentFrame on my page.
        }

    How do I display the image on the page without saving it to the hard disk? If this were a WinForms app I could simply pass currentFrame to a PictureBox, i.e. pictureBox.Image = currentFrame, and the current frame on the screen would automatically get updated. The above Update method gets fired at least 2 times a second. How do I achieve the same effect with ASP.NET? Will the whole page need to be reloaded, etc.?

    Read the article
