Search Results

Search found 5259 results on 211 pages for 'interrupt handling'.


  • WCF Callback Faulted - what happens to the session?

    - by RemotecUk
    Just trying to get my head around what can happen when things go wrong with WCF. I have an implementation of my service contract declared with an InstanceContextMode of PerSession... [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession, ConcurrencyMode = ConcurrencyMode.Multiple)] The calls happen as follows: My client calls the server and calls GetServerUTC() to return the current UTC time of the server. This is a one-way call and the server will call the client back when it's ready (trivial in this instance to simply return the current time!). The server calls back to the client, and for test purposes in the callback implementation on the client I throw an exception. This goes unhandled in the client (for test purposes) and the client crashes and closes down. On the server I handle the Faulted event on the ICommunicationObject... obj.Faulted += new EventHandler(EventService_Faulted); Questions... Will this kill off the session for the current connection on the server? I presume I am free to do what I want in this method, e.g. logging or something, but should I do anything specific here to terminate the session, or will WCF handle this? From a best-practice viewpoint, what should I do when the callback is faulted? Does it mean "something has happened in your client" and that's the end of that, or is there something I am missing here? Additionally, are there any other faulted handlers I should be handling? I've done a lot of reading on WCF and it seems sort of vague on what to do when something goes wrong. At present I am implementing a state machine on my client which will manage the connection and determine if a user action can happen dependent on whether a connection exists to the server - or is this overkill? Any tips would be really appreciated ;)

    Read the article

  • Regular expression to remove original message from reply mail in Java?

    - by ravi ravi
    In my forum, users can reply through email, and I handle the mails from their replies. When they reply, the original message gets appended. I want to get only the reply message, not the original message. I have to write a regular expression for Gmail and Hotmail. I wrote a regex for Gmail as follows: \n.wrote:(?s).--End of Post-- It removes the original message except the date, and I want to remove the date as well. Before removing the original message: " hi 33 On Tue, May 11, 2010 at 4:18 PM, Mmmmm, Rrrrr wrote: The following update has been posted to this discussion: test as user 222 [$MESSAGE_SIGNATURE_HEADER$] --End of Post-- " When I use the above regex it filters as follows: " hi 33 On Tue, May 11, 2010 at 4:18 PM, Mmmmm, Rrrrr " Here I want only the actual message 'hi 33', not the date. How can I filter out the date using the above regex? I also need a regex for Hotmail. I'd appreciate any reply. Thanks in advance.
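
    One way to drop the date as well is to anchor the match at the attribution line itself, so that everything from "On <date>, <name> wrote:" through the end-of-post marker is removed in one go. Below is a minimal Java sketch of that idea; the "--End of Post--" marker and the Gmail-style attribution line are assumptions taken from the sample above, and a Hotmail reply would need its own pattern.

      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class ReplyExtractor {
          // Match from the Gmail attribution line ("On <date>, <name> wrote:",
          // assumed from the sample) through the "--End of Post--" marker.
          // DOTALL lets the lazy quantifiers cross line breaks in the quoted block.
          private static final Pattern GMAIL_QUOTE = Pattern.compile(
                  "\\n?On .{0,200}?wrote:.*?--End of Post--", Pattern.DOTALL);

          public static String stripQuotedReply(String mailBody) {
              Matcher m = GMAIL_QUOTE.matcher(mailBody);
              return m.replaceAll("").trim();
          }

          public static void main(String[] args) {
              String body = "hi 33\nOn Tue, May 11, 2010 at 4:18 PM, Mmmmm, Rrrrr wrote:\n"
                      + "The following update has been posted to this discussion:\n"
                      + "test as user 222\n--End of Post--";
              System.out.println(stripQuotedReply(body)); // prints "hi 33"
          }
      }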

    Read the article

  • Logging with Quartz.net

    - by Young Ninja
    I will shamelessly state that I have little experience with Log4Net... I only just installed it, but it won't capture log events from Quartz.net, which is a scheduling library. Apparently Quartz.net uses Commons Logging and that needs to be configured to point to my Log4Net settings. Unfortunately, it doesn't seem to work. Help is appreciated: <configSections> ... <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <section name="commonLogging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging"/> </configSections> <!-- Log4net error handling --> <log4net> <appender name="LogFileAppender" type="log4net.Appender.FileAppender"> <param name="File" value="Admin/LabSlice.log" /> <param name="AppendToFile" value="true" /> <layout type="log4net.Layout.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c %m%n" /> </layout> </appender> <root> <level value="INFO" /> <appender-ref ref="LogFileAppender" /> </root> </log4net> <!-- Commons logging (Quart.net logs) --> <commonLogging> <logging> <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4net"> <arg key="configType" value="INLINE" /> </factoryAdapter> </logging> </commonLogging>

    Read the article

  • Is it possible to disable user sessions for guests in Joomla 1.5?

    - by WebolizeR
    Hi; Is it possible to disable session handling in Joomla 1.5 for guests. I donot use user system in the frontend, i assumed that it's not needed to run queries like below: Site will use APC or Memcache as caching system under heavy load, so it's important for me tHanks for your comments # DELETE FROM jos_session WHERE ( time < '1274709357' ) # SELECT * FROM jos_session WHERE session_id = '70c247cde8dcc4dad1ce111991772475' # UPDATE jos_session SET time='1274710257',userid='0',usertype='',username='',gid='0',guest='1',client_id='0',data='__default|a:8:{s:15:\"session.counter\";i:5;s:19:\"session.timer.start\";i:1274709740;s:18:\"session.timer.last\";i:1274709749;s:17:\"session.timer.now\";i:1274709754;s:22:\"session.client.browser\";s:88:\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3\";s:8:\"registry\";O:9:\"JRegistry\":3:{s:17:\"_defaultNameSpace\";s:7:\"session\";s:9:\"_registry\";a:1:{s:7:\"session\";a:1:{s:4:\"data\";O:8:\"stdClass\":0:{}}}s:7:\"_errors\";a:0:{}}s:4:\"user\";O:5:\"JUser\":19:{s:2:\"id\";i:0;s:4:\"name\";N;s:8:\"username\";N;s:5:\"email\";N;s:8:\"password\";N;s:14:\"password_clear\";s:0:\"\";s:8:\"usertype\";N;s:5:\"block\";N;s:9:\"sendEmail\";i:0;s:3:\"gid\";i:0;s:12:\"registerDate\";N;s:13:\"lastvisitDate\";N;s:10:\"activation\";N;s:6:\"params\";N;s:3:\"aid\";i:0;s:5:\"guest\";i:1;s:7:\"_params\";O:10:\"JParameter\":7:{s:4:\"_raw\";s:0:\"\";s:4:\"_xml\";N;s:9:\"_elements\";a:0:{}s:12:\"_elementPath\";a:1:{i:0;s:74:\"C:\xampp\htdocs\sites\iv.mynet.com\libraries\joomla\html\parameter\element\";}s:17:\"_defaultNameSpace\";s:8:\"_default\";s:9:\"_registry\";a:1:{s:8:\"_default\";a:1:{s:4:\"data\";O:8:\"stdClass\":0:{}}}s:7:\"_errors\";a:0:{}}s:9:\"_errorMsg\";N;s:7:\"_errors\";a:0:{}}s:13:\"session.token\";s:32:\"a2b19c7baf223ad5fd2d5503e18ed323\";}' WHERE session_id='70c247cde8dcc4dad1ce111991772475'

    Read the article

  • Ibatis startBatch() only works with SqlMapClient's own start and commit transactions, not with Spring

    - by Brian
    Hi, I'm finding that even though I have code wrapped by Spring transactions, and it commits/rolls back when I would expect, in order to make use of JDBC batching when using Ibatis and Spring I need to use explicit SqlMapClient transaction methods. I.e. this does batching as I'd expect: dao.getSqlMapClient().startTransaction(); dao.getSqlMapClient().startBatch(); int i = 0; for (MyObject obj : allObjects) { dao.storeChange(obj); i++; if (i % DB_BATCH_SIZE == 0) { dao.getSqlMapClient().executeBatch(); dao.getSqlMapClient().startBatch(); } } dao.getSqlMapClient().executeBatch(); dao.getSqlMapClient().commitTransaction(); but if I don't have the opening and closing transaction statements, and rely on Spring to manage things (which is what I want to do!), batching just doesn't happen. Given that Spring does otherwise seem to be handling its side of the bargain regarding transaction management, can anyone advise on any known issues here? (Database is MySQL; I'm aware of the issues regarding its JDBC pseudo-batch approach with INSERT statement rewriting, that's definitely not an issue here)
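
    For reference, an approach that is often suggested for this combination is to run the batch inside a SqlMapClientCallback, so that startBatch()/executeBatch() execute on the same connection that Spring's transaction manager has bound to the thread. The sketch below is illustrative only: the DAO base class, the @Transactional wiring, the "storeChange" statement id and the batch size are assumptions, not something verified against the poster's setup.

      import java.sql.SQLException;
      import java.util.List;

      import com.ibatis.sqlmap.client.SqlMapExecutor;
      import org.springframework.orm.ibatis.SqlMapClientCallback;
      import org.springframework.orm.ibatis.support.SqlMapClientDaoSupport;
      import org.springframework.transaction.annotation.Transactional;

      public class ChangeDao extends SqlMapClientDaoSupport {

          private static final int DB_BATCH_SIZE = 50;

          // The whole loop runs in one Spring-managed transaction, and the
          // callback's SqlMapExecutor works on the connection bound to that
          // transaction, so the JDBC batch is not split across connections.
          @Transactional
          public void storeChanges(final List<?> allObjects) {
              getSqlMapClientTemplate().execute(new SqlMapClientCallback() {
                  public Object doInSqlMapClient(SqlMapExecutor executor) throws SQLException {
                      executor.startBatch();
                      int i = 0;
                      for (Object obj : allObjects) {
                          executor.update("storeChange", obj); // placeholder statement id
                          if (++i % DB_BATCH_SIZE == 0) {
                              executor.executeBatch();
                              executor.startBatch();
                          }
                      }
                      executor.executeBatch();
                      return null;
                  }
              });
          }
      }

    Whether this actually restores JDBC batching still needs to be confirmed against the driver's behaviour, especially given the MySQL statement-rewriting caveat mentioned above.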

    Read the article

  • WF performance with new 20,000 persisted workflow instances each month

    - by Nikola Stjelja
    Windows Workflow Foundation is reported to be slow when persisting workflow instances. I'm planning a project whose business layer will be based on WF-exposed WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish. What I was led to believe is that, given how slow WF persistence is, my problem would be unattainable for performance reasons. I have the following questions: Is this true? Will my performance be crap with that load (given WF persistence speed limitations)? How can I solve the problem? We currently have two possible solutions: 1. Each new business process request (e.g. "Give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database. 2. Have only a small number of workflow instances up at any given time, without any persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, and having that workflow handle every business process request instance in the system that is at that current step (e.g. I'm submitting my driver's license request form, which is step one... we have 100 cases of that, and my step-one workflow will handle every case simultaneously). I'm very interested in a solution to this problem. If you want to discuss it, please feel free to mail me at [email protected]

    Read the article

  • Java - Save video stream from Socket to File

    - by Alex
    I use my Android application to stream video from the phone camera to my PC server and need to save it into a file on the HDD. The file is created and the stream is successfully saved, but the resulting file cannot be played with any video player (GOM, KMP, Windows Media Player, VLC etc.) - no picture, no sound, only playback errors. I tested my Android application on the phone and can say that in that case the captured video is successfully stored on the phone's SD card and, after transferring it to the PC, plays without errors, so my code is correct. In the end, I realized that the problem is in the video container: data is streamed from the phone in MP4 format and stored in *.mp4 files on the PC, and in this case the file may be unsuitable for playback with video players. Can anyone suggest how to correctly save streaming video to a file? Here is my code that processes and stores the stream data (without error handling, to simplify): // getOutputMediaFile() returns a new File object DataInputStream in = new DataInputStream (server.getInputStream()); FileOutputStream videoFile = new FileOutputStream(getOutputMediaFile()); int len; byte buffer[] = new byte[8192]; while((len = in.read(buffer)) != -1) { videoFile.write(buffer, 0, len); } videoFile.close(); server.close(); Also, I would appreciate it if someone could talk about the possible "pitfalls" in saving media streams. Thank you, I hope for your help! Alex.

    Read the article

  • IE: Undocumented "cache" attribute defined for input elements?

    - by aefxx
    Hi everybody. I've stumbled upon a strange behavior in IE (6/7/8) that drives me nuts. Given the following markup: <input type="text" value="foo" class="bar" cache="yes" send="no" /> Please note that the cache attribute is set to yes. However, IE somehow manages to change the attribute's value to cache="cache" when rendering the DOM. So I wonder, is there an undocumented feature that I'm not aware of? I've googled for about an hour now but couldn't find any info on this (not even on MSDN). NOTE: I'm aware that adding custom attributes is not standards-compliant and that boolean attributes should be written as attribute="attribute". Nevertheless, I have to cope with these as they were introduced long before I joined the team. These custom attributes are used in conjunction with JavaScript to provide a more user-friendly approach to form handling (and it works out quite well with Firefox/Safari/Opera/Chrome). I know that I could simply convert these custom attributes to the data-* attributes that will be introduced with HTML5, but that would take me several hours of extra work - sigh. Hope I made myself clear. Thanks in advance.

    Read the article

  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access to the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block) but my code doesn't crash and it runs fast enough. At work my multithreaded code needs to be tighter, only locking/synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this: EnterCriticalSection(&m_Crit4); m_bSomeVariable = true; LeaveCriticalSection(&m_Crit4); Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction, and since context switches occur on an instruction boundary then there's no need for synchronising this operation with a critical section. I did some more research online to see whether this operation needed synchronisation, and I came up with two scenarios in which it did: The CPU implements out of order execution or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and The int is not 4-byte aligned. I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to this variable using memory barriers, ensuring that the variable is always completely written/read to the main system memory before using it. Number 2 I cannot verify; I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not, do you need to use a combination of instructions? That would introduce the problem. So... QUESTION 1: Does using the "volatile" keyword (implicitly using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte/8-byte variable on x86/x64 between read/write operations? QUESTION 2: Is there an explicit requirement that the variable be 4-byte/8-byte aligned? I did some more digging into our code and the variables defined in the class: class CExample { private: CRITICAL_SECTION m_Crit1; // Protects variable a CRITICAL_SECTION m_Crit2; // Protects variable b CRITICAL_SECTION m_Crit3; // Protects variable c CRITICAL_SECTION m_Crit4; // Protects variable d // ... }; Now, to me this seems excessive. I thought critical sections synchronised threads within a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect; if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here; are named mutexes shared across all processes, or only processes of the same name? QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes?
I have had a look at other synchronisation objects (semaphores and spinlocks), are they better suited here? QUESTION 4: Where are critical sections/mutexes/semaphores/spinlocks best suited? That is, which synchronisation problem should they be applied to. Is there a vast performance penalty for choosing one over the other? And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only a multi-core multithreaded environment. So, QUESTION 5: Is this wrong, or if not, why is it right? Thanks in advance for any responses :)

    Read the article

  • log4j vs. System.out.println - logger advantages?

    - by wishi_
    Hi! I'm new to using log4j in a project. A fellow programmer told me that using System.out.println is considered bad style and that log4j is something like the standard for logging nowadays. We do lots of JUnit testing - System.out stuff turns out to be harder to test. Therefore I began using log4j for a Console controller class that just handles command-line parameters. // some logger config org.apache.log4j.BasicConfigurator.configure(); Logger logger = LoggerFactory.getLogger(Console.class); Category cat = Category.getRoot(); Seems to work: logger.debug("String"); Produces: 1 [main] DEBUG project.prototype.controller.Console - String I have two questions regarding this: From my basic understanding, using this logger should give me convenient options to write a log file with timestamps - instead of spamming the console - if debug mode is enabled on the logger? Why is System.out.println harder to test? I searched Stack Overflow and found a testing recipe. So I wonder what kind of advantage I really get by using log4j.
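
    On the first question: log4j can do exactly that, because the output destination and formatting live in the appender/layout configuration rather than in the calling code. Below is a minimal sketch of a programmatic log4j 1.x setup that writes timestamped entries to a file instead of the console; the file name and pattern are illustrative only.

      import java.io.IOException;
      import org.apache.log4j.FileAppender;
      import org.apache.log4j.Level;
      import org.apache.log4j.Logger;
      import org.apache.log4j.PatternLayout;

      public class ConsoleLoggingSetup {
          public static void main(String[] args) throws IOException {
              // %d prepends a timestamp to every entry.
              PatternLayout layout = new PatternLayout("%d [%t] %-5p %c - %m%n");
              // Illustrative file name; append mode keeps older runs.
              FileAppender file = new FileAppender(layout, "console-controller.log", true);

              Logger root = Logger.getRootLogger();
              root.addAppender(file);
              root.setLevel(Level.DEBUG); // raise to INFO to silence debug output

              Logger logger = Logger.getLogger(ConsoleLoggingSetup.class);
              logger.debug("String"); // goes to the file, not to System.out
          }
      }

    The same appender and layout can of course be declared in log4j.properties or log4j.xml instead, which keeps the wiring out of the code entirely.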

    Read the article

  • What can cause Windows to unhook a low level (global) keyboard hook?

    - by Davy8
    We have some global keyboard hooks installed via SetWindowsHookEx with WH_KEYBOARD_LL that appear to randomly get unhooked by Windows. We verified that the hook was no longer attached because calling UnhookWindowsHookEx on the handle returns false. (We also verified that it returns true when it was working properly.) There doesn't seem to be a consistent repro. I've heard that hooks can get unhooked due to timeouts or exceptions getting thrown, but I've tried both just letting it sit on a breakpoint in the handling method for over a minute, as well as just throwing a random exception (C#), and it still appears to work. In our callback we quickly post to another thread, so that probably isn't the issue. I've read about solutions in Windows 7 for setting the timeout higher in the registry, because Windows 7 is apparently more aggressive about the timeouts (we're all running Win7 here, so not sure if this occurs on other OSes), but that doesn't seem like an ideal solution. I've considered just having a background thread running to refresh the hook every once in a while, which is hackish, but I don't know of any real negative consequences of doing that, and it seems better than changing a global Windows registry setting. Any other suggestions or solutions? The delegates are not being GC'd since they're static members, which is one cause that I've read about.

    Read the article

  • Best way to fork SVN project with Git

    - by Jeremy Thomerson
    I have forked an SVN project using Git because I needed to add features that they didn't want. But at the same time, I wanted to be able to continue pulling in features or fixes that they added to the upstream version down into my fork (where they don't conflict). So, I have my Git project with the following branches: master - the branch I actually build and deploy from feature_* - feature branches where I work or have worked on new things, which I then merge to master when complete vendor-svn - my local-only git-svn branch that allows me to "git svn rebase" from their svn repo vendor - my local branch that i merge vendor-svn into. then i push this (vendor) branch to the public git repo (github) So, my flow is something like this: git checkout vendor-svn git svn rebase git checkout vendor git merge vendor-svn git push origin vendor Now, the question comes here: I need to review each commit that they made (preferably individually since at this point I'm about twenty commits behind them) before merging them into master. I know that I could run git checkout master; git merge vendor, but this would pull in all changes and commit them, without me being able to see if they conflict with what I need. So, what's the best way to do this? Git seems like a great tool for handling forks of projects since you can pull and push from multiple repos - I'm just not experienced with it enough to know the best way of doing this. Here's the original SVN project I'm talking about: https://appkonference.svn.sourceforge.net/svnroot/appkonference My fork is at github.com/jthomerson/AsteriskAudioKonf (sorry - I couldn't make it a link since I'm a new user here)

    Read the article

  • GLSL shader render to texture not saving alpha value

    - by quadelirus
    I am rendering to a texture using a GLSL shader and then sending that texture as input to a second shader. For the first texture I am using RGB channels to send color data to the second GLSL shader, but I want to use the alpha channel to send a floating point number that the second shader will use as part of its program. The problem is that when I read the texture in the second shader the alpha value is always 1.0. I tested this in the following way: at the end of the first shader I did this: gl_FragColor(r, g, b, 0.1); and then in the second texture I read the value of the first texture using something along the lines of vec4 f = texture2D(previous_tex, pos); if (f.a != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } No pixels in my output are black, whereas if I change the above code to read gl_FragColor(r, g, 0.1, 1.0); //Notice I'm now sending 0.1 for blue and in the second shader vec4 f = texture2D(previous_tex, pos); if (f.b != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } All the appropriate pixels are black. This means that for some reason when I set the alpha value to something other than 1.0 in the first shader and render to a texture, it is still seen as being 1.0 by the second shader. Before I render to texture I glDisable(GL_BLEND); It seems pretty clear to me that the problem has to do with OpenGL handling alpha values in some way that isn't obvious to me since I can use the blue channel in the way I want, and figured someone out there will instantly see the problem.

    Read the article

  • How do I avoid repetition in Java ResourceBundle strings?

    - by Trejkaz
    We had a lot of strings which contained the same sub-string, from sentences about checking the log or how to contact support, to branding-like strings containing the company or product name. The repetition was causing a few issues for ourselves (primarily typos or copy/paste errors) but it also causes issues in that it increases the amount of text our translator has to translate. The solution I came up with went something like this: public class ExpandingResourceBundleControl extends ResourceBundle.Control { public static final ResourceBundle.Control EXPANDING = new ExpandingResourceBundleControl(); private ExpandingResourceBundleControl() { } @Override public ResourceBundle newBundle(String baseName, Locale locale, String format, ClassLoader loader, boolean reload) throws IllegalAccessException, InstantiationException, IOException { ResourceBundle inner = super.newBundle(baseName, locale, format, loader, reload); return inner == null ? null : new ExpandingResourceBundle(inner, loader); } } ExpandingResourceBundle delegates to the real resource bundle but performs conversion of {{this.kind.of.thing}} to look up the key in the resources. Every time you want to get one of these, you have to go: ResourceBundle.getBundle("com/acme/app/Bundle", EXPANDING); And this works fine -- for a while. What eventually happens is that some new code (in our case autogenerated code which was spat out of Matisse) looks up the same resource bundle without specifying the custom control. This appears to be non-reproducible if you write a simple unit test which calls it with and then without, but it occurs when the application is run for real. Somehow the cache inside ResourceBundle ejects the good value and replaces it with the broken one. I am yet to figure out why and Sun's jar files were compiled without debug info so debugging it is a chore. My questions: Is there some way of globally setting the default ResourceBundle.Control that I might not be aware of? That would solve everything rather elegantly. Is there some other way of handling this kind of thing elegantly, perhaps without tampering with the ResourceBundle classes at all?
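
    The delegating wrapper itself is not shown above, so here is a minimal sketch of what the {{key}} expansion step could look like. The {{other.key}} syntax comes from the post; the class shape, the regex and the lack of cycle detection are assumptions for illustration, not the poster's actual implementation (which also takes a ClassLoader).

      import java.util.Enumeration;
      import java.util.MissingResourceException;
      import java.util.ResourceBundle;
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class ExpandingResourceBundle extends ResourceBundle {
          private static final Pattern REF = Pattern.compile("\\{\\{([^}]+)\\}\\}");
          private final ResourceBundle inner;

          public ExpandingResourceBundle(ResourceBundle inner) {
              this.inner = inner;
          }

          @Override
          protected Object handleGetObject(String key) {
              Object value;
              try {
                  value = inner.getObject(key);
              } catch (MissingResourceException e) {
                  return null; // let ResourceBundle fall back to a parent bundle
              }
              return value instanceof String ? expand((String) value) : value;
          }

          // Replaces each {{other.key}} reference with that key's (expanded) value.
          // Note: no protection against circular references in this sketch.
          private String expand(String value) {
              Matcher m = REF.matcher(value);
              StringBuffer sb = new StringBuffer();
              while (m.find()) {
                  m.appendReplacement(sb, Matcher.quoteReplacement(getString(m.group(1))));
              }
              m.appendTail(sb);
              return sb.toString();
          }

          @Override
          public Enumeration<String> getKeys() {
              return inner.getKeys();
          }
      }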

    Read the article

  • What Language Feature Can You Just Not Live Without?

    - by akdom
    I always miss Python's built-in doc strings when working in other languages. I know this may seem odd, but it allows me to cut down significantly on excess comments while still providing a clean description of my code and any interfaces therein. What Language Feature Can You Just Not Live Without? If someone were building a new language and they asked you what one feature they absolutely must include, what would it be? This is getting kind of long, so I figured I'd do my best to summarize: Paraphrased to be language-agnostic. If you know of a language which uses something mentioned, please add it in the parentheses to the right of the feature. And if you have a better format for this list, by all means try it out (if it doesn't seem to work, I'll just roll back). Regular Expressions ~ torial (Perl) Garbage Collection ~ SaaS Developer (Python, Perl, Ruby, Java, .NET) Anonymous Functions ~ Vinko Vrsalovic (Lisp, Python) Arithmetic Operators ~ Jeremy Ross (Python, Perl, Ruby, Java, C#, Visual Basic, C, C++, Pascal, Smalltalk, etc.) Exception Handling ~ torial (Python, Java, .NET) Pass By Reference ~ Chris (Python) Unified String Format ~ WalloWizard (C#) Generics ~ torial (Python, Java, C#) Integrated Query Equivalent to LINQ ~ Vyrotek (C#) Namespacing ~ Garry Shutler () Short Circuit Logic ~ Adam Bellaire ()

    Read the article

  • Working with images (CGImage), exif data, and file icons

    - by Nick
    What I am trying to do (under 10.6).... I have an image (jpeg) that includes an icon in the image file (that is, you see an icon based on the image in the file, as opposed to a generic jpeg icon in file open dialogs in a program). I wish to edit the exif metadata and save it back to the image in a new file. Ideally I would like to save this back to an exact copy of the file (i.e. preserving any custom embedded icons created etc.), however, in my hands the icon is lost. My code (some bits removed for ease of reading): // set up source ref I THINK THE PROBLEM IS HERE - NOT GRABBING THE INITIAL DATA CGImageSourceRef source = CGImageSourceCreateWithURL( (CFURLRef) URL,NULL); // snag metadata NSDictionary *metadata = (NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source,0,NULL); // make metadata mutable NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy] autorelease]; // grab exif NSMutableDictionary *EXIFDictionary = [[[metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy] autorelease]; << edit exif >> // add back edited exif [metadataAsMutable setObject:EXIFDictionary forKey:(NSString *)kCGImagePropertyExifDictionary]; // get source type CFStringRef UTI = CGImageSourceGetType(source); // set up write data NSMutableData *data = [NSMutableData data]; CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)data,UTI,1,NULL); //add the image plus modified metadata PROBLEM HERE? NOT ADDING THE ICON CGImageDestinationAddImageFromSource(destination,source,0, (CFDictionaryRef) metadataAsMutable); // write to data BOOL success = NO; success = CGImageDestinationFinalize(destination); // save data to disk [data writeToURL:saveURL atomically:YES]; //cleanup CFRelease(destination); CFRelease(source); I don't know if this is really a question of image handling, file handling, post-save processing (I could use sip), or me just being thick (I suspect the last). Nick

    Read the article

  • How do I fix my "Stream closed" error in spring-ws?

    - by mcherm
    I have working code using the spring-ws library to respond to soap requests. I moved this code to a different project (I'm merging projects) and now it is failing. I would like to figure out the reason for the failure. The symptom I get is this: when the HTTP request arrives, spring begins handling the call. Then I get the following exception: org.springframework.ws.soap.saaj.SaajSoapEnvelopeException: Could not access envelope: java.io.IOException: Stream closed; nested exception is javax.xml.soap.SOAPException: java.io.IOException: Stream closed at org.springframework.ws.soap.saaj.SaajSoapMessage.getEnvelope(SaajSoapMessage.java:109) <<more lines that don't matter>> Caused by: java.io.IOException: Stream closed at java.io.PushbackInputStream.ensureOpen(PushbackInputStream.java:57) at java.io.PushbackInputStream.read(PushbackInputStream.java:116) at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source) at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source) at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl.parse(Unknown Source) at org.apache.axis.encoding.DeserializationContext.parse(DeserializationContext.java:227) at org.apache.axis.SOAPPart.getAsSOAPEnvelope(SOAPPart.java:696) ... 30 more Examining it in a debugger, it appears that spring successfully handles HTTP headers, but then when it begins to process the contents of the SOAP message itself, it chokes when reading the very first character of the body. Some googling for the error message suggests that the problem is that a PushbackInputStream which is apparently used for reading from the socket is read twice or perhaps has close() called and then is read afterward. It is happening inside of spring-ws, not my code, and since it worked fine before I moved the code to a new project it must have something to do with versions of spring, or something it uses like axis or xerces. But I can't find any differences in versions of these! Has anyone encountered this error before? Or do you have any suggestions of approaches I could take in troubleshooting this?
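
    One way to narrow this down (a diagnostic sketch, not a fix taken from the post): the stack trace shows org.apache.axis classes underneath SaajSoapMessage, so it can be worth printing which SAAJ MessageFactory implementation each project resolves at runtime and comparing the old project against the new one.

      import javax.xml.soap.MessageFactory;
      import javax.xml.soap.SOAPException;

      public class SaajImplCheck {
          public static void main(String[] args) throws SOAPException {
              // Prints the SAAJ MessageFactory implementation picked up from the
              // classpath (or the javax.xml.soap.MessageFactory system property).
              MessageFactory factory = MessageFactory.newInstance();
              System.out.println(factory.getClass().getName());
          }
      }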

    Read the article

  • Make seems to think a prerequisite is an intermediate file, removes it

    - by James
    For starters, this exercise in GNU make was admittedly just that: an exercise rather than a practicality, since a simple bash script would have sufficed. However, it brought up interesting behavior I don't quite understand. I've written a seemingly simple Makefile to handle generation of SSL key/cert pairs as necessary for MySQL. My goal was for make <name> to result in <name>-key.pem, <name>-cert.pem, and any other necessary files (specifically, the CA pair if any of it is missing or needs updating, which leads into another interesting follow-up exercise of handling reverse deps to reissue any certs that had been signed by a missing/updated CA cert). After executing all rules as expected, make seems to be too aggressive at identifying intermediate files for removal; it removes a file I thought would be "safe" since it should have been generated as a prereq to the main rule I'm invoking. (Humbly translated, I likely have misinterpreted make's documented behavior to suit my expectation, but don't understand how. ;-) Edited (thanks, Chris!) Adding %-cert.pem to .PRECIOUS does, of course, prevent the deletion. (I had been using the wrong syntax.) Makefile: OPENSSL = /usr/bin/openssl # Corrected, thanks Chris! .PHONY: clean default: ca clean: rm -I *.pem %: %-key.pem %-cert.pem @# Placeholder (to make this implicit create a rule and not cancel one) Makefile: @# Prevent the catch-all from matching Makefile ca-cert.pem: ca-key.pem $(OPENSSL) req -new -x509 -nodes -days 1000 -key ca-key.pem $@ %-key.pem: $(OPENSSL) genrsa 2048 $@ %-cert.pem: %-csr.pem ca-cert.pem ca-key.pem $(OPENSSL) x509 -req -in $ $@ Output: $ make host1 /usr/bin/openssl genrsa 2048 ca-key.pem /usr/bin/openssl req -new -x509 -nodes -days 1000 -key ca-key.pem ca-cert.pem /usr/bin/openssl genrsa 2048 host1-key.pem /usr/bin/openssl req -new -days 1000 -nodes -key host1-key.pem host1-csr.pem /usr/bin/openssl x509 -req -in host1-csr.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 host1-cert.pem rm host1-csr.pem host1-cert.pem This is driving me crazy, and I'll happily try any suggestions and post results. If I'm just totally noobing out on this one, feel free to jibe away. You can't possibly hurt my feelings. :)

    Read the article

  • Advice on coding Perl to work like PHP

    - by user272273
    I am first and foremost a perl coder, but like many, also code in PHP for client work, especially web apps. I am finding that I am duplicating a lot of my projects in the two languages, but using different paradigms (e.g. for handling cgi input and session data) or functions. What I would like to do is start to code my Perl in a way which is structured more like PHP, so that I a) am keeping one paradigm in my head b) can more quickly port over scripts from one to the other Specifically, I am asking if people could advise how you might do the following in perl? 1) Reproduce the functionality of $_SESSION, $_GET etc. e.g. by wrapping up the param() method of CGI.pm, a session library? 2) Templating library that is similar to PHP I am used to mixing my code and HTML in the PHP convention. e.g. i <h1>HTML Code here</h1> <? print "Hello World\b"; ?> Can anybody advise on which perl templating engine (and possibly configuration) will allow me to code similarly? 3) PHP function library Anybody know of a library for perl which reproduces a lot of the php inbuilt functions?

    Read the article

  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables: User Basic Data (unique) [userid] [name] [etc] User Collection (one to one) [userid] [game] User Recorded Plays (many to many) [userid] [game] [scenario] [etc] Game Basic Data (unique) [game] [total_scenarios] I would like to output a table that shows the collection play completion percentage for the Top 10 users in descending order of %: Output Table [userid] [collection_completion] 3 95% 1 81% 24 68% etc etc In my mind, the calculation sequence for ONE USER is: grab user's total owned scenarios from User Collection joined with Game Basic Data and COUNT(gbd.total_scenarios) grab all recorded plays by COUNT(DISTINCT scenario) for that user Divide all recorded plays by total owned scenarios So that's 2 queries and a little PHP massage at the end. For a list of users sorted by completion percentage things get a little more complicated. I figure I could grab all users' collection totals in one query, and all users recorded plays in another, and then do the calcs and sort the final array in PHP, but it seems like overkill to potentially be doing all that for 1000+ users when I only ever want the Top 10. Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP handling the bulk of the work the way to go in this case?

    Read the article

  • Does WPF break an Entity Framework ObjectContext?

    - by David Veeneman
    I am getting started with Entity Framework 4, and I am getting ready to write a WPF demo app to learn EF4 better. My LINQ queries return IQueryable<T>, and I know I can drop those into an ObservableCollection<T> with the following code: IQueryable<Foo> fooList = from f in Foo orderby f.Title select f; var observableFooList = new ObservableCollection<Foo>(fooList); At that point, I can set the appropriate property on my view model to the observable collection, and I will get WPF data binding between the view and the view model property. Here is my question: Do I break the ObjectContext when I move my foo list to the observable collection? Or put another way, assuming I am otherwise handling my ObjectContext properly, will EF4 properly update the model (and the database)? The reason why I ask is this: NHibernate tracks objects at the collection level. If I move an NHibernate IList<T> to an observable collection, it breaks NHibernate's change tracking mechanism. That means I have to do some very complicated object wrapping to get NHibernate to work with WPF. I am looking at EF4 as a way to dispense with all that. So, to get EF4 working with WPF, is it as simple as dropping my IQueryable<T> results into an ObservableCollection<T>. Does that preserve change-tracking on my EDM entity objects? Thanks for your help.

    Read the article

  • java time check API

    - by ring bearer
    I am sure this has been done 1000 times in 1000 different places. The question is: is there a better/standard/faster way to check if the current "time" is between two time values given in hh:mm:ss format? For example, my big business logic should not run between 18:00:00 and 18:30:00. So here is what I had in mind: public static boolean isCurrentTimeBetween(String starthhmmss, String endhhmmss) throws ParseException{ DateFormat hhmmssFormat = new SimpleDateFormat("yyyyMMddhh:mm:ss"); Date now = new Date(); String yyyMMdd = hhmmssFormat.format(now).substring(0, 8); return(hhmmssFormat.parse(yyyMMdd+starthhmmss).before(now) && hhmmssFormat.parse(yyyMMdd+endhhmmss).after(now)); } Example test case: String doNotRunBetween="18:00:00,18:30:00";//read from props file String[] hhmmss = doNotRunBetween.split(","); if(isCurrentTimeBetween(hhmmss[0], hhmmss[1])){ System.out.println("NOT OK TO RUN"); }else{ System.out.println("OK TO RUN"); } What I am looking for is code that is better in performance, in looks, and in correctness. What I am not looking for: third-party libraries, exception handling debate, variable naming conventions, method modifier issues.
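
    For comparison, one alternative that avoids Date/SimpleDateFormat entirely is to reduce both boundaries and the current time to seconds since midnight and compare plain integers. This is only a sketch of that idea; it assumes the window never crosses midnight, as in the 18:00:00-18:30:00 example.

      import java.util.Calendar;

      public class TimeWindow {
          // "hh:mm:ss" -> seconds since midnight
          private static int toSeconds(String hhmmss) {
              String[] p = hhmmss.split(":");
              return Integer.parseInt(p[0]) * 3600
                   + Integer.parseInt(p[1]) * 60
                   + Integer.parseInt(p[2]);
          }

          public static boolean isCurrentTimeBetween(String starthhmmss, String endhhmmss) {
              // Assumes start < end, i.e. the window does not cross midnight.
              Calendar now = Calendar.getInstance();
              int nowSec = now.get(Calendar.HOUR_OF_DAY) * 3600
                         + now.get(Calendar.MINUTE) * 60
                         + now.get(Calendar.SECOND);
              return nowSec > toSeconds(starthhmmss) && nowSec < toSeconds(endhhmmss);
          }

          public static void main(String[] args) {
              String doNotRunBetween = "18:00:00,18:30:00"; // read from props file
              String[] hhmmss = doNotRunBetween.split(",");
              System.out.println(isCurrentTimeBetween(hhmmss[0], hhmmss[1])
                      ? "NOT OK TO RUN" : "OK TO RUN");
          }
      }

    The bounds are treated as exclusive here, matching the before()/after() checks in the original.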

    Read the article

  • Blocking on DBCP connection pool (open and close connection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (used to run on JBoss, Weblogic, etc.). While running load tests we experience significant performance problems with handling JMS messages (queues). The problem was localized to blocking on the database connection pool when getting or releasing a connection. The blocking prevented concurrent MDB instances (threads) from running, hence performance suffered 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all. Example of a blocked thread: Name: JMS Resource Adapter-worker-23 State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a owned by: JMS Resource Adapter-worker-19 Total blocked: 18,426 Total waited: 0 Stack trace: org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916) org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91) - locked org.apache.commons.dbcp.PoolableConnection@1bcba8 org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147) com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290) .... A couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though (and we can observe blocking there as well). Are there any DBCP configurations that may fix the blocking? Is the DBCP connection pool implementation replaceable in OpenEJB? How easy (or difficult) is it to replace it with another library? Just in case, this is how we define the data source in OpenEJB (openejb.xml): <Resource id="MyDataSource" type="DataSource"> JdbcDriver oracle.jdbc.driver.OracleDriver JdbcUrl ${oracle.jdbc} UserName ${oracle.user} Password ${oracle.password} JtaManaged true InitialSize 5 MaxActive 30 ValidationQuery SELECT 1 FROM DUAL TestOnBorrow true </Resource>

    Read the article

  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is, "Run-time error '-2147217843(80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider, and that one way to mitigate this is to use ODBC. Well, I changed the connection string to reflect the ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider: Dim cnt As ADODB.Connection Dim rst As ADODB.Recordset Dim stSQL As String Dim wbBook As Workbook Dim wsSheet As Worksheet Dim rnStart As Range Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object. Set cnt = New ADODB.Connection cnt.ConnectionString = "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';" cnt.Open On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _ "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]" cnt.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the connection objects Set rst = Nothing Set cnt = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set cnt = Nothing cnt.Close End If End Sub

    Read the article

  • Libraries for developing NCPDP SCRIPT based systems (a standard for e-prescribing)

    - by Kaveh Shahbazian
    What are (based on experience) the best (commercial or open source) libraries for developing NCPDP-based systems? Background: NCPDP (National Council for Prescription Drug Programs) is a not-for-profit, ANSI-accredited standards development organization. One of its standards is the SCRIPT Standard for Electronic Prescribing, which allows the PHARMACY, PRESCRIBER (i.e. physician) and PAYERS (patient or, more often, insurer) to communicate. So the SCRIPT standard is about data transmission. Problem: One step in implementing such systems is to develop models for the data based on the SCRIPT standard. These models should have utilities for serializing/deserializing to/from the SCRIPT binary format and the SCRIPT XML format (there are two distinct formats here; both must be supported). Here arises the problem (for me at least). Developing this subsystem for handling the model, implementing serializing and deserializing facilities, and keeping it up to date with the SCRIPT standard specifications is a lot of work; it needs its own team and brings team management issues (to support a standard implementation). So I am looking for a solution to this problem: to keep the standard implementation out of the way and focus on the main problems. Thanks to all (Thank you Freiheit for your hints!) Edit 2: Thanks to all for the help! NCPDP SCRIPT is a standard for e-prescribing. It defines two formats for message transmission: binary and XML. Implementing XML is somewhat easier because it is a standard format, which in turn gives us more tooling options. The binary format has a very big specification and is time-consuming to implement. I did not find an open source solution to work with. So I am looking for commercial alternatives. Edit 1: Please guide me; what's wrong with this question?

    Read the article
