Search Results

Search found 4432 results on 178 pages for 'fail'.


  • Why doesn't my test run tearDownAfterTestClass when it fails?

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally when the tests finish I should get rid of this test customer in tearDownAfterTestClass, so that I can create them anew on the next run and not get any false positives.

    When the tests all pass I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails, because I check for MySQL errors in it like this:

        try {
            if (!mysqli_query($connection, $query)) {
                $this->assertTrue(false);
            }
        } catch (Exception $exc) {
            $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
            $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
            $this->fail($msg);
        }

    The error I get is a primary key violation, which is expected, because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). That means that when the test failed, tearDownAfterTestClass never ran.

    Now, I could just move everything from tearDownAfterTestClass to the start of setUpBeforeTests, but that seems like bad programming to me, since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails?

    NOTE: The database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore; the purpose of the tear-down is to remove any data added by the tests, so that we don't have to restore the database every time we run them.
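
    The same fixture lifecycle exists in Python's unittest, which makes the pattern easy to sketch: a class-level tear-down normally runs even when individual tests fail, but a defensive set-up that first removes any leftover fixture row also survives runs where the tear-down never got a chance (say, a killed process). This is a hedged sketch of that idea, not the asker's PHPUnit code; the table and email value are invented:

        import sqlite3
        import unittest

        class CustomerTests(unittest.TestCase):
            EMAIL = "fixture@example.com"  # hypothetical test customer key

            @classmethod
            def setUpClass(cls):
                cls.db = sqlite3.connect("test.db")
                cls.db.execute("CREATE TABLE IF NOT EXISTS customer (email TEXT PRIMARY KEY)")
                # Defensive: remove any leftover row from a previous aborted
                # run, so a stale fixture can't cause a key violation here.
                cls.db.execute("DELETE FROM customer WHERE email = ?", (cls.EMAIL,))
                cls.db.execute("INSERT INTO customer (email) VALUES (?)", (cls.EMAIL,))
                cls.db.commit()

            @classmethod
            def tearDownClass(cls):
                # Runs even when tests fail, but not if the process dies first.
                cls.db.execute("DELETE FROM customer WHERE email = ?", (cls.EMAIL,))
                cls.db.commit()
                cls.db.close()

            def test_customer_exists(self):
                row = self.db.execute("SELECT email FROM customer WHERE email = ?",
                                      (self.EMAIL,)).fetchone()
                self.assertIsNotNone(row)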


  • Old dll.config problem!

    - by user313421
    As my googling shows, since 2005 it has been a problem for anyone who needs to read the configuration of an assembly from its own config file, "*.dll.config", and Microsoft hasn't done anything about it yet.

    Story: if you try to read a setting from a class library (a plug-in), you fail. Instead, the config of the main application domain (the EXE which is using the plug-in) is read, and because there is probably no such setting there, your plug-in will use the default values that were hard-coded when its settings were first created. Any change to the .dll.config is never seen by your plug-in, and you wonder why the file is even there!

    If you want to replace the mechanism and start searching, you may find something like this: http://stackoverflow.com/questions/594298/c-dll-config-file - but it's just some ideas and one line of code. A good replacement for the built-in config shouldn't read from the file system each time we need a config value, so we can store the values in memory. Then, what if the user changes the config file? We need a FileSystemWatcher, and we need some design like a singleton... and finally we arrive at the same point as .NET's own configuration, except that ours works. It seems MS built everything but forgot why they invented "*.dll.config" in the first place. Since no DLL executes by itself - they are referenced from other apps (even when used on the web) - why is there such a "*.dll.config" file at all?

    I'm not going to argue whether it's good to have multiple config files or not; it's my design (pluggable components).

    Finally { After all these years, is there any good practice, such as a custom settings class to add to each assembly, that reads from the assembly's own config file? }


  • Issues with intellisense, references, and builds in Visual Studio 2008

    - by goober
    Hoping you can help me -- the strangest thing seems to have happened to my VS install. System config: Windows 7 Pro x64, Visual Studio 2008 SP1, C#, ASP.NET 3.5.

    I have two web site projects in a solution. I am referencing NUnit / NHibernate (did this by right-clicking on the project and selecting "Add Reference"; I've done this for several projects in the past). Things were working fine but recently stopped, and I can't figure out why.

    IntelliSense completely disappears for any files in my App_Code directory, and none of the references are recognized there (they are recognized in any file in the root directory of the web site project). Additionally, pretty simple code like the following (in Page_Load) fails (assume TextBox1 is definitely an element on the page):

        if (Page.IsPostBack)
        {
            str test1;
            test1 = TextBox1.Text;
        }

    It says that all the page elements are null or that it can't access them. At first I thought it was me, but given the combination of issues, it seems to be Visual Studio itself. I've tried clearing the temp directories and rebuilding the solution. I've also tried Tools -- Options -- Text Editor settings to ensure IntelliSense is turned on. I'd appreciate any help you can give!


  • Best practice for persisting model classes

    - by Yaron Naveh
    My application contains a set of model classes, e.g. Person, Department... The user changes values for instances of these classes in the UI, and the classes are persisted to my "project" file. The next time, the user can open and edit the project.

    The next version of my product may change the model classes drastically. It will still need to open existing project files (I will know how to handle missing data). How is it best to persist my model classes to the project file?

    The easiest way to persist classes is data contract serialization. However, it will fail on breaking changes (and I expect to have some). How do I handle this?

    - Use some other persistence mechanism, e.g. a name-value collection or a DB, which is more tolerant?
    - Ship a "project converter" application to migrate old projects? This requires either shipping both the old and new models or manipulating the XML directly.

    Which is best?
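
    One pattern the question is circling, sketched below, is to store a schema version alongside the data and fill in defaults for whatever a newer model expects but an older file lacks. This is a minimal illustration in Python rather than .NET data contract serialization, and the field names are invented:

        import json

        SCHEMA_VERSION = 2

        def save_person(person, path):
            # Tag the payload with the schema version it was written under.
            payload = {"schema": SCHEMA_VERSION, "data": person}
            with open(path, "w") as f:
                json.dump(payload, f)

        def load_person(path):
            with open(path) as f:
                payload = json.load(f)
            data = payload.get("data", {})
            if payload.get("schema", 1) < 2:
                # Hypothetical breaking change: v1 files had no 'department'
                # field, so supply a default instead of failing the load.
                data.setdefault("department", "Unassigned")
            return data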


  • Object does not exist after constructor?

    - by openbas
    Hello, I have a constructor that looks like this (in C++):

        Interpreter::Interpreter()
        {
            tempDat == new DataObject();
            tempDat->clear();
        }

    The constructor of DataObject does absolutely nothing, and clear does this:

        bool DataObject::clear()
        {
            // clear the object
            if (current_max_id > 0)
            {
                indexTypeLookup.clear();
                intData.clear();
                doubleData.clear();
                current_max_id = 0;
            }
        }

    Those members are defined as follows:

        std::map<int, int> indexTypeLookup;
        std::map<int, int> intData;
        std::map<int, double> doubleData;

    Now the strange thing is that I'm getting a segfault on tempDat->clear(); gdb says tempDat is null. How is that possible? The constructor of tempDat cannot fail; it looks like this:

        DataObject::DataObject() : current_max_id(0)
        {
        }

    I know there are probably better ways of making such a data structure, but I'd really like to know where this segfault is coming from.


  • Socket.Receive Failing When Multithreaded

    - by Qua
    The following piece of code runs fine when parallelized to 4-5 threads, but starts to fail as the number of threads increases somewhere beyond 10 concurrent threads:

        int totalRecieved = 0;
        int recieved;
        StringBuilder contentSB = new StringBuilder(4000);
        while ((recieved = socket.Receive(buffer, SocketFlags.None)) > 0)
        {
            contentSB.Append(Encoding.ASCII.GetString(buffer, 0, recieved));
            totalRecieved += recieved;
        }

    The Receive method returns with zero bytes read, and if I continue calling Receive I eventually get an 'An established connection was aborted by the software in your host machine' exception. So I'm assuming that the host actually sent data and then closed the connection, but for some reason I never received it.

    I'm curious as to why this problem arises when there are a lot of threads. I'm thinking it must have something to do with the fact that each thread doesn't get as much execution time, and therefore there is some idle time for the threads, which causes this error. I just can't figure out why idle time would cause the socket not to receive any data.
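
    For what it's worth, the receive-loop contract is the same across socket APIs: a return of zero bytes means the peer has performed an orderly shutdown, not that data is temporarily unavailable on a blocking socket. A minimal sketch of the same loop in Python (host and port are placeholders):

        import socket

        def read_all(host, port):
            chunks = []
            with socket.create_connection((host, port), timeout=30) as sock:
                while True:
                    chunk = sock.recv(4096)  # blocks until data arrives
                    if not chunk:
                        # Zero bytes: the peer closed its side of the
                        # connection. Calling recv again is pointless.
                        break
                    chunks.append(chunk)
            return b"".join(chunks)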


  • Why does first call to java.io.File.createTempFile(String,String,File) take 5 seconds on Citrix?

    - by Ben Roling
    While debugging the slow startup of an Eclipse RCP app on a Citrix server, I found that java.io.File.createTempFile(String,String,File) is taking 5 seconds. It does this only on the first execution and only for certain user accounts; specifically, I am noticing it with Citrix anonymous user accounts. I have not tried many other types of accounts, but the behavior is not exhibited with an administrator account. Also, it does not matter whether the user has access to write to the given directory: if the user does not have access, the call takes 5 seconds to fail; if they do have access, the call takes 5 seconds to succeed.

    This is on a Windows 2003 Server. I've tried Sun's 1.6.0_16 and 1.6.0_19 JREs and see the same behavior. I googled a bit, expecting this to be some sort of known issue, but didn't find anything. It seems like someone else would have had to run into this before. The Eclipse Platform uses File.createTempFile() to test various directories for writability during initialization, and this issue adds 5 seconds to the startup time of our application. I imagine somebody has run into this before and might have some insight.

    Here is the sample code I executed to confirm that it is indeed this call that is consuming the time. I also tried a second call to createTempFile and noticed that subsequent calls return nearly instantaneously.

        public static void main(final String[] args) throws IOException {
            final File directory = new File(args[0]);
            final long startTime = System.currentTimeMillis();
            File file = null;
            try {
                file = File.createTempFile("prefix", "suffix", directory);
                System.out.println(file.getAbsolutePath());
            } finally {
                System.out.println(System.currentTimeMillis() - startTime);
                if (file != null) {
                    file.delete();
                }
            }
        }

    Sample output of this program:

        C:\> java.exe -jar filetest.jar C:/Temp
        C:\Temp\prefix8098550723198856667suffix
        5093


  • Servlet receiving data both in ISO-8859-1 and UTF-8. How to URL-decode?

    - by AJPerez
    I have a web application (well, in fact just a servlet) which receives data from 3 different sources:

    - Source A is an HTML document written in UTF-8 and sends the data via <form method="get">.
    - Source B is written in ISO-8859-1 and sends the data via <form method="get">, too.
    - Source C is written in ISO-8859-1 and sends the data via <a href="http://my-servlet-url?param=value&param2=value2&etc">.

    The servlet receives the request params and URL-decodes them using UTF-8. As you would expect, A works without problems, while B and C fail (you can't URL-decode as UTF-8 something that's encoded in ISO-8859-1...).

    I can make slight modifications to B and C, but I am not allowed to change them from ISO-8859-1 to UTF-8, which would solve all the problems. In B, I've been able to solve the problem by adding accept-charset="UTF-8" to the <form>, so the <form> sends the data in UTF-8 even though the page is ISO. What can I do to fix C? Alternatively, is there any way to determine the charset in the servlet, so I can call URL-decode with the right encoding in each case?
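
    The underlying failure is easy to reproduce outside a servlet: percent-decoding yields raw bytes, and only the bytes-to-text step needs the right charset. One common heuristic, sketched below in Python since the decoding logic is language-neutral, is to try strict UTF-8 first and fall back to ISO-8859-1, because every byte sequence is valid Latin-1 while most non-UTF-8 sequences are rejected by a strict UTF-8 decoder:

        from urllib.parse import unquote_to_bytes

        def decode_param(raw):
            data = unquote_to_bytes(raw)  # %xx -> raw bytes, charset-agnostic
            try:
                return data.decode("utf-8")       # strict: rejects non-UTF-8 input
            except UnicodeDecodeError:
                return data.decode("iso-8859-1")  # any byte string is valid Latin-1

        # 'é' as sent by a UTF-8 page vs. an ISO-8859-1 page:
        assert decode_param("%C3%A9") == "é"   # UTF-8 bytes
        assert decode_param("%E9") == "é"      # Latin-1 byte, fails strict UTF-8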


  • Using Pastebin API in Node.js

    - by wiill
    I've been trying to post a paste to Pastebin in Node.js, but it appears that I'm doing it wrong. I'm getting a Bad API request, invalid api_option, even though I'm clearly setting api_option to paste as the documentation asks for.

        var http = require('http');
        var qs = require('qs');

        var query = qs.stringify({
            api_option: 'paste',
            api_dev_key: 'xxxxxxxxxxxx',
            api_paste_code: 'Awesome paste content',
            api_paste_name: 'Awesome paste name',
            api_paste_private: 1,
            api_paste_expire_date: '1D'
        });

        var req = http.request({
            host: 'pastebin.com',
            port: 80,
            path: '/api/api_post.php',
            method: 'POST',
            headers: {
                'Content-Type': 'multipart/form-data',
                'Content-Length': query.length
            }
        }, function(res) {
            var data = '';
            res.on('data', function(chunk) {
                data += chunk;
            });
            res.on('end', function() {
                console.log(data);
            });
        });

        req.write(query);
        req.end();

    console.log(query) confirms that the string is well encoded and that api_option is there and set to paste. Now, I've been searching forever for possible causes. I also tried setting the encoding on the write, req.write(query, 'utf8'), because the Pastebin API mentions that the POST must be UTF-8 encoded. I rewrote the thing over and over and re-consulted the Node HTTP documentation many times. I'm pretty sure I completely missed something here, because I don't see how this could fail. Does anyone have an idea of what I have done wrong?
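
    One detail worth noticing in the snippet above: the body produced by qs.stringify is application/x-www-form-urlencoded, yet the header declares multipart/form-data, so a server that parses strictly by Content-Type would see no fields at all, which would be consistent with the 'invalid api_option' complaint. Purely as an illustration of a header that matches the body encoding, here is the same request sketched in Python (the dev key is a placeholder):

        from urllib import request, parse

        fields = {
            "api_option": "paste",
            "api_dev_key": "xxxxxxxxxxxx",          # placeholder key
            "api_paste_code": "Awesome paste content",
            "api_paste_name": "Awesome paste name",
            "api_paste_private": "1",
            "api_paste_expire_date": "1D",
        }
        body = parse.urlencode(fields).encode("utf-8")

        req = request.Request(
            "https://pastebin.com/api/api_post.php",
            data=body,
            # Header matches how the body is actually encoded.
            headers={"Content-Type": "application/x-www-form-urlencoded"},
        )
        with request.urlopen(req) as resp:
            print(resp.read().decode("utf-8"))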


  • SSIS - Skip Missing Files

    - by Greg
    I have an SSIS 2008 package that calls about 10 other SSIS packages (legacy issues, don't ask). Each of those child packages loads a specific file into a table, but sometimes one or more of the input files will be missing.

    How can I let a child package fail (because a file is missing) but let the rest of the parent package keep on running? I've tried increasing the maximum error count on the parent package, on the tasks in the parent package that call each child, and in the child packages themselves. None of that seemed to make any difference. I still get this error when I run it with a file missing:

        SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

    Edit: FailPackageOnFailure and FailParentOnFailure are already set to false everywhere.


  • Am I using DO WHILE NOT and EOF in VBScript properly?

    - by Derek Drummond
    This block of code is causing my page to fail to load. When I comment out the line DO WHILE NOT Rs.EOF, the page loads properly.

        SQL_Command_String = "SELECT * FROM Seminars WHERE [SeminarID] = 5 ORDER BY DESC"
        Rs = SQLConnection.Execute(SQL_Command_String)
        DO WHILE NOT Rs.EOF
            file1 = "./seminars/" & seminar_type & "/" & seminar_year & "/" & Rs("Date") & "-" & Rs("Year") & "_" & Rs("Last") & ".pdf"
            file2 = "./seminars/" & seminar_type & "/" & seminar_year & "/" & Rs("Date") & "-" & seminar_year & "_" & Rs("Last") & "(handouts).pdf"
            file3 = "./seminars/" & seminar_type & "/" & seminar_year & "/" & Rs("Date") & "-" & seminar_year & "_" & Rs("Last") & "_Flyer.pdf"
            Rs.MoveNext
        LOOP

    Is SQL_Command_String an invalid SQL command, or is my reader not being used properly when I try to build those specific file paths, e.g. Rs("Date")? Is there an option where I could do something like:

        DO WHILE Rs.hasNextLine


  • Uploading a file fails under WordPress

    - by Ash
    I'm using WordPress and I'm following the W3Schools guide for uploading a file.

    HTML code:

        <html>
        <body>
        <form action="upload_file.php" method="post" enctype="multipart/form-data">
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file" />
            <br />
            <input type="submit" name="submit" value="Submit" />
        </form>
        </body>
        </html>

    PHP code (upload_file.php):

        <?php
        if ($_FILES["file"]["error"] > 0) {
            echo "Error: " . $_FILES["file"]["error"] . "<br />";
        } else {
            echo "Upload: " . $_FILES["file"]["name"] . "<br />";
            echo "Type: " . $_FILES["file"]["type"] . "<br />";
            echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
            echo "Stored in: " . $_FILES["file"]["tmp_name"];
        }
        ?>

    The HTML code is pasted into a PHP page template, and the PHP file sits in the WP installation directory under www. The problem is that when I submit the file I get "Error: 1". If I comment out the "if" part of the PHP code and leave only the "else" part, I get:

        Upload: IMG_4258.JPG
        Type:
        Size: 0 Kb
        Stored in:

    So at least I know the PHP code is running. But what's causing it to fail? Is there a problem with the code, or is WordPress meddling with the process?


  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, a full disk, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs, and maybe the error can even be echoed (depending on whether you can tell that the user is a technical person rather than a business user). For the moment, though, let's not think too hard about that part.

    My question is: to what extent should the software be responsible for trying to explain the meaning of a fatal error? In general, how much competence and knowledge are you allowed to presume on the part of the administrators of the software, and how much troubleshooting information and how many potential resolution steps should you include when logging fatal errors?

    Of course, anything unique to the runtime context should definitely be logged. But let's assume your software needs to talk to Active Directory via LDAP and gets back the error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config?

    I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.


  • C# functional quicksort is failing

    - by Rubys
    I'm trying to implement quicksort in a functional style in C# using LINQ, and this code randomly works or doesn't work, and I can't figure out why.

    Important to mention: when I call this on an array or a list, it works fine. But on an unknown-what-it-really-is IEnumerable, it goes insane (loses values or crashes, and only sometimes works).

    The code:

        public static IEnumerable<T> Quicksort<T>(this IEnumerable<T> source)
            where T : IComparable<T>
        {
            if (!source.Any()) yield break;
            var pivot = source.First();
            var sortedQuery = source.Skip(1).Where(a => a.CompareTo(source.First()) <= 0).Quicksort()
                .Concat(new[] { pivot })
                .Concat(source.Skip(1).Where(a => a.CompareTo(source.First()) > 0).Quicksort());
            foreach (T key in sortedQuery)
                yield return key;
        }

    Can you find any faults here that would cause this to fail?
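
    The symptom (fine on arrays and lists, broken on an arbitrary IEnumerable) is characteristic of enumerating a one-shot sequence more than once, and the method above walks source several times: Any, First, and two Skip(1).Where chains. Python generators are one-shot in the same way, so the hazard can be shown in a few lines (a sketch of the failure mode, not of the asker's code):

        data_list = [3, 1, 2]
        data_gen = (x for x in [3, 1, 2])   # one-shot, like many IEnumerables

        def any_then_first(source):
            # Two passes over 'source', like source.Any() then source.First()
            has_any = any(True for _ in source)   # first pass; exhausts a generator
            first = next(iter(source), None)      # second pass
            return has_any, first

        print(any_then_first(data_list))  # (True, 3)
        print(any_then_first(data_gen))   # (True, None): values silently gone

    The usual remedy is the same in both languages: materialize the sequence once on entry (a list or array) and partition the materialized copy.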


  • Using jQuery with Windows 8 Metro JavaScript App causes security error

    - by patridge
    Since it sounded like jQuery was an option for Metro JavaScript apps, I was starting to look forward to Windows 8 development. I installed Visual Studio 2012 Express RC and started a new project (both the empty and grid templates have the same problem). I made a local copy of jQuery 1.7.2 and added it as a script reference:

        <!-- SomeTestApp references -->
        <link href="/css/default.css" rel="stylesheet" />
        <script src="/js/jquery-1.7.2.js"></script>
        <script src="/js/default.js"></script>

    Unfortunately, as soon as I ran the resulting app it tossed out a console error:

        HTML1701: Unable to add dynamic content ' a'. A script attempted to inject dynamic content, or elements previously modified dynamically, that might be unsafe. For example, using the innerHTML property to add script or malformed HTML will generate this exception. Use the toStaticHTML method to filter dynamic content, or explicitly create elements and attributes with a method such as createElement. For more information, see http://go.microsoft.com/fwlink/?LinkID=247104.

    I slapped a breakpoint in a non-minified version of jQuery and found the offending line:

        div.innerHTML = " <link/><table></table><a href='/a' style='top:1px;float:left;opacity:.55;'>a</a><input type='checkbox'/>";

    Apparently, the security model for Metro apps forbids creating elements this way. The error doesn't cause any immediate issues for the user, but given its location, I am worried it will cause capability-discovery tests in jQuery to fail that shouldn't fail. I definitely want jQuery's $.Deferred for making just about everything easier. I would prefer to be able to use the selector engine and event handling systems too, but I could live without them if I had to. How does one get the latest jQuery to play nicely with Metro development?


  • Is there a way to programmatically check dependencies of an EXE?

    - by Mason Wheeler
    I've got a certain project that I build and distribute to users. I have two build configurations, Debug and Release. Debug, obviously, is for my use in debugging, but there's an additional wrinkle: the Debug configuration uses a special debugging memory manager, with a dependency on an external DLL. There have been a few times when I've accidentally built and distributed an installer package with the Debug configuration, and it then failed to run once installed, because the users don't have the special DLL. I'd like to keep that from happening in the future.

    I know I can get the dependencies of a program by running Dependency Walker, but I'm looking for a way to do it programmatically. Specifically, I have a way to run scripts while creating the installer, and I want something I can put in the installer script that checks the program, sees whether it has a dependency on this DLL, and if so causes the installer-creation process to fail with an error.

    I know how to create a simple CLI program that takes two filenames as parameters, runs a DependsOn function, and produces output based on its result, but I don't know what to put in the DependsOn function. Does anyone know how I'd go about writing it?
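
    Since the question doesn't fix a language, here is one hedged way to write DependsOn: the import table of a PE executable names every DLL the loader will try to resolve, and the third-party pefile package (installed with pip install pefile) can walk that table. A sketch:

        import sys
        import pefile

        def depends_on(exe_path, dll_name):
            """Return True if exe_path's import table references dll_name."""
            pe = pefile.PE(exe_path, fast_load=True)
            pe.parse_data_directories(
                directories=[pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_IMPORT"]]
            )
            imports = getattr(pe, "DIRECTORY_ENTRY_IMPORT", [])
            return any(e.dll.decode().lower() == dll_name.lower() for e in imports)

        if __name__ == "__main__":
            exe, dll = sys.argv[1], sys.argv[2]
            if depends_on(exe, dll):
                print("ERROR: %s depends on %s (Debug build?)" % (exe, dll))
                sys.exit(1)  # non-zero exit fails the installer-creation script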


  • Forcing the stack within 32 bits when using -m64 -mcmodel=small

    - by chaosless
    I have C sources that must compile in 32-bit and 64-bit for multiple platforms, and a structure that takes the address of a buffer: I need to fit the address into a 32-bit value. Obviously, where possible these structures use naturally sized void * or char * pointers; however, for some parts an API specifies the size of these pointers as 32-bit.

    On x86_64 Linux with -m64 -mcmodel=small, both static data and malloc()'d data fit within the 2 GB range. Data on the stack, however, still starts in high memory. So, given a small utility _to_32() such as:

        int _to_32( long l ) {
            int i = l & 0xffffffff;
            assert( i == l );
            return i;
        }

    then:

        char *cp = malloc( 100 );
        int a = _to_32( cp );

    will work reliably, as will:

        static char buff[ 100 ];
        int a = _to_32( buff );

    but:

        char buff[ 100 ];
        int a = _to_32( buff );

    will fail the assert().

    Does anyone have a solution for this without writing custom linker scripts? Or any ideas on how to arrange the linker section for stack data? It would appear it is being put in this section of the linker script:

        .lbss :
        {
            *(.dynlbss)
            *(.lbss .lbss.* .gnu.linkonce.lb.*)
            *(LARGE_COMMON)
        }

    Thanks!


  • Unit testing, mocking - simple case: Service - Repository

    - by rafek
    Consider the following chunk of a service:

        public class ProductService : IProductService
        {
            private IProductRepository _productRepository;

            // Some initialization stuff

            public Product GetProduct(int id)
            {
                try {
                    return _productRepository.GetProduct(id);
                } catch (Exception e) {
                    // log, wrap, then throw
                }
            }
        }

    Let's consider a simple unit test:

        [Test]
        public void GetProduct_return_the_same_product_as_getProduct_on_productRepository()
        {
            var product = EntityGenerator.Product();
            _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

            Product returnedProduct = _productService.GetProduct(product.Id);

            Assert.AreEqual(product, returnedProduct);
            _productRepositoryMock.VerifyAll();
        }

    At first it seems that this test is OK. But let's change our service method a little bit:

        public Product GetProduct(int id)
        {
            try {
                var product = _productRepository.GetProduct(id);
                product.Owner = "totallyDifferentOwner";
                return product;
            } catch (Exception e) {
                // log, wrap, then throw
            }
        }

    How do I rewrite the given test so that it passes with the first service method and fails with the second one? How do you handle this kind of simple scenario?

    HINT: The given test is bad because product and returnedProduct are actually the same reference.
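
    The hint points at one fix: snapshot the entity's state before the call and compare the returned object against the snapshot field by field, instead of comparing an object with itself. A small sketch of that idea using Python's unittest.mock, with names invented to mirror the question:

        import copy
        import unittest
        from unittest.mock import Mock

        class Product:
            def __init__(self, id, owner):
                self.id, self.owner = id, owner

        class ProductService:
            def __init__(self, repo):
                self._repo = repo
            def get_product(self, id):
                return self._repo.get_product(id)

        class GetProductTest(unittest.TestCase):
            def test_returns_product_unmodified(self):
                product = Product(id=7, owner="alice")
                expected = copy.deepcopy(product)   # snapshot before the call
                repo = Mock()
                repo.get_product.return_value = product

                returned = ProductService(repo).get_product(7)

                # Field-by-field comparison against the snapshot: passes for
                # the pass-through version, fails if the service mutates Owner.
                self.assertEqual(vars(expected), vars(returned))
                repo.get_product.assert_called_once_with(7)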


  • How do you verify that 2 copies of a VB 6 executable came from the same code base?

    - by Tim Visher
    I have a program under version control that has gone through multiple releases. A situation came up today where someone had somehow managed to point to an old copy of the program, and thus was encountering bugs that have since been fixed. I'd like to go back and just delete all the old copies of the program (keeping them around is a company policy that dates from before version control was common, and should no longer be necessary), but I need a way of verifying that I can generate the exact same executable that is better than saying "The old one came out of this commit, so this one should be the same."

    My initial thought was to simply MD5-hash the executable, store the hash file in source control, and be done with it, but I've come up against a problem I can't even parse: it seems that every time the executable is generated (method: Open Project, File > Make X.exe) it hashes differently. I've noticed that Visual Basic messes with files every time the project is opened, in seemingly random ways, but I didn't think that would make it into the executable, nor do I have any evidence that that is indeed what's happening. To guard against it I tried generating the executable multiple times within the same IDE session and checking the hashes, but they were different every time.

    So that's:

    1. Generate the executable
    2. Generate an MD5 checksum: md5sum X.exe > X.md5
    3. Verify the MD5 for the current executable: md5sum -c X.md5
    4. Generate a new executable
    5. Verify the MD5 for the new executable: md5sum -c X.md5
    6. Verification fails because the computed checksum doesn't match

    I'm not understanding something about either MD5 or the way VB6 generates the executable, but I'm also not married to the idea of using MD5. If there is a better way to verify that two executables are indeed the same, I'm all ears. Thanks in advance for your help!
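
    One hedged avenue: linkers commonly stamp varying metadata (most famously a build timestamp) into an executable's headers, so hashing the file with the header region excluded can agree across rebuilds where a whole-file hash differs. The sketch below assumes the variation lives in the first kilobyte; that scope is a guess to verify against a real pair of VB6 builds, since VB6 may embed other varying data as well:

        import hashlib

        def body_digest(path, skip=1024):
            """Hash the file with the first `skip` bytes (headers) excluded."""
            h = hashlib.md5()
            with open(path, "rb") as f:
                f.seek(skip)  # crude: jump past the DOS/PE headers
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        def same_body(exe_a, exe_b):
            return body_digest(exe_a) == body_digest(exe_b)

        # Usage: compare two builds of the same commit
        # print(same_body("build1/X.exe", "build2/X.exe"))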


  • Why do I get "mysql_query(): supplied argument is not a valid"

    - by Brian Ojeda
    Why do I get a "mysql_query(): supplied argument is not a valid" error for the first

        $r = mysql_query($q, $connection);

    in the following code?

        $bId = trim($_POST['bId']);
        $title = trim($_POST['title']);
        $story = trim($_POST['story']);

        $q  = "SELECT * ";
        $q .= "FROM " . DB_NAME . ".`blog` ";
        $q .= "WHERE `blog`.`id` = {$bId}";

        $r = mysql_query($q, $connection);
        //confirm_query($r);

        if (mysql_num_rows($r) == 1) {
            $q = "UPDATE " . DB_NAME . ".`blog` SET `title` = '{$title}', `story` = '{$story}' WHERE `id` = {$bId}";
            $r = mysql_query($q, $connection);
            if (mysql_affected_rows() == 1) {
                //Successful
                $data['success'] = true;
                $date['errors'] = false;
                $date['message'] = "You are the Greatest!";
            } else {
                //Fail
                $data['success'] = false;
                $data['error'] = true;
                $date['message'] = "You can't do it fool!";
            }
        }

    I also get a "mysql_num_rows(): supplied argument is not a valid MySQL result resource" error.

    Side notes: I am using 1&1 hosting (worst hosting ever), with a custom .htaccess file containing one line of text to enable PHP 5.2 (the only way with 1&1 hosting).


  • Show values in TDropDownList in PRADO.

    - by Muhammad Sajid
    I'm new to PRADO. I have a file Home.page with this code:

        <%@ Title="Contact List" %>
        <h1>Contact List</h1>
        <a href="<%= $this->Service->constructUrl('insert')%>">Create New Contact</a>
        <br/>
        <com:TForm>
            <com:TDropDownList ID="personInfo">
                <com:TListItem Value="value 1" Text="item 1" />
                <com:TListItem Value="value 2" Text="item 2" Selected="true" />
                <com:TListItem Value="value 3" Text="item 3" />
                <com:TListItem Value="value 4" Text="item 4" />
            </com:TDropDownList>
        </com:TForm>

    and Home.php with this code:

        <?php
        class Home extends TPage
        {
            /**
             * Populates the datagrid with user lists.
             * This method is invoked by the framework when initializing the page
             * @param mixed event parameter
             */
            public function onInit($param)
            {
                parent::onInit($param);
                // fetches all data account information
                $rec = ContactRecord::finder()->findAll();
            }
        }
        ?>

    $rec contains an array of all the values. Now I want to show all the names in the drop-down list. I tried my best but failed. Can anyone help me? Thanks.


  • JDBC call not executing

    - by dbyrne
    I am working on one of the DAOs for a medium-sized web application. Unfortunately, it contains very convoluted logic and makes hundreds of JDBC stored-proc calls in loops; this is out of my control. I am working on a method inside the DAO which makes a single JDBC call. The simplified version of the method looks like this:

        DriverManager.registerDriver(new com.sybase.jdbc2.jdbc.SybDriver());
        Connection con = DriverManager.getConnection(
                (String) connectionDetails.get("DATABASE_URL"),
                (String) connectionDetails.get("USERID"),
                (String) connectionDetails.get("PASSWORD"));

        String sqlToExecute = "{call " + STORED_PROC + "(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}";
        CallableStatement stmt = con.prepareCall(sqlToExecute);
        // Maybe I should try calling clearParameters here?
        stmt.setString(1, someData);
        // ....Set of parameters....
        if (!stmt.execute()) {
            // execute method never returns false
        }
        stmt.close();

    It's pretty much a textbook JDBC call. All this stored proc does is insert a single row. Here is where things get crazy: this code works when you step through it line by line in a debugger, but fails when you run it at full speed. Not only does it fail, it doesn't throw any exception! The execute method always returns true. It just breezes right through the JDBC call without inserting a row into the database. If you go through the log files, copy the stored-proc call, and run it manually, it works (just like it does in debug mode).

    What's strange is that the rest of the DAO, with all its hundreds of looped stored-proc calls, works fine. My thinking is that Connection or CallableStatement is caching some value behind the scenes that is screwing things up. Has anyone ever seen anything like this before? A JDBC call failing with no exceptions? I know it will be impossible to provide a complete solution to this without seeing the whole application; I am just looking for suggestions on possible issues to investigate.


  • Git merge of 2 new files with removed and added content

    - by Loïc Faure-Lacroix
    So we are working with 2 different repositories, and designers in both modified the same file. The problem is quite simple, but I have no idea how to solve it yet. Both files are marked as new, since the two branches have almost nothing in common except that file.

    When I try to merge from branch A to B, the merge marks the parts added in A as deleted in B, and on the other side, what was added in B appears deleted in A. Git seems to try to outsmart me, when I know that I need almost every change and nothing should be marked as a deletion.

    I have 2 other branches that should merge without problems after these 2. I can't merge them yet, since there are some recent changes that may not merge really well either. I have to merge A and B = E, then C and D = F, and then hopefully E and F.

    So the big question here is: how can I do a completely manual merge that will mark every change as a conflict? Anything deleted or added should be marked as a conflict that I can resolve myself in an editor. Git is trying to outsmart me and failing terribly at it.


  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a data flow task, I can slip a row count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for SQL tasks that are expected to return a single row. In that event, I'm returning the values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables.

    I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be.

    The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number > 0, and if so run the query again without the count. That's ugly, because I have the same query in two places that I need to keep in sync. Is there a better way?


  • Python CGI "Premature end of script headers" error depending on script parameters

    - by nickengland
    I have a Python script which should parse a file and produce some output to disk, as well as returning a web page linking to the outputted files. When run with a file posted from the HTML form, I get no HTML output back, just a 500 error page, and the error_log contains the line:

        [Mon Apr 19 15:03:23 2010] [error] [client xxx.xxx.121.79] Premature end of script headers: uploadcml.py, referer: http://xxx.ch.cam.ac.uk:9000/

    However, the files which the script should be saving are indeed saved to disk. If I run it without any arguments, the script returns the correct HTML indicating that no file was parsed. All the information I have found on the web about "Premature end of script headers" implies it is due to either a missing header or a lack of permissions on the Python script, but neither can apply to me. The first lines of the script are:

        #!/home/nwe23/bin/bin/python
        import cgitb; cgitb.enable()
        import cgi
        import pybel,openbabel
        import random

        print "Content-Type: text/html"
        print

    So when run, I can see no way for it to fail to output the header, and it DOES output the header when run without a file to parse. But when given a file, it produces the error (yet still parses the file and saves the output to disk!). Does anyone know how this is happening and what can be done to fix it?

    I have tried adding wrongly-indented gibberish (such as foobar) at various points in the file, and this results in an indentation error being added to the error_log wherever it is, even if it's the very last line of the script. The "Premature end of script headers" error remains, though. Does this mean the script is executing all the way through?
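
    One way to corner this class of failure is to emit and flush the header before doing any real work, and to trap everything that follows, so the response always starts with a valid header no matter what the parsing stage does. A minimal sketch in the same Python 2 print style as the script above (the parsing step is a placeholder, not the asker's pybel code):

        #!/usr/bin/env python
        import sys
        import traceback

        # Emit the header immediately and flush, before any work that could
        # crash, fork, or write to stdout out of order.
        print "Content-Type: text/html"
        print
        sys.stdout.flush()

        try:
            # ... file parsing and output writing would go here (placeholder) ...
            print "<html><body>Parsed OK</body></html>"
        except Exception:
            # The header is already out, so the error renders as a page
            # instead of a 500 "Premature end of script headers".
            print "<html><body><pre>"
            print traceback.format_exc()
            print "</pre></body></html>"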

