Search Results

Search found 24353 results on 975 pages for 'test coverage'.


  • Odd behavior when recursively building a return type for variadic functions

    - by Dennis Zickefoose
    This is probably going to be a really simple explanation, but I'm going to give as much backstory as possible in case I'm wrong. Advanced apologies for being so verbose. I'm using gcc4.5, and I realize the c++0x support is still somewhat experimental, but I'm going to act on the assumption that there's a non-bug related reason for the behavior I'm seeing. I'm experimenting with variadic function templates. The end goal was to build a cons-list out of std::pair. It wasn't meant to be a custom type, just a string of pair objects. The function that constructs the list would have to be in some way recursive, with the ultimate return value being dependent on the result of the recursive calls. As an added twist, successive parameters are added together before being inserted into the list. So if I pass [1, 2, 3, 4, 5, 6] the end result should be {1+2, {3+4, 5+6}}. My initial attempt was fairly naive. A function, Build, with two overloads. One took two identical parameters and simply returned their sum. The other took two parameters and a parameter pack. The return value was a pair consisting of the sum of the two set parameters, and the recursive call. In retrospect, this was obviously a flawed strategy, because the function isn't declared when I try to figure out its return type, so it has no choice but to resolve to the non-recursive version. That I understand. Where I got confused was the second iteration. I decided to make those functions static members of a template class. The function calls themselves are not parameterized, but instead the entire class is. My assumption was that when the recursive function attempts to generate its return type, it would instantiate a whole new version of the structure with its own static function, and everything would work itself out. The result was: "error: no matching function for call to BuildStruct<double, double, char, char>::Go(const char&, const char&)" The offending code: static auto Go(const Type& t0, const Type& t1, const Types&... rest) -> std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> My confusion comes from the fact that the parameters to BuildStruct should always be the same types as the arguments sent to BuildStruct::Go, but in the error code Go is missing the initial two double parameters. What am I missing here? If my initial assumption about how the static functions would be chosen was incorrect, why is it trying to call the wrong function rather than just not finding a function at all? It seems to just be mixing types willy-nilly, and I just can't come up with an explanation as to why. If I add additional parameters to the initial call, it always burrows down to that last step before failing, so presumably the recursion itself is at least partially working. This is in direct contrast to the initial attempt, which always failed to find a function call right away. Ultimately, I've gotten past the problem, with a fairly elegant solution that hardly resembles either of the first two attempts. So I know how to do what I want to do. I'm looking for an explanation for the failure I saw. Full code to follow since I'm sure my verbal description was insufficient. First some boilerplate, if you feel compelled to execute the code and see it for yourself. Then the initial attempt, which failed reasonably, then the second attempt, which did not. 
#include <iostream> using std::cout; using std::endl; #include <utility> template<typename T1, typename T2> std::ostream& operator <<(std::ostream& str, const std::pair<T1, T2>& p) { return str << "[" << p.first << ", " << p.second << "]"; } //Insert code here int main() { Execute(5, 6, 4.3, 2.2, 'c', 'd'); Execute(5, 6, 4.3, 2.2); Execute(5, 6); return 0; } Non-struct solution: template<typename Type> Type BuildFunction(const Type& t0, const Type& t1) { return t0 + t1; } template<typename Type, typename... Rest> auto BuildFunction(const Type& t0, const Type& t1, const Rest&... rest) -> std::pair<Type, decltype(BuildFunction(rest...))> { return std::pair<Type, decltype(BuildFunction(rest...))> (t0 + t1, BuildFunction(rest...)); } template<typename... Types> void Execute(const Types&... t) { cout << BuildFunction(t...) << endl; } Resulting errors: test.cpp: In function 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]': test.cpp:33:35: instantiated from here test.cpp:28:3: error: no matching function for call to 'BuildFunction(const int&, const int&, const double&, const double&, const char&, const char&)' Struct solution: template<typename... Types> struct BuildStruct; template<typename Type> struct BuildStruct<Type, Type> { static Type Go(const Type& t0, const Type& t1) { return t0 + t1; } }; template<typename Type, typename... Types> struct BuildStruct<Type, Type, Types...> { static auto Go(const Type& t0, const Type& t1, const Types&... rest) -> std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> { return std::pair<Type, decltype(BuildStruct<Types...>::Go(rest...))> (t0 + t1, BuildStruct<Types...>::Go(rest...)); } }; template<typename... Types> void Execute(const Types&... t) { cout << BuildStruct<Types...>::Go(t...) << endl; } Resulting errors: test.cpp: In instantiation of 'BuildStruct<int, int, double, double, char, char>': test.cpp:33:3: instantiated from 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]' test.cpp:38:41: instantiated from here test.cpp:24:15: error: no matching function for call to 'BuildStruct<double, double, char, char>::Go(const char&, const char&)' test.cpp:24:15: note: candidate is: static std::pair<Type, decltype (BuildStruct<Types ...>::Go(BuildStruct<Type, Type, Types ...>::Go::rest ...))> BuildStruct<Type, Type, Types ...>::Go(const Type&, const Type&, const Types& ...) [with Type = double, Types = {char, char}, decltype (BuildStruct<Types ...>::Go(BuildStruct<Type, Type, Types ...>::Go::rest ...)) = char] test.cpp: In function 'void Execute(const Types& ...) [with Types = {int, int, double, double, char, char}]': test.cpp:38:41: instantiated from here test.cpp:33:3: error: 'Go' is not a member of 'BuildStruct<int, int, double, double, char, char>'
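
    One reading of the first failure, plus a hedged sketch of a workaround (BuildResult is a name invented here, and this is not the asker's eventual solution): in the free-function version, the recursive overload is not yet declared while its own decltype return type is being formed, so only the two-argument overload is visible and any call with more than two remaining arguments cannot resolve. Computing the result type with a separate trait sidesteps that, because the return type never has to name the function; the odd diagnostic from the struct version still looks more like a gcc 4.5 c++0x quirk than something this explains.

    ```cpp
    #include <iostream>
    #include <utility>

    // Trait that computes the nested pair type without ever naming the function.
    template<typename... Types> struct BuildResult;

    template<typename Type>
    struct BuildResult<Type, Type> { typedef Type type; };

    template<typename Type, typename... Rest>
    struct BuildResult<Type, Type, Rest...> {
        typedef std::pair<Type, typename BuildResult<Rest...>::type> type;
    };

    template<typename Type>
    Type Build(const Type& t0, const Type& t1) { return t0 + t1; }

    template<typename Type, typename... Rest>
    typename BuildResult<Type, Type, Rest...>::type
    Build(const Type& t0, const Type& t1, const Rest&... rest) {
        // By the time this body is instantiated, both overloads have been
        // declared, so the recursive call resolves normally.
        return typename BuildResult<Type, Type, Rest...>::type(t0 + t1, Build(rest...));
    }

    template<typename T1, typename T2>
    std::ostream& operator<<(std::ostream& str, const std::pair<T1, T2>& p) {
        return str << "[" << p.first << ", " << p.second << "]";
    }

    int main() {
        std::cout << Build(5, 6, 4.3, 2.2, 'c', 'd') << std::endl;  // nested pairs, as in the question
        return 0;
    }
    ```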

    Read the article

  • jquery nested sortable list

    - by Y.G.J
    I have this code: $(document).ready(function() { $("#test-list").sortable({ items: "> li", handle : '.handle', axis: 'y', opacity: 0.6, update : function () { var order = $('#test-list').sortable('serialize'); $("#info").load("process-sortable.asp?"+order+"&id=catid&order=orderid&table=tblCats"); } }); $("#test-sub").sortable({ containment: "ul", items: "li", handle : '.handle', axis: 'y', opacity: 0.6, update : function () { var order = $('#test-list').sortable('serialize'); $("#info").load("process-sortable.asp?"+order+"&id=catid&order=orderid&table=tblCats"); } }); }); for this kind of UL: <ul id="test-list"> <li></li> <li> <ul id="test-sub"> <li></li> <li></li> <li></li> <li></li> <li></li> <li></li> </ul> </li> <li></li> <li></li> <li></li> <li></li> </ul> The markup can also change dynamically... When I drag and drop one of the main li elements it works, but when I do it with the children it drags the main one instead. What is wrong?
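
    A hedged, untested workaround sketch (update callbacks omitted for brevity): jQuery UI binds its mouse handling on each sortable element, and a mousedown on a nested handle bubbles up from #test-sub to #test-list, which is what starts dragging the containing li. Letting the inner sortable see the event but stopping it from bubbling any further is one common approach; if the inner list is rebuilt dynamically, the extra binding has to be reapplied as well.

    ```js
    $(document).ready(function () {
        $("#test-list").sortable({
            items: "> li",
            handle: ".handle",
            axis: "y",
            opacity: 0.6
        });

        $("#test-sub").sortable({
            items: "> li",
            handle: ".handle",
            axis: "y",
            opacity: 0.6
        });

        // The inner sortable (bound first on #test-sub) still runs, but the
        // event never reaches #test-list, so the outer li is not picked up.
        $("#test-sub").bind("mousedown", function (e) {
            e.stopPropagation();
        });
    });
    ```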

    Read the article

  • Why might changes be populated from one NSManagedObjectContext to another without an explicit merge?

    - by Mike Laurence
    I'm working on an object import feature that utilizes multiple threads/NSManagedObjectContexts, using http://www.mac-developer-network.com/columns/coredata/may2009/ as my guide (note that I am developing for iPhone). For some reason, when I save one of my contexts the other is immediately updated with the changes, even though I have commented out my calls to mergeChangesFromContextDidSaveNotification. Are there any reasons the contexts might be merging into one another without an explicit call? Here a log of what's going on: // 1.) Main context is saved with "Peter Gabriel" // 2.) Test context is created, begins with same contents as main context // 3.) Main context is inserted with "Spoon" // 4.) Test context is inserted with "Phoenix" // Contents at this point: CoreTest[4341:903] Artists in main context: ( "Peter Gabriel", "Spoon" ) CoreTest[4341:903] Artists in test context: ( "Peter Gabriel", "Phoenix" ) // 5.) testContext is saved // New contents of contexts: CoreTest[4341:903] Artists in main context: ( "Peter Gabriel", "Phoenix", "Spoon" ) CoreTest[4341:903] Artists in test context: ( "Peter Gabriel", "Phoenix" ) As you can see, the test context is saved midway through, and the main context suddenly has the new objects from the test context, even though I haven't performed the whole NSManagedObjectContextDidSaveNotification/mergeChangesFromContext combo. My understanding is that no changes will ever be merged unless done so explicitly... does anyone know what's going on here?

    Read the article

  • Authlogic and functional tests - Authlogic::Session::Activation::NotActivatedError: You must activate

    - by adam
    Im getting the errors below despite following the documentation. In test_helper.rb ENV["RAILS_ENV"] = "test" require File.expand_path(File.dirname(__FILE__) + "/../config/environment") require "authlogic/test_case" require 'test_help' require 'shoulda' require File.dirname(__FILE__) + "/factories" In my functional test require 'test_helper' class SentencesControllerTest < ActionController::TestCase setup do :activate_authlogic end context "logged in" do setup do @user = Factory(:user) UserSession.create(@user.id) end context "on GET to :new" do setup do get :new end should "present form with text field" do assert_select('form#new_sentence') do assert_select('textarea#sentence_text') end end end end #context logged in. end in environments.rb config.gem "authlogic" Im not sure why it isnt working. Can anyone help out on this? Authlogic::Session::Activation::NotActivatedError: You must activate the Authlogic::Session::Base.controller with a controller object before creating objects authlogic (2.1.3) lib/authlogic/session/activation.rb:47:in `initialize' authlogic (2.1.3) lib/authlogic/session/klass.rb:64:in `initialize' authlogic (2.1.3) lib/authlogic/session/scopes.rb:79:in `initialize' authlogic (2.1.3) lib/authlogic/session/existence.rb:29:in `new' authlogic (2.1.3) lib/authlogic/session/existence.rb:29:in `create' test/functional/sentences_controller_test.rb:11:in `__bind_1270172858_922804' shoulda (2.10.3) lib/shoulda/context.rb:380:in `call' shoulda (2.10.3) lib/shoulda/context.rb:380:in `run_current_setup_blocks' shoulda (2.10.3) lib/shoulda/context.rb:379:in `each' shoulda (2.10.3) lib/shoulda/context.rb:379:in `run_current_setup_blocks' shoulda (2.10.3) lib/shoulda/context.rb:371:in `run_all_setup_blocks' shoulda (2.10.3) lib/shoulda/context.rb:375:in `run_parent_setup_blocks' shoulda (2.10.3) lib/shoulda/context.rb:359:in `test: logged in on GET to :new should present form with text field. ' /opt/rubymine/rb/testing/patch/testunit/test/unit/ui/testrunnermediator.rb:36:in `run_suite' /opt/rubymine/rb/testing/patch/testunit/test/unit/ui/teamcity/testrunner.rb:215:in `start_mediator' /opt/rubymine/rb/testing/patch/testunit/test/unit/ui/teamcity/testrunner.rb:191:in `start'
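
    A hedged observation, in case it helps: `setup do :activate_authlogic end` only evaluates the symbol and never calls the helper, so the controller adapter is never activated. Something along these lines (untested) is the usual shape; passing the user record rather than its id to UserSession.create is also the documented form:

    ```ruby
    require 'test_helper'

    class SentencesControllerTest < ActionController::TestCase
      setup :activate_authlogic      # actually invoke the helper before each test
      # ...or equivalently...
      # setup do
      #   activate_authlogic
      # end

      context "logged in" do
        setup do
          @user = Factory(:user)
          UserSession.create(@user)  # Authlogic expects the record, not the id
        end

        context "on GET to :new" do
          setup { get :new }

          should "present form with text field" do
            assert_select('form#new_sentence') do
              assert_select('textarea#sentence_text')
            end
          end
        end
      end
    end
    ```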

    Read the article

  • How can I Fail a WebTest?

    - by craigb
    I'm using Microsoft WebTest and want to be able to do something similar to NUnit's Assert.Fail(). The best I have come up with is to throw a new WebTestException(), but this shows in the test results as an Error rather than a Failure. Other than reflecting on the WebTest to set a private member variable to indicate the failure, is there something I've missed? EDIT: I have also used the Assert.Fail() method, but this still shows up as an error rather than a failure when used from within WebTest, and the Outcome property is read-only (has no public setter). EDIT: well now I'm really stumped. I used reflection to set the Outcome property to Failed but the test still passes! Here's the code that sets the Outcome to failed: public static class WebTestExtensions { public static void Fail(this WebTest test) { var method = test.GetType().GetMethod("set_Outcome", BindingFlags.NonPublic | BindingFlags.Instance); method.Invoke(test, new object[] {Outcome.Fail}); } } and here's the code that I'm trying to fail: public override IEnumerator<WebTestRequest> GetRequestEnumerator() { this.Fail(); yield return new WebTestRequest("http://google.com"); } Outcome is getting set to Outcome.Fail but apparently the WebTest framework doesn't really use this to determine test pass/fail results.
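
    A hedged alternative that avoids reflection entirely: attach a validation rule that always fails to the request; a failed validation is normally reported as a test Failure rather than an Error. This is a sketch against the Microsoft.VisualStudio.TestTools.WebTesting API as I recall it, so treat the details as assumptions:

    ```csharp
    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // A rule whose only purpose is to mark the response invalid.
    public class ForcedFailureRule : ValidationRule
    {
        public string Reason { get; set; }

        public override void Validate(object sender, ValidationEventArgs e)
        {
            e.IsValid = false;
            e.Message = Reason ?? "Forced failure";
        }
    }

    public class MyWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            WebTestRequest request = new WebTestRequest("http://google.com");

            // Equivalent of Assert.Fail(): the rule fails validation on the response.
            ForcedFailureRule fail = new ForcedFailureRule { Reason = "Business rule X not met" };
            request.ValidateResponse += fail.Validate;

            yield return request;
        }
    }
    ```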

    Read the article

  • Rails 3 error: no such file to load -- initializer (LoadError)

    - by Bob
    I'm on Ubuntu and my app is written for Rails 2.3.5 and I got it to run on 2.3.10 but when I upgraded to Rails 3.0.3 and tried to run it using "ruby script/server", it throws this error. /usr/local/lib/site_ruby/1.8/rubygems.rb:230:in `activate': can't activate rails (= 2.3.10, runtime) for [], already activated rails-3.0.3 for [] (Gem::LoadError) from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:35:in `require' from /home/bob/savage/config/boot.rb:55:in `load_initializer' from /home/bob/savage/config/boot.rb:38:in `run' from /home/bob/savage/config/boot.rb:11:in `boot!' from /home/bob/savage/config/boot.rb:110 from script/server:2:in `require' from script/server:2 When I uninstalled Rails 2.3.10, it throws this error instead bob@ubuntu:~/test.2.3.10$ ruby script/server /usr/local/lib/site_ruby/1.8/rubygems.rb:777:in `report_activate_error': RubyGem version error: rails(3.0.3 not = 2.3.10) (Gem::LoadError) from /usr/local/lib/site_ruby/1.8/rubygems.rb:211:in `activate' from /usr/local/lib/site_ruby/1.8/rubygems.rb:1056:in `gem' from /home/bob/test.2.3.10/config/boot.rb:60:in `load_rails_gem' from /home/bob/test.2.3.10/config/boot.rb:54:in `load_initializer' from /home/bob/test.2.3.10/config/boot.rb:38:in `run' from /home/bob/test.2.3.10/config/boot.rb:11:in `boot!' from /home/bob/test.2.3.10/config/boot.rb:114 from script/server:2:in `require' from script/server:2 Ideas? Thanks in advance for your help.
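
    For what it's worth, a Rails 2.3 app cannot simply be pointed at the 3.0.3 gem and started with script/server; the boot.rb/environment.rb pair it ships with only knows how to load Rails 2.x. A hedged sketch of the two usual ways out (commands approximate; review anything they offer to overwrite):

    ```sh
    # Option 1: stay on 2.3.x for now - keep the 2.3.10 gem installed and make sure
    # config/environment.rb still says RAILS_GEM_VERSION = '2.3.10'.
    gem install rails -v 2.3.10
    ruby script/server

    # Option 2: really move to Rails 3 - regenerate the application skeleton so the
    # app gets the new boot.rb, config/application.rb and a Gemfile, then use the
    # new runner (script/server is gone in Rails 3).
    gem install rails -v 3.0.3
    cd /home/bob/savage
    rails new .        # answer the overwrite prompts carefully, then re-apply your changes
    bundle install
    rails server
    ```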

    Read the article

  • Trouble converting an MP3 file to a WAV file using Naudio

    - by WebDevHobo
    Naudio Library: http://naudio.codeplex.com/ I'm trying to convert an MP3 file to a WAV file, but I've run in to a small error. I know what's going wrong, but I don't really know how to go about fixing it. Here's the piece of code I'm running: private void button1_Click(object sender, EventArgs e) { using(Mp3FileReader reader = new Mp3FileReader(@"path\to\MP3")) { using(WaveFileWriter writer = new WaveFileWriter(@"C:\test.wav", new WaveFormat())) { int counter = 0; while(reader.Read(test, counter, test.Length + counter) != 0) { writer.WriteData(test, counter, test.Length + counter); counter += 512; } } } } reader.Read() goes into the Mp3FileReader class, and the method looks like this: public override int Read(byte[] sampleBuffer, int offset, int numBytes) { if (numBytes % waveFormat.BlockAlign != 0) //throw new ApplicationException("Must read complete blocks"); numBytes -= (numBytes % waveFormat.BlockAlign); return mp3Stream.Read(sampleBuffer, offset, numBytes); } mp3Stream is an object of the Stream class. The problem is: I'm getting an ArgumentException. MSDN says that this is because the sum of offset and numBytes is greater than the length of sampleBuffer. Documentation: http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx This happens because I increase the counter every time, but the size of the byte array test remains the same. What I've been wondering is: do I need to increase the size of the array dynamically, or do I need to find out the needed size at the beginning and set it right away? And also, instead of 512, the method in Mp3FileReader returns 365 the first time. Which is the size of a whole block. But I'm writing the full 512. I'm basically just using the read to check if I'm not at the end of the file yet. Do I need to catch the return value and do something with that, or am I good here?
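
    A hedged sketch of how I'd expect the loop to look (class and method names as in the NAudio version from around that time, so verify against your copy): keep the buffer offset at 0, ask for at most buffer.Length bytes, only write the count actually read, and let NAudio convert the MP3 frames to PCM first so the WAV gets a sensible format:

    ```csharp
    using NAudio.Wave;

    private void button1_Click(object sender, EventArgs e)
    {
        using (Mp3FileReader reader = new Mp3FileReader(@"path\to\input.mp3"))
        using (WaveStream pcm = WaveFormatConversionStream.CreatePcmStream(reader))
        using (WaveFileWriter writer = new WaveFileWriter(@"C:\test.wav", pcm.WaveFormat))
        {
            byte[] buffer = new byte[8192];
            int bytesRead;
            // Read is never asked for more than the buffer holds, and only the
            // bytes it actually produced are written, so no offset bookkeeping is needed.
            while ((bytesRead = pcm.Read(buffer, 0, buffer.Length)) > 0)
            {
                writer.WriteData(buffer, 0, bytesRead);
            }
        }
    }
    ```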

    Read the article

  • vbCrLf in Multiline TextBox shows up only when .Trim() is called

    - by Brandon Montgomery
    I have an ASP TextBox with TextMode set to MultiLine. I'm having problems with preserving the vbCrLf characters when a user tries to put line breaks into the text. When a button on the page is pressed, I'm taking the text from the control, trimming it using String.Trim, and assigning that value to a String property on an object (which, in turn assigns it to a private internal String variable on the object). The object then takes the value from the private internal variable and throws it into the database using a stored procedure call (the SP parameter it is put into is an nvarchar(4000)). ASPX Page: <asp:UpdatePanel ID="UpdatePanel2" runat="server" RenderMode="Inline" UpdateMode="Conditional" ChildrenAsTriggers="true"> <ContentTemplate> <!-- some other controls and things --> <asp:TextBox TextMode="MultiLine" runat="server" ID="txtComments" Width="100%" Height="60px" CssClass="TDTextArea" Style="border: 0px;" MaxLength="2000" /> <!-- some other controls and things --> </ContentTemplate> </asp:UpdatePanel> code behind: ProjectRequest.StatusComments = txtComments.Text.Trim object property: Protected mStatusComments As String = String.Empty Property StatusComments() As String Get Return mStatusComments.Trim End Get Set(ByVal Value As String) mStatusComments = Value End Set End Property stored proc call: Common.RunSP(mDBConnStr, "ProjectStatusUpdate", _ Common.MP("@UID", SqlDbType.NVarChar, 40, mUID), _ Common.MP("@ProjID", SqlDbType.VarChar, 40, mID), _ Common.MP("@StatusID", SqlDbType.Int, 8, mStatusID), _ Common.MP("@Comments", SqlDbType.NVarChar, 4000, mStatusComments), _ Common.MP("@PCTComp", SqlDbType.Int, 4, 0), _ Common.MP("@Type", Common.TDSqlDbType.TinyInt, 1, EntryType)) Here's the strangest part. When I debug the code, if I type "test test" (without the quotes) into the comments text box, then click the save button and use the immediate window to view the variable values as I step through, here is what I get: ?txtComments.Text "test test" ?txtComments.Text.Trim "test test" ?txtComments.Text(4) " "c ?txtComments.Text.Trim()(4) " "c Anyone have a clue as to what's going on here?

    Read the article

  • Taking "do the simplest thing that could possible work" too far in TDD: testing for a file-name kno

    - by Support - multilanguage SO
    For TDD you have to: create a test that fails; do the simplest thing that could possibly work to pass the test; add more variants of the test and repeat; refactor when a pattern emerges. With this approach you're supposed to cover all the cases (those that come to my mind, at least), but I wonder if I am being too strict here and whether it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was: @Test void testFormat(){ // empty doesn't do anything nor throw anything processor.validate("empty.txt"); try { processor.validate("invalid.txt"); assert false: "Should have thrown InvalidFormatException"; } catch( InvalidFormatException ife ) { assert "Invalid format".equals( ife.getMessage() ); } } I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is: "Do the simplest thing that could possibly work", so I write: public void validate( String fileName ) throws InvalidFormatException { if(fileName.equals("invalid.txt")) { throw new InvalidFormatException("Invalid format"); } } Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I will eventually have to add another file name and another test, which would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but: Q: am I taking the "Do the simplest thing..." advice too literally?

    Read the article

  • PyQt and unittest - how to handle signals and slots

    - by Einar
    Hello, a small application I'm developing uses a module I have written to check certain web services via a REST API. I've been trying to add unit tests to it so I don't break stuff, and I stumbled upon a problem. I use a lot of signal-slot connections to perform operations asynchronously. For example, a typical test would be (pseudo-Python), with postDataReady as a signal: def testConnection(self): "Test connection and posts retrieved" def length_test(): self.assertEqual(len(self.client.post_data), 5) self.client.postDataReady.connect(length_test) self.client.get_post_list(limit=5) Now, unittest will report this test as "ok" when running, regardless of the result (as another slot is being called), even if asserts fail (I will get an unhandled AssertionError). Example when deliberately making the test fail: Test connection and posts retrieved ... ok [... more tests...] OK Traceback (most recent call last): [...] AssertionError: 4 != 5 The slot inside the test is merely an experiment: I get the same results if it's outside (an instance method). I also have to add that the various methods I'm calling all make HTTP requests, which means they take a bit of time (I need to mock the request - in the meantime I'm using SimpleHTTPServer to fake the connections and give them proper data). Is there a way around this problem?
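
    One hedged way around it: don't assert inside the slot at all; instead spin a local event loop until the signal arrives (or a timeout expires), then assert afterwards in the test method, so unittest sees the failure on its own call stack. A sketch only, assuming PyQt4 and that self.client is prepared in setUp:

    ```python
    import unittest
    from PyQt4.QtCore import QCoreApplication, QEventLoop, QTimer

    class ClientTest(unittest.TestCase):
        def test_connection(self):
            """Test connection and posts retrieved."""
            app = QCoreApplication.instance() or QCoreApplication([])

            loop = QEventLoop()
            self.client.postDataReady.connect(loop.quit)  # wake up when data arrives
            QTimer.singleShot(5000, loop.quit)            # safety net: give up after 5 s

            self.client.get_post_list(limit=5)
            loop.exec_()                                  # block here until signal or timeout

            # Back on the unittest call stack, so a failure is reported normally.
            self.assertEqual(len(self.client.post_data), 5)
    ```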

    Read the article

  • EXC_BAD_ACCESS NSUrlConnection

    - by Lars
    Hi all, i got an EXC_BAD_ACCESS when i perform the last line of the function (webData). -(void)requestSoap{ NSString *requestUrl = @"http://www.website.com/webservice.php"; NSString *soapMessage = @"the soap message"; //website and soapmessage are valid in original code. NSError **error; NSURLResponse *response; //Convert parameter string to url NSURL *url = [NSURL URLWithString:requestUrl]; NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10]; NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]]; //Create an XML message for webservice [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"]; [theRequest setHTTPMethod:@"POST"]; [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]]; NSData *webData = [NSURLConnection sendSynchronousRequest:theRequest returningResponse:&response error:error]; } I tried not to release a thing, because what i read on the net is it's almost always a memory thing. When i debug the code (NSZombieEnabled = YES) this is what i get: [Session started at 2010-05-31 15:56:13 +0200.] GNU gdb 6.3.50-20050815 (Apple version gdb-1461.2) (Fri Mar 5 04:43:10 UTC 2010) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-apple-darwin".sharedlibrary apply-load-rules all Attaching to process 19856. test(19856) malloc: recording malloc stacks to disk using standard recorder test(19856) malloc: enabling scribbling to detect mods to free blocks test(19856) malloc: process 19832 no longer exists, stack logs deleted from /tmp/stack-logs.19832.test.w9Ek4L.index test(19856) malloc: stack logs being written into /tmp/stack-logs.19856.test.URRpQF.index Program received signal: “EXC_BAD_ACCESS”. Does anybody have a clue?? Thanks a lot! Lars
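
    A hedged guess at the cause: NSError **error is declared but never points at anything, and sendSynchronousRequest:returningResponse:error: writes through it on failure, which is a classic EXC_BAD_ACCESS. Declaring an NSError * and passing its address is the usual shape; only the last few lines need to change:

    ```objc
    NSError *error = nil;                 // a real NSError* on the stack...
    NSURLResponse *response = nil;

    NSData *webData = [NSURLConnection sendSynchronousRequest:theRequest
                                            returningResponse:&response
                                                        error:&error];  // ...whose address we pass

    if (webData == nil) {
        NSLog(@"Request failed: %@", [error localizedDescription]);
    }
    ```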

    Read the article

  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others): For a team's own internal use when combined with other measurements. For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if team A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40). For senior management when presented as an aggregated statistic across a number of teams or a whole department. But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artifical attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations? EDIT We are definitely wanting to use metrics as a tool for organisational improvement not as a tool for individual performance measurement.

    Read the article

  • Group testNG tests without annotations

    - by diy
    Hi gents and ladies, I'm responsible for enabling unit tests for one of our ETL components. I want to accomplish this using TestNG with a generic Java test class and a number of test definitions in testng.xml passing various parameters to the class. The Oracle and ETL guys should be able to add new tests without changing the Java code, so we need to use an XML suite file instead of annotations. Question: is there a way to group tests in testng.xml (similarly to how it is done with annotations)? I mean something like <group name="first_group"> <test> <class ...> <parameter ...> </test> </group> <group name="second_group"> <test> <class ...> <parameter ...> </test> </group> I've checked the testng.dtd and figured out that such syntax is not allowed. But is there a workaround to allow grouping? Thanks in advance
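
    A hedged sketch of how I'd lean on testng.xml's existing structure instead of a <group> wrapper: each <test> element already names a grouping of classes and parameters, and whole files can be composed with <suite-files>, so the Oracle/ETL folks only ever touch XML. Class and parameter names below are placeholders:

    ```xml
    <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
    <suite name="etl-regression">

      <!-- behaves like "first_group": the same generic class, its own parameters -->
      <test name="first_group">
        <parameter name="sourceTable" value="STAGING_ORDERS"/>
        <classes>
          <class name="com.example.etl.GenericEtlTest"/>
        </classes>
      </test>

      <!-- behaves like "second_group" -->
      <test name="second_group">
        <parameter name="sourceTable" value="STAGING_CUSTOMERS"/>
        <classes>
          <class name="com.example.etl.GenericEtlTest"/>
        </classes>
      </test>

    </suite>
    ```

    If memory serves, a single grouping can then be run with -testnames first_group on the command line, or the groups can live in separate XML files pulled in through <suite-files>.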

    Read the article

  • C++ Swig Python (Embedded Python in C++) works in Release but not in Debug

    - by sambha
    Platform: Windows 7, 64 bit (x64), Visual Studio 2008 I chose Python & Swig binding as the scripting environment of the application. As a prototype, created a simple VS solution with main() which initializes Python (Py_Initalize, Py_setPyHome, etc) & executes test.py. In the same solution created another project which is a DLL of a simple class. Used SWIG to wrap this class. This DLL is the _MyClasses.pyd. test.py creates the objects of my class & calls its member functions. All this works like a charm in the Release mode. But does not work in Debug mode (even tried banging my head on the laptop ;-) ). Output of my work looks like this (in both release & debug): x64 -debug - _MyClasses.pyd - MyClasses.py - test.exe - test.py - python26.dll - python26_d.dll Note that the debug version is linked against python26_d.lib. Had to build python myself for this! test.py import MyClasses print "ello" m = MyClasses.Male("John Doe", 25) print m.getType() Male is the C++ class. The problem: Traceback (most recent call last): File "test.py", line 6, in <module> import MyClasses File "...\x64\Debug\MyClasses.py", line 25, in <module> _MyClasses = swig_import_helper() File "...\x64\Debug\MyClasses.py", line 17, in swig_imp ort_helper import _MyClasses ImportError: No module named _MyClasses [15454 refs] I am used to Makefiles & am new to Visual Studio. I dont know who the culprit is here: Swig, The debug build of Python, Visual Studio, my stupidity. Thank you in advance. It will be a great help.

    Read the article

  • vlad the deployer: why do I need a scm folder?

    - by egarcia
    I'm learning to use vlad the deployer and I've got a question. Since I'm still learning I don't know what is pertinent to the question and what isn't, so please bear with me if I'm a little verbose. I've got 2 environments for a new application (test and production) besides my development machine. I've figured out this way to do the initial setup in my vlad.rake: namespace :test do task :set do set :domain, 'test.myserver.com' end end namespace :production do task :set do set :domain, 'www.myserver.com' end end This way I can have environment-specific stuff inside the namespaces, and still have shared tasks. For example, this would be the initial setup for test: rake vlad:test:set vlad:setup vlad:update This creates the following folders on my test server: releases/ scm/ shared/ current -> symlink to last release (inside the releases folder) My question is: what's the point of the scm folder? Every time I do vlad:update, the following happens: an svn checkout into the scm/ folder above, an svn export into the /releases/{date} folder, and an update of the current symlink. So scm is a copy of my repository... but then there's an "export" copy of the repository in /releases/{date}, and that is the one used by the application... so scm doesn't seem to be used by anything? Wouldn't I be just fine without the scm folder?

    Read the article

  • Debugging MinGW program with gdb on Windows, not terminating at assert failure

    - by devil
    How do I set up gdb on window so that it does not allow a program with assertion failure to terminate? I intend to check the stack trace and variables in the program. For example, running this test.cpp program compiled with MinGW 'g++ -g test.cpp -o test' in gdb: #include <cassert> int main(int argc, char ** argv) { assert(1==2); return 0; } Gives: $ gdb test.exe GNU gdb 6.8 Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i686-pc-mingw32"... (gdb) r Starting program: f:\code/test.exe [New thread 4616.0x1200] Error: dll starting at 0x77030000 not found. Error: dll starting at 0x75f80000 not found. Error: dll starting at 0x77030000 not found. Error: dll starting at 0x76f30000 not found. Assertion failed: 1==2, file test.cpp, line 2 This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. Program exited with code 03. (gdb) I would like to be able to stop the program from terminating immediately, like how Visual Studio's debugger and gdb on Linux does it. I have done a search and found some stuff on trapping signals but I can't seem to find a good post on how to set up gdb to do this.
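
    A hedged suggestion: with MinGW the assert machinery ends up calling abort(), so putting a breakpoint there before running usually stops the process while the frames that led to the assertion are still on the stack. Roughly:

    ```
    $ gdb test.exe
    (gdb) break abort          # or: break _assert   (the MinGW runtime helper, if the symbol is available)
    (gdb) run
    # ...the program hits the breakpoint instead of exiting...
    (gdb) backtrace            # the frames that led to assert(1==2) are still there
    (gdb) frame 2              # move to your own code
    (gdb) info locals
    ```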

    Read the article

  • How do I get syncdb db_table and app_label to play nicely together

    - by Chris Heisel
    I've got a model that looks something like this: class HeiselFoo(models.Model): title = models.CharField(max_length=250) class Meta: """ Meta """ app_label = "Foos" db_table = u"medley_heiselfoo_heiselfoo" And whenever I run my test suite, I get an error because Django isn't creating the tables for that model. It appears to be an interaction between app_label and db_table -- as the test suite runs normally if db_table is set, but app_label isn't. Here's a link to the full source code: http://github.com/cmheisel/heiselfoo Here's the traceback from the test suite: E ====================================================================== ERROR: test_truth (heiselfoo.tests.HeiselFooTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/heiselfoo/tests.py", line 10, in test_truth f.save() File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/base.py", line 434, in save self.save_base(using=using, force_insert=force_insert, force_update=force_update) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/base.py", line 527, in save_base result = manager._insert(values, return_id=update_pk, using=using) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/manager.py", line 195, in _insert return insert_query(self.model, values, **kwargs) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/query.py", line 1479, in insert_query return query.get_compiler(using=using).execute_sql(return_id) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 783, in execute_sql cursor = super(SQLInsertCompiler, self).execute_sql(None) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 727, in execute_sql cursor.execute(sql, params) File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 200, in execute return Database.Cursor.execute(self, query, params) DatabaseError: no such table: medley_heiselfoo_heiselfoo ---------------------------------------------------------------------- Ran 1 test in 0.004s FAILED (errors=1) Creating test database 'default'... No fixtures found. medley_heiselfoo_heiselfoo Destroying test database 'default'...
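
    A hedged reading of what's happening: syncdb (and therefore the test runner) only creates tables for models whose app_label matches an application actually listed in INSTALLED_APPS, and "Foos" presumably isn't one, so the model silently belongs to nothing; db_table on its own is fine. A sketch, with the label guessed from the package name:

    ```python
    from django.db import models

    class HeiselFoo(models.Model):
        title = models.CharField(max_length=250)

        class Meta:
            # Must be the (last component of the) name of an app in INSTALLED_APPS,
            # e.g. "heiselfoo" - an arbitrary display label like "Foos" orphans the model.
            app_label = "heiselfoo"
            # The custom table name is still honoured once the model belongs to an app.
            db_table = u"medley_heiselfoo_heiselfoo"
    ```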

    Read the article

  • Encode and Decode using UTF-8 in iphone

    - by Ekra
    Hi friends, I wanted an example wherein I can encode and then decode the same string using UTF-8. Encode and then decode means I want to implement the method in two areas, where one can encode the string and the other is able to decode it. I have seen the API but I didn't have much success: stringWithCString:encoding:, stringWithUTF8String:, stringWithCString:(const char *)cString encoding:(NSStringEncoding)enc; =================EDITED================= I have a string "øæ-test-2.txt". When I am encoding it with char *s = "øæ-test-2.txt"; NSString *enc = [NSString stringWithCString:s encoding:NSASCIIStringEncoding]; I am getting "øæ-test-2.txt" as output. Now I want to get the original string back, i.e. "øæ-test-2.txt". +++++++++EDITED+++++++++++++++++++ I am getting "øæ-test-2.txt" from the server and I need "øæ-test-2.txt" after decoding it. I am able to get the right output from the link below: http://www.cafewebmaster.com/online_tools/utf_decode Please try the link and you will understand my concern. I need the solution urgently. It would be highly appreciated if anyone can give some hint or tutorial in the right direction. Regards
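
    A hedged reading: judging by the decode-tool link, the server is handing back UTF-8 bytes that have been interpreted as Latin-1, so "øæ-test-2.txt" would arrive looking like "Ã¸Ã¦-test-2.txt" (the two forms appear to have been flattened together in this listing). If that is the situation, the round trip is: re-encode the garbled string with the wrong charset to recover the raw bytes, then decode those bytes as UTF-8. If the charset in play isn't Latin-1, substitute the server's actual one:

    ```objc
    NSString *garbled = serverString;        // e.g. @"Ã¸Ã¦-test-2.txt" (hypothetical)

    // 1. Get the underlying bytes back by encoding with the *wrong* charset (Latin-1)...
    NSData *rawBytes = [garbled dataUsingEncoding:NSISOLatin1StringEncoding];

    // 2. ...then reinterpret those bytes as UTF-8.
    NSString *decoded = [[[NSString alloc] initWithData:rawBytes
                                               encoding:NSUTF8StringEncoding] autorelease];
    NSLog(@"%@", decoded);                   // expected: øæ-test-2.txt

    // Producing the double-encoded form again is just the mirror image:
    NSData *utf8Bytes = [decoded dataUsingEncoding:NSUTF8StringEncoding];
    NSString *reGarbled = [[[NSString alloc] initWithData:utf8Bytes
                                                 encoding:NSISOLatin1StringEncoding] autorelease];
    ```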

    Read the article

  • Basic jUnit Questions

    - by Epitaph
    I was testing a String multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String): public String multiply(String num1, String num2); I have done the implementation and created a test class with the following test cases involving the input String parameter as 1) valid numbers 2) characters 3) special symbol 4) empty string 5) Null value 6) 0 7) Negative number 8) float 9) Boundary values 10) Numbers that are valid but their product is out of range 11) numbers with a + sign (+23) 1) I'd like to know if "each and every" assertEquals() should be in its own test method? Or, can I group similar test cases like testInvalidArguments() to contain all asserts involving invalid characters, since ALL of them throw the same NumberFormatException? 2) If testing an input value like a character ("a"), do I need to include test cases for ALL scenarios? "a" as the first argument, "a" as the second argument, "a" and "b" as the 2 arguments 3) As per my understanding, the benefit of these unit tests is to find out the cases where the input from a user might fail and result in an exception. And then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit? 4) Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo? When is enough? 5) Following from the above point, have I successfully tested the multiply() method?
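
    On point 1, a hedged illustration of the usual compromise: one behaviour per test method, but similar invalid inputs can share a method (or a loop) because they all assert the same thing. StringMultiplier and the expected values below are placeholders standing in for the asker's class, and the null case assumes the spec treats null as a format error:

    ```java
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;
    import org.junit.Test;

    public class StringMultiplierTest {

        private final StringMultiplier multiplier = new StringMultiplier(); // hypothetical class under test

        @Test
        public void multipliesValidNumbers() {
            assertEquals("6", multiplier.multiply("2", "3"));
            assertEquals("-6", multiplier.multiply("-2", "3"));
        }

        // Several invalid inputs, one behaviour: they must all throw NumberFormatException.
        @Test
        public void rejectsNonNumericInput() {
            String[][] badPairs = { {"a", "3"}, {"2", "b"}, {"a", "b"}, {"", "3"}, {"#", "3"} };
            for (String[] pair : badPairs) {
                try {
                    multiplier.multiply(pair[0], pair[1]);
                    fail("Expected NumberFormatException for " + pair[0] + " x " + pair[1]);
                } catch (NumberFormatException expected) {
                    // this is the behaviour we want
                }
            }
        }

        @Test(expected = NumberFormatException.class)
        public void rejectsNull() {
            multiplier.multiply(null, "3");
        }
    }
    ```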

    Read the article

  • Exporting static data in a DLL

    - by Gayan
    I have a DLL which contains a class with static members. I use __declspec(dllexport) in order to make use of this class's methods. But when I link it to another project and try to compile it, I get "unresolved external symbol" errors for the static data. e.g. In DLL, Test.h class __declspec(dllexport) Test{ protected: static int d; public: static void m(){} } In DLL, Test.cpp #include "Test.h" int Test::d; In the application which uses Test, I call m(). I also tried using __declspec(dllexport) for each method separately but I still get the same link errors for the static members. If I check the DLL (the .lib) using dumpbin, I could see that the symbols have been exported. For instance, the app gives the following error at link time: 1>Main.obj : error LNK2001: unresolved external symbol "protected: static int CalcEngine::i_MatrixRow" (?i_MatrixRow@CalcEngine@@1HA) But the dumpbin of the .lib contains: Version : 0 Machine : 14C (x86) TimeDateStamp: 4BA3611A Fri Mar 19 17:03:46 2010 SizeOfData : 0000002C DLL name : CalcEngine.dll Symbol name : ?i_MatrixRow@CalcEngine@@1HA (protected: static int CalcEngine::i_MatrixRow) Type : data Name type : name Hint : 31 Name : ?i_MatrixRow@CalcEngine@@1HA I can't figure out how to solve this. What am I doing wrong? How can I get over these errors? P.S. The code was originally developed for Linux and the .so/binary combination works without a problem
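
    A hedged sketch of the usual export/import pattern, in case it's the missing piece: the same header should expand to dllexport while the DLL is being built but to dllimport in every consumer. With dllexport in the consumer, references to the static data members are emitted as plain symbols the consumer is expected to define itself, while the import library only provides the __imp_ versions, which would explain why the methods link but the statics do not:

    ```cpp
    // CalcEngine.h
    #ifdef CALCENGINE_EXPORTS                 // define this only in the DLL project's settings
    #  define CALC_API __declspec(dllexport)
    #else
    #  define CALC_API __declspec(dllimport)  // what the consuming application should see
    #endif

    class CALC_API CalcEngine {
    protected:
        static int i_MatrixRow;               // declaration only
    public:
        static void m();
    };

    // CalcEngine.cpp (compiled into the DLL)
    // #include "CalcEngine.h"
    // int CalcEngine::i_MatrixRow = 0;       // the one and only definition
    ```

    The application still has to link against the CalcEngine.lib import library produced alongside the DLL, of course.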

    Read the article

  • Ruby send mail with smtp

    - by songdogtech
    I'm trying to send simple email via Ruby (no rails) on OS X, with XCode (which installs Ruby.) But I'm running into a problem with my smtp server which requires the email client to check mail before sending as a form of authentication. So with the script below I get an error: 500 Unrecognized command (Net::SMTPAuthenticationError). How can I get Ruby to authenticate with the smtp server in a "POP" fashion before I can send mail? Not download mail; I only want to send, but I have to check mail before I send. POP3 is not available at the smtp server. And I want to not have to install any other Ruby pieces and stay with using net/smtp, if at all possible. require 'net/smtp' message = <<MESSAGE_END From: A Test Sender <[email protected]> To: A Test User <[email protected]> Subject: e-mail test This is a test e-mail message. MESSAGE_END Net::SMTP.start('mail.domain.com', 25, 'localhost', '[email protected]', 'password', :plain)
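
    A hedged sketch of "POP before SMTP" with nothing beyond the standard library: Net::POP3.auth_only exists for exactly this dance, followed by an unauthenticated SMTP session (the 500 suggests the server doesn't speak SMTP AUTH at all). The POP host and port are guesses and may not be the same machine as mail.domain.com, which could be the real snag:

    ```ruby
    require 'net/pop'
    require 'net/smtp'

    message = <<MESSAGE_END
    From: A Test Sender <[email protected]>
    To: A Test User <[email protected]>
    Subject: e-mail test

    This is a test e-mail message.
    MESSAGE_END

    # "Check mail" first; auth_only logs in and disconnects without fetching anything.
    Net::POP3.auth_only('mail.domain.com', 110, '[email protected]', 'password')

    # Then send without SMTP-level authentication.
    Net::SMTP.start('mail.domain.com', 25, 'localhost') do |smtp|
      smtp.send_message(message, '[email protected]', '[email protected]')
    end
    ```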

    Read the article

  • Google App Engine: Unit testing concurrent access to memcache

    - by Phuong Nguyen de ManCity fan
    Would you guys show me a way to simulating concurrent access to memcache on Google App Engine? I'm trying with LocalServiceTestHelpers and threads but don't have any luck. Every time I try to access Memcache within a thread, then I get this error: ApiProxy$CallNotFoundException: The API package 'memcache' or call 'Increment()' was not found I guess that the testing library of GAE SDK tried to mimic the real environment and thus setup the environment for only one thread (the thread that running the test) which cannot be seen by other thread. Here is a piece of code that can reproduce the problem package org.seamoo.cache.memcacheImpl; import org.testng.Assert; import org.testng.annotations.AfterMethod; import org.testng.annotations.BeforeMethod; import org.testng.annotations.Test; import com.google.appengine.api.memcache.MemcacheService; import com.google.appengine.api.memcache.MemcacheServiceFactory; import com.google.appengine.tools.development.testing.LocalMemcacheServiceTestConfig; import com.google.appengine.tools.development.testing.LocalServiceTestHelper; public class MemcacheTest { LocalServiceTestHelper helper; public MemcacheTest() { LocalMemcacheServiceTestConfig memcacheConfig = new LocalMemcacheServiceTestConfig(); helper = new LocalServiceTestHelper(memcacheConfig); } /** * */ @BeforeMethod public void setUp() { helper.setUp(); } /** * @see LocalServiceTest#tearDown() */ @AfterMethod public void tearDown() { helper.tearDown(); } @Test public void memcacheConcurrentAccess() throws InterruptedException { final MemcacheService service = MemcacheServiceFactory.getMemcacheService(); Runnable runner = new Runnable() { @Override public void run() { // TODO Auto-generated method stub service.increment("test-key", 1L, 1L); try { Thread.sleep(200L); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } service.increment("test-key", 1L, 1L); } }; Thread t1 = new Thread(runner); Thread t2 = new Thread(runner); t1.start(); t2.start(); while (t1.isAlive()) { Thread.sleep(100L); } Assert.assertEquals((Long) (service.get("test-key")), new Long(4L)); } }
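
    A hedged guess at the cause plus a sketch: LocalServiceTestHelper installs the stub API environment only on the thread that called setUp(), so worker threads see no 'memcache' package. Copying the environment into each thread before touching the service is one way through (method names from memory, so double-check them):

    ```java
    import com.google.apphosting.api.ApiProxy;
    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    // Inside the test method, after helper.setUp() has run on this thread:
    final ApiProxy.Environment env = ApiProxy.getCurrentEnvironment();

    Runnable runner = new Runnable() {
        @Override
        public void run() {
            // Make the stub environment visible to this worker thread too.
            ApiProxy.setEnvironmentForCurrentThread(env);
            MemcacheService service = MemcacheServiceFactory.getMemcacheService();
            service.increment("test-key", 1L, 1L);
        }
    };
    ```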

    Read the article

  • Vim with Powershell

    - by Kevin Berridge
    I'm using gvim on Windows. In my _vimrc I've added: set shell=powershell.exe set shellcmdflag=-c set shellpipe=> set shellredir=> function! Test() echo system("dir -name") endfunction command! -nargs=0 Test :call Test() If I execute this function (:Test) I see nonsense characters (non number/letter ASCII characters). If I use cmd as the shell, it works (without the -name), so the problem seems to be with getting output from powershell into vim. Interestingly, this works great: :!dir -name As does this: :r !dir -name UPDATE: confirming behavior mentioned by David If you execute the set commands mentioned above in the _vimrc, :Test outputs nonsense. However, if you execute them directly in vim instead of in the _vimrc, :Test works as expected. Also, I've tried using iconv in case it was an encoding problem: :echo iconv( system("dir -name"), "unicode", &enc ) But this didn't make any difference. I could be using the wrong encoding types though. Anyone know how to make this work?

    Read the article

  • Problem calling Java from PHP script

    - by Jack
    I am working on Windows. I am running PHP (5.1.3) scripts on Tomcat using the PHP/Java Bridge. Here is my simple code: //test.php <?php require_once("java\Java.inc"); $systemInfo = new Java("Test"); print $systemInfo->foo(); ?> Test.class is in the same folder as test.php, but the PHP file is not able to locate the Test class and I get the following error - Fatal error: Uncaught [[o:Exception]:"java.lang.Exception: CreateInstance failed: new Test. If I use a standard class like the one below, it works - <?php require_once("java\Java.inc"); $systemInfo = new Java("java.lang.System"); print "Total seconds since January 1, 1970: ".$systemInfo->currentTimeMillis(); ?> What should I do? 1) Should I copy my class to the standard location where all Java classes are kept? (What is this location?) 2) Make some changes in the php.ini file?

    Read the article

  • Tell LINQ Distinct which item to return

    - by Jon
    I understand how to do a Distinct() on an IEnumerable and that I have to create an IEqualityComparer for more advanced stuff; however, is there a way to tell it which of the duplicated items to return? For example, say you have a List<MyClass>: List<MyClass> test = new List<MyClass>(); test.Add(new MyClass {ID = 1, InnerID = 4}); test.Add(new MyClass {ID = 2, InnerID = 4}); test.Add(new MyClass {ID = 3, InnerID = 14}); test.Add(new MyClass {ID = 4, InnerID = 14}); You then do: var distinctItems = test.Distinct(new DistinctItemComparer()); class DistinctItemComparer : IEqualityComparer<MyClass> { public bool Equals(MyClass x, MyClass y) { return x.InnerID == y.InnerID; } public int GetHashCode(MyClass obj) { return obj.InnerID.GetHashCode(); } } This code will return the classes with ID 1 and 3. Is there a way to return the items with IDs 2 and 4 instead?
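
    For what it's worth, Distinct always keeps the first element it encounters for each key, so a hedged alternative is to drop the comparer and group instead, then pick whichever representative you want from each group - for example the last by ID, using the sample list from the question:

    ```csharp
    // Keeps the element with the highest ID from each InnerID group,
    // which for the sample data yields the items with ID 2 and 4.
    var lastPerInnerId = test
        .GroupBy(x => x.InnerID)
        .Select(g => g.OrderByDescending(x => x.ID).First())
        .ToList();
    ```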

    Read the article
