Search Results

Search found 23949 results on 958 pages for 'test test'.


  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge sort algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see everything is OK and I should get a nice N*logN curve as a result. But the picture differs a bit. Things I've tried to investigate the issue:

    - PyPy. The curve is OK.
    - Disabled the GC using the gc module. Wrong guess. Debug output showed that it doesn't even run until the end of the test.
    - Memory profiling using meliae - nothing special or suspicious.

    I had another implementation (a recursive one using the same merge function), and it acts in a similar way. The more full test cycles I run, the more "jumps" there are in the curve. So how can this behaviour be explained and - hopefully - fixed?

    UPD: changed lists to collections.deque
    UPD2: added the full test code
    UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2 GHz notebook. I tried to turn off most of the other processes: the number of spikes went down, but at least one of them was still there.


  • strict aliasing and alignment

    - by cooky451
    I need a safe way to alias between arbitrary POD types, conforming to ISO C++11, explicitly considering 3.10/10 and 3.11 of n3242 or later. There are a lot of questions about strict aliasing here, most of them regarding C and not C++. I found a "solution" for C which uses unions, probably relying on this wording:

        a union type that includes one of the aforementioned types among its elements or non-static data members

    From that I built this:

        #include <iostream>

        template <typename T, typename U>
        T& access_as(U* p)
        {
            union dummy_union
            {
                U dummy;
                T destination;
            };

            dummy_union* u = (dummy_union*)p;
            return u->destination;
        }

        struct test
        {
            short s;
            int i;
        };

        int main()
        {
            int buf[2];

            static_assert(sizeof(buf) >= sizeof(double), "");
            static_assert(sizeof(buf) >= sizeof(test), "");

            access_as<double>(buf) = 42.1337;
            std::cout << access_as<double>(buf) << '\n';

            access_as<test>(buf).s = 42;
            access_as<test>(buf).i = 1234;

            std::cout << access_as<test>(buf).s << '\n';
            std::cout << access_as<test>(buf).i << '\n';
        }

    My question is, just to be sure: is this program legal according to the standard? It doesn't give any warnings whatsoever and works fine when compiling with MinGW/GCC 4.6.2 using:

        g++ -std=c++0x -Wall -Wextra -O3 -fstrict-aliasing -o alias.exe alias.cpp

    Edit: And if not, how could one modify this to be legal?
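    For comparison, a minimal sketch of the memcpy-based alternative that is usually cited as well-defined for trivially copyable types. The function names are illustrative, not from the question, and the static_assert assumes a standard library that provides std::is_trivially_copyable:

        #include <cstring>
        #include <type_traits>

        // Copy the object representation instead of aliasing the storage.
        // Well-defined as long as T is trivially copyable and at least
        // sizeof(T) valid bytes exist at p.
        template <typename T, typename U>
        T read_as(const U* p)
        {
            static_assert(std::is_trivially_copyable<T>::value, "");
            T result;
            std::memcpy(&result, p, sizeof(T)); // compilers usually fold this into a plain load
            return result;
        }

        template <typename T, typename U>
        void write_as(U* p, const T& value)
        {
            static_assert(std::is_trivially_copyable<T>::value, "");
            std::memcpy(p, &value, sizeof(T));
        }

    Usage would look like write_as(buf, 42.1337) followed by read_as<double>(buf), at the cost of a copy instead of a reference into the buffer.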


  • explicit template instantiations

    - by user323422
    Please see the following code and clear up two doubts:

    1. Since ABC is a template, why is there no error when we put the definitions of ABC's member functions in test.cpp?
    2. If I put the test.cpp code into test.h and remove the line marked 2, it works fine.

        // test.h
        template <typename T>
        class ABC
        {
        public:
            void foo( T& );
            void bar( T& );
        };

        // test.cpp
        template <typename T> void ABC<T>::foo( T& ) {} // definition
        template <typename T> void ABC<T>::bar( T& ) {} // definition

        template void ABC<char>::foo( char & ); // 1
        template class ABC<char>;               // 2

        // main.cpp
        #include "test.h"

        int main()
        {
            ABC<char> a;
            a.foo(); // valid with 1 or 2
            a.bar(); // link error if only 1, valid with 2
        }
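    As a point of comparison, a minimal sketch of the more common arrangement, where the member definitions live in the header so every translation unit can implicitly instantiate what it uses and no explicit instantiations are needed (illustrative only, not from the question):

        // test.h
        template <typename T>
        class ABC
        {
        public:
            void foo( T& ) {} // definitions visible to every includer,
            void bar( T& ) {} // so implicit instantiation works in main.cpp
        };

        // main.cpp
        #include "test.h"

        int main()
        {
            char c = 0;
            ABC<char> a;
            a.foo(c);
            a.bar(c);
        }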


  • Binder and variadic template ends up in a segmentation fault

    - by phlipsy
    I wrote the following program

        #include <iostream>

        template<typename C, typename Res, typename... Args>
        class bind_class_t
        {
        private:
            Res (C::*f)(Args...);
            C *c;

        public:
            bind_class_t(Res (C::*f)(Args...), C* c) : f(f), c(c) { }

            Res operator() (Args... args)
            {
                return (c->*f)(args...);
            }
        };

        template<typename C, typename Res, typename... Args>
        bind_class_t<C, Res, Args...> bind_class(Res (C::*f)(Args...), C* c)
        {
            return bind_class<C, Res, Args...>(f, c);
        }

        class test
        {
        public:
            int add(int x, int y) { return x + y; }
        };

        int main()
        {
            test t;
            // bind_class_t<test, int, int, int> b(&test::add, &t);
            bind_class_t<test, int, int, int> b = bind_class(&test::add, &t);
            std::cout << b(1, 2) << std::endl;

            return 0;
        }

    compiled it with gcc 4.3.3 and got a segmentation fault. After spending some time with gdb and this program, it seems to me that the addresses of the function and the class are mixed up, and a call to the data address of the class isn't allowed. Moreover, if I use the commented line instead, everything works fine. Can anyone else reproduce this behavior and/or explain to me what's going wrong here?
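    For comparison, a minimal sketch of a factory whose return statement constructs the binder type directly rather than naming the factory template again (the helper name is illustrative, not from the question):

        // Illustrative helper: the return statement names bind_class_t, the
        // class template, instead of referring back to the factory function
        // template itself.
        template<typename C, typename Res, typename... Args>
        bind_class_t<C, Res, Args...> make_bind_class(Res (C::*f)(Args...), C* c)
        {
            return bind_class_t<C, Res, Args...>(f, c);
        }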


  • passing values between forms (winforms)

    - by dnkira
    Hello. I see weird behaviour when passing values to and from a second form.

        ParameterForm pf = new ParameterForm(testString);

    works, but

        ParameterForm pf = new ParameterForm();
        pf.testString = "test";

    doesn't (testString is defined as a public string). Maybe I'm missing something? Anyway, I'd like to make the second variant work properly; as for now it returns a null object reference error. Thanks for the help. Posting more code here:

    Calling:

        Button ParametersButton = new Button();
        ParametersButton.Click += delegate
        {
            ParameterForm pf = new ParameterForm(doc.GetElementById(ParametersButton.Tag.ToString()));
            pf.ShowDialog(this);
            pf.test = "test";
            pf.Submit += new ParameterForm.ParameterSubmitResult(pf_Submit);
        };

    Definition and use:

        public partial class ParameterForm : Form
        {
            public string test;
            public XmlElement node;

            public delegate void ParameterSubmitResult(object sender, XmlElement e);
            public event ParameterSubmitResult Submit;

            public void SubmitButton_Click(object sender, EventArgs e)
            {
                Submit(this, this.node);
                Debug.WriteLine(test);
            }
        }

    Result:

        Submit - null object reference
        test - null object reference


  • TimeoutException when WCF Host and Client are in the same process

    - by Pharao2k
    I've run into a really weird problem. I am building a heavily distributed application where each app instance can be a Host and/or a Client to a WCF service (very p2p-like). Everything works fine as long as the Client and the targeted Host (by which I mean the app, not the machine, since currently everything runs on a single computer, so no firewall problems etc.) are NOT the same. If they are the same, the app hangs for exactly one minute and then throws a TimeoutException. WCF logging did not produce anything helpful. Here is a small app which demonstrates the problem:

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
            }

            private void button1_Click(object sender, RoutedEventArgs e)
            {
                var binding = new NetTcpBinding();
                var baseAddress = new Uri(@"net.tcp://localhost:4000/Test");

                ServiceHost host = new ServiceHost(typeof(TestService), baseAddress);
                host.AddServiceEndpoint(typeof(ITestService), binding, baseAddress);

                var debug = host.Description.Behaviors.Find<ServiceDebugBehavior>();
                if (debug == null)
                    host.Description.Behaviors.Add(new ServiceDebugBehavior { IncludeExceptionDetailInFaults = true });
                else
                    debug.IncludeExceptionDetailInFaults = true;

                host.Open();

                var clientBinding = new NetTcpBinding();
                var testProxy = new TestProxy(clientBinding, new EndpointAddress(baseAddress));
                testProxy.Test();
            }
        }

        [ServiceContract]
        public interface ITestService
        {
            [OperationContract]
            void Test();
        }

        public class TestService : ITestService
        {
            public void Test()
            {
                MessageBox.Show("foo");
            }
        }

        public class TestProxy : ClientBase<ITestService>, ITestService
        {
            public TestProxy(NetTcpBinding binding, EndpointAddress remoteAddress)
                : base(binding, remoteAddress)
            {
            }

            public void Test()
            {
                Channel.Test();
            }
        }

    What am I doing wrong?

    Regards, Pharao2k


  • Strings exported from a module have changed line breaks

    - by Jesse Millikan
    In a DrScheme project, I'm using a MrEd editor-canvas% with text% and inserting a string from a literal in a Scheme file. This results in an extra blank line in the editor for each line of text I'm trying to insert. I've tracked this down to the apparent fact that string literals from outside modules are getting extra line breaks. Here's a full example. The editor is irrelevant at this point, but it displays the result.

        ; test-literals.ss
        (module test-literals scheme
          (provide (all-defined-out))
          (define exported-string
            "From another module with some more line breaks. "))

        ; editor-test.ss
        (module editor-test scheme
          (require mred "test-literals.ss")
          (define w (instantiate frame% ("Editor Test" #f)))
          (define c (instantiate editor-canvas% (w)
                      (line-count 12)
                      (min-width 400)))
          (define editor (instantiate text% ()))
          (send c set-editor editor)
          (send w show #t)
          (send editor erase)
          (send editor insert "Some text with some line breaks. ")
          (send editor insert exported-string))

    And the result in the editor is

        Some text with some line breaks. From another module with some more line breaks.

    I've traced in and figured out that it's changing Unix line breaks to Windows line breaks when strings are imported from another module, but these display as double line breaks. Why is this happening, and is there a way to stop it other than changing every imported string?


  • Multiple responses from identical calls in asynch QUnit + Mockjax tests

    - by NickL
    I'm trying to test some jQuery ajax code using QUnit and Mockjax and have it return different JSON for different tests, like this:

        $(document).ready(function() {
            function functionToTest() {
                return $.getJSON('/echo/json/', {
                    json: JSON.stringify({ "won't": "run" })
                });
            }

            module("first");

            test("first test", function() {
                stop();
                $.mockjax({
                    url: '/echo/json/',
                    responseText: JSON.stringify({ hello: 'HEYO!' })
                });
                functionToTest().done(function(json) {
                    ok(true, json.hello);
                    start();
                });
            });

            test("second test", function() {
                stop();
                $.mockjax({
                    url: '/echo/json/',
                    responseText: JSON.stringify({ hello: 'HELL NO!' })
                });
                functionToTest().done(function(json) {
                    ok(true, json.hello);
                    start();
                });
            });
        });

    Unfortunately it returns the same response for each call, and order can't be guaranteed, so I was wondering how I could set it up so that the response is coupled to the actual request. I came up with this:

        $.mockjax({
            url: '/echo/json/',
            response: function(settings) {
                if (JSON.parse(settings.data.json).order === 1) {
                    this.responseText = JSON.stringify({ hello: 'HEYO!' });
                } else {
                    this.responseText = JSON.stringify({ hello: 'HELL NO!' });
                }
            }
        });

    This relies on parameters being sent to the server, but what about requests without parameters, where I still need to test different responses? Is there a way to use QUnit's setup/teardown to do this?


  • Temporary non-const istream reference in constructor (C++)

    - by Christopher Bruns
    It seems that a constructor that takes a non-const reference to an istream cannot be constructed with a temporary value in C++.

        #include <iostream>
        #include <sstream>

        using namespace std;

        class Bar
        {
        public:
            explicit Bar(std::istream& is) {}
        };

        int main()
        {
            istringstream stream1("bar1");
            Bar bar1(stream1); // OK on all platforms

            // compile error on linux, Mac gcc; OK on Windows MSVC
            Bar bar2(istringstream("bar2"));

            return 0;
        }

    This compiles fine with MSVC, but not with gcc. Using gcc I get a compile error:

        g++ test.cpp -o test
        test.cpp: In function ‘int main()’:
        test.cpp:18: error: no matching function for call to ‘Bar::Bar(std::istringstream)’
        test.cpp:9: note: candidates are: Bar::Bar(std::istream&)
        test.cpp:7: note:                Bar::Bar(const Bar&)

    Is there something philosophically wrong with the second way (bar2) of constructing a Bar object? It looks nicer to me, and does not require that stream1 variable that is only needed for a moment.
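    One possible workaround, as a minimal sketch assuming a compiler with C++11 support (rvalue references and delegating constructors): add an overload that accepts temporary streams and forwards to the lvalue overload. The member word is illustrative only.

        #include <iostream>
        #include <sstream>
        #include <string>

        class Bar
        {
        public:
            explicit Bar(std::istream& is) { is >> word; }   // lvalue streams, as before
            explicit Bar(std::istream&& is) : Bar(is) {}     // temporaries delegate to the lvalue overload
            std::string word;
        };

        int main()
        {
            Bar bar2{std::istringstream("bar2")};            // OK: picks the && overload
            std::cout << bar2.word << '\n';
        }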


  • Deleting a resource in a Cucumber (Capybara) step doesn't work

    - by Josiah Kiehl
    Here is my Scenario:

        Scenario: Delete a match
          Given pojo is logged in
          And there is a match with the following:
            | game_id       | 1                          |
            | name          | Game del Pojo              |
            | date_and_time | 2010-02-23 17:52:00        |
            | players       | 2                          |
            | teams         | 2                          |
            | comment       | This is an awesome comment |
            | user_id       | 1                          |
          And I am on the show match 1 page
          And show me the page
          When I follow "Delete"
          And I follow "Yes, delete it"
          Then there should not be a match with the following:
            | game_id       | 1                          |
            | name          | Game del Pojo              |
            | date_and_time | 2010-02-23 17:52:00        |
            | players       | 2                          |
            | teams         | 2                          |
            | comment       | This is an awesome comment |
            | user_id       | 1                          |

    If I walk through these steps manually, they work. When I click the confirmation "Yes, delete it", the match is deleted. Cucumber, however, fails to delete the record and the last step fails:

        And I follow "Yes, delete it"                        # features/step_definitions/web_steps.rb:32
        Then there should not be a match with the following: # features/step_definitions/match_steps.rb:8
          | game_id       | 1                          |
          | name          | Game del Pojo              |
          | date_and_time | 2010-02-23 17:52:00        |
          | players       | 2                          |
          | teams         | 2                          |
          | comment       | This is an awesome comment |
          | user_id       | 1                          |
          <nil> expected but was
          <#<Match id: 1, name: "Game del Pojo", date_and_time: "2010-02-23 17:52:00", teams: 2,
            created_at: "2010-03-02 23:06:33", updated_at: "2010-03-02 23:06:33",
            comment: "This is an awesome comment", players: 2, game_id: 1, user_id: 1>>.
          (Test::Unit::AssertionFailedError)
          /usr/lib/ruby/1.8/test/unit/assertions.rb:48:in `assert_block'
          /usr/lib/ruby/1.8/test/unit/assertions.rb:495:in `_wrap_assertion'
          /usr/lib/ruby/1.8/test/unit/assertions.rb:46:in `assert_block'
          /usr/lib/ruby/1.8/test/unit/assertions.rb:83:in `assert_equal'
          /usr/lib/ruby/1.8/test/unit/assertions.rb:172:in `assert_nil'
          ./features/step_definitions/match_steps.rb:22:in `/^there should (not)? be a match with the following:$/'
          features/matches.feature:124:in `Then there should not be a match with the following:'

    Any clue how to debug this? Thanks!


  • Rhino Mocks Partial Mock

    - by dotnet crazy kid
    I am trying to test the logic from some existing classes. It is not possible to refactor the classes at present as they are very complex and in production. What I want to do is create a mock object and test a method that internally calls another method that is very hard to mock. So I want to just set a behaviour for the secondary method call. But when I set up the behaviour for the method, the code of the method is invoked and fails. Am I missing something, or is this just not possible to test without refactoring the class?

    I have tried all the different mock types (Strict, Stub, Dynamic, Partial etc.) but they all end up calling the method when I try to set up the behaviour.

        using System;
        using MbUnit.Framework;
        using Rhino.Mocks;

        namespace MMBusinessObjects.Tests
        {
            [TestFixture]
            public class PartialMockExampleFixture
            {
                [Test]
                public void Simple_Partial_Mock_Test()
                {
                    const string param = "anything";

                    //setup mocks
                    MockRepository mocks = new MockRepository();
                    var mockTestClass = mocks.StrictMock<TestClass>();

                    //record behaviour *** actually calls into the real method stub ***
                    Expect.Call(mockTestClass.MethodToMock(param)).Return(true);

                    //never get to here
                    mocks.ReplayAll();

                    //this is what i want to test
                    Assert.IsTrue(mockTestClass.MethodIWantToTest(param));
                }

                public class TestClass
                {
                    public bool MethodToMock(string param)
                    {
                        //some logic that is very hard to mock
                        throw new NotImplementedException();
                    }

                    public bool MethodIWantToTest(string param)
                    {
                        //this method calls the MethodToMock
                        if( MethodToMock(param) )
                        {
                            //some logic i want to test
                        }

                        return true;
                    }
                }
            }
        }


  • GAE datastore querying integer fields

    - by ParanoidAndroid
    I notice strange behavior when querying the GAE datastore. Under certain circumstances Filter does not work for integer fields. The following Java code reproduces the problem:

        log.info("start experiment");

        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        int val = 777;

        // create and store the first entity.
        Entity testEntity1 = new Entity(KeyFactory.createKey("Test", "entity1"));
        Object value = new Integer(val);
        testEntity1.setProperty("field", value);
        datastore.put(testEntity1);

        // create the second entity by using BeanUtils.
        Test test2 = new Test(); // just a regular bean with an int field
        test2.setField(val);
        Entity testEntity2 = new Entity(KeyFactory.createKey("Test", "entity2"));
        Map<String, Object> description = BeanUtilsBean.getInstance().describe(test2);
        for(Entry<String,Object> entry:description.entrySet()){
            testEntity2.setProperty(entry.getKey(), entry.getValue());
        }
        datastore.put(testEntity2);

        // now try to retrieve the entities from the database...
        Filter equalFilter = new FilterPredicate("field", FilterOperator.EQUAL, val);
        Query q = new Query("Test").setFilter(equalFilter);
        Iterator<Entity> iter = datastore.prepare(q).asIterator();
        while (iter.hasNext()) {
            log.info("found entity: " + iter.next().getKey());
        }

        log.info("experiment finished");

    The log looks like this:

        INFO: start experiment
        INFO: found entity: Test("entity1")
        INFO: experiment finished

    For some reason it only finds the first entity, even though both entities are actually stored in the datastore and both 'field' values are 777 (I see it in the Datastore Viewer)! Why does it matter how the entity is created? I would like to use BeanUtils, because it is convenient. The same problem occurs on the local devserver and when deployed to GAE.


  • More efficient approach to XSLT for-each

    - by Paul
    I have an XSLT which takes a . delimited string and splits it into two fields for a SQL statement:

        <xsl:for-each select="tokenize(Path,'\.')">
          <xsl:choose>
            <xsl:when test="position() = 1 and position() = last()">SITE = '<xsl:value-of select="."/>' AND PATH = ''</xsl:when>
            <xsl:when test="position() = 1 and position() != last()">SITE = '<xsl:value-of select="."/>' </xsl:when>
            <xsl:when test="position() = 2 and position() = last()">AND PATH = '<xsl:value-of select="."/>' </xsl:when>
            <xsl:when test="position() = 2">AND PATH = '<xsl:value-of select="."/></xsl:when>
            <xsl:when test="position() > 2 and position() != last()">.<xsl:value-of select="."/></xsl:when>
            <xsl:when test="position() > 2 and position() = last()">.<xsl:value-of select="."/>' </xsl:when>
            <xsl:otherwise>zxyarglfaux</xsl:otherwise>
          </xsl:choose>
        </xsl:for-each>

    The results are as follows:

        INPUT:  North        OUTPUT: SITE = 'North' AND PATH = ''
        INPUT:  North.A      OUTPUT: SITE = 'North' AND PATH = 'A'
        INPUT:  North.A.B    OUTPUT: SITE = 'North' AND PATH = 'A.B'
        INPUT:  North.A.B.C  OUTPUT: SITE = 'North' AND PATH = 'A.B.C'

    This works, but is very lengthy. Can anyone see a more efficient approach? Thanks!


  • Why is qry.Post executed in asynchronous mode?

    - by Ryan
    Recently I met a strange problem; see the code snippet below:

        var
          sqlCommand: string;
          connection: TADOConnection;
          qry: TADOQuery;
        begin
          connection := TADOConnection.Create(nil);
          try
            connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
            connection.Open();
            qry := TADOQuery.Create(nil);
            try
              qry.Connection := connection;
              qry.SQL.Text := 'Select * from aaa';
              qry.Open;
              qry.Append;
              qry.FieldByName('TestField1').AsString := 'test';
              qry.Post;
              beep;
            finally
              qry.Free;
            end;
          finally
            connection.Free;
          end;
        end;

    First, create a new Access database named test.mdb and put it under the directory of this test project; in it, create a new table named aaa which has only one text-type field named TestField1. Set a breakpoint at the "beep" line, then launch the test application under IDE debug mode. When the IDE stops at the breakpoint line (qry.Post has been executed), use Microsoft Access to open test.mdb and open table aaa: you will find there are no changes in table aaa. If you let the IDE continue running after pressing F9, you will find a new record is inserted into table aaa, but if you press Ctrl+F2 to terminate the application at the breakpoint, you will find that table aaa has no record inserted. In normal circumstances, a new record should be inserted into table aaa after qry.Post is executed.

    Who can explain this problem? It has troubled me for a long time. Thanks!

    BTW, the IDE is Delphi 2010, and the Access mdb file was created by Microsoft Access 2007 under Windows 7.


  • Running RSpec Files From ruby code

    - by Brian D.
    I'm trying to run RSpec tests straight from Ruby code. More specifically, I'm running some MySQL scripts, loading the Rails test environment, and then I want to run my RSpec tests (which is what I'm having trouble with). I'm trying to do this with a rake task. Here is my code so far:

        require "spec"
        require "spec/rake/spectask"

        RAILS_ENV = 'test'

        namespace :run_all_tests do
          desc "Run all of your tests"

          puts "Reseting test database..."
          system "mysql --user=root --password=dev < C:\\Brian\\Work\\Personal\\BrianSite\\database\\BrianSite_test_CreateScript.sql"

          puts "Filling database tables with test data..."
          system "mysql --user=root --password=dev < C:\\Brian\\Work\\Personal\\BrianSite\\database\\Fill_Test_Tables.sql"

          puts "Starting rails test environment..."

          task :run => :environment do
            puts "RAILS_ENV is #{RAILS_ENV}"
            # Run rspec test files here...
            require "spec/models/blog_spec.rb"
          end
        end

    I thought the require "spec/models/blog_spec.rb" would do it, but the tests aren't running. Anyone know where I'm going wrong? Thanks for any help.


  • My android tests don't get internet access!

    - by Malachii
    The subject says it all. My application gets internet access thanks to the android.permission.INTERNET permission, but my test cases don't while using the instrumentation test runner. This means I can't test my server IO routines in my test cases. What's up?

    Here's my manifest in case it helps you. Thanks! Sorry about the lack of indents - could not get it working on short notice with this site.

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.example.helloandroid"
            android:versionCode="1"
            android:versionName="1.0">

            <uses-permission android:name="android.permission.INTERNET"></uses-permission>

            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <uses-library android:name="android.test.runner" />
                <activity android:name=".HelloAndroid" android:label="@string/app_name">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>

            <uses-sdk android:minSdkVersion="2" />

            <instrumentation android:name="android.test.InstrumentationTestRunner"
                android:targetPackage="qnext.mobile.redirect"
                android:label="Qnext Redirect Tests" />

        </manifest>


  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32-ing the outputs).

    When using the "-server" option with Sun 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode.

    I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it is 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP).

    While there is some inevitable variation in the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?


  • Loading Jscript files into Firefox extension

    - by colon3l
    Hi! Let's get directly to the problem: I'm building a Firefox extension in which I would like to implement the jWebSocket API in order to build a small chat. I have my main script file, named test.js, and the jWebSocket lib in a js folder. Just so you know, this is my first Firefox extension ever.

    So in my XUL file I have this (for the script part only, of course; the interface code is not shown):

        <overlay id="test-overlay"
                 xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
          <script type="application/x-javascript" src="chrome://test/content/test.js" />
          <script type="application/x-javascript" src="chrome://test/content/js/jwebsocket.js" />

    jwebsocket.js is the file I need to load according to the jWebSocket website. In my main script file test.js I start with:

        if (jws.browserSupportsWebSockets()) {
            jWebSocketClient = new jws.jWebSocketJSONClient();
        } else {
            var lMsg = jws.MSG_WS_NOT_SUPPORTED;
            alert(lMsg);
        }

    jws is the namespace created in the jwebsocket.js file. Of course I've got the required stand-alone server running in the background, and it is working.

    From what I understood looking at various websites, if a js file is loaded into the JavaScript allocated memory space (with the script tag), all namespaces/functions should be available between files. But this was mostly for HTML-oriented issues, so I'm not sure if it applies to the XUL/Firefox environment. The script keeps failing at the first jws call. Any ideas on what goes wrong here? I'm stuck for 2 days now :/


  • g++ Linking Error on Mac while compiling FFMPEG

    - by Saptarshi Biswas
    g++ on Snow Leopard is throwing linking errors on the following piece of code.

    test.cpp:

        #include <iostream>
        using namespace std;

        #include <libavcodec/avcodec.h>    // required headers
        #include <libavformat/avformat.h>

        int main(int argc, char**argv)
        {
            av_register_all(); // offending library call
            return 0;
        }

    When I try to compile this using the following command

        g++ test.cpp -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I get the error

        Undefined symbols:
          "av_register_all()", referenced from:
              _main in ccUD1ueX.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Interestingly, if I have an equivalent C file, test.c:

        #include <stdio.h>
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>

        int main(int argc, char**argv)
        {
            av_register_all();
            return 0;
        }

    gcc compiles it just fine:

        gcc test.c -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I am using Mac OS X 10.6.5.

        $ g++ --version
        i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
        $ gcc --version
        i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)

    FFMPEG's libavcodec, libavformat etc. are C libraries and I have built them on my machine like this:

        ./configure --enable-gpl --enable-pthreads --enable-shared \
            --disable-doc --enable-libx264
        make && sudo make install

    As one would expect, libavformat indeed contains the symbol av_register_all:

        $ nm /usr/local/lib/libavformat.a | grep av_register_all
        0000000000000000 T _av_register_all
        00000000000089b0 S _av_register_all.eh

    I am inclined to believe g++ and gcc have different views of the libraries on my machine. g++ is not able to pick up the right libraries. Any clue?
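    A hedged observation on the error itself: the undefined symbol is the C++-mangled av_register_all(), which suggests the C declarations were seen without C linkage. A minimal sketch of the usual remedy is to wrap the FFmpeg includes in extern "C" (newer FFmpeg headers guard themselves; older ones may not):

        #include <iostream>

        extern "C" {                           // tell g++ these declarations use C linkage
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>
        }

        int main(int argc, char** argv)
        {
            av_register_all();                 // now resolves to the unmangled _av_register_all
            return 0;
        }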


  • Is it possible for a function called from within an object to have access to that object's scope?

    - by Elliot Bonneville
    I can't think of a way to explain what I'm after more than I've done in the title, so I'll repeat it. Is it possible for an anonymous function called from within an object to have access to that object's scope? The following code block should explain what I'm trying to do better than I can:

        function myObj(testFunc) {
            this.testFunc = testFunc;

            this.Foo = function Foo(test) {
                this.test = test;
                this.saySomething = function(text) {
                    alert(text);
                };
            };

            var Foo = this.Foo;

            this.testFunc.apply(this);
        }

        var test = new myObj(function() {
            var test = new Foo();
            test.saySomething("Hello world");
        });

    When I run this, I get an error: "Foo is not defined." How do I ensure that Foo will be defined when I call the anonymous function? Here's a jsFiddle for further experimentation.

    Edit: I am aware of the fact that adding the line var Foo = this.Foo; to the anonymous function I pass in to my instance of myObj will make this work. However, I'd like to avoid having to expose the variable inside the anonymous function - do I have any other options?



  • Appending facts into an existing prolog file.

    - by vuj
    Hi, I'm having trouble inserting facts into an existing Prolog file without overwriting the original contents. Suppose I have a file test.pl:

        :- dynamic born/2.

        born(john,london).
        born(tim,manchester).

    If I load this in Prolog and I assert more facts:

        | ?- assert(born(laura,kent)).
        yes

    I'm aware I can save this by doing:

        | ?- tell('test.pl'), listing(born/2), told.

    This works, but test.pl now only contains the facts, not the ":- dynamic born/2" directive:

        born(john,london).
        born(tim,manchester).
        born(laura,kent).

    This is problematic because if I reload this file, I won't be able to insert any more facts into test.pl, because ":- dynamic born/2." doesn't exist anymore. I read somewhere that I could do:

        append('test.pl'), listing(born/2), told.

    which should just append to the end of the file; however, I get the following error:

        ! Existence error in user:append/1
        ! procedure user:append/1 does not exist
        ! goal:  user:append('test.pl')

    Btw, I'm using SICStus Prolog. Does this make a difference? Thanks!


  • "One or more breakpoints cannot be set and have been disabled. Execution will stop at the beginning

    - by sam
    I set a breakpoint in my code in Visual C++, but when I run, I see the error mentioned in the title. I know this question has been asked before on Stack Overflow (http://stackoverflow.com/questions/657470/breakpoints-cannot-be-set-and-have-been-disabled-problem), but none of the answers there fully explained the problem I'm seeing. The closest I can see is something about the linker, but I don't understand that - so if someone could explain in more detail, that would be great.

    In my case, I have 2 projects in Visual C++ - the production dsw and the test code dsw. I have loaded and rebuilt both dsws in debug mode. I want a breakpoint in the production code, which is run via the test scripts.

    My issue is that I get the error message when I run the test code, because the breakpoint is in the production code, which isn't loaded up when the test starts. Near the beginning of the test script there is a mytest_initialize() command. I imagine this goes off and loads up the production dll. Once this line has executed, I can put the breakpoint in my production code and run until I hit it. But it's quite annoying to have to run to this line, set the breakpoint and continue every time I want to run the test.

    So I think the problem is that Visual C++ doesn't realise the two projects are related. Is this a linker issue? What does the linker do, and what settings should I change to make this work?

    Thanks in advance. Apologies if instead I should be appending this question to the existing one; this is my first post, so I'm not quite sure how this should work.


  • How to convert many thousands of lines of VBScript to C#?

    - by Ross Patterson
    I have a collection of about 10,000 small VBScript programs (50-100 lines each) and a small collection of larger ones, and I'm looking for a way to convert them to C# without resorting to by-hand transliteration.

    The programs are automated test cases for a web application, written for HP/Mercury's QuickTest Pro, and I'm trying to turn them into test cases for Selenium. Luckily, the tests appear to be well-written, using a library of building blocks and idioms (the larger programs), so the test cases actually resemble a domain-specific language more than they do VBScript, and the QTP-ness is well-buried inside the libraries.

    Ideally, what I'm searching for is a tool that can do the syntactic transformation from VBScript to C# for both the DSL-ish test cases and also the more complicated building-block libraries. That would leave me with a manual cleanup of the libraries, and probably very little work on the test cases. If I could find a VBScript-to-VB.NET translator, I'd take that also, as I suspect I could compile the VB.NET and then decompile to C# using .NET Reflector or something similar.

    Plan B is to write a translator of my own for the test cases, since they're in a very straight-line style, but it wouldn't help with the libraries.

    Any suggestions? I haven't written a compiler in at least 15 years, and while I haven't forgotten how, I'm not looking forward to it - least of all for VBScript!


  • Creating parallel selenium tests in C# and using Nunit as the runner application

    - by damianmartin
    I am writing a new test suite for the company to test a very complex ASP.NET application which is heavily AJAX driven. We have decided to use Selenium (Grid & Remote Control) and NUnit to run these tests.

    The actual tests are created dynamically at run time from a spreadsheet. Each column in an Excel spreadsheet relates to a new test, and each row relates to a Selenium command (but in plain English; the dll converts this into Selenium code).

    My problem at the moment is getting the tests to run in parallel. There will be 1000+ tests, so it is too time consuming to have one test run at a time. Selenium Grid and the Selenium Remote Control(s) are set up correctly, because I can run their demo. From what I have read I need to use Punit, but I cannot find any documentation on what a test in Punit should look like. NUnit tests are [SetUp], [TearDown], [Test].

    Can anyone point me in the right direction? Thanks in advance.

