Search Results

Search found 23949 results on 958 pages for 'test'.


  • RasterizerState set to null after calling DrawText in Nuclex

    - by ProgrammerAtWork
    I have the following code in XNA:

      // class members
      Text t1;
      Text t2;
      Text t3;

      // init (DebugFont24 is a size-24 vector font)
      t1 = MM.DebugFont24.Fill("hello");
      t1 = MM.DebugFont24.Extrude("hello");
      t2 = MM.DebugFont24.Fill("hello");
      t2 = MM.DebugFont24.Extrude("hello");
      t3 = MM.DebugFont24.Fill("hello");
      t3 = MM.DebugFont24.Extrude("hello");

      // Draw
      TextBatch test = new TextBatch(MM.GD);
      test.DrawText(t1, Color.Red);
      test.DrawText(t2, Color.Red);
      test.DrawText(t3, Color.Red);
      test.End();

    After the second call to the TextBatch, the RasterizerState of the GraphicsDevice is set to null, but I don't get any runtime errors or any indication that something is wrong. Is this supposed to happen, or am I doing something wrong? I've discovered that this happens because culling was set to None when I was rendering textures.
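    A common workaround (a sketch only, not verified Nuclex behaviour) is to treat TextBatch like SpriteBatch: wrap the calls in Begin()/End() and then put back whatever device state the rest of the renderer expects once the batch is done. The state values below are the standard XNA 4.0 ones; which of them your texture pass actually relies on is an assumption.

      // Hedged sketch: restore device state after drawing Nuclex text.
      // Assumes MM.GD is the GraphicsDevice and that TextBatch exposes Begin()/End().
      TextBatch batch = new TextBatch(MM.GD);
      batch.Begin();
      batch.DrawText(t1, Color.Red);
      batch.End();

      // TextBatch leaves its own state on the device, so reset what the
      // texture-rendering code expects before drawing anything else.
      MM.GD.RasterizerState = RasterizerState.CullCounterClockwise;
      MM.GD.BlendState = BlendState.AlphaBlend;
      MM.GD.DepthStencilState = DepthStencilState.Default;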


  • SQL Server Unit Testing with tSQLt

    When one considers the amount of time and effort that unit testing consumes for the database developer, it is surprising how few good SQL Server test frameworks are around. tSQLt, which is open source and free to use, is one of the frameworks that provide a simple way to populate a table with test data as part of the unit test and check the results against what is expected. Sebastian and Dennis, who created tSQLt, explain.


  • Opengl + SDL linking error

    - by me2loveit2
    I am trying to load an image as a texture with OpenGL using C++ in Visual Studio 2010. I researched for a couple of hours online and found the SDL library, then I implemented a simple example and got some linking errors I cannot seem to figure out. The error log is here:

      1>Build started 10/20/2012 12:09:17 AM.
      1>InitializeBuildStatus:
      1>  Touching "Debug\texture mapping test.unsuccessfulbuild".
      1>ClCompile:
      1>  All outputs are up-to-date.
      1>  texture mapping test.cpp
      1>ManifestResourceCompile:
      1>  All outputs are up-to-date.
      1>texture mapping test.obj : error LNK2019: unresolved external symbol _IMG_Load referenced in function "void __cdecl display(void)" (?display@@YAXXZ)
      1>MSVCRTD.lib(crtexe.obj) : error LNK2019: unresolved external symbol main referenced in function __tmainCRTStartup
      1>C:\Users\Me\Documents\Visual Studio 2010\Projects\Programming projects\texture mapping test\Debug\texture mapping test.exe : fatal error LNK1120: 2 unresolved externals
      1>Build FAILED.
      1>Time Elapsed 00:00:02.45
      ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Can someone please help me!! I am at a desperate point right now. I downloaded SDL and copied all the .h files into:
      C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Include
    I added the .lib (x86) files into (as a note, I tried the (x64) files too but got the exact same error):
      C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Lib
    and the .dll (x86) files into:
      C:\Windows\System32
    For implementing textures, I used the simple sample code from: http://www.sdltutorials.com/sdl-tip-sdl-surface-to-opengl-texture
    Please let me know if you can see me doing something wrong, or know how I can fix this! Thanks, Phil


  • What is the most appropriate testing method in this scenario?

    - by Daniel Bruce
    I'm writing some Objective-C apps (for OS X/iOS) and I'm currently implementing a service to be shared across them. The service is intended to be fairly self-contained. For the current functionality I'm envisioning, there will be only one method that clients call; it performs a fairly complicated series of steps, both using private methods on the class and passing data through a bunch of "data mangling classes" to arrive at an end result. The gist of the code is to fetch a log of changes, stored in a service-internal data store, that have occurred since a particular time, simplify the log to only include the last applicable change for each object, attach the serialized values for the affected objects, and return all of this to the client. My question then is: how do I unit-test this entry-point method? Obviously, each class will have thorough unit tests to ensure that its functionality works as expected, but the entry point seems harder to "disconnect" from the rest of the world. I would rather not pass in each of these internal classes IoC-style, because they're small and are only made classes to satisfy the single-responsibility principle. I see a couple of possibilities:

    1. Create a "private" interface header for the tests with methods that call the internal classes, and test each of these methods separately. Then, to test the entry point, make a partial mock of the service class with these private methods mocked out and just test that the methods are called with the right arguments.
    2. Write a series of fatter tests for the entry point without mocking out anything, testing the entire functionality in one go. This looks, to me, more like "integration testing" and seems brittle, but it does satisfy the "only test via the public interface" principle.
    3. Write a factory that returns these internal services and take that in the initializer, then write a factory that returns mocked versions of them to use in tests. This has the downside of making the construction of the service annoying, and it leaks internal details to the client.
    4. Write a "private" initializer that takes these services as extra parameters, use that to provide mocked services, and have the public initializer delegate to this one (sketched below). This would ensure that the client code still sees the easy/pretty initializer and no internals are leaked.

    I'm sure there are more ways to solve this problem that I haven't thought of yet, but my question is: what's the most appropriate approach according to unit testing best practices? Especially considering I would prefer to write this test-first, meaning I should preferably only create these services as the code indicates a need for them.
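    The fourth option is language-neutral; purely as an illustration (shown in C# since the pattern does not depend on Objective-C), it amounts to a public constructor chaining to an internal one that accepts the collaborators. Every type and member name here is invented.

      // Hedged sketch of option 4: the public constructor builds the real
      // collaborators, the internal one lets tests hand in fakes.
      public class ChangeLogService
      {
          private readonly ILogStore store;           // hypothetical collaborator
          private readonly ILogSimplifier simplifier; // hypothetical collaborator

          public ChangeLogService() : this(new LogStore(), new LogSimplifier()) { }

          internal ChangeLogService(ILogStore store, ILogSimplifier simplifier)
          {
              this.store = store;
              this.simplifier = simplifier;
          }
      }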


  • Making WatiN Wait for JQuery document.Ready() Functions to Complete

    - by Steve Wilkes
    WatiN's DomContainer.WaitForComplete() method pauses test execution until the DOM has finished loading, but if your page has functions registered with JQuery's ready() function, you'll probably want to wait for those to finish executing before testing it. Here's a WatiN extension method which pauses test execution until that happens. JQuery (as far as I can see) doesn't provide an event or other way of being notified of when it's finished running your ready() functions, so you have to get around it another way. Luckily, because ready() executes the functions it's given in the order they're registered, you can simply register another one to add a 'marker' div to the page, and tell WatiN to wait for that div to exist. Here's the code; I added the extension method to Browser rather than DomContainer (Browser derives from DomContainer) because it's the sort of thing you only execute once for each of the pages your test loads, so Browser seemed like a good place to put it. public static void WaitForJQueryDocumentReadyFunctionsToComplete(this Browser browser) { // Don't try this is JQuery isn't defined on the page: if (bool.Parse(browser.Eval("typeof $ == 'function'"))) { const string jqueryCompleteId = "jquery-document-ready-functions-complete"; // Register a ready() function which adds a marker div to the body: browser.Eval( @"$(document).ready(function() { " + "$('body').append('<div id=""" + jqueryCompleteId + @""" />'); " + "});"); // Wait for the marker div to exist or make the test fail: browser.Div(Find.ById(jqueryCompleteId)) .WaitUntilExistsOrFail(10, "JQuery document ready functions did not complete."); } } The code uses the Eval() method to send JavaScript to the browser to be executed; first to check that JQuery actually exists on the page, then to add the new ready() method. WaitUntilExistsOrFail() is another WatiN extension method I've written (I've ended up writing really quite a lot of them) which waits for the element on which it is invoked to exist, and uses Assert.Fail() to fail the test with the given message if it doesn't exist within the specified number of seconds. Here it is: public static void WaitUntilExistsOrFail(this Element element, int timeoutInSeconds, string failureMessage) { try { element.WaitUntilExists(timeoutInSeconds); } catch (WatinTimeoutException) { Assert.Fail(failureMessage); } }
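    A usage sketch (assuming WatiN's IE browser class, NUnit, and an illustrative URL and element id) would call the extension method right after navigation, before any assertion that depends on the ready() handlers having run:

      // Hedged example: the URL and element id are made up.
      [Test]
      public void OrderListIsRenderedByTheReadyHandlers()
      {
          using (var browser = new IE("http://localhost/my-app/orders"))
          {
              browser.WaitForComplete();                                // DOM finished loading
              browser.WaitForJQueryDocumentReadyFunctionsToComplete();  // ready() functions done
              Assert.IsTrue(browser.Div(Find.ById("order-list")).Exists);
          }
      }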


  • Adding a new automatic number sequence

    - by Paul
    We write test case documents. In these documents, each test case is numbered, e.g. Foobar-UI-1 to Foobar-UI-23, or Foobar-Device-1 to Foobar-Device-87. I'd like to autonumber these. I don't think I want just a new numbered-list format; I want something like the list of figures, where figures (or test cases) can be defined anywhere in the document with other headings and paragraphs between them, and I can insert a "List of figures" table at the beginning. So how do I set up "test cases" and a "list of test cases" table in the same way that figures work out of the box?


  • Exchange 2010 Autodiscover/OAB update issue

    - by bulldog5046
    Midway through a migration from 2003 to 2010, and with a few test users on 2010, I've noticed that the OAB is not being downloaded to Outlook clients. I've checked that the URLs are configured, added both our CAS servers to the web-based distribution list for the OAB, and assigned the OAB to the 2 mailbox databases we use, but when I run Outlook's 'Test E-Mail AutoConfiguration' test I still see that autodiscover says "OAB URL: Public Folder", even though I've now deselected that option. I ran Test-OutlookWebServices, which was giving an OAB error about no URL in autodiscovery, but having just re-run it, that now appears fine; yet the autoconfiguration test still does not. Does anyone have any idea why I'm getting this discrepancy?


  • New release for the Visual Studio 2010 and .NET Framework 4 Training Kit

    - by Enrique Lima
    Among the new content in the release is a set of ALM docs and labs. The ALM content referenced above is:

    • Using Code Analysis with Visual Studio 2010 to Improve Code Quality
    • Introduction to Exploratory Testing with Microsoft Test Manager 2010
    • Introduction to Platform Testing with Microsoft Test Manager 2010
    • Introduction to Quality Tracking with Visual Studio 2010
    • Introduction to Test Planning with Microsoft Test Manager 2010

    All ALM labs point to the latest version of the VS 2010 RTM VM.
    You can download the Training Kit from: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23507
    Visit the online content: http://msdn.microsoft.com/en-us/VS2010TrainingCourse
    Download the most recent version of Visual Studio: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=240


  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding an OAuth (or, DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as [Test] public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount() { ... } ... and I wound up with maybe a dozen permutations of this theme, for the user already being logged in locally and not locally, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking a long wonderfully, with code like act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); }; context["The user is already logged in"] = () => { before = () => identity.Setup(x => x.IsAuthenticated).Returns(true); context["The external login succeeds"] = () => { before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>())).Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>())); context["External login already exists for current user"] = () => { before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true); it["Should add 'login sucessful' alert"] = () => { var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection]; alerts[0].Message.should_be_same("Login successful"); alerts[0].AlertType.should_be(AlertType.Success); }; it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>(); }; context["External login already exists for another user"] = () => { before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true); it["Adds an error alert"] = () => { var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection]; alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account"); alerts[0].AlertType.should_be(AlertType.Error); }; it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>(); }; This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer: public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model) { throw new NotImplementedException(); } public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId) { throw new NotImplementedException(); } public string GetLocalUserName(string provider, string providerUserId) { throw new NotImplementedException(); } I have no idea what in the world to name the test class, the test methods, or even if I should perhaps include the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering. 
I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointing in the right direction. I would even welcome "your question is invalid"-type answers as long as some explanation is provided.
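    One shape that is sometimes used for a service layer (a sketch only; apart from the nspec and Moq constructs already shown in the snippets above, every name below is invented) is one describe_* class per service and one context method per behaviour, which keeps the names short without folding everything into the controller spec:

      // Hedged sketch: spec class for the ApplicationServices layer.
      class describe_ExternalLoginService : nspec
      {
          ExternalLoginService service;          // hypothetical service-layer class
          Mock<IAuthRepository> repository;      // hypothetical repository abstraction

          void when_associating_an_external_login_with_a_user()
          {
              before = () =>
              {
                  repository = new Mock<IAuthRepository>();
                  service = new ExternalLoginService(repository.Object);
              };

              act = () => service.AssociateExternalLoginWithUser("bob", "google", "user-123");

              it["stores the association through the repository"] = () =>
                  repository.Verify(r => r.AddExternalLogin("bob", "google", "user-123"), Times.Once());
          }
      }

    The controller spec above is then free to stub the service, so a single user action is still covered end to end by the two layers together without one giant test class.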


  • Selenium &ndash; Use Data Driven tests to run in multiple browsers and sizes

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/04/selenium-ndash-use-data-driven-tests-to-run-in-multiple.aspxSelenium uses WebDriver (or is it the same? I’m still learning how it is connected) to run Automated UI tests in many different browsers. For example, you can run the same test in Chrome and Firefox and in a smaller sized Chrome browser. The permutations can grow quickly. One way to get them to run in MStest is to create  a test method for each test (ie ChromeDeleteItem_Small, ChromeDeleteItem_Large, FFDeleteItem_Small, FFDeleteItem_Large) that each call  the same base method, passing in the browser and size you’d like. This approach was causing a lot of duplicate code, so I decided to use the data driven approach, common to Coded UI or Unit test methods. 1. Create a class with a test method. 2. Create a csv with two columns: BrowserType, BrowserSize 3. Add rows for each permutation: Chrome, Large | Chrome, Small | Firefox, Large | Firefox, Small | IE, Large | IE, Small | *** 4. Add the csv to the Visual Studio Project. 5. Set the Copy to output directory to Copy always 6. Add the attribute: [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] 7. Run the test in the test explorer Example:[CodedUITest] public class AllTasksTests : TasksTestBase { [TestMethod] [TestCategory("Tasks")] [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] public void CreateTask() { this.PrepForDataDrivenTest(); base.CreateTaskTest("New Task"); } } protected void PrepForDataDrivenTest() { var browserType = this.ParseBrowserType(Context.DataRow["BrowserType"].ToString()); var browserSize = this.ParseBrowserSize(Context.DataRow["BrowserSize"].ToString()); this.BrowserType = browserType; this.BrowserSize = browserSize; Trace.WriteLine("browser: " + browserType.ToString()); Trace.WriteLine("browser size: " + browserSize.ToString()); } /// <summary> /// Get the enum value from the string /// </summary> /// <param name="browserType">Chrome, Firefox, or IE</param> /// <returns>The browser type.</returns> private BrowserType ParseBrowserType(string browserType) { return (UITestFramework.BrowserType)Enum.Parse(typeof(UITestFramework.BrowserType), browserType, true); } /// <summary> /// Get the browser size enum value from the string /// </summary> /// <param name="browserSize">Small, Medium, Large</param> /// <returns>the browser size</returns> private BrowserSizeEnum ParseBrowserSize(string browserSize) { return (BrowserSizeEnum)Enum.Parse(typeof(BrowserSizeEnum), browserSize, true); }/// <summary> /// Change the browser to the size based on the enum. 
/// </summary> /// <param name="browserSize">The BrowserSizeEnum value to resize the window to.</param> private void ResizeBrowser(BrowserSizeEnum browserSize) { switch (browserSize) { case BrowserSizeEnum.Large: this.driver.Manage().Window.Maximize(); break; case BrowserSizeEnum.Medium: this.driver.Manage().Window.Size = new Size(800, this.driver.Manage().Window.Size.Height); break; case BrowserSizeEnum.Small: this.driver.Manage().Window.Size = new Size(500, this.driver.Manage().Window.Size.Height); break; default: break; } }/// <summary> /// Browser sizes for automation testing /// </summary> public enum BrowserSizeEnum { /// <summary> /// Large size, Maximized to the desktop /// </summary> Large, /// <summary> /// Similar to tablets /// </summary> Medium, /// <summary> /// Phone sizes... 610px and smaller /// </summary> Small } Hope it helps!
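    The data-driven rows only carry the BrowserType and BrowserSize strings; somewhere the base class still has to turn the parsed BrowserType into a concrete WebDriver, and that part is not shown above. The factory below is therefore an assumption (the method name and enum members mirror the CSV values); the driver classes themselves are the standard Selenium .NET ones.

      // Hedged sketch: one way the test base class might map BrowserType to a driver.
      private IWebDriver CreateDriver(BrowserType browserType)
      {
          switch (browserType)
          {
              case BrowserType.Chrome:
                  return new ChromeDriver();
              case BrowserType.Firefox:
                  return new FirefoxDriver();
              case BrowserType.IE:
                  return new InternetExplorerDriver();
              default:
                  throw new ArgumentOutOfRangeException("browserType");
          }
      }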


  • How to do integrated testing?

    - by Enthusiastic Programmer
    So I have been reading a lot of books about testing, but all the books I've read have the same flaw: they will tell you the definitions of testing, but I have not found a single book that will guide you into integration testing (or pretty much anything higher than unit testing). Is integration testing that elusive, or am I reading the wrong books? I'm a hands-on person, so I would appreciate it if someone could help me with a simple program. Let's say you need to make some sort of calculation program that calculates something (it doesn't matter what) and exports it to a *.txt file. Let's assume we use the Model-View-Controller design principle, with one class for the actual calculating, which you'll use in the model, and one for writing the text file. So: View -> Controller -> Model -> CalculationClass, FileClass. For unit testing: you'd test the CalculationClass; I'd personally focus most of my unit tests there and spend less time unit testing the view/controller/FileClass. I personally wouldn't see the use of unit testing those unless you want a really robust program. Integration testing: now this is where I run into a wall. What would I have to test to call it an integration test? I could stub the view and feed the controller data, which it would pass on to the model and so forth, and then check what the view gets back in the end. But couldn't I just run the (in this case small) program and test it manually? Would this be considered an integration test too, or does it have to be automated? Also, can I check multiple items to see if they are correct? I cannot seem to find any book that offers a hands-on approach to methods of integration testing.
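    As one illustration (a sketch only; every class and method name below is invented, since the question does not fix a language), an automated integration test for this example would exercise the same public entry point the controller uses, with the real CalculationClass and FileClass wired together, and assert on the observable end result, the exported file, rather than on any internals:

      // Hedged sketch of an automated integration test (NUnit-style).
      [Test]
      public void Calculation_result_is_exported_to_the_text_file()
      {
          string path = Path.Combine(Path.GetTempPath(), "calc-export.txt");

          // Real collaborators, no stubs: this is what makes it an integration test.
          var model = new CalculationModel(new CalculationClass(), new FileClass());
          model.CalculateAndExport(2, 3, path);

          Assert.AreEqual("5", File.ReadAllText(path).Trim());
      }

    Running the program by hand checks the same thing once; automating it means the check is repeated on every build, which is the main reason integration tests are usually scripted rather than manual.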


  • Strangling the life out of Software Testing

    - by MarkPearl
    I recently did a course at the local university on Software Engineering. At the beginning of the course I looked over the outline of the subject and there seemed to be some really good content. It covered traditional & agile project methodologies, some general communication and modelling chapters, and finished off with testing. I was particularly excited to see the section on testing, as this was something I learnt on my own and see great value in. The course has now just ended and I am very disappointed. I now know one of the reasons why so few people (at least in my region) do Test-Driven Development, or perform even basic testing methodologies. The topic was too academic! Yes, you might be able to list 4 different types of black-box test approaches vs. white-box test approaches and describe the characteristics of smoke tests, but never during the course did we see an example of an actual test or how it might be implemented! In fact, if I did not have personal experience of applying testing in actual projects, I wouldn't even know what a unit test looked like. Now, what worries me is the following: it took us 6 months to cover the course material, and other students more than likely came out of that course with little appreciation of the subject. In fact, they now have a very complex view of what a test is, so complex that I think most of them will never attempt it again on their own. Secondly, imagine studying to be a dentist without ever actually seeing a tooth. Yes, you might be able to describe a tooth and know what it is made of, but nobody would want a dentist who has never seen a tooth to operate on them. Yet somehow we expect people studying software engineering to do the same? This is not right. Now, before I finish my rant, let me say that I know this is not the same everywhere in the world, and that there needs to be a balance between practical implementation and academic understanding. I am just disappointed that this does not seem to be happening at the institution that I am currently studying at ;-( Please, if you happen to be a lecturer or teacher reading this post: a combination of theory and practicals goes a long way. We need to up the quality of software being produced, and that starts at learner level!


  • Best Way to Handle Meta Information in a SQL Database

    - by danielhanly.com
    I've got a database where I want to store user information and user_meta information. The reason behind setting it up this way was that the user_meta side may change over time, and I would like to handle that without disrupting the master user table. If possible, I would like some advice on how best to set up this meta data table. I can either set it up as below:

      +----+---------+---------+------------------+
      | id | user_id | key     | value            |
      +----+---------+---------+------------------+
      | 1  | 1       | email   | [email protected] |
      | 2  | 1       | name    | user name        |
      | 3  | 1       | address | test address     |
      ...

    Or I can set it up as below:

      +----+---------+------------------+-----------+--------------+
      | id | user_id | email            | name      | address      |
      +----+---------+------------------+-----------+--------------+
      | 1  | 1       | [email protected] | user name | test address |

    Obviously, the top version is more flexible, but the bottom version is space-saving and perhaps more efficient, returning all the data as a single record. Which is the best way to go about this? Or am I going about this completely wrong, and is there another way I've not thought of?


  • Prevent email to root@domain

    - by kml
    I'm running Ubuntu Server 12.04 as a web server and use Exim4 for sending confirmation emails and such. Is there a way to set a system-wide email address for the root user? In other words, I'd like ALL email to go to a different address rather than [email protected]. For example, this command would go to a different address:

      echo "test" | mail -v -s test root

    as would all cron tasks that root executes:

      # m h dom mon dow user  command
      17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
      25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
      47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
      52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )


  • Ubuntu - executable file - variable assignment throwing error on script run

    - by newcoder
    I am trying to run a small script, test, on an Ubuntu box. It is as follows:

      var1 = bash
      var2 = /home/test/directory
      ...
      <some more variable assignments and then program operations here>
      ...

    Now every time I run it, it throws errors:

      root@localhost# /opt/test
      /opt/test: line 1: var1: command not found
      /opt/test: line 3: var2: command not found
      ... more similar errors ...

    Can someone help me understand what is wrong in this script? Many thanks.


  • Chrooted user does not start in his home directory and does not load his bash_profiles

    - by Stuffy
    If the users logs in, he starts in / of the chroot (Which is /var/jail on the real machine). I would like him to start in his home-dir. Also, he seems not to load any of his profile-files (.bash.rc etc). I followed this tutorial to create the chroot environment. This is what my /etc/passwd looks like: test:x:1004:1008:,,,:/var/jail/home/test:/bin/bash this is what my /var/jail/etc/passwd file looks like: test:x:1004:1008:,,,:/home/test:/bin/bash I also found out that, if I remove Match User test ChrootDirectory /var/jail AllowTCPForwarding no X11Forwarding no from my /etc/ssh/sshd_config, the user starts in his correct home-folder and with his bash-settings loaded. However, he is able to leave the chroot-environment if I remove that part. This question I asked before is somewhat related, since I think the wrong look of the commandline is caused from the not loaded profile-files. So any ideas how to fix this?


  • apache 2.2: how can I set up a VirtualHost inside the RootDirectory?

    - by redraw
    I want to set up a VirtualHost inside the RootDirectory. For example, My project is in C:/myproject and I want to access with http://localhost/myproject EDIT: I've made an alias inside the httpd-vhosts.conf, however I don't have permissions. <VirtualHost *:80> DocumentRoot "C:/apache-2.2/htdocs" ServerName localhost Alias /test "D:\arbol\documentos\test" </VirtualHost> Is this code below the proper way to give permissions? <VirtualHost *:80> DocumentRoot "C:/apache-2.2/htdocs" ServerName localhost Alias /test "D:\arbol\documentos\test" <Directory "D:\arbol\documentos\test"> allow from all order allow,deny AllowOverride All </Directory> </VirtualHost>


  • Qt Linker Errors

    - by Kyle Rozendo
    Hi All, I've been trying to get Qt working (QCreator, QIde and now VS2008). I have sorted out a ton of issues already, but I am now faced with the following build errors, and frankly I'm out of ideas. Error 1 error LNK2019: unresolved external symbol "public: void __thiscall FileVisitor::processFileList(class QStringList)" (?processFileList@FileVisitor@@QAEXVQStringList@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 2 error LNK2019: unresolved external symbol "public: void __thiscall FileVisitor::processEntry(class QString)" (?processEntry@FileVisitor@@QAEXVQString@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 3 error LNK2019: unresolved external symbol "public: class QString __thiscall ArgumentList::getSwitchArg(class QString,class QString)" (?getSwitchArg@ArgumentList@@QAE?AVQString@@V2@0@Z) referenced in function _main codevisitor-test.obj Question1 Error 4 error LNK2019: unresolved external symbol "public: bool __thiscall ArgumentList::getSwitch(class QString)" (?getSwitch@ArgumentList@@QAE_NVQString@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 5 error LNK2019: unresolved external symbol "public: void __thiscall ArgumentList::argsToStringlist(int,char * * const)" (?argsToStringlist@ArgumentList@@QAEXHQAPAD@Z) referenced in function "public: __thiscall ArgumentList::ArgumentList(int,char * * const)" (??0ArgumentList@@QAE@HQAPAD@Z) codevisitor-test.obj Question1 Error 6 error LNK2019: unresolved external symbol "public: __thiscall FileVisitor::FileVisitor(class QString,bool,bool)" (??0FileVisitor@@QAE@VQString@@_N1@Z) referenced in function "public: __thiscall CodeVisitor::CodeVisitor(class QString,bool)" (??0CodeVisitor@@QAE@VQString@@_N@Z) codevisitor-test.obj Question1 Error 7 error LNK2001: unresolved external symbol "public: virtual struct QMetaObject const * __thiscall FileVisitor::metaObject(void)const " (?metaObject@FileVisitor@@UBEPBUQMetaObject@@XZ) codevisitor-test.obj Question1 Error 8 error LNK2001: unresolved external symbol "public: virtual void * __thiscall FileVisitor::qt_metacast(char const *)" (?qt_metacast@FileVisitor@@UAEPAXPBD@Z) codevisitor-test.obj Question1 Error 9 error LNK2001: unresolved external symbol "public: virtual int __thiscall FileVisitor::qt_metacall(enum QMetaObject::Call,int,void * *)" (?qt_metacall@FileVisitor@@UAEHW4Call@QMetaObject@@HPAPAX@Z) codevisitor-test.obj Question1 Error 10 error LNK2001: unresolved external symbol "protected: virtual bool __thiscall FileVisitor::skipDir(class QDir const &)" (?skipDir@FileVisitor@@MAE_NABVQDir@@@Z) codevisitor-test.obj Question1 Error 11 fatal error LNK1120: 10 unresolved externals ... \Visual Studio 2008\Projects\Assignment1\Question1\Question1\Debug\Question1.exe Question1 The code is as follows: #include "argumentlist.h" #include <codevisitor.h> #include <QDebug> void usage(QString appname) { qDebug() << appname << " Usage: \n" << "codevisitor [-r] [-d startdir] [-f filter] [file-list]\n" << "\t-r \tvisitor will recurse into subdirs\n" << "\t-d startdir\tspecifies starting directory\n" << "\t-f filter\tfilename filter to restrict visits\n" << "\toptional list of files to be visited"; } int main(int argc, char** argv) { ArgumentList al(argc, argv); QString appname = al.takeFirst(); /* app name is always first in the list. 
*/ if (al.count() == 0) { usage(appname); exit(1); } bool recursive(al.getSwitch("-r")); QString startdir(al.getSwitchArg("-d")); QString filter(al.getSwitchArg("-f")); CodeVisitor cvis(filter, recursive); if (startdir != QString()) { cvis.processEntry(startdir); } else if (al.size()) { cvis.processFileList(al); } else return 1; qDebug() << "Files Processed: %d" << cvis.getNumFiles(); qDebug() << cvis.getResultString(); return 0; } Thanks in advance, I'm simply stumped.


  • Ant + JUnit: NoClassDefFoundError

    - by K-Boo
    Ok, I'm frustrated! I've hunted around for a good number of hours and am still stumped. Environment: WinXP, Eclipse Galileo 3.5 (straight install - no extra plugins). So, I have a simple JUnit test. It runs fine from it's internal Eclipse JUnit run configuration. This class has no dependencies on anything. To narrow this problem down as much as possible it simply contains: @Test public void testX() { assertEquals("1", new Integer(1).toString()); } No sweat so far. Now I want to take the super advanced step of running this test case from within Ant (the final goal is to integrate with Hudson). So, I create a build.xml: <project name="Test" default="basic"> <property name="default.target.dir" value="${basedir}/target" /> <property name="test.report.dir" value="${default.target.dir}/test-reports" /> <target name="basic"> <mkdir dir="${test.report.dir}" /> <junit fork="true" printSummary="true" showOutput="true"> <formatter type="plain" /> <classpath> <pathelement path="${basedir}/bin "/> </classpath> <batchtest fork="true" todir="${test.report.dir}" > <fileset dir="${basedir}/bin"> <include name="**/*Test.*" /> </fileset> </batchtest> </junit> </target> </project> ${basedir} is the Java project name in the workspace that contains the source, classes and build file. All .java's and the build.xml are in ${basedir}/src. The .class files are in ${basedir}/bin. I have added eclipse-install-dir/plugins/org.junit4_4.5.0.v20090423/junit.jar to the Ant Runtime Classpath via Windows / Preferences / Ant / Runtime / Contributed Entries. ant-junit.jar is in Ant Home Entries. So, what happens when I run this insanely complex target? My report file contains: Testsuite: com.xyz.test.RussianTest Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec Testcase: initializationError took 0 sec Caused an ERROR org/hamcrest/SelfDescribing java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(Unknown Source) at java.security.SecureClassLoader.defineClass(Unknown Source) at java.net.URLClassLoader.defineClass(Unknown Source) at java.net.URLClassLoader.access$000(Unknown Source) at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) at java.lang.reflect.Constructor.newInstance(Unknown Source) Caused by: java.lang.ClassNotFoundException: org.hamcrest.SelfDescribing at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) What is this org.hamcrest.SelfDescribing class? Something to do with mocks? OK, fine. But why the dependency? I'm not doing anything at all with it. This is literally a Java project with no dependencies other than JUnit. Stumped (and frustrated)!!


  • InputText inside Primefaces dataTable is not refreshed

    - by robson
    I need to have inputTexts inside datatable when form is in editable mode. Everything works properly except form cleaning with immediate="true" (without form validation). Then primefaces datatable behaves unpredictable. After filling in datatable with new data - it still stores old values. Short example: test.xhtml <h:body> <h:form id="form"> <p:dataTable var="v" value="#{test.list}" id="testTable"> <p:column headerText="Test value"> <p:inputText value="#{v}"/> </p:column> </p:dataTable> <h:dataTable var="v" value="#{test.list}" id="testTable1"> <h:column> <f:facet name="header"> <h:outputText value="Test value" /> </f:facet> <p:inputText value="#{v}" /> </h:column> </h:dataTable> <p:dataTable var="v" value="#{test.list}" id="testTable2"> <p:column headerText="Test value"> <h:outputText value="#{v}" /> </p:column> </p:dataTable> <p:commandButton value="Clear" actionListener="#{test.clear()}" immediate="true" update=":form:testTable :form:testTable1 :form:testTable2"/> <p:commandButton value="Update" actionListener="#{test.update()}" update=":form:testTable :form:testTable1 :form:testTable2"/> </h:form> </h:body> And java: import java.io.Serializable; import java.util.ArrayList; import java.util.List; import javax.annotation.PostConstruct; import javax.faces.bean.ViewScoped; import javax.inject.Inject; import javax.inject.Named; @Named @ViewScoped public class Test implements Serializable { private static final long serialVersionUID = 1L; private List<String> list; @PostConstruct private void init(){ update(); } public List<String> getList() { return list; } public void setList(List<String> list) { this.list = list; } public void clear() { list = new ArrayList<String>(); } public void update() { list = new ArrayList<String>(); list.add("Item 1"); list.add("Item 2"); } } In the example above I have 3 configurations: 1. p:dataTable with p:inputText 2. h:dataTable with p:inputText 3. p:dataTable with h:outputText And 2 buttons: first clears data, second applies data Workflow: 1. Try to change data in inputTexts in p:dataTable and h:dataTable 2. Clear data of list (arrayList of string) - click "clear" button (imagine that you click cancel on form because you don't want to store data to database) 3. Load new data - click "update" button (imagine that you are openning new form with new data) Question: Why p:dataTable with p:inputText still stores manually changed data, not the loaded ones? Is there any way to force p:dataTable to behaving like h:dataTable in this case?


  • Problems manipulating strings through a stdcall to a dll

    - by ibiza
    I need to create a C++ dll that will be called from another program through stdcall. What is needed : the caller program will pass an array of string to the dll and the dll should change the string values in the array. The caller program will then continue to work with these string values that came from the dll. I made a simple test project and I am obviously missing something... Here is my test C++ dll : #ifndef _DLL_H_ #define _DLL_H_ #include <string> #include <iostream> struct strStruct { int len; char* string; }; __declspec (dllexport) int __stdcall TestFunction(strStruct* s) { std::cout << "Just got in dll" << std::endl; std::cout << s[0].string << std::endl; //////std::cout << s[1].string << std::endl; /* char str1[] = "foo"; strcpy(s[0].string, str1); s[0].len = 3; char str2[] = "foobar"; strcpy(s[1].string, str2); s[1].len = 6; */ //std::cout << s[0].string << std::endl; //std::cout << s[1].string << std::endl; std::cout << "Getting out of dll" << std::endl; return 1; } #endif and here is a simple C# program that I am using to test my test dll : using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Runtime.InteropServices; namespace TestStr { class Program { [DllImport("TestStrLib.dll", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.StdCall)] public static extern int TestFunction(string[] s); static void Main(string[] args) { string[] test = new string[2] { "a1a1a1a1a1", "b2b2b2b2b2" }; Console.WriteLine(test[0]); Console.WriteLine(test[1]); TestFunction(test); Console.WriteLine(test[0]); Console.WriteLine(test[1]); Console.ReadLine(); } } } And here is the output produced : a1a1a1a1a1 b2b2b2b2b2 Just got in dll b2b2b2b2b2 Getting out of dll a1a1a1a1a1 b2b2b2b2b2 I have some questions : 1) Why is it outputting the element in the second position of the array rather than in the first position?? 2) If I uncomment the line commented with ////// in the dll file, the program crashes. Why? 3) Obviously I wanted to do more things in the dll (the parts in /* */) than what it does right now, but I am blocked by the first 2 questions... Thanks for all your help
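    For what it's worth, the core mismatch is that the native side receives strStruct* (an int plus a char*) while the managed side passes string[]. The sketch below is an assumption about the intent, not a drop-in fix (the exported name, calling convention and buffer sizes all still need checking); it only illustrates one marshalling shape that mirrors the C++ struct and gives the DLL writable buffers it can strcpy into.

      // Hedged sketch: mirror the native struct and hand the dll writable buffers.
      // Assumes System.Runtime.InteropServices and System.Text are imported, as above.
      [StructLayout(LayoutKind.Sequential)]
      struct StrStruct
      {
          public int len;
          public IntPtr str;   // caller-allocated char* the dll can write into
      }

      [DllImport("TestStrLib.dll", CallingConvention = CallingConvention.StdCall)]
      public static extern int TestFunction([In, Out] StrStruct[] s);

      static void Main()
      {
          string[] initial = { "a1a1a1a1a1", "b2b2b2b2b2" };
          var items = new StrStruct[initial.Length];
          for (int i = 0; i < items.Length; i++)
          {
              items[i].str = Marshal.AllocHGlobal(64);                  // room for the new text
              byte[] bytes = Encoding.ASCII.GetBytes(initial[i] + "\0");
              Marshal.Copy(bytes, 0, items[i].str, bytes.Length);       // copy value plus NUL
              items[i].len = initial[i].Length;
          }

          TestFunction(items);

          Console.WriteLine(Marshal.PtrToStringAnsi(items[0].str));     // whatever the dll wrote
          foreach (var item in items) Marshal.FreeHGlobal(item.str);    // release the buffers
      }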


  • Advantage database throws an exception when attempting to delete a record with a like statement used

    - by ChrisR
    The code below shows that a record is deleted when the sql statement is: select * from test where qty between 50 and 59 but the sql statement: select * from test where partno like 'PART/005%' throws the exception: Advantage.Data.Provider.AdsException: Error 5072: Action requires read-write access to the table How can you reliably delete a record with a where clause applied? Note: I'm using Advantage Database v9.10.1.9, VS2008, .Net Framework 3.5 and WinXP 32 bit using System.IO; using Advantage.Data.Provider; using AdvantageClientEngine; using NUnit.Framework; namespace NetworkEidetics.Core.Tests.Dbf { [TestFixture] public class AdvantageDatabaseTests { private const string DefaultConnectionString = @"data source={0};ServerType=local;TableType=ADS_CDX;LockMode=COMPATIBLE;TrimTrailingSpaces=TRUE;ShowDeleted=FALSE"; private const string TestFilesDirectory = "./TestFiles"; [SetUp] public void Setup() { const string createSql = @"CREATE TABLE [{0}] (ITEM_NO char(4), PARTNO char(20), QTY numeric(6,0), QUOTE numeric(12,4)) "; const string insertSql = @"INSERT INTO [{0}] (ITEM_NO, PARTNO, QTY, QUOTE) VALUES('{1}', '{2}', {3}, {4})"; const string filename = "test.dbf"; var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { connection.Open(); using (var transaction = connection.BeginTransaction()) { using (var command = connection.CreateCommand()) { command.CommandText = string.Format(createSql, filename); command.Transaction = transaction; command.ExecuteNonQuery(); } transaction.Commit(); } using (var transaction = connection.BeginTransaction()) { for (var i = 0; i < 1000; ++i) { using (var command = connection.CreateCommand()) { var itemNo = string.Format("{0}", i); var partNumber = string.Format("PART/{0:d4}", i); var quantity = i; var quote = i * 10; command.CommandText = string.Format(insertSql, filename, itemNo, partNumber, quantity, quote); command.Transaction = transaction; command.ExecuteNonQuery(); } } transaction.Commit(); } connection.Close(); } } [TearDown] public void TearDown() { File.Delete("./TestFiles/test.dbf"); } [Test] public void CanDeleteRecord() { const string sqlStatement = @"select * from test"; Assert.AreEqual(1000, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(999, GetRecordCount(sqlStatement)); } [Test] public void CanDeleteRecordBetween() { const string sqlStatement = @"select * from test where qty between 50 and 59"; Assert.AreEqual(10, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(9, GetRecordCount(sqlStatement)); } [Test] public void CanDeleteRecordWithLike() { const string sqlStatement = @"select * from test where partno like 'PART/005%'"; Assert.AreEqual(10, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(9, GetRecordCount(sqlStatement)); } public int GetRecordCount(string sqlStatement) { var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { connection.Open(); using (var command = connection.CreateCommand()) { command.CommandText = sqlStatement; var reader = command.ExecuteExtendedReader(); return reader.GetRecordCount(AdsExtendedReader.FilterOption.RespectFilters); } } } public void DeleteRecord(string sqlStatement, int rowIndex) { var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { 
connection.Open(); using (var command = connection.CreateCommand()) { command.CommandText = sqlStatement; var reader = command.ExecuteExtendedReader(); reader.GotoBOF(); reader.Read(); if (rowIndex != 0) { ACE.AdsSkip(reader.AdsActiveHandle, rowIndex); } reader.DeleteRecord(); } connection.Close(); } } } }
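    One workaround, sketched under the assumption that the goal is simply to remove the matching rows rather than to position on a particular row first: issue the DELETE as a plain SQL statement and let the server do it, which sidesteps the cursor produced by the filtered SELECT (the LIKE query appears to come back read-only, hence the error 5072). Only calls already used elsewhere in the fixture are involved.

      // Hedged sketch: same connection plumbing as the fixture, delete done server-side.
      public void DeleteRecords(string whereClause)
      {
          var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory);
          using (var connection = new AdsConnection(connectionString))
          {
              connection.Open();
              using (var command = connection.CreateCommand())
              {
                  command.CommandText = "DELETE FROM test WHERE " + whereClause;
                  command.ExecuteNonQuery();
              }
          }
      }

      // e.g. DeleteRecords("partno LIKE 'PART/005%'");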


  • Tracking changes in Entity Framework 4.0 using POCO Dynamic Proxies across multiple data contexts.

    - by Rob Packwood
    I started messing with EF 4.0 because I am curious about the POCO possibilities... I wanted to simulate a disconnected web environment and wrote the following code to simulate this:

    1. Save a test object in the database.
    2. Retrieve the test object.
    3. Dispose of the DataContext associated with the test object I used to retrieve it.
    4. Update the test object.
    5. Create a new data context and persist the changes on the test object that are automatically tracked within the DynamicProxy generated against my POCO object.

    The problem is that when I call dataContext.SaveChanges in the test method below, the updates are not applied. The testStore entity shows a status of "Modified" when I check its EntityStateTracker, but it is no longer modified when I view it within the new dataContext's Stores property. I would have thought that calling the Attach method on the new dataContext would also bring the object's "Modified" state over, but that appears not to be the case. Is there something I am missing? I am definitely working with self-tracking POCOs using DynamicProxies.

      private static void SaveTestStore(string storeName = "TestStore")
      {
          using (var context = new DataContext())
          {
              Store newStore = context.Stores.CreateObject();
              newStore.Name = storeName;
              context.Stores.AddObject(newStore);
              context.SaveChanges();
          }
      }

      private static Store GetStore(string storeName = "TestStore")
      {
          using (var context = new DataContext())
          {
              return (from store in context.Stores
                      where store.Name == storeName
                      select store).SingleOrDefault();
          }
      }

      [Test]
      public void Test_Store_Update_Using_Different_DataContext()
      {
          SaveTestStore();
          Store testStore = GetStore();
          testStore.Name = "Updated";

          using (var dataContext = new DataContext())
          {
              dataContext.Stores.Attach(testStore);
              dataContext.SaveChanges(SaveOptions.DetectChangesBeforeSave);
          }

          Store updatedStore = GetStore("Updated");
          Assert.IsNotNull(updatedStore);
      }
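    For what it's worth, the usual explanation (hedged, since it assumes DataContext derives from ObjectContext, which the CreateObject/AddObject calls suggest) is that change-tracking proxies only report changes to the context they are attached to; once the original context is disposed, the rename happens untracked, and Attach on the new context registers the entity as Unchanged. One sketch of a fix is to flag the state explicitly before saving:

      // Hedged sketch: mark the attached entity as Modified so SaveChanges picks it up.
      using (var dataContext = new DataContext())
      {
          dataContext.Stores.Attach(testStore);
          dataContext.ObjectStateManager.ChangeObjectState(testStore, EntityState.Modified);
          dataContext.SaveChanges();
      }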


  • Build 32-bit with 64-bit llvm-gcc

    - by Jay Conrod
    I have a 64-bit version of llvm-gcc, but I want to be able to build both 32-bit and 64-bit binaries. Is there a flag for this? I tried passing -m32 (which works on the regular gcc), but I get an error message like this: [jay@andesite]$ llvm-gcc -m32 test.c -o test Warning: Generation of 64-bit code for a 32-bit processor requested. Warning: 64-bit processors all have at least SSE2. /tmp/cchzYo9t.s: Assembler messages: /tmp/cchzYo9t.s:8: Error: bad register name `%rbp' /tmp/cchzYo9t.s:9: Error: bad register name `%rsp' ... This is backwards; I want to generate 32-bit code for a 64-bit processor! I'm running llvm-gcc 4.2, the one that comes with Ubuntu 9.04 x86-64. EDIT: Here is the relevant part of the output when I run llvm-gcc with the -v flag: [jay@andesite]$ llvm-gcc -v -m32 test.c -o test.bc Using built-in specs. Target: x86_64-linux-gnu Configured with: ../llvm-gcc4.2-2.2.source/configure --host=x86_64-linux-gnu --build=x86_64-linux-gnu --prefix=/usr/lib/llvm/gcc-4.2 --enable-languages=c,c++ --program-prefix=llvm- --enable-llvm=/usr/lib/llvm --enable-threads --disable-nls --disable-shared --disable-multilib --disable-bootstrap Thread model: posix gcc version 4.2.1 (Based on Apple Inc. build 5546) (LLVM build) /usr/lib/llvm/gcc-4.2/libexec/gcc/x86_64-linux-gnu/4.2.1/cc1 -quiet -v -imultilib . test.c -quiet -dumpbase test.c -m32 -mtune=generic -auxbase test -version -o /tmp/ccw6TZY6.s I looked in /usr/lib/llvm/gcc-4.2/libexec/gcc hoping to find another binary, but the only directory there is x86_64-linux-gnu. I will probably look at compiling llvm-gcc from source with appropriate options next.

