Search Results

Search found 4783 results on 192 pages for 'tests'.

  • Sorting 1000-2000 elements with many cache misses

    - by Soylent Graham
    I have an array of 1000-2000 elements which are pointers to objects. I want to keep my array sorted, and obviously I want to do this as quickly as possible. They are sorted by a member and not allocated contiguously, so assume a cache miss whenever I access the sort-by member. Currently I'm sorting on-demand rather than on-add, but because of the cache misses and [presumably] non-inlining of the member access, the inner loop of my quicksort is slow. I'm doing tests and trying things now (to see what the actual bottleneck is), but can anyone recommend a good alternative to speed this up? Should I do an insertion sort instead of quicksorting on-demand, or should I try and change my model to make the elements contiguous and reduce cache misses? Or is there a sort algorithm I've not come across which is good for data that is going to cache miss?
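
    One technique sometimes used here (an assumption on my part, just a sketch of the "make it contiguous" idea mentioned above): copy each element's sort key next to its pointer in a flat buffer, sort the buffer, and write the pointers back, so the sort's inner loop touches only contiguous memory.

        #include <algorithm>
        #include <cstddef>
        #include <utility>
        #include <vector>

        struct Object { int sortKey; /* ... other members ... */ };

        // Gather (key, pointer) pairs contiguously, sort them, scatter back.
        // The one cache miss per element happens up front in the gather pass
        // instead of on every comparison inside the sort's inner loop.
        void cacheFriendlySort(std::vector<Object*>& objects)
        {
            std::vector<std::pair<int, Object*>> keyed;
            keyed.reserve(objects.size());
            for (Object* obj : objects)
                keyed.emplace_back(obj->sortKey, obj);

            std::sort(keyed.begin(), keyed.end(),
                      [](const std::pair<int, Object*>& a,
                         const std::pair<int, Object*>& b)
                      { return a.first < b.first; });

            for (std::size_t i = 0; i < keyed.size(); ++i)
                objects[i] = keyed[i].second;
        }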

  • list all likes on a certain post coming from my friends

    - by Max Favilli
    I know I can get the posts (paginated, 25 at a time) of a certain user with this: https://graph.facebook.com/575128756/posts And the likes of a certain post (paginated, 25 at a time) with this Graph API call (I omit the first part of the URL): /575128756_176517292390301/likes But what if I want to get only the likes coming from friends of the post's user? FQL could be an alternative (from table like, with subqueries on friend and stream), but FQL seems so buggy in my tests that I don't feel comfortable using it in a production system. Is there any way to achieve my goal using the Graph API?
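
    For reference, the FQL route the asker is wary of might look roughly like this (a sketch only; like, friend, and me() are FQL built-ins, and the post ID is the one from the question):

        SELECT user_id FROM like
        WHERE post_id = "575128756_176517292390301"
        AND user_id IN (SELECT uid2 FROM friend WHERE uid1 = me())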

  • Radio Buttons not highlighting as though they are selected

    - by Ryan
    I'm working on an Android activity with a RadioGroup containing 10 RadioButtons. For some reason, sometimes (only sometimes) when you select a RadioButton in the RadioGroup, it doesn't highlight as if it's selected, but through some tests I've determined that it really IS selected even though it isn't highlighted. Another odd thing: when you select any other RadioButton in the RadioGroup and then try selecting your original RadioButton (the one that wouldn't highlight as though it were selected), it does highlight and functions as normal. Any idea why this is happening or how to fix it? Thanks!

  • How to configure .NET test assembly to use website web.config?

    - by Morten Christiansen
    I've run into a problem setting up Selenium tests for an ASP.NET MVC project in cases where I need the settings provided in the web.config of the site under test. The problem is that I want to create a dummy user before running the test and this causes an error saying that the password-answer supplied is invalid. This is due to the test assembly not using the web.config, instead using default values for membership configuration. I've tried to copy the relevant section (membership configuration) into the app.config of the assembly without luck, but I admit I'm just grasping at straws here.
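
    For what it's worth, copying the membership section normally only takes effect when the provider registration and its connection string both come along, nested under system.web in the test assembly's app.config. A sketch of the shape (names here are placeholders; the <add> element should be copied verbatim from the site's web.config):

        <configuration>
          <connectionStrings>
            <add name="TestDb" connectionString="(same as the site's)" />
          </connectionStrings>
          <system.web>
            <membership defaultProvider="SqlProvider">
              <providers>
                <clear />
                <!-- copy the <add> element verbatim from the site's web.config -->
                <add name="SqlProvider"
                     type="System.Web.Security.SqlMembershipProvider"
                     connectionStringName="TestDb"
                     requiresQuestionAndAnswer="true" />
              </providers>
            </membership>
          </system.web>
        </configuration>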

  • Creating TCP network errors for unit testing

    - by Robert S. Barnes
    I'd like to create various network errors during testing. I'm using the Berkeley sockets API directly in C++ on Linux. I'm running a mock server in another thread from within Boost.Test which listens on localhost. For instance, I'd like to create a timeout during connect. So far I've tried not calling accept in my mock server and setting the backlog to 1, then making multiple connections, but all seem to successfully connect. I would think that if there wasn't room in the backlog queue I would at least get a connection refused error, if not a timeout. I'd like to do this all programmatically if possible, but I'd consider using something external like ipchains to intentionally drop certain packets to certain ports during testing; however, I'd need to automate creating and removing rules so I could do it from within my Boost.Test unit tests. I suppose I could mock the various system calls involved, but I'd rather go through a real TCP stack if possible. Ideas?
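
    One trick sometimes used for the connect-timeout case (an assumption, not something from the question): dial a non-routable address, whose SYN packets are silently dropped, so connect() blocks until the kernel's retries are exhausted instead of failing fast with ECONNREFUSED. A minimal sketch:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstdio>

        int main()
        {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_port = htons(80);
            // 10.255.255.1 is commonly non-routable; adjust for your network.
            inet_pton(AF_INET, "10.255.255.1", &addr.sin_addr);

            // Blocks for minutes by default, then fails with ETIMEDOUT; pair it
            // with a non-blocking socket and select() to get a short timeout.
            if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0)
                perror("connect");

            close(fd);
            return 0;
        }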

  • Proper usage of double and single quotes?

    - by Phox
    I'm talking about the performance increase here. From what I know, you can echo variables in double quotes ("), like so: <?php echo "You are $yourAge years old"; ?> But single quotes will just return You are $yourAge years old. So what about performance differences? I've always gone by the rule that single quotes are faster because the PHP interpreter doesn't have to search through the string for variables, but I'm seeing more and more blog and forum posts on the web saying differently. Does anyone actually have any information on this subject? Perhaps benchmark tests or something? Cheers.
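
    A minimal benchmark sketch, for anyone who wants to measure rather than argue (numbers vary by PHP version and build, so treat any single run with suspicion):

        <?php
        $yourAge = 25;
        $n = 1000000;

        $start = microtime(true);
        for ($i = 0; $i < $n; $i++) {
            $s = "You are $yourAge years old";   // interpolation
        }
        $double = microtime(true) - $start;

        $start = microtime(true);
        for ($i = 0; $i < $n; $i++) {
            $s = 'You are ' . $yourAge . ' years old';   // concatenation
        }
        $single = microtime(true) - $start;

        printf("double-quoted: %.4fs, single-quoted + concat: %.4fs\n",
               $double, $single);
        ?>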

  • What's wrong with consuming ConfiguredTaskAwaitable from PortableClassLibrary's class under Debugger from MSTest Runner or Console App?

    - by Stas Shusha
    It's only a debug-time error, but a very weird one. Problem: while running with a debugger attached and calling a method, exposed in a separate portable library, that returns ConfiguredTaskAwaitable, we get an InvalidProgramException. Repro: two projects. PortableClassLibrary (supporting .NET 4.5; Windows Store; Windows Phone 8) with one class:

        public class Weird
        {
            public static ConfiguredTaskAwaitable GetConfiguredTaskAwaitable()
            {
                return new ConfiguredTaskAwaitable();
            }
        }

    ConsoleApplication with the code:

        static void Main(string[] args)
        {
            Weird.GetConfiguredTaskAwaitable();
        }

    Notes: replacing ConfiguredTaskAwaitable with ConfiguredTaskAwaitable<T> (the generic version) fixes this strange issue, and consuming this method from a WP8 or Win8 app under the debugger works fine. Currently it causes problems because I can't run my unit tests under the debugger. I'm forced to change my "ObjectUnderTest" implementation to return the generic ConfiguredTaskAwaitable<T>, which is fine for the real project, but it's still only a workaround. The question is: does anybody know the reason for this error? It's definitely related to Portable Class Library magic.

  • How to pass binaries built upstream to a remote downstream build slave

    - by sbi
    We're using Hudson on Windows to build a .NET solution and run the unit tests (NUnit). Hudson is used to start batch files that do the actual work. I am now trying to set up a new test that is to run on a build slave and will run very long. The test should use the binaries produced by the upstream build. I have searched the Hudson documentation, but I cannot find how to pass upstream build artifacts to downstream slaves. How do I do this?
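
    One approach (an assumption on my part; the job name and server URL are placeholders): have the upstream job archive its build artifacts, then let the downstream batch file pull them over Hudson's artifact URL, which can serve everything archived as one zip:

        rem fetch-upstream.bat -- pull the upstream job's archived artifacts
        rem (requires curl or wget on the slave; unzip before running the test)
        curl -o upstream.zip ^
            "http://hudson-server/job/upstream-job/lastSuccessfulBuild/artifact/*zip*/archive.zip"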

  • Rails test across multiple environments

    - by DSimon
    Is there some way to change Rails environments mid-way through a test? Or, alternately, what would be the right way to set up a test suite that can start up Rails in one environment, run the first half of my test in it, then restart Rails in another environment to finish the test? The two environments have separate databases. Some necessary context: I'm writing a Rails plugin that allows multiple installations of a Rails app to communicate with each other with user assistance, so that a user without Internet access can still use the app. They'll run a local version of an app, and upload their work to the online app by saving a file to a thumbdrive and taking it to an Internet cafe. The plugin adds two special environments to Rails: "offline-production" and "offline-test". I want to write functional tests that involve both the "test" and "offline-test" environments, to represent the main online version of the app and the local offline version of the app respectively.
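
    One way to touch the second database without restarting Rails mid-test (a sketch, and possibly a poor fit for the plugin's design): give part of the test its own connection pool pointing at the other environment's entry in config/database.yml:

        # Hypothetical: models that should hit the offline-test database.
        class OfflineBase < ActiveRecord::Base
          self.abstract_class = true
          establish_connection :"offline-test"   # entry from config/database.yml
        end

        class OfflineWorkItem < OfflineBase
          self.table_name = "work_items"         # table name is made up
        end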

  • C# Unit Testing - Generating Mock DataContexts / LINQ -> SQL classes

    - by gav
    Hi all, I am loving the new world that is C#. I've come to a point with my toy programs where I want to start writing some unit tests. My code currently uses a database via a DatabaseDataContext object (*.dbml file); what's the best way to create a mock for this object? Given how easy it is to generate the LINQ to SQL database code and how common a request this must be, I'm hoping that VS2010 has built-in functionality to help with testing. If I'm way off and this must be done manually, could you please enlighten me as to your preferred approach? Many thanks, Gavin
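
    One manual route (a sketch; as far as I know VS2010 has no built-in mock generator for DataContexts, and the type and member names here are made up): hide the generated DataContext behind an interface, then hand the tests an in-memory fake:

        using System.Collections.Generic;
        using System.Linq;

        public interface ICustomerRepository
        {
            IQueryable<Customer> Customers { get; }
        }

        // Production implementation wraps the generated DataContext.
        public class LinqToSqlCustomerRepository : ICustomerRepository
        {
            private readonly DatabaseDataContext _db = new DatabaseDataContext();
            public IQueryable<Customer> Customers
            {
                get { return _db.Customers; }
            }
        }

        // Test double: same queries, no database.
        public class FakeCustomerRepository : ICustomerRepository
        {
            public IQueryable<Customer> Customers
            {
                get
                {
                    return new List<Customer>
                    {
                        new Customer { Name = "Test" }
                    }.AsQueryable();
                }
            }
        }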

  • Is there a way to extract the message from a JavaScript dialog in Chrome?

    - by Samuel
    I’ve been working on an extension for automating tests in Chrome, and I came across an obscure issue with JavaScript dialogs. The message shown in the dialog can’t be readily retrieved/copied. I’ve used the GetWindowText and InternalGetWindowText functions, but they only return the title of the dialog and the text from the buttons, not the actual message itself. I even looked at programs that extract text from forms, but no luck. So does anyone know of a way to retrieve the text from these JavaScript dialogs in Chrome?
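
    One workaround seen in extension-based automation (an assumption, not something the asker verified): capture the message before the dialog ever shows by wrapping window.alert from a script injected into the page's own JS context:

        // Content script sketch: inject a wrapper around alert/confirm and
        // relay the dialog text back via postMessage.
        var script = document.createElement('script');
        script.textContent = '(' + function () {
            ['alert', 'confirm'].forEach(function (name) {
                var original = window[name];
                window[name] = function (msg) {
                    window.postMessage({ dialogText: String(msg) }, '*');
                    return original.apply(window, arguments);
                };
            });
        } + ')();';
        document.documentElement.appendChild(script);

        window.addEventListener('message', function (e) {
            if (e.data && e.data.dialogText) {
                console.log('dialog said: ' + e.data.dialogText);
            }
        });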

  • What is the best way to automate (integration) tests in Java with OpenID4Java against a real OpenID provider?

    - by mP
    I would like to do a bit more than manually test my OpenID glue code, which happens to use the openid4java library. My goal is to be able to run it within my IDE as a bunch of tests using JUnit or similar.

    Selenium & Tomcat: I was thinking of using Selenium and a Tomcat, but that's not exactly a nice approach, as it is a bit heavy and not really lightweight.

    HttpUnit: a solution with HttpUnit is incomplete because it doesn't really cope with the redirect to my OpenID provider, authenticating, and redirecting back. Perhaps I am wrong, but it looks like this could get quite involved just to make sure it works.

    Mocks: my last option is to mock everything and assume that it's accurate and works. If Google or Yahoo ever change in some way, then I'll have to verify manually. The approach is simple but has a major flaw.

  • Hudson results one step behind

    - by kaerast
    I'm using the filesystem plugin for Hudson, and when a build happens it looks for new/modified files, copies them to the workspace, runs tests using Rake, and then publishes the JUnit XML result files. However, the updated JUnit XML result files don't get pushed to the workspace until the next build. This means that when the publishing of the JUnit XML result files happens, it's always one step behind, and so I need to run a build twice before the results show. The Rake task is creating the JUnit XML files in the project directory. I've tried outputting to the workspace directory, but that seems to make things worse, and the results don't get published at all. Am I doing something fundamentally wrong here? Is there a simple way of getting those JUnit XML results pushed to the workspace so that the post-build "Publish JUnit test result report" actually runs against the newly created XML files?

  • How can I get an image that is too big from a server?

    - by Daniel Calderon Mori
    I'm currently developing for BlackBerry and just bumped into this problem as I was trying to download an image from a server. The servlet which the device communicates with is working correctly, as I have made a number of tests for it, but it gives me the HTTP 413 error ("Request entity too large"). I figure I will just get the bytes portion by portion. How can I accomplish this? This is the code of the servlet (the doGet() method):

        try {
            ImageIcon imageIcon = new ImageIcon("c:\\Users\\dcalderon\\prueba.png");
            Image image = imageIcon.getImage();
            PngEncoder pngEncoder = new PngEncoder(image, true);
            output.write(pngEncoder.pngEncode());
        } finally {
            output.close();
        }

    Thanks. It's worth mentioning that I am developing both the client side and the server side.
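
    If the server side is extended to honor the HTTP Range header (the servlet above does not yet), the device can pull the image in fixed-size slices. A client-side sketch using the Java ME connection API (URL and chunk size are placeholders):

        private byte[] downloadInChunks(String url) throws java.io.IOException {
            final int CHUNK = 4096;
            java.io.ByteArrayOutputStream image = new java.io.ByteArrayOutputStream();
            byte[] buf = new byte[1024];
            int offset = 0;
            while (true) {
                javax.microedition.io.HttpConnection conn =
                    (javax.microedition.io.HttpConnection)
                        javax.microedition.io.Connector.open(url);
                try {
                    conn.setRequestProperty("Range",
                        "bytes=" + offset + "-" + (offset + CHUNK - 1));
                    int status = conn.getResponseCode();   // expect 206 Partial Content
                    if (status != 206 && status != 200) break;
                    java.io.InputStream in = conn.openInputStream();
                    int n, got = 0;
                    while ((n = in.read(buf)) > 0) {
                        image.write(buf, 0, n);
                        got += n;
                    }
                    in.close();
                    offset += got;
                    if (status == 200 || got < CHUNK) break;  // whole file or last slice
                } finally {
                    conn.close();
                }
            }
            return image.toByteArray();
        }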

  • Start diving into large open source projects

    - by Vanangamudi
    How do I start learning and reading the source of large and complex projects like Blender3D and GIMP, for instance? Since the developers are busy improving them and no docs exist at present, how do we start developing and customizing these projects? The Linux kernel deserves to have several books on its code, and these kinds of projects deserve the same. Also, there are no unit tests available for these kinds of projects. Say I'm going to read and understand the source code of Blender: how do I start? How do I set up the development environment for developing the app? If it includes several dependencies, and assuming their source code is also available, how do I set up this kind of inter-related, coherent source code for debugging?

  • Get PropertyInfo from property instead of name

    - by Sam
    Say, for example, I've got this simple class:

        public class MyClass
        {
            public String MyProperty { get; set; }
        }

    The way to get the PropertyInfo for MyProperty would be:

        typeof(MyClass).GetProperty("MyProperty");

    This sucks! Why? Easy: it will break as soon as I change the name of the property, it needs a lot of dedicated tests to find every location where a property is used like this, and refactoring and usage trees are unable to find these kinds of access. Ain't there any way to properly access a property? Something that is validated at compile time? I'd love a command like this:

        propertyof(MyClass.MyProperty);
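
    The usual workaround (a sketch; the helper name is made up, and this is the common expression-tree trick rather than anything built into the language at the time) is to pass a lambda instead of a string, so renames are caught at compile time and by refactoring tools:

        using System;
        using System.Linq.Expressions;
        using System.Reflection;

        public static class PropertyHelper
        {
            // Extracts the PropertyInfo from a property-access lambda.
            public static PropertyInfo Get<T, TProp>(
                Expression<Func<T, TProp>> selector)
            {
                return (PropertyInfo)((MemberExpression)selector.Body).Member;
            }
        }

        // Usage -- stops compiling if MyProperty is renamed without this call:
        // PropertyInfo info = PropertyHelper.Get<MyClass, string>(c => c.MyProperty);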

  • Python: How to run unittest.main() for all source files in a subdirectory?

    - by Pete
    I am developing a Python module with several source files, each with its own test class derived from unittest right in the source. Consider the directory structure:

        dirFoo\
            test.py
            dirBar\
                __init__.py
                Foo.py
                Bar.py

    To test either Foo.py or Bar.py, I would add this at the end of the Foo.py and Bar.py source files:

        if __name__ == "__main__":
            unittest.main()

    And run Python on either source, i.e.

        $ python Foo.py
        ...........
        ----------------------------------------------------------------------
        Ran 11 tests in 2.314s

        OK

    Ideally, I would have "test.py" automagically search dirBar for any unittest-derived classes and make one call to "unittest.main()". What's the best way to do this in practice? I tried using Python to call execfile for every *.py file in dirBar, but that runs once for the first .py file found and then exits the calling test.py, plus I then have to duplicate my code by adding unittest.main() in every source file, which violates DRY principles.
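
    A sketch of what test.py could do (unittest's discover() exists from Python 2.7/3.2 onward; the pattern is widened here because Foo.py and Bar.py don't follow the default test*.py naming):

        # test.py -- find and run every TestCase defined under dirBar
        import unittest

        if __name__ == "__main__":
            suite = unittest.defaultTestLoader.discover("dirBar", pattern="*.py")
            unittest.TextTestRunner(verbosity=2).run(suite)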

  • How can I access Android private APIs which aren't exposed in TelephonyManager?

    - by Micha Valach
    Hi, I'm a newbie in Android. I intend to write tests related to the Phone and direct SIM writes. What are the alternatives in case the required APIs are not exposed in TelephonyManager but exist as private APIs in PhoneBase.java, PhoneFactory.java, or CommandInterface.java? Examples:

        1. What is the "replacement" for mPhone = PhoneFactory.getDefaultPhone(); ?
        2. What is the alternative in order to access the CommandsInterface, CommandsInterface mCmdIf = ((PhoneBase)mPhone).mCM ?

    Thanks in advance, Micha
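
    For what it's worth, the usual route to non-SDK classes is reflection. A sketch only, with a big caveat: these particular classes live inside the phone process, so invoking them from an ordinary app may simply fail or throw regardless of whether the reflection itself works:

        try {
            Class<?> phoneFactory =
                Class.forName("com.android.internal.telephony.PhoneFactory");
            java.lang.reflect.Method getDefaultPhone =
                phoneFactory.getMethod("getDefaultPhone");
            Object phone = getDefaultPhone.invoke(null);
            // Every further call on 'phone' must also go through reflection.
        } catch (Exception e) {
            // Expected wherever the internal API is off-limits.
        }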

  • NUnit with an ASP.net web site

    - by Ed Woodcock
    Hi folks, I'm currently trying to upgrade our build server at work, going from having no build server to having one! I'm using JetBrains TeamCity (having used ReSharper for a couple of years, I trust their stuff), and intend to use NUnit and MSBuild. However, I've come up against an issue: it appears that it is not possible to test an ASP.NET Web Site with NUnit. I had assumed it would be possible to configure it to test App_Code after build, but it seems that the only way to do tests nicely is by converting the Web Site to a Web Application (which my boss does not like the idea of). Does anyone have a suggestion as to how I could go about this? Please bear in mind that the testing needs to be able to be fired automatically from TeamCity.

  • WatiN closes browsers for all projects that are building

    - by Scooter
    I'm having an issue running WatiN under CruiseControl.NET where, on a .ForceClose(), WatiN closes all open browser instances. I have multiple projects running under CruiseControl, and it's not uncommon for some of those projects to be building and testing at the same time. There has been more than one occasion where WatiN closed the browser window of a different project, causing it to fail. In my local tests, creating my WatiN instance under a new process fixes this issue, but running under CruiseControl, when doing this, I lose my IE object: "Object reference not set to an instance of an object."

        Running CC.Net as a service
        CC.Net server is Windows 2003
        IE6

    Any thoughts?

  • Ruby - Is there a way to overwrite the __FILE__ variable?

    - by Markus Orrelly
    I'm doing some unit testing, and some of the code checks whether files exist, based on the relative path of the currently executing script, by using the __FILE__ variable. I'm doing something like this:

        if File.directory?(File.join(File.dirname(__FILE__), '..', '..', 'directory'))
          # blah blah blah ...
        else
          raise "Can't find directory"
        end

    I'm trying to find a way to make it fail in the unit tests without doing anything drastic. Being able to overwrite the __FILE__ variable would be easiest, but as far as I can tell, it's impossible. Any tips?
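
    Since __FILE__ is a literal the parser fills in (not an assignable variable), one less drastic alternative — a sketch, not from the question — is to stub the filesystem check itself for the duration of the test:

        # Temporarily force File.directory? to return false, exercise the
        # 'raise' branch, then restore the real method.
        class << File
          alias_method :real_directory?, :directory?
          def directory?(path)
            false
          end
        end
        begin
          # ... run the code under test, assert that it raises ...
        ensure
          class << File
            alias_method :directory?, :real_directory?
          end
        end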

  • How to flush coverage data when my tests cause an app crash - for an iOS app

    - by Ypy
    I want to get the code coverage of my tests, so I set up the build settings, built the app with .gcno files, and ran it on the simulator. I can get the coverage data successfully if there is no crash, but if the app crashes, I get nothing. So how can I get code coverage data when the app crashes? My understanding is that this is because __gcov_flush() is not called when the app crashes. I only added "Application does not run in background" to my plist file, so __gcov_flush() is called only when I press the Home button. Is there any way to call __gcov_flush() before the app crashes?
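
    A best-effort sketch (an assumption, not a guaranteed fix: __gcov_flush() is not async-signal-safe, so flushing inside a crashing process can itself misbehave) is to hook fatal signals and uncaught exceptions and flush before default handling runs:

        // Register early, e.g. in application:didFinishLaunchingWithOptions:.
        #import <Foundation/Foundation.h>
        #include <signal.h>

        extern void __gcov_flush(void);

        static void flush_on_signal(int sig) {
            __gcov_flush();            // write out the .gcda coverage data
            signal(sig, SIG_DFL);      // restore default handling and re-raise
            raise(sig);
        }

        static void flush_on_exception(NSException *exception) {
            __gcov_flush();
        }

        // ... at startup:
        NSSetUncaughtExceptionHandler(&flush_on_exception);
        signal(SIGABRT, flush_on_signal);
        signal(SIGSEGV, flush_on_signal);
        signal(SIGBUS,  flush_on_signal);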

  • Test plans and how best to write them

    - by Karim
    We're trying to figure out the best way to write tests in our test plan. Specifically, when writing a test that is meant to be used by anyone, including QA staff, should the steps in the test be very specific, or broader, giving the tester more leeway in how the task can be accomplished? As a very simple example, if you're testing opening a document in a word-processing application, should the test read:

        1. Using the mouse, open the file menu
        2. Choose "Open File..." in the file menu
        3. In the open file dialog that appears, navigate to x and double-click the document called y

    or:

        1. Bring up the file open dialog
        2. Open the file y

    Now I realize one answer is probably going to be "it depends on what you're trying to test", but I'm trying to answer a broader question here: if the test steps are too specific, do we risk (a) making the testing process too laborious and tedious and, more importantly, (b) missing something because we wrote down too specific a path to achieve a goal? Alternatively, if we make them broad, do we depend too much on the whims of the tester at the time and lose crucial testing of the paths that are more common for customers/clients?

  • FileInputStream and FileOutputStream to the same file: Is a read() guaranteed to see all write()s that "happened before"?

    - by user946850
    I am using a file as a cache for big data. One thread writes to it sequentially; another thread reads it sequentially. Can I be sure that all data that has been written (by write()) in one thread can be read() from another thread, assuming a proper "happens-before" relationship in terms of the Java memory model? Is this behavior documented?

    EDIT: In my JDK, FileOutputStream does not override flush(), and OutputStream.flush() is empty. That's why I'm wondering...

    EDIT^2: The streams in question are owned exclusively by a class that I have full control of. Each stream is guaranteed to be accessed by one thread only. My tests show that it works as expected, but I'm still wondering whether this is guaranteed and documented.

    See also this related discussion: http://chat.stackoverflow.com/rooms/17598/discussion-between-hussain-al-mutawa-and-user946850
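
    For illustration, a pattern that establishes the Java-side happens-before edge (a sketch with made-up names; whether the file's bytes are then visible is exactly what's being asked, though same-process reads of a regular file go through the shared page cache in practice):

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.concurrent.atomic.AtomicLong;

        // Writer bumps 'published' after write(); reader never reads past it.
        // The AtomicLong update/read is the happens-before edge.
        class CacheFile {
            private final AtomicLong published = new AtomicLong();

            void append(FileOutputStream out, byte[] data) throws IOException {
                out.write(data);
                published.addAndGet(data.length);            // release
            }

            int read(FileInputStream in, long alreadyRead, byte[] buf)
                    throws IOException {
                long avail = published.get() - alreadyRead;  // acquire
                if (avail <= 0) return 0;
                return in.read(buf, 0, (int) Math.min(buf.length, avail));
            }
        }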

  • changing the last commit message without committing newest changes

    - by Oleg2718281828
    My ideal workflow would consist of the following steps:

        1. edit the code
        2. compile
        3. git commit -a -m "commit message"
        4. start running the new binaries, tests, etc. (may take 10+ minutes)
        5. start new changes, while the binaries are still running
        6. when step 4 is finished, edit the commit message from step 3, without committing the changes introduced in step 5, by adding, say, "test FOO failed"

    I cannot use git commit -a --amend -m "new commit message", because this commits the new changes as well. I'm not sure that I want to bother with staging or branching. I wish I could just edit the commit message without committing any new changes. Is this possible?
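
    For reference, a sketch of what should work here (assuming the step-5 edits were never staged): --amend without -a rebuilds the commit from the index, which still matches the original commit, so only the message changes:

        # Working tree is dirty with step-5 edits, but the index is clean.
        git commit --amend -m "commit message (test FOO failed)"
        # The unstaged step-5 changes stay untouched in the working tree.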
