Search Results

Search found 25284 results on 1012 pages for 'test driven'.

  • Can I set up a test server and then transfer everything to a different production server?

    - by Justin
    Hello, I am going to be setting up a "real" server, but it's not being shipped for another week. I was planning on setting up most of the server's functionality using an extra workstation I have. I want to set up Windows Server 2003 or 2008, IIS, Terminal Services, a firewall, and antivirus on this regular machine. I'd also be installing software like WinZip and VMware that will be used on the server. I can't ghost the machine, as I've done in the past, because the motherboard/CPU/etc. will all be different. Is there any way to export all of the "server settings" or something like that so I can move everything from test to production? Is there any software out there that does something similar to this? Some things will have to wait, such as setting up the file server completely in its RAID configuration, but I'd like to get the simple server stuff and network setup out of the way. Has anyone done this before? Do I need software, open-source or not, to do this? Or maybe there's a way to export all the server settings in some way? Thanks in advance! Justin

    Read the article

  • stdout, stderr, and what else? (going insane parsing slapadd output)

    - by user64204
    I am using slapadd to restore a backup. That backup contains 45k entries, which takes a while to restore, so I need some progress updates from slapadd. Luckily for me there is the -v switch, which gives output similar to this:

        added: "[email protected],ou=People,dc=example,dc=org" (00003d53)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d54)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d55)
        .######## 44.22% eta 05m05s elapsed 04m spd 29.2 k/s
        added: "[email protected],ou=People,dc=example,dc=org" (00003d56)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d57)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d58)
        added: "[email protected],ou=People,dc=example,dc=org" (00003d59)

    Every N entries added, slapadd writes a progress update line (.######## 44.22% eta 05m05s elapsed ...), which I want to keep, and one line per entry created, which I want to hide because it exposes people's email addresses -- but I still want to count those lines to know how many users were imported. The way I thought about hiding the emails while showing the progress updates is this:

        $ slapadd -v ... 2>&1 | tee log.txt | grep '########'
        # => would give me a real-time progress update
        $ grep "added" log.txt | wc -l
        # => once the backup has been restored, I would know how many users were added

    I tried different variations of the above, and whatever I try I can't grep the progress update line. I traced slapadd as follows:

        sudo strace slapadd -v ...

    And here is what I get:

        write(2, "added: \"[email protected]"..., 78added: "[email protected],ou=People,dc=example,dc=org" (00000009)
        ) = 78
        gettimeofday({1322645227, 253338}, NULL) = 0
        _######## 44.22% eta 05m05s elapsed 04m spd 29.2 k/s ) = 80
        write(2, "\n", 1 )

    As you can see, the percentage line isn't sent to either stdout or stderr (FYI, I have validated with known working and failing commands that fd 2 is stderr and fd 1 is stdout). Q1: Where is the progress update line going? Q2: How can I grep on it while sending stderr to a file? Additional info: I'm running OpenLDAP 2.4.21 on Ubuntu Server 10.04.
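    For what it's worth, one way to investigate is to drive slapadd from a small wrapper that reads its stderr byte by byte, so lines terminated by a bare carriage return (a common trick for in-place progress bars, and consistent with the strace output above) aren't swallowed by line-buffered grep. This is only a sketch under that assumption; the LDIF filename is a placeholder:

        import subprocess

        # Hypothetical wrapper: run slapadd, split stderr on both \n and \r,
        # print progress lines, hide "added:" lines but count them.
        proc = subprocess.Popen(["slapadd", "-v", "-l", "backup.ldif"],
                                stderr=subprocess.PIPE)
        added = 0
        buf = b""
        while True:
            ch = proc.stderr.read(1)
            if not ch:
                break
            if ch in b"\r\n":               # treat \r like a line terminator
                line = buf.decode("utf-8", "replace")
                buf = b""
                if line.startswith("added:"):
                    added += 1              # count entries without echoing emails
                elif "#" in line:
                    print(line)             # real-time progress update
            else:
                buf += ch
        proc.wait()
        print("users imported:", added)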

    Read the article

  • SRM 4 test fails for some VMs with the message: Error: A specified parameter was not correct.

    - by Setesh
    Here is my architecture. For the protected site:
    - 4 hosts with vSphere Enterprise Plus, each with 2 FC HBAs connected to the switch fabric, connected to an EMC CX4-120
    - 1 vCenter
    - 1 SRM
    For the recovery site:
    - 2 vSphere 4 hosts
    - 1 vCenter
    - 1 SRM
    - 1 CX4-120
    The first CX4-120 is connected to the second CX4-120 with iSCSI and MirrorView/Asynchronous. For the time being I synchronise 6 LUNs on an FC DAE and 2 on a S-ATA DAE. I have allocated 30% of the synchronised LUN capacity for snapshot use, but I have allocated it only on my S-ATA II DAE. That does not seem to be a problem; my snapshots are correctly active. The whole installation is new (hardware and software), installed in January with the latest files available for download. I have a strange problem, and it's random: sometimes when I run a test on my RP, some VMs show this error: Error: A specified parameter was not correct. I don't know where to look. Any help is appreciated... PS: I have checked all the VMs; no floppy disk or CD attached. PS2: There are several VMs with RDM and OCFS2 filesystems on them.

    Read the article

  • How to configure hostname for `apache22` package on FreeBSD?

    - by Eonil
    I'm configuring a development & test FreeBSD machine in a VM. I installed the apache22 package and restarted, but the daemon did not start, giving this error:

        %apachectl start
        httpd: apr_sockaddr_info_get() failed for test.box
        httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        (13)Permission denied: make_sock: could not bind to address [::]:80
        (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        %

    My hostname is test.box. Because this is a temporary test box, it has no real domain name, but I used a two-level name to avoid sshd waiting a long time at boot. I searched the web and modified the /etc/hosts file like this (I hadn't touched this file before):

        # This is the original configuration
        #::1          localhost localhost.my.domain
        #127.0.0.1    localhost localhost.my.domain

        # New configuration
        ::1           localhost test.box
        127.0.0.1     localhost test.box
        127.0.0.1     test.box test

    Now apache fails with this error message:

        %apachectl start
        httpd: Could not reliably determine the server's fully qualified domain name, using test.box for ServerName
        (13)Permission denied: make_sock: could not bind to address [::]:80
        (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        %

    I don't know what's required now. Please let me know the reason for and solution to this error.

    ---- (edit) ----

    The permission errors were caused by the omission of sudo.

    Read the article

  • HSphere - Only sees Apache 2 Test Page after forced shutdown?

    - by Darkwoof
    Hi, I have a dedicated server running on a Dell PowerEdge 850 with CentOS 4.4 and HSphere 3.0 Patch 6, colocated at a datacenter. Last night my hosting company had to schedule a change to the power bar, and I gave them the go-ahead to shut down the server and bring it up when they were done. Since they do not have admin access to the machine, I suppose they did a forced shutdown. When the machine was brought up, I found that all my domains (and sub-domains) now point to an "Apache 2 Test Page" instead of the pre-configured sites that were running prior to the shutdown. This apparently only affects the standard sites running on port 80 -- my Webmin instance running on port 1000 is still accessible, for example, as is my HSphere control panel running on port 8080. I've checked the config settings in the HSphere UI for each of the sites and didn't find anything wrong. I've also tried rebooting the server via SSH, which does not rectify the problem. I've done reboots before with no issues; the sites would just come right back up when it was done, but not this time. I'm guessing some configuration file got corrupted or overwritten this time? Anyone with experience with HSphere who can provide some advice on what's happened and how to solve it? Thanks. (I do not have an active support agreement for HSphere since Parallels took over and increased the minimum license to 200. I only had 25 licenses, for use by family and friends.) Thanks in advance.

    Read the article

  • Maximum file size of a filesystem on my test .... approach?

    - by jocco
    Hello all. I'm new to the site, and I have a question. I got this question on a test and would really like to know the correct approach to solving it. Here is the question: In an indexed filesystem, the first index block (inode) has 12 direct pointers and 1 pointer to an indirect index block. The filesystem is implemented on a disk with a disk-block size of 1024 bytes. All pointers are 32 bit. Question: what is the maximum file size (in kilobytes) on this filesystem? If possible, not just an answer but an explanation. Edit: it was multiple choice, by the way, with 4 answers:
    a. 13 K
    b. 268 K
    c. 524 K
    d. 1036 K
    As for my approach, I only got as far as knowing that 1 pointer is 32 bits. I also found something else here on the site which seems very useful: http://stackoverflow.com/questions/2755006/understanding-the-concept-of-inodes OK, I got this far: there are 12 direct blocks and each block is 1024 bytes. 1024 * 12 = 12288 bytes, or 12 KB, directly accessible. Please correct me if I'm wrong. Each pointer is 32 bits = 4 bytes. And to be honest, at this point I'm starting to get confused, especially since my answer is way over any of the multiple-choice answers.
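    Continuing that reasoning under the usual textbook assumptions (the single indirect block is one 1024-byte disk block filled with 4-byte pointers, each pointing to a data block), the arithmetic works out as in this little sketch:

        block_size = 1024          # bytes per disk block
        ptr_size = 4               # 32-bit pointers

        direct = 12 * block_size                 # 12 KB reachable via direct pointers
        ptrs_per_block = block_size // ptr_size  # 256 pointers fit in the indirect block
        indirect = ptrs_per_block * block_size   # 256 KB reachable via the indirect block

        print((direct + indirect) // 1024)       # 268 -> answer (b) 268 K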

    Read the article

  • Ajax Control Toolkit and Superexpert

    - by Stephen Walther
    Microsoft has asked my company, Superexpert Consulting, to take ownership of the development and maintenance of the Ajax Control Toolkit moving forward. In this blog entry, I discuss our strategy for improving the Ajax Control Toolkit.

    Why the Ajax Control Toolkit?

    The Ajax Control Toolkit is one of the most popular projects on CodePlex. In fact, some have argued that it is among the most successful open-source projects of all time. It consistently receives over 3,500 downloads a day (not weekends -- workdays). A mind-boggling number of developers use the Ajax Control Toolkit in their ASP.NET Web Forms applications. Why does the Ajax Control Toolkit continue to be such a popular project? The Ajax Control Toolkit fills a strong need in the ASP.NET Web Forms world. The Toolkit enables Web Forms developers to build richly interactive JavaScript applications without writing any JavaScript. For example, by taking advantage of the Ajax Control Toolkit, a Web Forms developer can add modal dialogs, popup calendars, and client tabs to a web application simply by dragging web controls onto a page. The Ajax Control Toolkit is not for everyone. If you are comfortable writing JavaScript, then I recommend that you investigate using jQuery plugins instead of the Ajax Control Toolkit. However, if you are a Web Forms developer and you don’t want to get your hands dirty writing JavaScript, then the Ajax Control Toolkit is a great solution.

    The Ajax Control Toolkit is Vast

    The Ajax Control Toolkit consists of 40 controls. That’s a lot of controls (for the sake of comparison, jQuery UI consists of only 8 controls – those slackers :-)). Furthermore, developers expect the Ajax Control Toolkit to work on browsers both old and new. For example, people expect the Ajax Control Toolkit to work with Internet Explorer 6 and Internet Explorer 9 and every version of Internet Explorer in between. People also expect the Ajax Control Toolkit to work on the latest versions of Mozilla Firefox, Apple Safari, and Google Chrome. And people expect the Ajax Control Toolkit to work with different operating systems. Yikes, that is a lot of combinations. The biggest challenge which my company faces in supporting the Ajax Control Toolkit is ensuring that the Ajax Control Toolkit works across all of these different browsers and operating systems.

    Testing, Testing, Testing

    Because we wanted to ensure that we could easily test the Ajax Control Toolkit with different browsers, the very first thing that we did was to set up a dedicated testing server. The dedicated server -- named Schizo -- hosts 4 virtual machines so that we can run Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, and Internet Explorer 9 at the same time (we also use the virtual machines to host the latest versions of Firefox, Chrome, Opera, and Safari). The five developers on our team (plus me) can each publish to a separate FTP website on the testing server. That way, we can quickly test how changes to the Ajax Control Toolkit affect different browsers.

    QUnit Tests for the Ajax Control Toolkit

    Introducing regressions – introducing new bugs when trying to fix existing bugs – is the concern which prevents me from sleeping well at night. There are so many people using the Ajax Control Toolkit in so many unique scenarios that it is difficult to make improvements to the Ajax Control Toolkit without introducing regressions. In order to avoid regressions, we decided early on that it was extremely important to build good test coverage for the 40 controls in the Ajax Control Toolkit. We’ve been focusing a lot of energy on building automated JavaScript unit tests which we can use to help us discover regressions. We decided to write the unit tests with the QUnit test framework. We picked QUnit because it is quickly becoming the standard unit testing framework in the JavaScript world. For example, it is the unit testing framework used by the jQuery team, the jQuery UI team, and many jQuery UI plugin developers. We had to make several enhancements to the QUnit framework in order to test the Ajax Control Toolkit. For example, QUnit does not support tests which include postbacks. We modified the QUnit framework so that it works with IFrames so we could perform postbacks in our automated tests. At this point, we have written hundreds of QUnit tests. For example, we have written 135 QUnit tests for the Accordion control. The QUnit tests are included with the Ajax Control Toolkit source code in a project named AjaxControlToolkit.Tests. You can run all of the QUnit tests contained in the project by opening the Default.aspx page.

    Automating the QUnit Tests across Multiple Browsers

    Automated tests are useless if no one ever runs them. In order for the QUnit tests to be useful, we needed an easy way to run the tests automatically against a matrix of browsers. We wanted to run the unit tests against Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, and Safari automatically. Expecting a developer to run QUnit tests against every browser after every check-in is just too much to expect. It takes 20 seconds to run the Accordion QUnit tests. We are testing against 8 browsers. That would require the developer to open 8 browsers and wait for the results after each change in code. Too much work. Therefore, we built a JavaScript Test Server. Our JavaScript Test Server project was inspired by John Resig’s TestSwarm project. The JavaScript Test Server runs our QUnit tests in a swarm of browsers (running on different operating systems) automatically. Here’s how the JavaScript Test Server works:

    1. We created an ASP.NET page named RunTest.aspx that constantly polls the JavaScript Test Server for a new set of QUnit tests to run. After the RunTest.aspx page runs the QUnit tests, it records the test results back to the JavaScript Test Server.
    2. We opened the RunTest.aspx page on instances of Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, Opera, and Safari.

    Now that we have the JavaScript Test Server set up, we can run all of our QUnit tests against all of the browsers which we need to support with a single click of a button.

    A New Release of the Ajax Control Toolkit Each Month

    The Ajax Control Toolkit Issue Tracker contains over one thousand five hundred open issues and feature requests. So we have plenty of work on our plates :-). At CodePlex, anyone can vote for an issue to be fixed. Originally, we planned to fix issues in order of their votes. However, we quickly discovered that this approach was inefficient. Constantly switching back and forth between different controls was too time-consuming. It takes time to re-familiarize yourself with a control. Instead, we decided to focus on two or three controls each month and really focus on fixing the issues with those controls. This way, we can fix sets of related issues and avoid the randomization caused by context switching. Our team works in monthly sprints. We plan to do another release of the Ajax Control Toolkit each and every month. So far, we have completed one release of the Ajax Control Toolkit, which was released on April 1, 2011. We plan to release a new version in early May.

    Conclusion

    Fortunately, I work with a team of smart developers. We currently have 5 developers working on the Ajax Control Toolkit (not full-time; they are also building two very cool ASP.NET MVC applications). All the developers who work on our team are required to have strong JavaScript, jQuery, and ASP.NET MVC skills. In the interest of being as transparent as possible about our work on the Ajax Control Toolkit, I plan to blog frequently about our team’s ongoing work. In my next blog entry, I plan to write about the two Ajax Control Toolkit controls which are the focus of our work for the next release.
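    The poll-run-report loop described above is simple enough to sketch. Purely as an illustration -- the real client is the RunTest.aspx page, and none of these endpoint names or payload fields come from the article -- the client side of such a JavaScript test server boils down to something like:

        import time
        import requests  # third-party HTTP client, used here just for illustration

        SERVER = "http://testserver/jobs"  # hypothetical endpoint, not from the article

        def run_qunit_suite(url):
            """Stand-in for the browser actually loading and running the QUnit page."""
            return {"url": url, "passed": 135, "failed": 0}

        while True:
            job = requests.get(SERVER + "/next").json()       # poll for a suite to run
            if job:
                results = run_qunit_suite(job["url"])         # run the tests
                requests.post(SERVER + "/results", json=results)  # report back
            time.sleep(5)                                     # then resume polling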

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is a free, open-source helper class that lets you run multiple pieces of work on parallel threads, get success, failure, and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It’s more convenient than using .NET’s BackgroundWorker because you don’t have to declare one component per work item, nor do you need to declare event handlers to receive notifications and carry additional data through private variables. You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before you do a certain operation, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over those parallel jobs' completion and cancellation, then the ParallelWork library is the right solution for you. I am using the ParallelWork library in my PlantUmlEditor project, which is a free, open-source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests that confirm it really does what it says it does. The source code of the library is part of the “Utilities” project in the PlantUmlEditor source code hosted at Google Code.

    The library comes in two flavors: one is the ParallelWork static class, which has a collection of static methods that you can call; the other is the Start class, which is a fluent wrapper over the ParallelWork class to make the code more readable and aesthetically pleasing. ParallelWork allows you to start work immediately on a separate thread, or you can queue work to start after some duration. You can start immediate work on a new thread using the following methods:

        void StartNow(Action doWork, Action onComplete)
        void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

    For example:

        ParallelWork.StartNow(() => {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        }, () => {
            workEndedAt = DateTime.Now;
        });

    Or you can use the fluent way, Start.Work:

        Start.Work(() => {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() => {
            workCompletedAt = DateTime.Now;
        })
        .Run();

    Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads:

        void StartNow<T>(Func<T> doWork, Action<T> onComplete)
        void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

    For example:

        ParallelWork.StartNow<Dictionary<string, string>>(
            () => {
                test = new Dictionary<string,string>();
                test.Add("test", "test");
                return test;
            },
            (result) => {
                Assert.True(result.ContainsKey("test"));
            });

    Or, the fluent way:

        Start<Dictionary<string, string>>.Work(() => {
            test = new Dictionary<string, string>();
            test.Add("test", "test");
            return test;
        })
        .OnComplete((result) => {
            Assert.True(result.ContainsKey("test"));
        })
        .Run();

    You can also start work to happen after some time using these methods:

        DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
        DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

    You can use this to perform a timed operation on the UI thread, as well as to perform an operation on a separate thread after some time.

        ParallelWork.StartAfter(
            () => {
                workStartedAt = DateTime.Now;
                Thread.Sleep(howLongWorkTakes);
            },
            () => {
                workCompletedAt = DateTime.Now;
            },
            waitDuration);

    Or, the fluent way:

        Start.Work(() => {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() => {
            workCompletedAt = DateTime.Now;
        })
        .RunAfter(waitDuration);

    There are several overloads of these functions to provide an exception callback for handling exceptions, or to get progress updates from the background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform a background update of the application:

        // Check if there's a newer version of the app
        Start<bool>.Work(() => {
            return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
        })
        .OnComplete((hasUpdate) => {
            if (hasUpdate) {
                if (MessageBox.Show(Window.GetWindow(me),
                    "There's a newer version available. Do you want to download and install?",
                    "New version available",
                    MessageBoxButton.YesNo,
                    MessageBoxImage.Information) == MessageBoxResult.Yes) {
                    ParallelWork.StartNow(() => {
                        var tempPath = System.IO.Path.Combine(
                            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                            Settings.Default.SetupExeName);
                        UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
                    }, () => { },
                    (x) => {
                        MessageBox.Show(Window.GetWindow(me),
                            "Download failed. When you run next time, it will try downloading again.",
                            "Download failed",
                            MessageBoxButton.OK,
                            MessageBoxImage.Warning);
                    });
                }
            }
        })
        .OnException((x) => {
            MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed",
                MessageBoxButton.OK, MessageBoxImage.Exclamation);
        });

    The above code shows how to get exception callbacks on the UI thread so that you can take the necessary actions in the UI. Moreover, it shows how you can chain two parallel works to happen one after another. Sometimes you want to do some parallel work when the user does some activity on the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In such a case, you need to make sure you don’t start another parallel work every 10 seconds while a work is already queued. You need to make sure you start new work only when there’s no other background work going on. Here’s how you can do it:

        private void ContentEditor_TextChanged(object sender, EventArgs e)
        {
            if (!ParallelWork.IsAnyWorkRunning())
            {
                ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10));
            }
        }

    If you are shutting down your application and want to make sure no parallel work is going on, you can call the StopAll() method:

        ParallelWork.StopAll();

    If you want to wait for parallel work to complete, you can call WaitForAllWork(TimeSpan timeout). It will block the current thread until all parallel work completes or the timeout period elapses:

        result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

    The result is true if all parallel work completed. If it’s false, then the timeout period elapsed and not all parallel work completed. For details on how this library is built and how it works, please read the following CodeProject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF http://www.codeproject.com/KB/WPF/parallelwork.aspx If you like the article, please vote for me.

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers, coupled with MySQL utilities for automatic failover/switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark - details are as follows.

    Environment

    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure

    Initial setup: two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
           `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
           `k` int(10) unsigned NOT NULL DEFAULT '0',
           `c` char(120) NOT NULL DEFAULT '',
           `pad` char(60) NOT NULL DEFAULT '',
           PRIMARY KEY (`id`),
           KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. Each sysbench client executed 10,000 "update key" statements:

        UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: the number of slave workers to test with was configured using:

        SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: the test-reset cycle was implemented as follows:
    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog
    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration

    The relevant configuration settings used for MySQL are as follows:

        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:
    0 worker threads:
       - current (i.e. single-threaded) sequential mode
       - 1 x IO thread and 1 x SQL thread
       - the SQL thread both reads and executes the events
    1 worker thread:
       - sequential mode
       - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
       - the coordinator reads the event and hands it to the worker, who executes it
    2+ worker threads:
       - parallel execution
       - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
       - the coordinator reads events and hands them to the workers, who execute them

    Results

    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:
    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion

    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don’t hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix – Detailed Results
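    To make the methodology concrete, here is a rough sketch of the timing loop described above. It is not the harness the author used -- the host name, credentials, and binlog coordinates are placeholders -- but it shows how master_pos_wait() brackets the measurement:

        import time
        import mysql.connector  # assumes the MySQL Connector/Python package

        N_QUERIES = 100_000
        slave = mysql.connector.connect(host="slave-host", user="bench", password="...")
        cur = slave.cursor()

        for workers in range(0, 25):
            # restore the slave data directory and reposition replication here (omitted)
            cur.execute("SET GLOBAL slave_parallel_workers = %d" % workers)
            cur.execute("START SLAVE IO_THREAD")
            # ... wait until the relay log holds all 100k updates ...
            start = time.time()
            cur.execute("START SLAVE SQL_THREAD")
            # blocks until the SQL thread reaches the master's end position
            cur.execute("SELECT MASTER_POS_WAIT('mysql-bin.000002', 107)")
            cur.fetchall()
            elapsed = time.time() - start
            print(workers, "workers:", N_QUERIES / elapsed, "QPS")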

    Read the article

  • jtreg update, March 2012

    - by jjg
    There is a new update for jtreg 4.1, b04, available. The primary changes have been to support faster and more reliable test runs, especially for tests in the jdk/ repository. [For users inside Oracle, there is preliminary direct support for gathering code coverage data using jcov while running tests, and for generating a coverage report when all the tests have been run.] jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg/.

    Scratch directories

    On platforms like Windows, if a test leaves a file open when the test is over, that can cause a problem for downstream tests, because the scratch directory cannot be emptied beforehand. This is addressed in agentvm mode by discarding any agents using that scratch directory and starting new agents using a new, empty scratch directory. Successive directories use the suffixes _1, _2, etc. If you see such directories appearing in the work directory, that is an indication that files were left open in the preceding directory in the series.

    Locking support

    Some tests use shared system resources such as fixed port numbers. This causes a problem when running tests concurrently. So, you can now mark a directory such that all the tests within all such directories will be run sequentially, even if you use -concurrency:N on the command line to run the rest of the tests in parallel. This is seen as a short-term solution: it is recommended that tests not use shared system resources whenever possible. If you are running multiple instances of jtreg on the same machine at the same time, you can use a new option -lock:file to specify a file to be used for file locking; otherwise, the locking will just be within the JVM used to run jtreg.

    "autovm mode"

    By default, if no options to the contrary are given on the command line, tests will be run in othervm mode. Now, a test suite can be marked so that the default execution mode is "agentvm" mode. In conjunction with this, you can now mark a directory such that all the tests within that directory will be run in "othervm" mode. Conceptually, this is equivalent to putting /othervm on every appropriate action on every test in that directory and any subdirectories. This is seen as a short-term solution: it is recommended that tests be adapted to use agentvm mode, or use "@run main/othervm" explicitly.

    Info in test result files

    The user name and jtreg version info are now stored in the properties near the beginning of the .jtr file.

    Build

    The makefiles used to build and test jtreg have been reorganized and simplified. jtreg is now using JT Harness version 4.4.

    Other

    jtreg provides access to GNOME_DESKTOP_SESSION_ID when set. jtreg ensures that shell tests are given an absolute path for the JDK under test. jtreg now honors the "first sentence rule" for the description given by @summary. jtreg saves the default locale before executing a test in samevm or agentvm mode, and restores it afterwards.

    Bug fixes

    jtreg tried to execute a test even if the compilation failed in agentvm mode because of a JVM crash. jtreg did not correctly handle the -compilejdk option.

    Acknowledgements

    Thanks to Alan, Amy, Andrey, Brad, Christine, Dima, Max, Mike, Sherman, Steve and others for their help, suggestions, bug reports and for testing this latest version.

    Read the article

  • Unit Testing Framework for XQuery

    - by Knut Vatsendvik
    This posting provides a unit testing framework for XQuery using Oracle Service Bus. It allows you to write a test case that runs your XQuery transformations in an automated fashion. When the test case is run, the framework returns any differences found in the response. The complete code sample with install instructions can be downloaded from here.

    Writing a Unit Test

    You start a new test case by creating a Proxy Service from Workshop, which comes with Oracle Service Bus. In the General Configuration page, select the Service Type to be Messaging Service. In the Message Type Configuration page, link both the Request & Response Message Type to the TestCase element of the UnitTest.xsd schema. The TestCase element consists of the following child elements: the ID and optional Name element are simply used for reference; the Transformation element is the XQuery resource to be executed; the Input elements represent the input to run the XQuery with; the Output element represents the expected output. These XML documents are "also" represented as an XQuery resource, where the XQuery function takes no arguments and returns the XML document. Why not pass the test data with the TestCase? Passing an XML structure inside another XML structure is not very easy, or at least not very human-readable. Therefore it was chosen to represent the test data as a loadable resource in the OSB. However, you are free to go ahead with another approach if wanted. The XMLDiff element represents any differences found. A sample input is shown here.

    Modeling the Message Flow

    The next step is to model the message flow of the Proxy Service. In the Request Pipeline, create a stage node that loads the test case input data. For this, specify a dynamic XQuery expression that evaluates at runtime to the name of a pre-registered XQuery resource. The expression is of course set by the input data from the test case. Add a Run stage node. Assign the result of the XQuery that is to be run to a context variable. Define a mapping for each of the input variables added in the previous stage. Add a Compare stage. As with the input data, load the expected output data. Do a compare using the XMLDiff XQuery provided, where the first argument is the loaded output test data and the second argument is the result from the Run stage. Any differences found are written back into the test case XMLDiff element. In case of any unexpected failure while processing, add an Error Handler to the Pipeline to capture the fault. To pass back the result, add the following Insert action in the Response Pipeline. A sample output is shown here.
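    To make the structure easier to picture, a hypothetical TestCase instance might look like the following. The element names come from the description above; the resource names are invented purely for illustration:

        <TestCase>
          <ID>1</ID>
          <Name>CustomerMappingTest</Name>                  <!-- optional, for reference -->
          <Transformation>CustomerMapping.xq</Transformation>  <!-- XQuery under test -->
          <Input>InputCustomer.xq</Input>          <!-- resource returning the input XML -->
          <Output>ExpectedCustomer.xq</Output>     <!-- resource returning the expected XML -->
          <XMLDiff/>                               <!-- filled in with any differences found -->
        </TestCase>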

    Read the article

  • Need help in creating a test application in Java and passing parameters into a newly designed Java API.

    - by Christophe
    Need help, please! Following the protocol, the request should be built 5 bytes in length, including 1 byte for changing the baud rate (speed), and sent to an RS-232 port. Protocol: request for command processing, with an optional extra byte for changing the baud rate:

        LGT  : length of message ( LGT = 5 )
        TYPE : 0x06
        TO (time out): 0x0000
        CMD  : (1 byte) 0x02 application update
        Baud Rate : (1 byte) 0xNN (optional parameter to change the baud rate of the Mnt App)
        where NN can be:
        0x00 = No Baud Rate Change (similar to the 4-byte command above)
        0x09 = Change to 9600 Baud for Application Update speed
        0x0A = Change to 19200 Baud for Application Update speed
        0x0E = Change to 115200 Baud for Application Update speed
        All other bytes are not accepted and will result in a status of 0x01.

    I'm trying to test whether my code works by creating another class (TestApplication.java) and passing the 3 different baud rates to this CPXApplication. The 3 baud rates are supposed to be input by reading a file.txt. Question: what do you think of this code (first half)? Please don't worry about the details of the "sending part". I mean, do I need a setter/getter for the "speed" parameter pass? I created the demo test class DemoApp.java (it inputs the speed by reading a txt file and passes it into CPXApplication). What do you think about that code? Many thanks to you guys!!

        public class CPXApplication extends CPXCommand {
            private int speed;

            public CPXApplication() {
                speed = 9600;
            }

            public CPXApplication(int speedinit) {
                speed = speedinit; // TODO: where to get the speed?
            }

            protected void buildRequest() throws ElitePortException {
                String trans = "";
                // build the fully-qualified message following the protocol
                trans = addToRequest(trans, (char) 0);
                trans = addToRequest(trans, (char) 5);
                trans = addToRequest(trans, (char) 6);
                trans = addToRequest(trans, (char) 0);
                trans = addToRequest(trans, (char) 0);
                trans = addToRequest(trans, (char) 2);
                switch (speed) {
                case 9600:
                    trans = addToRequest(trans, (char) 0x09);
                    break;
                case 19200:
                    trans = addToRequest(trans, (char) 0x0A);
                    break;
                case 115200:
                    trans = addToRequest(trans, (char) 0x0E);
                    break;
                default:
                    // TODO: unexpected baud rate. throw();
                    break;
                }
                trans = EncryptBinary(trans);
                trans = "F0." + trans;
                wrapRequest(trans);
            }

            protected String addToRequest(String req, char c) {
                return req + c;
            }

            protected String addToRequest(String req, String s) {
                return req + s;
            }

            protected String addToRequest(String req) {
                return req;
            }

            public void analyzeResponse() {
                // ..............
            }
        }

    Here is the demo test code:

        package com.ingenico.testApp;

        import com.ingenico.EliteFd.*;
        import java.util.Scanner;
        import java.io.*;

        class Run {
            public static void run() {
                CPXApplication input = new CPXApplication();
                int lineno = 0;
                try {
                    FileReader fr = new FileReader("baudRateSpeed.txt");
                    BufferedReader reader = new BufferedReader(fr);
                    String line = reader.readLine();
                    Scanner scan = null;
                    while (line != null) {
                        scan = new Scanner(line);
                        String speed;
                        speed = scan.next();
                        if (lineno == 0) {
                            input.speed = speed;
                            lineno++;
                        } else {
                            input = cpxapplication(speed, input);
                        }
                        line = reader.readLine();
                    }
                    reader.close();
                } catch (FileNotFoundException e) {
                    System.out.println("Could not find the file");
                } catch (IOException e) {
                    System.out.println("Had a problem reading from file");
                }
            }
        }

        public class DemoApp {
            public void main(String args[]) {
                run();
            }
        }
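    As an aside, it can help to sanity-check the framing independently of the Java code. This little sketch mirrors the bytes that buildRequest() above appends; the interpretation (an assumption, since the protocol text is ambiguous) is a two-byte LGT of 0x0005, TYPE 0x06, a two-byte TO of 0x0000, CMD 0x02, then the optional baud-rate byte:

        # Illustrative only: mirrors the chars appended in buildRequest() above.
        BAUD_CODES = {9600: 0x09, 19200: 0x0A, 115200: 0x0E}

        def build_frame(speed):
            code = BAUD_CODES.get(speed)
            if code is None:
                raise ValueError("unexpected baud rate: %d" % speed)
            # LGT (2 bytes), TYPE, TO (2 bytes), CMD, baud-rate byte
            return bytes([0x00, 0x05, 0x06, 0x00, 0x00, 0x02, code])

        print(build_frame(19200).hex())  # -> 0005060000020a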

    Read the article

  • git on HTTP with gitolite and nginx

    - by Arnaud
    I am trying to set up a server where my git repos would be accessible over HTTP(S). I am using gitolite and nginx (and gitlab for the web interface, but I doubt it makes any difference). I have searched the whole afternoon and I think I'm stuck. I think I have understood that nginx needs fcgiwrap to work with gitolite, so I tried several configurations, but none of them work. My repositories are at /home/git/repositories. Here are the three nginx configurations I have tried.

    1:

        location ~ /git(/.*) {
            gzip off;
            root /usr/lib/git-core;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            include /etc/nginx/fcgiwrap.conf;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/;
            fastcgi_param SCRIPT_NAME git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            fastcgi_param PATH_INFO $1;
            #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        }

    Result:

        > git clone http://myservername/projectname.git test/
        Cloning into test...
        fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server?

    and

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    2:

        location ~ /git(/.*) {
            fastcgi_pass localhost:9001;
            include /etc/nginx/fcgiwrap.conf;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            fastcgi_param PATH_INFO $1;
        }

    Result:

        > git clone http://myservername/projectname.git test/
        Cloning into test...
        fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server?

    and

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    3:

        location ~ ^.*\.git/objects/([0-9a-f]+/[0-9a-f]+|pack/pack-[0-9a-f]+.(pack|idx))$ {
            root /home/git/repositories/;
        }
        location ~ ^.*\.git/(HEAD|info/refs|objects/info/.*|git-(upload|receive)-pack)$ {
            root /home/git/repositories;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param PATH_INFO $uri;
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            include /etc/nginx/fcgiwrap.conf;
        }

    Result:

        > git clone http://myservername/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/projectname.git/info/refs
        fatal: HTTP request failed

    and

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    Also note that with any of these configurations, when I try to clone a project name that doesn't actually exist, I get a 502 error. Has anyone already succeeded in doing this? What am I doing wrong? Thanks.

    UPDATE: the nginx error log file said:

        2012/04/05 17:34:50 [crit] 21335#0: *50 connect() to unix:/var/run/fcgiwrap.socket failed (13: Permission denied) while connecting to upstream, client: 192.168.12.201, server: myservername, request: "GET /git/oct_editor.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername"

    So I changed the permissions on /var/run/fcgiwrap.socket, and now I have:

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 403 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    Here is the error.log entry I get now:

        2012/04/05 17:36:52 [error] 21335#0: *78 FastCGI sent in stderr: "Cannot chdir to script directory (/usr/lib/git-core/git/projectname.git/info)" while reading response header from upstream, client: 192.168.12.201, server: myservername, request: "GET /git/projectname.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername"

    I keep on investigating.

    Read the article

  • Doing TDD with Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First, I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is OK -- not as good as more mature TDD plug-ins, but a good start. I am looking for some Silverlight 4.0 TDD best practices. First question: anyone have links or recommendations? I know the new Silverlight unit test capabilities are much better (Jeff Wilcox's Mix presentation). What I am focusing on right now is using TDD to develop pure Silverlight 4.0 class library projects -- projects without a Silverlight UI project. I've been able to get it to work, but not as cleanly as it should be. I can create an empty VS project, add a Silverlight 4 Class Library project, add a Test Project (not a Silverlight Unit Test Project but a plain Test Project), and then add a simple test in the Test Project such as:

        namespace Calculator.Test {
            [TestClass]
            public class CalculatorTests {
                [TestMethod]
                public void CalulatorAddTest() {
                    Calc c = new Calc();
                    int expected = 10;
                    int actual = c.Add(6, 4);
                    Assert.AreEqual<int>(expected, actual);
                }
            }
        }

    Using the new Generate Type and Method from Test feature, it will generate the following code in the Silverlight project:

        namespace Calculator {
            public class Calc {
                public int Add(int p, int p_2) {
                    throw new NotImplementedException();
                }
            }
        }

    When I run the tests the first time, it says the target assembly is Silverlight and it is not able to run the test (not the exact text, but the same general idea). When I change the implementation to:

        namespace Calculator {
            public class Calc {
                public int Add(int p, int p_2) {
                    return p + p_2;
                }
            }
        }

    and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate afterwards. I also get a warning mark on the Test Project's reference to the Calculator Silverlight class library assembly. Second question: any ideas whether this is just a bug in VS2010 RC, or is Silverlight class library TDD not really supported? I have not created a Silverlight UI project or changed any build or debug settings, so I have no idea what is hosting the Silverlight DLL. Finally, some of the Silverlight class libraries I need to write will provide functionality that requires elevated out-of-browser (OOB) rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 class libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality, but at some point I will also need the real thing in my tests. Third question: any ideas how to TDD a Silverlight 4.0 class library project that requires elevated OOB rights? Thanks!

    Read the article

  • Sputnik – Google's JavaScript conformance tester, now as a website

    - by samsudeen
    Sputnik, the JavaScript conformance test suite launched by Google last year, is now available as a Google Labs product (Sputnik Test). You can browse it like any other website and run over 5,000 JavaScript tests to check your browser's compatibility. This product gives you the following options. Run: you can run the complete test suite in your browser to check conformance to the ECMA-262 standard, and browse through the failed test cases. Compare: you can compare the JavaScript conformance of the various leading browsers on the market. Now web developers can be more cautious while designing websites, to make them compatible with multiple browsers. Google is committed to reviewing and releasing multiple versions periodically with updated test cases. According to the latest Sputnik test results released by Google, Opera 10.5 leads the race with only 78 failures. Microsoft's IE 8 performed the worst (463 failures). After Opera, Apple's Safari (159 failures), Google's Chrome (218 failures) and Mozilla's Firefox (259 failures) rank next, respectively. Though the tests are about conformance, not performance, the top 3 leading browsers are among the last three in conformance, which is something they have to improve.

    Read the article

  • What is wrong with this solution? (Perm-Missing-Elem codility test)

    - by user2956907
    I have started playing with Codility and came across this problem: A zero-indexed array A consisting of N different integers is given. The array contains integers in the range [1..(N + 1)], which means that exactly one element is missing. Your goal is to find that missing element. Write a function:

        int solution(int A[], int N);

    that, given a zero-indexed array A, returns the value of the missing element. For example, given array A such that:

        A[0] = 2
        A[1] = 3
        A[2] = 1
        A[3] = 5

    the function should return 4, as it is the missing element. Assume that: N is an integer within the range [0..100,000]; the elements of A are all distinct; each element of array A is an integer within the range [1..(N + 1)]. Complexity: expected worst-case time complexity is O(N); expected worst-case space complexity is O(1) beyond input storage (not counting the storage required for input arguments). I have submitted the following solution (in PHP):

        function solution($A) {
            $nr = count($A);
            $totalSum = (($nr+1)*($nr+2))/2;
            $arrSum = array_sum($A);
            return ($totalSum-$arrSum);
        }

    which gave me a score of 66 out of 100, because it was failing the test involving large arrays, "large_range range sequence, length = ~100,000", with the result:

        RUNTIME ERROR
        tested program terminated unexpectedly
        stdout: Invalid result type, int expected.

    I tested locally with an array of 100,000 elements, and it worked without any problems. So, what seems to be the problem with my code, and what kind of test cases did Codility use to return "Invalid result type, int expected"?
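    A plausible culprit (an educated guess, not something stated by Codility): with N around 100,000, the intermediate product (N+1)*(N+2) is about 10^10, which exceeds the 32-bit signed integer range. On a 32-bit PHP build the value silently becomes a float, so the function ends up returning a float rather than the int the grader expects. The magnitudes are easy to check:

        N = 100_000
        product = (N + 1) * (N + 2)     # the value PHP computes before dividing
        int32_max = 2**31 - 1

        print(product)                   # 10000300002
        print(product > int32_max)       # True -> becomes a float on 32-bit PHP
        print(product // 2)              # 5000150001, the intended total sum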

    Read the article

  • Where are the factory_girl records?

    - by gmile
    I'm trying to perform an integration test via Watir and RSpec. So I created a test file within /integration and wrote a test which adds a test user to the database via factory_girl. The problem is -- I can't actually perform a login with my test user. The test I wrote looks like the following:

        ...
        before(:each) do
          @user = Factory(:user)
          @browser = FireWatir::Firefox.new
        end

        it "should login" do
          @browser.text_field(:id, "username").set(@user.username)
          @browser.text_field(:id, "password").set(@user.password)
          @browser.button(:id, "get_in").click
        end
        ...

    As I start the test and watch the "performance" in the browser, it always fires a "Username is not valid" error. I started investigating and did a small trick. First of all, I had doubts that the factory actually creates the user in the DB. So right after the factory call I put some puts User.find output, only to discover that the user is actually in the DB. OK, but as the user still couldn't log in, I decided to see if he's present in the DB with my own eyes. I added a sleep right after the factory call, and went to see what's in the DB at that moment. I was crushed to see that the user is actually missing there! How come? Still, when I output the user from within the code, he is actually being fetched from somewhere. So where do the records made by factory_girl at runtime live? Is it the test or dev DB? I don't get it. I've checked 10 times that I'm running my Mongrel in test mode (does it matter? I think it does, as I'm trying to run an integration test) and that my database.yml holds the correct connection-specific data. I'm using authlogic, if that gives any clue (no, putting activate_authlogic doesn't work here).

    Read the article

  • How should one import large amounts of data for FIT/Fitnesse tests?

    - by Lachlan
    We have a scheduling engine with large amounts of test data to cover all the scenarios, so test automation is critical. We're currently hoping to use FIT/FitNesse. However, a single test has quite a large table of test data, so it doesn't fit very well into the mould of "two or three inputs, one or more outputs" that FitNesse uses in its examples. Hopefully the other functionality of FitNesse makes it worth using. I hear that there is a way to initialize an application for a FIT test with an Excel spreadsheet - not the Spreadsheet to FitNesse function, mind you - but I haven't been able to find it so far. Once the whole spreadsheet is loaded into the application, and the application does its thing, we plan to compare either a number of output rows, or perhaps just the last row, to see if the test passes. The application currently pulls test data from a database for manual tests, but writing to a database and then initializing from it is not preferred because of the performance impact. The application is written in C#.
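
    On the comparison side, FIT's stock RowFixture covers exactly the "check a set of output rows" case: the wiki table holds the expected rows and the fixture returns the actual ones, with FIT reporting missing, surplus and mismatched rows. A hedged sketch in Java (the .NET port exposes an equivalent fixture; ScheduledJob and SchedulingEngine are invented names):

        import fit.RowFixture;

        // Compares a wiki table of expected rows against whatever the
        // engine actually produced.
        public class ScheduleOutputFixture extends RowFixture {
            public Class getTargetClass() {
                return ScheduledJob.class;
            }
            public Object[] query() throws Exception {
                return SchedulingEngine.current().outputRows();
            }
        }

    For the input side, a common pattern is a small set-up fixture that reads the spreadsheet (exported as CSV) and feeds the rows into the application before the comparison fixture runs.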

    Read the article

  • Problem building a .pyc

    - by Apache
    Hi experts, I build a .py as follows: python /root/pyinstaller-1.4/Makespec.py test.py then python /root/pyinstaller-1.4/Build.py test.spec This works fine. Then I tried to build with my .pyc as follows: python /root/pyinstaller-1.4/Makespec.py test.pyc then python /root/pyinstaller-1.4/Build.py test.spec but it generates the following error: checking Analysis building because inputs changed running Analysis outAnalysis0.toc Analyzing: /root/pyinstaller-1.4/support/_mountzlib.py Analyzing: /root/pyinstaller-1.4/support/useUnicode.py Analyzing: test.pyc Traceback (most recent call last): File "/root/pyinstaller-1.4/Build.py", line 1160, in main(args[0], configfilename=opts.configfile) File "/root/pyinstaller-1.4/Build.py", line 1148, in main build(specfile) File "/root/pyinstaller-1.4/Build.py", line 1111, in build execfile(spec) File "test.spec", line 3, in pathex=['/root/test']) File "/root/pyinstaller-1.4/Build.py", line 245, in init self.postinit() File "/root/pyinstaller-1.4/Build.py", line 196, in postinit self.assemble() File "/root/pyinstaller-1.4/Build.py", line 314, in assemble analyzer.analyze_script(script) File "/root/pyinstaller-1.4/mf.py", line 559, in analyze_script co = compile(string.replace(stuff, "\r\n", "\n"), fnm, 'exec') TypeError: compile() expected string without null bytes Why does this error occur? Can't we build from a .pyc, or is there another way to do it?
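
    The traceback itself explains the failure: PyInstaller's analyzer reads each input with compile(), which accepts source text only, and a .pyc is marshalled bytecode full of null bytes. A tiny standalone Python 2 illustration (not PyInstaller code; it assumes a test.py/test.pyc pair on disk):

        # compile() takes source text:
        src = open("test.py").read()
        compile(src, "test.py", "exec")     # fine

        # a .pyc is binary bytecode, so the same call blows up:
        raw = open("test.pyc", "rb").read()
        compile(raw, "test.pyc", "exec")    # TypeError: compile() expected
                                            # string without null bytes

    So Makespec.py/Build.py need the original .py; if only the .pyc survives, recovering the source is the practical route.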

    Read the article

  • Advantage Database Server: slow stored procedure performance.

    - by ie
    I have a question about the performance of stored procedures in ADS. I created a simple database with the following structure: CREATE TABLE MainTable ( Id INTEGER PRIMARY KEY, Name VARCHAR(50), Value INTEGER ); CREATE UNIQUE INDEX MainTableName_UIX ON MainTable ( Name ); CREATE TABLE SubTable ( Id INTEGER PRIMARY KEY, MainId INTEGER, Name VARCHAR(50), Value INTEGER ); CREATE INDEX SubTableMainId_UIX ON SubTable ( MainId ); CREATE UNIQUE INDEX SubTableName_UIX ON SubTable ( Name ); CREATE PROCEDURE CreateItems ( MainName VARCHAR ( 20 ), SubName VARCHAR ( 20 ), MainValue INTEGER, SubValue INTEGER, MainId INTEGER OUTPUT, SubId INTEGER OUTPUT ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @SubName VARCHAR ( 20 ); DECLARE @MainValue INTEGER; DECLARE @SubValue INTEGER; DECLARE @MainId INTEGER; DECLARE @SubId INTEGER; @MainName = (SELECT MainName FROM __input); @SubName = (SELECT SubName FROM __input); @MainValue = (SELECT MainValue FROM __input); @SubValue = (SELECT SubValue FROM __input); @MainId = (SELECT MAX(Id)+1 FROM MainTable); @SubId = (SELECT MAX(Id)+1 FROM SubTable ); INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, @MainName, @MainValue); INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, @SubName, @MainId, @SubValue); INSERT INTO __output SELECT @MainId, @SubId FROM system.iota; END; CREATE PROCEDURE UpdateItems ( MainName VARCHAR ( 20 ), MainValue INTEGER, SubValue INTEGER ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @MainValue INTEGER; DECLARE @SubValue INTEGER; DECLARE @MainId INTEGER; @MainName = (SELECT MainName FROM __input); @MainValue = (SELECT MainValue FROM __input); @SubValue = (SELECT SubValue FROM __input); @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName); UPDATE MainTable SET Value = @MainValue WHERE Id = @MainId; UPDATE SubTable SET Value = @SubValue WHERE MainId = @MainId; END; CREATE PROCEDURE SelectItems ( MainName VARCHAR ( 20 ), CalculatedValue INTEGER OUTPUT ) BEGIN DECLARE @MainName VARCHAR ( 20 ); @MainName = (SELECT MainName FROM __input); INSERT INTO __output SELECT m.Value * s.Value FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId WHERE m.Name = @MainName; END; CREATE PROCEDURE DeleteItems ( MainName VARCHAR ( 20 ) ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @MainId INTEGER; @MainName = (SELECT MainName FROM __input); @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName); DELETE FROM SubTable WHERE MainId = @MainId; DELETE FROM MainTable WHERE Id = @MainId; END; Actually, the problem is that even stored procedures as light as these run very slowly (about 50-150 ms) compared to plain queries (0-5 ms).
To test the performance, I created a simple test (in F# using ADS ADO.NET provider): open System; open System.Data; open System.Diagnostics; open Advantage.Data.Provider; let mainName = "main name #"; let subName = "sub name #"; // INSERT let cmdTextScriptInsert = " DECLARE @MainId INTEGER; DECLARE @SubId INTEGER; @MainId = (SELECT MAX(Id)+1 FROM MainTable); @SubId = (SELECT MAX(Id)+1 FROM SubTable ); INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, :MainName, :MainValue); INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, :SubName, @MainId, :SubValue); SELECT @MainId, @SubId FROM system.iota;"; let cmdTextProcedureInsert = "CreateItems"; // UPDATE let cmdTextScriptUpdate = " DECLARE @MainId INTEGER; @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName); UPDATE MainTable SET Value = :MainValue WHERE Id = @MainId; UPDATE SubTable SET Value = :SubValue WHERE MainId = @MainId;"; let cmdTextProcedureUpdate = "UpdateItems"; // SELECT let cmdTextScriptSelect = " SELECT m.Value * s.Value FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId WHERE m.Name = :MainName;"; let cmdTextProcedureSelect = "SelectItems"; // DELETE let cmdTextScriptDelete = " DECLARE @MainId INTEGER; @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName); DELETE FROM SubTable WHERE MainId = @MainId; DELETE FROM MainTable WHERE Id = @MainId;"; let cmdTextProcedureDelete = "DeleteItems"; let cnnStr = @"data source=D:\DB\test.add; ServerType=local; user id=adssys; password=***;"; let cnn = new AdsConnection(cnnStr); try cnn.Open(); let cmd = cnn.CreateCommand(); let parametrize ix prms = cmd.Parameters.Clear(); let addParam = function | "MainName" -> cmd.Parameters.Add(":MainName" , mainName + ix.ToString()) |> ignore; | "SubName" -> cmd.Parameters.Add(":SubName" , subName + ix.ToString() ) |> ignore; | "MainValue" -> cmd.Parameters.Add(":MainValue", ix * 3 ) |> ignore; | "SubValue" -> cmd.Parameters.Add(":SubValue" , ix * 7 ) |> ignore; | _ -> () prms |> List.iter addParam; let runTest testData = let (cmdType, cmdName, cmdText, cmdParams) = testData; let toPrefix cmdType cmdName = let prefix = match cmdType with | CommandType.StoredProcedure -> "Procedure-" | CommandType.Text -> "Script -" | _ -> "Unknown -" in prefix + cmdName; let stopWatch = new Stopwatch(); let runStep ix prms = parametrize ix prms; stopWatch.Start(); cmd.ExecuteNonQuery() |> ignore; stopWatch.Stop(); cmd.CommandText <- cmdText; cmd.CommandType <- cmdType; let startId = 1500; let count = 10; for id in startId .. 
startId+count do runStep id cmdParams; let elapsed = stopWatch.Elapsed; Console.WriteLine("Test '{0}' - total: {1}; per call: {2}ms", toPrefix cmdType cmdName, elapsed, Convert.ToInt32(elapsed.TotalMilliseconds)/count); let lst = [ (CommandType.Text, "Insert", cmdTextScriptInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]); (CommandType.Text, "Update", cmdTextScriptUpdate, ["MainName"; "MainValue"; "SubValue"]); (CommandType.Text, "Select", cmdTextScriptSelect, ["MainName"]); (CommandType.Text, "Delete", cmdTextScriptDelete, ["MainName"]) (CommandType.StoredProcedure, "Insert", cmdTextProcedureInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]); (CommandType.StoredProcedure, "Update", cmdTextProcedureUpdate, ["MainName"; "MainValue"; "SubValue"]); (CommandType.StoredProcedure, "Select", cmdTextProcedureSelect, ["MainName"]); (CommandType.StoredProcedure, "Delete", cmdTextProcedureDelete, ["MainName"])]; lst |> List.iter runTest; finally cnn.Close(); And I'm getting the following results: Test 'Script -Insert' - total: 00:00:00.0292841; per call: 2ms Test 'Script -Update' - total: 00:00:00.0056296; per call: 0ms Test 'Script -Select' - total: 00:00:00.0051738; per call: 0ms Test 'Script -Delete' - total: 00:00:00.0059258; per call: 0ms Test 'Procedure-Insert' - total: 00:00:01.2567146; per call: 125ms Test 'Procedure-Update' - total: 00:00:00.7442440; per call: 74ms Test 'Procedure-Select' - total: 00:00:00.5120446; per call: 51ms Test 'Procedure-Delete' - total: 00:00:01.0619165; per call: 106ms The situation with the remote server is much better, but there is still a great gap between plain queries and stored procedures: Test 'Script -Insert' - total: 00:00:00.0709299; per call: 7ms Test 'Script -Update' - total: 00:00:00.0161777; per call: 1ms Test 'Script -Select' - total: 00:00:00.0258113; per call: 2ms Test 'Script -Delete' - total: 00:00:00.0166242; per call: 1ms Test 'Procedure-Insert' - total: 00:00:00.5116138; per call: 51ms Test 'Procedure-Update' - total: 00:00:00.3802251; per call: 38ms Test 'Procedure-Select' - total: 00:00:00.1241245; per call: 12ms Test 'Procedure-Delete' - total: 00:00:00.4336334; per call: 43ms Is there any chance to improve the SP performance? Please advise. ADO.NET driver version - 9.10.2.9 Server version - 9.10.0.9 (ANSI - GERMAN, OEM - GERMAN) Thanks!

    Read the article

  • Does ActiveRecord make Ruby on Rails code hard to test?

    - by Erik Öjebo
    I've spent most of my time in statically typed languages (primarily C#), and I've had some bad experiences with the Active Record pattern and unit testing because of the static methods and the mix of entities and data access code. Since the Ruby community is probably the most test-driven community out there, and Rails' ActiveRecord seems popular, there must be some way of combining TDD and ActiveRecord-based code in Ruby on Rails. I would guess that the problem somehow goes away in dynamic languages, but I don't see how. So, what's the trick?
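
    Part of the "trick" is simply that Ruby classes are open objects, so even ActiveRecord's class-level finders can be stubbed per example without any injection seams. A small hedged sketch in RSpec 1.x syntax (Order and OrderReport are invented for illustration):

        describe OrderReport do
          it "totals whatever orders the finder returns" do
            # Replace the "static" finder for just this example.
            orders = [mock('order', :amount => 5), mock('order', :amount => 7)]
            Order.stub!(:find).and_return(orders)
            OrderReport.new.total.should == 12
          end
        end

    That, plus conventions like transactional tests and factories, is why mixing entities with data access tends to hurt less in Rails than it does in C#.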

    Read the article

  • How to test a class that makes HTTP request and parse the response data in Obj-C?

    - by GuidoMB
    I have a class that needs to make an HTTP request to a server in order to get some information. For example: - (NSUInteger)newsCount { NSHTTPURLResponse *response; NSError *error; NSURLRequest *request = ISKBuildRequestWithURL(ISKDesktopURL, ISKGet, cookie, nil, nil); NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; if (!data) { NSLog(@"The user's(%@) news count could not be obtained:%@", username, [error description]); return 0; } NSString *regExp = @"Usted tiene ([0-9]*) noticias? no leídas?"; NSString *stringData = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; NSArray *match = [stringData captureComponentsMatchedByRegex:regExp]; [stringData release]; if ([match count] < 2) return 0; return [[match objectAtIndex:1] intValue]; } The thing is that I'm unit testing (using OCUnit) the whole framework, but I need to simulate/fake what NSURLConnection responds with in order to test different scenarios, because I can't rely on the server to test my framework. So the question is: what is the best way to do this?
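
    One widely used approach is to create a seam: hide the NSURLConnection call behind a small injectable "transport" object so tests can hand back canned NSData captured from real server responses, without touching the network. A hedged Objective-C sketch (the protocol name is invented):

        // Anything that can turn a request into data, real or fake.
        @protocol ISKTransport <NSObject>
        - (NSData *)sendRequest:(NSURLRequest *)request
                       response:(NSHTTPURLResponse **)response
                          error:(NSError **)error;
        @end

        // Production code conforms by delegating to NSURLConnection's
        // sendSynchronousRequest:returningResponse:error:, while unit
        // tests supply a stub whose sendRequest:... returns fixture HTML.

    A stub conforming to the protocol can then return a different fixture page per scenario (zero news, many news, garbled HTML), and a framework like OCMock can generate such stubs instead of hand-rolling them.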

    Read the article

  • How to build an android test app with a dependency on another app using ant?

    - by Mike
    I have a module called MyApp, and another module called MyAppTests which has a dependency on MyApp. Both modules produce APKs, one named MyApp.apk and the other MyAppTests.apk. I normally build these in IntelliJ or Eclipse, but I'd like to create an ant buildfile for them for the purpose of continuous integration. I used "android update" to create a buildfile for MyApp, and thanks to commonsware's answer to my previous question I've been able to build it successfully using ant. I'd now like to build MyAppTests.apk using ant. I constructed the buildfile as before using "android update", but when I run it I get an error indicating that it's not finding any of the classes in MyApp. Taking a cue from my previous question, I tried putting MyApp.apk into MyAppTests/libs, but unfortunately that didn't miraculously solve the problem. What's the best way to build a test APK using ant when it depends on classes in another APK? $ ant debug Buildfile: build.xml [setup] Project Target: Google APIs [setup] Vendor: Google Inc. [setup] Platform Version: 1.5 [setup] API level: 3 [setup] WARNING: No minSdkVersion value set. Application will install on all Android versions. dirs: [echo] Creating output directories if needed... resource-src: [echo] Generating R.java / Manifest.java from the resources... aidl: [echo] Compiling aidl files into Java classes... compile: [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:3: cannot find symbol [javac] symbol : class MyApplication [javac] location: package com.myapp [javac] import com.myapp.MyApplication; [javac] ^
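
    For what it's worth (a sketch, not SDK-documented procedure): an APK is not a classpath entry, so dropping MyApp.apk into libs/ can't satisfy javac; what the test build needs at compile time is MyApp's compiled classes directory (or a jar of them), much as the IDEs wire up a module dependency. An illustrative ant fragment, assuming MyApp's build writes to bin/classes:

        <!-- Make MyApp's compiled classes visible when compiling the tests;
             the location is illustrative and depends on MyApp's build output. -->
        <path id="myapp.classes.path">
            <pathelement location="../MyApp/bin/classes"/>
        </path>
        <!-- then add classpathref="myapp.classes.path" (or a nested
             <classpath refid="myapp.classes.path"/>) to the javac task
             in the compile target, alongside android.jar. -->

    At runtime the classes are already on the device inside MyApp.apk, so they generally only need to be on the compile-time path.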

    Read the article

  • Is there an easy way to get the Scala REPL to reload a class or package?

    - by Rex Kerr
    I almost always have a Scala REPL session or two open, which makes it very easy to give Java or Scala classes a quick test. But if I change a class and recompile it, the REPL continues with the old one loaded. Is there a way to get it to reload the class, rather than having to restart the REPL? Just to give a concrete example, suppose we have the file Test.scala: object Test { def hello = "Hello World" } We compile it and start the REPL: ~/pkg/scala-2.8.0.Beta1-prerelease$ bin/scala Welcome to Scala version 2.8.0.Beta1-prerelease (Java HotSpot(TM) Server VM, Java 1.6.0_16). Type in expressions to have them evaluated. Type :help for more information. scala> Test.hello res0: java.lang.String = Hello World Then we change the source file to object Test { def hello = "Hello World" def goodbye = "Goodbye, Cruel World" } but we can't use it: scala> Test.goodbye <console>:5: error: value goodbye is not a member of object Test Test.goodbye ^ scala> import Test; <console>:1: error: '.' expected but ';' found. import Test;
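
    In the 2.8-era REPL there is no clean way to re-read changed classfiles, but one workaround is to re-interpret the source itself: definitions entered (or :load-ed) in the REPL shadow the stale versions on the classpath. A hedged transcript of that idea, assuming Test.scala is in the working directory:

        scala> :load Test.scala
        Loading Test.scala...
        defined module Test

        scala> Test.goodbye
        res1: java.lang.String = Goodbye, Cruel World

    The caveat is that values created from the old Test keep the old behaviour, so this suits quick checks; for anything more involved, restarting the REPL remains the reliable route.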

    Read the article
