Search Results

Search found 4765 results on 191 pages for 'gh unit'.

Page 68/191

  • Failure to troubleshoot a juju charm deployment

    - by Bruno Pereira
    My environments.yaml looks like this: environments: test: type: local control-bucket: juju-a14dfae3830142d9ac23c499395c2785999 admin-secret: 6608267bbd6b447b8c90934167b2a294999 default-series: oneiric juju-origin: distro data-dir: /home/bruno/projects/juju juju bootstrap runs perfectly: 2011-11-22 19:19:31,999 INFO Bootstrapping environment 'test' (type: local)... 2011-11-22 19:19:32,004 INFO Checking for required packages... 2011-11-22 19:19:33,584 INFO Starting networking... 2011-11-22 19:19:34,058 INFO Starting zookeeper... 2011-11-22 19:19:34,283 INFO Starting storage server... 2011-11-22 19:19:40,051 INFO Initializing zookeeper hierarchy 2011-11-22 19:19:40,247 INFO Starting machine agent (origin: distro)... [sudo] password for bruno: 2011-11-22 19:23:16,054 INFO Environment bootstrapped 2011-11-22 19:23:16,079 INFO 'bootstrap' command finished successfully Deploying a known good charm is accepted (tried it with one that I am trying to create): juju deploy --repository=/home/bruno/projects/charms_repo/ local:teamspeak 2011-11-22 19:28:49,929 INFO Charm deployed as service: 'teamspeak' 2011-11-22 19:28:49,962 INFO 'deploy' command finished successfully After this I can see that juju debug-log shows activity, and I can see the network indicator going on and off and activity on my hard disk. Wait... Looking at juju status I get: services: teamspeak: charm: local:oneiric/teamspeak-1 relations: {} units: teamspeak/0: machine: 0 public-address: 192.168.122.226 relations: {} state: start_error juju debug-log does not help and I have no files under /var/log/juju or /var/lib/juju. The last juju debug-log only shows this: 2011-11-22 19:45:20,790 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['wordpress/0']) new:set(['wordpress/0', 'teamspeak/0']) 2011-11-22 19:45:20,823 Machine:0: juju.agents.machine DEBUG: Starting service unit: teamspeak/0 ... 2011-11-22 19:45:21,137 Machine:0: juju.agents.machine DEBUG: Downloading charm local:oneiric/teamspeak-1 to /home/bruno/projects/juju/bruno-test/charms 2011-11-22 19:45:22,115 Machine:0: juju.agents.machine DEBUG: Starting service unit teamspeak/0 2011-11-22 19:45:22,133 Machine:0: unit.deploy INFO: Creating container teamspeak-0... 2011-11-22 19:47:04,586 Machine:0: unit.deploy INFO: Container created for teamspeak/0 2011-11-22 19:47:04,781 Machine:0: unit.deploy DEBUG: Charm extracted into container 2011-11-22 19:47:04,801 Machine:0: unit.deploy DEBUG: Starting container... 2011-11-22 19:47:07,086 Machine:0: unit.deploy INFO: Started container for teamspeak/0 2011-11-22 19:47:07,107 Machine:0: juju.agents.machine INFO: Started service unit teamspeak/0 How can I troubleshoot what is happening here?


  • Requiring a specific order of compilation

    - by Aber Kled
    When designing a compiled programming language, is it a bad idea to require a specific order of compilation of separate units, according to their dependencies? To illustrate what I mean, consider C. C is the opposite of what I'm suggesting. There are multiple .c files that can all depend on each other, but all of these separate units can be compiled on their own, in no particular order, only to be linked together into a final executable later. This is mostly due to header files. They enable separate units to share information with each other, and thus the units are able to be compiled independently. If a language were to dispose of header files, and only keep source and object files, then the only option would be to actually include the unit's meta-information in the unit's object file. However, this would mean that if unit A depends on unit B, then unit B would need to be compiled before unit A, so that unit A could "import" unit B's object file, thus obtaining the information required for its compilation. Am I missing something here? Is this really the only way to go about removing header files in compiled languages?
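    A minimal Java sketch of the idea (hypothetical names, not taken from any real compiler): if each unit's object file carries the unit's metadata, the compiler driver only needs a topological sort of the declared dependencies to find an order in which every unit is compiled after the units it imports.

    import java.util.*;

    // Sketch: derive a valid compilation order from unit dependencies,
    // so a unit is always compiled after the units whose metadata it needs.
    class CompileOrder {
        static List<String> order(Map<String, List<String>> deps) {
            List<String> sorted = new ArrayList<>();
            Set<String> done = new HashSet<>();
            Set<String> inProgress = new HashSet<>();
            for (String unit : deps.keySet()) {
                visit(unit, deps, done, inProgress, sorted);
            }
            return sorted; // dependencies first, dependents later
        }

        private static void visit(String unit, Map<String, List<String>> deps,
                                  Set<String> done, Set<String> inProgress, List<String> sorted) {
            if (done.contains(unit)) {
                return;
            }
            if (!inProgress.add(unit)) {
                throw new IllegalStateException("cyclic dependency involving " + unit);
            }
            for (String dep : deps.getOrDefault(unit, Collections.emptyList())) {
                visit(dep, deps, done, inProgress, sorted);
            }
            inProgress.remove(unit);
            done.add(unit);
            sorted.add(unit); // emitted only once all its dependencies are compiled
        }

        public static void main(String[] args) {
            Map<String, List<String>> deps = new HashMap<>();
            deps.put("A", Arrays.asList("B")); // A "imports" B's object-file metadata
            deps.put("B", Collections.emptyList());
            System.out.println(order(deps)); // [B, A]
        }
    }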


  • Should developers be responsible for tests other than unit tests?

    - by Jackie
    I am currently working on a rather large project, and I have used JUnit and EasyMock to fairly extensively unit test functionality. I am now interested in what other types of testing I should worry about. As a developer, is it my responsibility to worry about things like functional or regression testing? Is there a good way to integrate these in a usable way into tools such as Maven/Ant/Gradle? Are these better suited for a Tester or BA? Are there other useful types of testing that I am missing?
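    As a hedged illustration of the Maven/Gradle integration point (names below are hypothetical, not from the original post): with JUnit 4 you can tag functional or integration tests with @Category and have the build tool include or exclude that category, so tests beyond unit tests live in the same toolchain the developers already use.

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Marker interface used only to group tests; hypothetical name.
    interface FunctionalTest {}

    public class CheckoutFlowTest {

        @Category(FunctionalTest.class) // a build profile can include/exclude this category
        @Test
        public void completeCheckoutEndToEnd() {
            // ...drive the real flow here; a trivial assertion keeps the sketch self-contained
            assertTrue(true);
        }

        @Test
        public void priceCalculationUnitLevel() {
            // ordinary unit test, stays in the fast default suite
            assertTrue(2 + 2 == 4);
        }
    }

    Maven Surefire/Failsafe and Gradle can both be pointed at such categories (or at a naming convention) to split the fast unit suite from slower functional suites.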



  • How can I make Google show unit conversions by default?

    - by bUbUKid
    When I search for "4 inches in g" on my Windows Firefox I immediately get a unit conversion done by Google that shows up before the actual search results. On my Ubuntu 12.04 system this does not work though. I tried Firefox and Chromium and have no script blockers installed. I also switched off AdBlock Plus for testing but to no avail. I realize that this is not really Ubuntu doing something wrong but: Are there any settings I can modify to make Google show these results? I use them quite frequently and I believe (though I cannot test it anymore) that this used to work on my last Ubuntu System. Maybe there are some script sources that Ubuntu has disabled by default or something like that?


  • NUnit vs. MsTest: NUnit wins for Unit Testing.

    People are still wondering what the differences are between the two most popular unit testing frameworks in the .NET world: the open-source NUnit and the commercial MsTest. Here's a short list of what I remember offhand: NUnit contains a [TestCase] attribute that allows implementing parameterized tests; this does not exist in MsTest. MsTest's ExpectedException attribute has a bug where the expected message is never really asserted, so even if it's wrong the test will pass. NUnit has an...


  • How do I find the unit vector of another vector in Java?

    - by Shijima
    I'm writing a Java formula based on this tutorial: 2-D elastic collisions without Trigonometry. I am in the section "Elastic Collisions in 2 Dimensions". Part of step 1 says: Next, find the unit vector of n, which we will call un. This is done by dividing by the magnitude of n. The code below represents the normal vector of the 2 objects (I'm using a simple array to represent the normal vector). int[] normal = new int[2]; normal[0] = ball2.x - ball1.x; normal[1] = ball2.y - ball1.y; I am unsure what the tutorial means by dividing by the magnitude of n to get un. What is un? How can I calculate it with my Java array?
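    A minimal sketch of the calculation (assuming double components rather than the int[] above, since a unit vector's components are almost always fractional): the magnitude of n is sqrt(n[0]*n[0] + n[1]*n[1]), and un is each component of n divided by that magnitude.

    // Sketch: compute the unit vector un = n / |n| for a 2-D vector.
    public class UnitVector {

        static double[] unitVector(double[] n) {
            double magnitude = Math.sqrt(n[0] * n[0] + n[1] * n[1]);
            if (magnitude == 0.0) {
                throw new IllegalArgumentException("a zero-length vector has no unit vector");
            }
            return new double[] { n[0] / magnitude, n[1] / magnitude };
        }

        public static void main(String[] args) {
            // e.g. normal[0] = ball2.x - ball1.x; normal[1] = ball2.y - ball1.y;
            double[] normal = { 3.0, 4.0 };
            double[] un = unitVector(normal);
            System.out.println(un[0] + ", " + un[1]); // 0.6, 0.8
        }
    }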


  • Is it OK to introduce methods that are used only during unit tests?

    - by Mchl
    Recently I was TDDing a factory method. The method was to create either a plain object, or an object wrapped in a decorator. The decorated object could be of one of several types, all extending StrategyClass. In my test I wanted to check if the class of the returned object is as expected. That's easy when a plain object is returned, but what to do when it's wrapped within a decorator? I code in PHP so I could use ext/Reflection to find out the class of the wrapped object, but it seemed to me to be overcomplicating things, and somewhat against the rules of TDD. Instead I decided to introduce getClassName(), which would return the object's class name when called on a StrategyClass. When called on the decorator, however, it would return the value returned by the same method on the decorated object. Some code to make it clearer: interface StrategyInterface { public function getClassName(); } abstract class StrategyClass implements StrategyInterface { public function getClassName() { return \get_class($this); } } abstract class StrategyDecorator implements StrategyInterface { private $decorated; public function __construct(StrategyClass $decorated) { $this->decorated = $decorated; } public function getClassName() { return $this->decorated->getClassName(); } } And a PHPUnit test: /** * @dataProvider providerForTestGetStrategy * @param array $arguments * @param string $expected */ public function testGetStrategy($arguments, $expected) { $this->assertEquals( __NAMESPACE__.'\\'.$expected, $this->object->getStrategy($arguments)->getClassName() ) } //below there's another test to check if proper decorator is being used My point here is: is it OK to introduce such methods that have no other use than to make unit tests easier? Somehow it doesn't feel right to me.


  • How can I check if this header exists?

    - by Night Walker
    I am trying to check in my XML file if HeaderReportUnit exists. How can I check if this header exists? I am using the 2.0 assembly. Thanks for the help. <?xml version="1.0" encoding="UTF-8" ?> <HeadReportUnit> <Title> <ModuleNum>ModuleNum</ModuleNum> <hdstSetPos>hdstSetPos</hdstSetPos> <hdstNzlName>hdstNzlName</hdstNzlName> <nzavSpecName>nzavSpecName</nzavSpecName> <nzavNzlDiameter>nzavNzlDiameter</nzavNzlDiameter> <nzavNzlSizeX>nzavNzlSizeX</nzavNzlSizeX> <nzavNzlSizeY>nzavNzlSizeY</nzavNzlSizeY> <nzavNzlType2>nzavNzlType2</nzavNzlType2> </Title> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 1</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 2</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 3</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 4</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 5</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 6</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 7</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 8</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 9</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</nzavNzlSizeX> <nzavNzlSizeY>0.6</nzavNzlSizeY> <nzavNzlType2>Standard</nzavNzlType2> </Unit> <Unit> <ModuleNum>1</ModuleNum> <hdstSetPos>1- 10</hdstSetPos> <hdstNzlName>R07-007-070</hdstNzlName> <nzavSpecName>AA05700</nzavSpecName> <nzavNzlDiameter>0.0</nzavNzlDiameter> <nzavNzlSizeX>0.7</ nzavNzlSizeX


  • HP DAT72x6 autoloader

    - by ericmayo
    Hoping someone here has seen a similar issue and can offer some advice... I have an HP DAT72x6 autoloader tape backup unit. It's the external kind; here is a link to an owner's manual I found for it: http://www.dectrader.com/docs/set2/emr_na-c00070400-1.pdf I purchased the unit used about 6 months ago. The unit stopped working after 3-4 backups; it's used one day a month to do a monthly backup of another system. Suffice it to say the unit gets very little usage. There is an amber light on the front of the unit called the OAR (Operator Attention Required). The manual states to call for service when this light comes on and stays on. I've tried a few things to resolve it, but none are working: power cycling, and re-securing the SCSI cables at both ends. The unit was used so I didn't pay much ($500), and I don't want to spend a lot to have it fixed; I might as well buy something new if fixing this is going to cost more than $100-$150. I'm curious to see if anyone here has been around these devices, or possibly is an HP repair person, who can give me some things to try. The manual states that a solid amber OAR light indicates a hardware failure. When I power cycle the unit I see one of two scenarios so far. Either the unit powers up, shows self test in the LCD, then the LCD changes to show all possible images and the OAR light comes on; or the unit powers up, the LCD is completely blank, the green lights go through some sort of process of going on and off, and later the amber OAR light comes on and stays on. If it's a simple misalignment issue, I may be able to fix it myself, but not knowing what could cause the OAR light to come on gives me nowhere to even start. Googling around gave no help either. I'm hoping someone here has experience with this and can help or point me in the right direction. Also, I don't have the HP Diagnostic tools mentioned in many manuals. The unit is connected to a Linux box. The 3-4 backups I've done with it so far have had no issues. We run amanda backup. Before this incident the unit was backing up and reading tapes fine. Thanks for any help or suggestions.


  • How can I debug very high Cisco ASA firewall "Dispatch Unit" CPU utilisation from ASDM?

    - by Andy
    I have recently had my first firewall installed so I am very new to this whole situation. I am finding that the Dispatch Unit is becoming overloaded, and it would appear to be the reason I get serious bouts of lag on my server. The firewall has had little configuration apart from me blocking all the ports in "Access Rules" and allowing only the ones the server needs, and only from where it needs them. I guess what I am after is assistance with locating the issues causing "Dispatch Unit" to take up all the CPU. Regards. --Edit-- With ASDM statistics I found that inbound packets (peak of 70-100k/sec from <1k/sec normal), inbound traffic (peak of 40-50kbits/sec from <1kbits/sec normal) and CPU all peak at the same time, so I am pretty sure it is an attack of some sort, but as a beginner with ASA I am not sure how to resolve it.


  • Hardware question re Hauppauge WinTV-HVR 900 HD video/TV capture unit used with a laptop's HDMI output

    - by Bill
    I've ordered a Hauppauge WinTV-HVR 900 HD video/TV capture unit. It is mainly for use with my HP desktop running Windows 7 Professional, but I will want to use it occasionally with my partner's HP laptop running Vista Home Premium. The latter has an HDMI output which works perfectly with my LG 42" LCD TV, enabling display of BBC iPlayer and other catch-up services. Will the live or recorded HD signal from the WinTV-HVR 900 HD connected to the laptop's USB input be output on the laptop's HDMI socket as HD? Come to that, will SD content be output? The reason I ask is that I had a problem with a Pinnacle unit which displayed OK on the laptop's screen but not on the TV screen (which did display all the normal Windows material). I've tried the Hauppauge website, but it doesn't even acknowledge the existence of the WinTV-HVR 900 HD!


  • Will Visual Studio 2010 only run 4.0 unit tests?

    - by Bjorn Bailleul
    I have different projects written in .NET 3.5 and some unit test projects to cover them. When converting my solution to be used in Visual Studio 2010, I keep all my projects on 3.5 but the unit test projects are forced to 4.0. This way I cannot use them with my regular projects anymore, resulting in this: Could not load file or assembly 'xxx.xxx.Core.UnitTest' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. So I can't unit test any project lower than 4.0? Or am I doing something wrong here?


  • How to run unit tests with DSSS and GDC?

    - by Benoit Vidis
    I am very new to D and still battling to configure my toolchain. I am running Ubuntu Karmic and would like to use DSSS with GDC and Tango or TangoBos. So far, I have installed GDC from the Ubuntu repositories, and DSSS, Tango and TangoBos from these repositories, and I can compile using dsss + gdc + tangobos. According to the DSSS documentation, it should be possible to run the unit tests using $ dsss build --test, but on my system the --test argument is ignored. I have the latest dsss version (0.78) and its inline help does not include anything about unit tests. Running ldc --unittest works fine (though I do not know exactly which library it picks up). Is there a way to run my unit tests using the same compiler & library as for compilation? If so, is there a way to automate the testing, or will I have to run it module by module?


  • A consistent and simple set of IDE and tools for embedded code and unit testing in C++?

    - by TridenT
    I’m starting a new firmware project in C++ for Texas Instruments C283xx and C6xxx targets. The unit tests will not run on the target, but will be compiled with gcc/gcov on a Windows PC (and run on the PC as well) with simple metrics for tested code coverage. The whole project will be part of Cruise Control.NET for continuous integration. My question is: which IDE / framework / tools work well together? A/ One of the developers suggests CodeComposerStudio V3.1 for the application and CodeBlocks + CxxUnit for the unit tests. B/ I’m more attracted to CodeComposerStudio V4 for the application, Eclipse CDT (well, as CCS V4) and CppUnit for unit tests + MockCpp for mocks. I don’t want the best-in-class tools for each process, but a global, consistent and easy solution (or group of tools if you prefer).


  • C# error: index was outside the bounds of the array

    - by iliailiaey
    I have written the code below, but I get the error "Index was outside the bounds of the array." I can't understand the reason for it. How can I correct the code to prevent the error? (In the code, I want to make a byte array of size 57600 from a byte array of size 38400.) int q = 0; int nbytes = 57600; byte[] gh = new byte[38400]; byte[] byte8 = new byte[nbytes]; byte[] aa = { 0xf8, 0x07, 0XE0, 0X1F }; for (int y = 0; y < nbytes-3; y += 3) { if (q < 38400-3) { byte8[y] = (byte)(gh[q] & aa[1]); byte8[y + 1] = (byte)(((gh[q] & aa[1]) << 5) | ((gh[q + 1] & aa[2]) >> 3)); byte8[y + 2] = (byte)((gh[q + 1] & aa[3]) << 3); q += 2; } }


  • Building an HTML5 App with ASP.NET

    - by Stephen Walther
    I’m teaching several JavaScript and ASP.NET workshops over the next couple of months (thanks everyone!) and I thought it would be useful for my students to have a really easy to use JavaScript reference. I wanted a simple interactive JavaScript reference and I could not find one so I decided to put together one of my own. I decided to use the latest features of JavaScript, HTML5 and jQuery such as local storage, offline manifests, and jQuery templates. What could be more appropriate than building a JavaScript Reference with JavaScript? You can try out the application by visiting: http://Superexpert.com/JavaScriptReference Because the app takes advantage of several advanced features of HTML5, it won’t work with Internet Explorer 6 (but really, you should stop using that browser). I have tested it with IE 8, Chrome 8, Firefox 3.6, and Safari 5. You can download the source for the JavaScript Reference application at the end of this article. Superexpert JavaScript Reference Let me provide you with a brief walkthrough of the app. When you first open the application, you see the following lookup screen: As you type the name of something from the JavaScript language, matching results are displayed: You can click the details link for any entry to view details for an entry in a modal dialog: Alternatively, you can click on any of the tabs -- Objects, Functions, Properties, Statements, Operators, Comments, or Directives -- to filter results by type of syntax. For example, you might want to see a list of all JavaScript built-in objects: You can login to the application to make modification to the application: After you login, you can add, update, or delete entries in the reference database: HTML5 Local Storage The application takes advantage of HTML5 local storage to store all of the reference entries on the local browser. IE 8, Chrome 8, Firefox 3.6, and Safari 5 all support local storage. When you open the application for the first time, all of the reference entries are transferred to the browser. The data is stored persistently. Even if you shutdown your computer and return to the application many days later, the data does not need to be transferred again. Whenever you open the application, the app checks with the server to see if any of the entries have been updated on the server. If there have been updates, then only the updates are transferred to the browser and the updates are merged with the existing entries in local storage. After the reference database has been transferred to your browser once, only changes are transferred in the future. You get two benefits from using local storage. First, the application loads very fast and works very fast after the data has been loaded once. The application does not query the server whenever you filter or view entries. All of the data is persisted in the browser. Second, you can browse the JavaScript reference even when you are not connected to the Internet (when you are on the proverbial airplane). The JavaScript Reference works as an offline application for browsers that support offline applications (unfortunately, not IE). When using Google Chrome, you can easily view the contents of local storage by selecting Tools, Developer Tools (CTRL-SHIFT I) and selecting Storage, Local Storage: The JavaScript Reference app stores two items in local storage: entriesLastUpdated and entries. 
HTML5 Offline App For browsers that support HTML5 offline applications – Chrome 8 and Firefox 3.6 but not Internet Explorer – you do not need to be connected to the Internet to use the JavaScript Reference. The JavaScript Reference can execute entirely on your machine just like any other desktop application. When you first open the application with Firefox, you are presented with the following warning: Notice the notification bar that asks whether you want to accept offline content. If you click the Allow button then all of the files (generated ASPX, images, CSS, JavaScript) needed for the JavaScript Reference will be stored on your local computer. Automatic Script Minification and Combination All of the custom JavaScript files are combined and minified automatically whenever the application is built with Visual Studio. All of the custom scripts are contained in a folder named App_Scripts: When you perform a build, the combine.js and combine.debug.js files are generated. The Combine.config file contains the list of files that should be combined (importantly, it specifies the order in which the files should be combined). Here’s the contents of the Combine.config file:   <?xml version="1.0"?> <combine> <scripts> <file path="compat.js" /> <file path="storage.js" /> <file path="serverData.js" /> <file path="entriesHelper.js" /> <file path="authentication.js" /> <file path="default.js" /> </scripts> </combine>   jQuery and jQuery UI The JavaScript Reference application takes heavy advantage of jQuery and jQuery UI. In particular, the application uses jQuery templates to format and display the reference entries. Each of the separate templates is stored in a separate ASP.NET user control in a folder named Templates: The contents of the user controls (and therefore the templates) are combined in the default.aspx page: <!-- Templates --> <user:EntryTemplate runat="server" /> <user:EntryDetailsTemplate runat="server" /> <user:BrowsersTemplate runat="server" /> <user:EditEntryTemplate runat="server" /> <user:EntryDetailsCloudTemplate runat="server" /> When the default.aspx page is requested, all of the templates are retrieved in a single page. WCF Data Services The JavaScript Reference application uses WCF Data Services to retrieve and modify database data. The application exposes a server-side WCF Data Service named EntryService.svc that supports querying, adding, updating, and deleting entries. jQuery Ajax calls are made against the WCF Data Service to perform the database operations from the browser. The OData protocol makes this easy. Authentication is handled on the server with a ChangeInterceptor. Only authenticated users are allowed to update the JavaScript Reference entry database. JavaScript Unit Tests In order to build the JavaScript Reference application, I depended on JavaScript unit tests. I needed the unit tests, in particular, to write the JavaScript merge functions which merge entry change sets from the server with existing entries in browser local storage. In order for unit tests to be useful, they need to run fast. I ran my unit tests after each build. For this reason, I did not want to run the unit tests within the context of a browser. Instead, I ran the unit tests using server-side JavaScript (the Microsoft Script Control). The source code that you can download at the end of this blog entry includes a project named JavaScriptReference.UnitTests that contains all of the JavaScripts unit tests. 
JavaScript Integration Tests Because not every feature of an application can be tested by unit tests, the JavaScript Reference application also includes integration tests. I wrote the integration tests using Selenium RC in combination with ASP.NET Unit Tests. The Selenium tests run against all of the target browsers for the JavaScript Reference application: IE 8, Chrome 8, Firefox 3.6, and Safari 5. For example, here is the Selenium test that checks whether authenticating with a valid user name and password correctly switches the application to Admin Mode: [TestMethod] [HostType("ASP.NET")] [UrlToTest("http://localhost:26303/JavaScriptReference")] [AspNetDevelopmentServerHost(@"C:\Users\Stephen\Documents\Repos\JavaScriptReference\JavaScriptReference\JavaScriptReference", "/JavaScriptReference")] public void TestValidLogin() { // Run test for each controller foreach (var controller in this.Controllers) { var selenium = controller.Value; var browserName = controller.Key; // Open reference page. selenium.Open("http://localhost:26303/JavaScriptReference/default.aspx"); // Click login button displays login form selenium.Click("btnLogin"); Assert.IsTrue(selenium.IsVisible("loginForm"), "Login form appears after clicking btnLogin"); // Enter user name and password selenium.Type("userName", "Admin"); selenium.Type("password", "secret"); selenium.Click("btnDoLogin"); // Should set adminMode == true selenium.WaitForCondition("selenium.browserbot.getCurrentWindow().adminMode==true", "30000"); } }   The results for running the Selenium tests appear in the Test Results window just like the unit tests: The Selenium tests take much longer to execute than the unit tests. However, they provide test coverage for actual browsers. Furthermore, if you are using Visual Studio ALM, you can run the tests automatically every night as part of your standard nightly build. You can view the Selenium tests by opening the JavaScriptReference.QATests project. Summary I plan to write more detailed blog entries about this application over the next week. I want to discuss each of the features – HTML5 local storage, HTML5 offline apps, jQuery templates, automatic script combining and minification, JavaScript unit tests, Selenium tests -- in more detail. You can download the source control for the JavaScript Reference Application by clicking the following link: Download You need Visual Studio 2010 and ASP.NET 4 to build the application. Before running the JavaScript unit tests, install the Microsoft Script Control. Before running the Selenium tests, start the Selenium server by running the StartSeleniumServer.bat file located in the JavaScriptReference.QATests project.


  • Combining pathfinding with global AI objectives

    - by V_Programmer
    I'm making a turn-based strategy game using Java and LibGDX. Now I want to code the AI. I haven't written the AI code yet; I've simply designed it. The AI will have two components: one focused on tactics and resource management (creating troops, determining who has the strategic advantage, detecting important objectives, etc.) and an individual component focused on assigning work to each unit, examining its possibilities and moving the unit. Now I'm facing an important problem. The map where the action takes place is a grid-based map. Each terrain has a different movement cost. I read about pathfinding and I think A* is a very good option to determine a good route between two points. However, imagine I have a unit with movement = 5 (i.e., it can move 5 tiles of movement cost = 1). My tactical AI has found an objective at a distance d = 20 tiles (Manhattan distance) from my unit. My problem is the following: the unit won't be able to reach the objective in one turn, so the AI will have to store a list of positions and execute them over several turns. I don't know how to solve this. PS. In my unit code, I have a list called "selectionMarks" which stores all the possible places where the unit can go in this turn. These places are calculated recursively using a "getSelectionMarks" function. Any help is appreciated :D
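    One hedged sketch of the multi-turn part (hypothetical Tile/Unit types, and assuming A* has already produced the full path): keep the remaining path on the unit and, each turn, consume only as many steps as the unit's movement points cover, carrying the rest over to later turns.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // Sketch: a unit follows a precomputed A* path over several turns,
    // spending its movement points on tile costs each turn. Names are hypothetical.
    class PathFollower {
        private final Deque<Tile> remainingPath = new ArrayDeque<>();

        void setPath(List<Tile> aStarPath) {          // full path from A*, excluding the start tile
            remainingPath.clear();
            remainingPath.addAll(aStarPath);
        }

        // Called once per turn; returns the tile the unit ends the turn on.
        Tile advance(Unit unit) {
            int movementLeft = unit.movementPoints();
            Tile current = unit.position();
            while (!remainingPath.isEmpty()
                    && remainingPath.peekFirst().movementCost() <= movementLeft) {
                current = remainingPath.pollFirst();
                movementLeft -= current.movementCost();
            }
            unit.moveTo(current);                     // partial progress; the rest is kept for next turn
            return current;
        }

        boolean hasReachedObjective() {
            return remainingPath.isEmpty();
        }
    }

    // Hypothetical collaborators, shown only so the sketch is self-contained.
    interface Tile { int movementCost(); }
    interface Unit { int movementPoints(); Tile position(); void moveTo(Tile t); }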


  • How to start and stop a systemd unit with another?

    - by Andy Shinn
    I am using CoreOS to schedule systemd units with fleet. I have two units (firehose.service and firehose-announce.service). I am trying to get the firehose-announce.service to start and stop along with the firehose.service. Here is the unit file for firehose-announce.service: [Unit] Description=Firehose etcd announcer BindsTo=firehose@%i.service After=firehose@%i.service Requires=firehose@%i.service [Service] EnvironmentFile=/etc/environment TimeoutStartSec=30s ExecStartPre=/bin/sh -c 'sleep 1' ExecStart=/bin/sh -c "port=$(docker inspect -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' firehose-%i); echo -n \"Adding socket $COREOS_PRIVATE_IPV4:$port/tcp to /firehose/upstream/firehose-%i\"; while netstat -lnt | grep :$port >/dev/null; do etcdctl set /firehose/upstream/firehose-%i $COREOS_PRIVATE_IPV4:$port --ttl 300 >/dev/null; sleep 200; done" RestartSec=30s Restart=on-failure [X-Fleet] X-ConditionMachineOf=firehose@%i.service I am trying to use BindsTo with the notion that starting and stopping firehose.service will also start or stop firehose-announce.service. But this never happens correctly. If firehose.service is stopped, then firehose-announce.service goes to a failed state. But when I start firehose.service, the firehose-announce.service doesn't start up. What am I doing wrong here?


  • A JPA Entity (in multiple persistence units) in an OSGi (Spring DM) environment is confusing me.

    - by Vincent Demeester
    Hi, I'm a bit confused about a strange behavior of my JPA-related objects. I have three bundles: the User bundle contains some user-related objects, but mainly the User object; the Energy bundle contains some energy-related objects, particularly a ConsumptionTerminal which contains a List of User; the Index bundle contains an Index object that has no dependencies at all. My OSGi environment is the following: a DataSource bundle that provides 2 services (dataSource and jpaVendorAdapter), plus the three bundles above, which consume dataSource and jpaVendorAdapter. Their module-context.xml files look like this: And they all have a persistence.xml file: User <?xml version="1.0" encoding="UTF-8"?> <persistence> <persistence-unit name="securityPU" transaction-type="JTA"> <jta-data-source>java:/securityDataSourceService</jta-data-source> <class>net.nextep.amundsen.security.domain.User</class> <!-- [...] --> <exclude-unlisted-classes>true</exclude-unlisted-classes> <properties> <property name="eclipselink.logging.level" value="INFO" /> <property name="eclipselink.ddl-generation" value="create-tables" /> <property name="eclipselink.ddl-generation.output-mode" value="database" /> <property name="eclipselink.orm.throw.exceptions" value="true" /> </properties> </persistence-unit> </persistence> Energy <?xml version="1.0" encoding="UTF-8"?> <persistence> <persistence-unit name="energyPU" transaction-type="JTA"> <jta-data-source>java:/securityDataSourceService</jta-data-source> <class>net.nextep.amundsen.security.domain.User</class> <class>net.nextep.amundsen.energy.domain.User</class> <!-- [...] --> <exclude-unlisted-classes>true</exclude-unlisted-classes> <properties> <property name="eclipselink.logging.level" value="INFO" /> <property name="eclipselink.ddl-generation" value="create-tables" /> <property name="eclipselink.ddl-generation.output-mode" value="database" /> <property name="eclipselink.orm.throw.exceptions" value="true" /> </properties> </persistence-unit> </persistence> Index: this one has the simplest persistence.xml, with just the Index class (no shared class). I'm using the named @PersistenceUnit annotation, like @PersistenceUnit(name = 'securityPU') (for the User bundle). And finally, I'm using EclipseLink as the JPA provider and Spring DM (+ Spring DM Server in the development process). The problem is the following: when the User bundle is deployed, I'm able to persist User objects. When the User and Energy bundles are both deployed, I'm not able to persist User objects (nor Energy objects). But I don't get any exception at all! There is no problem at all with the Index bundle. The bug is dataSource-independent (I tried with PostgreSQL and MySQL so far). My first conclusion was that the <class>net.nextep.amundsen.security.domain.User</class> entry in both persistence units was causing the trouble. I tried without it (and hiding the User-dependent object in the Energy bundle) but it failed too. I'm a bit confused about this bug. I'm also not quite sure about the transaction management in this context. I wasn't the one who designed this architecture (but I told my intern OK without testing it... shame on me), but if I could understand this bug and maybe fix it without rewriting the bundle (and breaking my intern's work), I would appreciate it. Am I doing something wrong? (It's obvious I am, but what?) Did I miss something while reading the documentation? 
    By the way, I'm also looking for some best practices or advice when it comes to JPA, EclipseLink (or whatever JPA provider) and Spring DM (and OSGi in general). I found some interesting slides from Mike Keith about this topic (by browsing Stack Overflow).
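    For illustration only (plain JPA, not the Spring DM wiring from the post, and with hypothetical class names): injecting one container-managed EntityManager per named persistence unit makes it explicit which unit a given persist() call goes through, which can help narrow down which unit is silently failing.

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    import net.nextep.amundsen.security.domain.User;

    // Sketch: one EntityManager per named persistence unit. Transaction
    // demarcation (the JTA units above) is assumed to be handled by the
    // container / Spring; this is not the original Spring DM configuration.
    public class UserRepository {

        @PersistenceContext(unitName = "securityPU")
        private EntityManager securityEm;

        @PersistenceContext(unitName = "energyPU")
        private EntityManager energyEm;

        public void saveUser(User user) {
            securityEm.persist(user); // goes through securityPU only
        }
    }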


  • Making a clone of Starcraft legal?

    - by user782220
    My question is similar to a previous question. Consider the following clone of Starcraft: Change the artwork, sound, music, change the names of units. However, leave the unit hit points unchanged, unit damage unchanged, unit movement speed unchanged, change ability names but not ability effects. Is that considered illegal? In other words, is copying the unit hit points, damage, etc. considered illegal even if everything else is changed?



  • Changing Your Design for Testability

    Sometimes I come across a way of putting something that is pithy and good, not Hallmark trite, but an impactful and concise way of clarifying a previously obscure concept. A recent one of these happy occurrences was when I was reading the excellent Art of Unit Testing by Roy Osherove. After going through the basics of why you'd want to test code and how to do it, Roy confronts a frequent objection to having unit tests, that it ends up changing how you design your components: When we write unit tests for our code, we are adding another end user (the test) to the object model. That end user is just as important as the original one, but it has different goals when using the model. The test has specific requirements from the object model that seem to defy the basic logic behind a couple of object-oriented principles, mainly encapsulation. [emphasis added by me] When I read this, something clicked for me. I used to find it persuasive that because unit tests caused you to change your design they were more disruptive than they were worth. The counter argument I heard is that the disruption was OK, because testable design was just obviously better. That argument was not convincing, as it seemed like delusional arrogance to suggest that any one type of design was just inherently better for the particular applications I was building. What was missing was that I was not thinking of unit tests as an additional and equal end user of my design. If I accepted that proposition, then it was indeed obvious that a testable design was better, because now all users of my component would be satisfied. Have I accepted that proposition? I'd phrase it slightly differently. I find more and more that having unit tests helps me write better, less buggy code before it gets to production or QA. As I write more unit tests, it gets easier to see how to create testable components, so I don't feel like it's taking me as much extra time up front. I pick and choose the components that seem most likely to benefit from automated tests, and it is working out nicely. If you already implement Test Driven Development, this whole post was probably a waste of your time <g> If you hate the idea of unit tests, well, probably not a great value prop for you either. However, if you are somewhere in between, at least take a minute and check out a sample chapter from Roy's book at: http://www.manning.com/osherove/.
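    A hedged Java illustration of the kind of design change being described (hypothetical names, not from Roy's book): treating the test as a second end user usually means adding a seam, for example passing a collaborator in through the constructor instead of creating it internally, so the test can substitute its own.

    // Sketch: the clock is injected, so the test (the "other end user") can control time.
    interface Clock {
        long nowMillis();
    }

    class SessionTimeout {
        private final Clock clock;
        private final long timeoutMillis;
        private long lastSeenMillis;

        SessionTimeout(Clock clock, long timeoutMillis) { // seam added for testability
            this.clock = clock;
            this.timeoutMillis = timeoutMillis;
            this.lastSeenMillis = clock.nowMillis();
        }

        void touch() {
            lastSeenMillis = clock.nowMillis();
        }

        boolean isExpired() {
            return clock.nowMillis() - lastSeenMillis > timeoutMillis;
        }
    }

    // Production wiring: new SessionTimeout(System::currentTimeMillis, 30_000)
    // In a unit test: a fake Clock returning fixed values makes expiry deterministic.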


  • Chicken-and-egg problem (restore database) when trying to write unit tests against SQL Server 2008.

    - by Hamish Grubijan
    OK, they are not unit tests but end-to-end tests. The setup is somewhat involved. The tests will use C# over an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we would need to do a full database restore. I do not think I can do it over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx Msg 6104, Level 16, State 1, Line 1 Cannot use KILL to kill your own process. However, I would like to, so that 199 tests do not run amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection", such as using COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that? Also, will the clients be able to reconnect automatically after a restore, or would I have to dismantle everything once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!

