Search Results

Search found 30361 results on 1215 pages for 'tactical test team'.


  • Team Foundation Server 2010 and Offline development?

    - by Bobby Ortiz
    Did Microsoft add anything to improve offline development? I'm comparing TFS with Mercurial.
    Edit #1: Work environment details:
    - 20 developers, 1 location.
    - TFS 2005 is already installed, but only 4 developers use it, and only for source control. The others are on VSS. :(
    - Many small projects (over 50 active), with project team sizes of 1 to 3.
    - Several employees work from home one day a week, but have VPN access.
    - There is a group of our devs who have never used TFS and are still on VSS. They are the ones pushing us to jump ship to Mercurial. Mercurial's offline features are one reason they prefer it; another is that they simply associate TFS with VSS, regardless of my assertions to the contrary.
    - We all use FogBugz and everyone agrees that it is great, which has fed our enthusiasm for non-Microsoft products that are MUCH lighter.
    I don't think it is worth it.

    Read the article

  • Apache internal test servers

    - by user74274
    I want to build a test system in the office. On one box I will install XAMPP and put the website that I want to test on it. I do not want it connected to the Internet, so my plan is to use a reverse proxy to resolve things like ads and other external links; I am thinking of using mod_proxy for that. Now my question is: how many boxes do I need? One for XAMPP, one for mod_proxy, and one for the server that mod_proxy redirects to, for a total of three. But maybe I could do it with fewer. Can I run mod_proxy on the first box? Is there a better way? Thanks
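    For what it's worth, a single box looks feasible: XAMPP's bundled Apache already ships with mod_proxy, so the same instance that serves the site can also map external URLs to local stubs. A minimal sketch, assuming the stub paths are stand-ins and the proxy modules are present in XAMPP's httpd.conf:

        # httpd.conf: make sure the bundled proxy modules are enabled
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        # map requests for external resources (hypothetical paths) to a local stub
        ProxyPass        /external-ads/ http://localhost/ad-stub/
        ProxyPassReverse /external-ads/ http://localhost/ad-stub/

    This only covers links the test pages can be pointed at by path; intercepting hard-coded external hostnames would additionally need the test clients to use the box as a forward proxy, or a hosts-file/DNS trick.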

    Read the article

  • How to create Isolated test environment

    - by Safin09
    Hi all, I am very new to the VMware world. We have VMware vCenter 4 in the production environment, and we have created multiple VLANs through a Cisco switch. I want to know how I can create an isolated test environment for software-testing purposes only, so that anything that happens in that test VLAN will not cause trouble in the production environment. Is "host-only networking" the solution, or is there a better way to achieve this? My requirements:
    A. Hosts should be able to access the Internet and a network share drive, but not the production network.
    B. Hosts should be able to connect to each other inside the virtual LAN.
    C. I should be able to take automatic or periodic backups or snapshots, and deploy a snapshot when necessary.
    Whatever your answer is, please give me the steps for how to do it, if possible. If I need to purchase anything I am ready to, but I don't want to spend big money. Many thanks in advance.

    Read the article

  • How to test TempDB performance?

    - by Matt Penner
    I'm getting some conflicting advice on how best to configure our SQL storage with our current SAN. I would like to do some of my own performance testing with a few different configurations. I looked at using SQLIOSim, but it doesn't seem to simulate TempDB. Can anyone recommend a way to test data, log and TempDB performance? What about using a SQL Profiler trace file from our production system? How would I use this to run against my test server? Thanks, Matt
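    As a crude complement to a trace replay, a hand-rolled T-SQL batch can at least put known, repeatable pressure on TempDB. A sketch (not a calibrated benchmark; the row count is arbitrary) that spools rows through a temp table and forces sort work:

        -- generate up to 1M rows into a temp table, which lives in TempDB
        SELECT TOP (1000000) o1.object_id AS id, NEWID() AS pad
        INTO #stress
        FROM sys.objects o1 CROSS JOIN sys.objects o2 CROSS JOIN sys.objects o3;

        -- ordering by NEWID() values defeats any existing order, forcing a TempDB sort
        SELECT id, pad FROM #stress ORDER BY pad;

        DROP TABLE #stress;

    Timing a few runs of something like this under each storage configuration, while watching the disk counters for the TempDB volume, gives a rough A/B comparison even without a production trace.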

    Read the article

  • nginx redirect TLD to TLD with virtual folder (example.com => example.com/test)

    - by Amund
    I'm running nginx, and in the config file I need to have the domain example.com always redirect to example.com/test. I tried various methods of achieving this, but I always got a redirect error. What is the correct way to do this? nginx.conf snippet:

        server {
            server_name example.com www.example.com;
            location / {
                rewrite ^.+ /test permanent;
            }
        }
        server {
            listen 80;
            server_name www.example.com example.com;
            location / {
                root /var/www/apps/example/current/public;
                passenger_enabled on;
                rails_env production;
            }
        }

    Thanks!
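    One plausible culprit in the snippet above: location / also matches /test itself, so the permanent rewrite fires again on the redirected request, and the two server blocks declare the same server_name twice. A sketch of a single merged block that only redirects the exact root (untested against this setup):

        server {
            listen 80;
            server_name example.com www.example.com;

            root /var/www/apps/example/current/public;
            passenger_enabled on;
            rails_env production;

            # exact-match location: only "/" redirects; /test is served normally
            location = / {
                rewrite ^ /test permanent;
            }
        }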

    Read the article

  • Test/Dummy SMTP server for Windows

    - by geoaxis
    I would like to install a test/dummy SMTP server on a Windows 2008 server (a virtual machine). I just want to test my web application on the machine itself, so I don't need the mails to go out on the Internet, just to be written to disk (so that I can verify that the mail function was indeed called and the correct data was handed over to SMTP). Can you recommend a tool? I guess starting your own SMTP server in Python is an option. I am looking for a simple (ready-to-use) solution targeted at test systems. I will need to integrate it into automated tests (Selenium) at a later stage. Thanks
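    Since the question already floats the Python option: the standard library's smtpd module makes the write-to-disk variant only a few lines. A rough sketch (the file-naming scheme is made up):

        # dump_smtpd.py -- accept mail on localhost:25 and write each message to disk
        import smtpd
        import asyncore
        import time

        class FileDumpSMTPServer(smtpd.SMTPServer):
            def process_message(self, peer, mailfrom, rcpttos, data):
                # one file per message so automated tests can inspect the output later
                fname = 'mail-%s.txt' % time.strftime('%Y%m%d-%H%M%S')
                with open(fname, 'w') as f:
                    f.write('From: %s\nTo: %s\n\n%s' % (mailfrom, ', '.join(rcpttos), data))

        server = FileDumpSMTPServer(('127.0.0.1', 25), None)
        asyncore.loop()

    For a zero-code variant, python -m smtpd -n -c DebuggingServer localhost:25 prints incoming mail to the console instead of saving it.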

    Read the article

  • Process of carrying out a BER Test

    - by data
    I am subscribed to an ISP supplying a 3 meg ADSL line. Lately (for the last 4 weeks) speeds have dropped from the usual average downstream speed of ~250kbps to just 0.14Mbps (according to speedtest.net), and employees are complaining about lack of access to the server. I have been calling customer support and logging calls for the last 3 weeks, but they have been unable to determine the source of the problem beyond carrying out a few bitstream tests and checking the DHCP renewal times. I am going to call back and suggest carrying out a BER (bit error rate) test. What type of equipment is needed to carry out this test? I have access to a wide range of Cisco networking equipment. Other: we don't need a leased line, as there are fewer than ten employees.

    Read the article

  • -w test on OS X gives command not found error

    - by RobV
    I'm writing a bash script which I'm testing on OS X, though it will ultimately run in a standard Linux environment, and I'm running into a weird error. I have tests like this in my script:

        if [ ! -w $BP ]; then
            echo "'$1' not writable"
            exit 1
        fi

    This seems pretty sane to me and works fine under Linux, but when trying to test on OS X I get the following error message:

        startSvr.sh: line 135: [: missing `]'
        startSvr.sh: line 135: -w: command not found

    So is this a case of OS X not supporting the -w test, or is there some other reason this isn't working for me, e.g. the environment?
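    The -w test itself is POSIX, and OS X's bash supports it, so a likely culprit is something invisible in the script rather than a missing feature: stray carriage returns from editing on another platform, or an odd character between the brackets. Running the file through cat -v (look for ^M) would confirm or rule out the line-ending theory. A defensive rewrite, with the variable quoted so an empty $BP still leaves [ a well-formed expression:

        BP=$1                        # assuming BP is meant to come from the first argument
        if [ ! -w "$BP" ]; then      # quoting keeps the test valid when BP is empty
            echo "'$BP' not writable"
            exit 1
        fi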

    Read the article

  • Couchdb failing test suite on Linux

    - by user52674
    Hi, I've been trying to install CouchDB on my WebFaction virtual server. I followed the latest instructions from the WebFaction forum (see http://forum.webfaction.com/viewtopic.php?id=2355) and it runs, just barely: Futon is very sluggish and I get 502 errors. Anyway, when I run the test suite it fails on multiple tests. WebFaction support have been great but don't have the Erlang experience to interpret the error logs. Can anyone help me work out what might be wrong? The failing tests are: basics, all_docs, attachments, attachments_multipart, attachment_names, compact, config, conflicts, delayed_commits, design_docs, design_options. All the errors are:

        Exception raised: {"error":"unknown","reason":"...502 Bad Gateway... nginx..."}

    (the reason field contains nginx's HTML "502 Bad Gateway" error page, mangled in transcription here), except for 'compact', which also has:

        Assertion failed: xhr.responseText == "This is a base64 encoded text"
        Assertion failed: xhr.getResponseHeader("Content-Type") == "text/plain"

    I'm stumped. Anybody know what these indicate? AL

    Read the article

  • What's a worthwhile test for a new HD?

    - by Michael Kohne
    I work for a company that uses standard 2.5" SATA HDs in our product. We presently test them by running the Linux 'badblocks -w' command on them when we get them, but they are 160 gig drives, so that takes about 5 hours (we boot Parted Magic on a PC to do the scan). We don't actually build that many systems at a time, so this is doable, but seriously annoying. Is there any research or anecdotal evidence on what a good incoming test for a hard drive should be? I'm thinking that we should just wipe them with all zeros, write out our image, and do a full drive read-back. That would end up being only about 1 hour 45 minutes total. Given that drives do block remapping on their own, would what I've proposed show up infant mortality just as well as running badblocks?
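    A sketch of the proposed wipe/image/verify cycle, assuming the drive appears as /dev/sdX (hypothetical device name) and image.img is the production image. The drive's own SMART extended self-test is a cheap extra signal, since on-the-fly remapping can hide weak sectors from a plain read pass:

        DEV=/dev/sdX                     # hypothetical -- double-check the device node
        IMG=image.img

        smartctl -t long "$DEV"          # optional: queue the drive's own surface scan
        dd if=/dev/zero of="$DEV" bs=1M  # wipe pass
        dd if="$IMG" of="$DEV" bs=1M     # write the production image
        # read back exactly the image's length and compare byte for byte
        cmp -n "$(stat -c %s "$IMG")" "$DEV" "$IMG"
        smartctl -H "$DEV"               # overall SMART health afterwards

    Whether this catches infant mortality as reliably as badblocks -w is exactly the open question; the read-back plus a check of the reallocated-sector count covers much of the same ground in a fraction of the time.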

    Read the article

  • Use test to check for condition with find and execdir option

    - by slosd
    I think I can keep my question short. Why does the following command produce no output?

        find /usr/share/themes -mindepth 1 -maxdepth 1 -type d -execdir test -d {}/gnome-shell \;

    I expected it to print all folders in /usr/share/themes that contain a folder gnome-shell. Several websites suggest that this usage of test as a command in -exec/-execdir is possible. From man find:

        -exec command ;
            Execute command; true if 0 status is returned. [...]
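    A likely explanation, for what it's worth: test succeeds or fails silently, and once find is given an explicit action such as -execdir, its implicit -print is switched off, so even matching directories produce no output. Appending -print, which is evaluated only when the preceding test returns 0, should restore it:

        find /usr/share/themes -mindepth 1 -maxdepth 1 -type d \
             -execdir test -d {}/gnome-shell \; -print

    One caveat: some find implementations only expand a standalone {}, in which case {}/gnome-shell is passed literally; the portable spelling is -execdir sh -c 'test -d "$1"/gnome-shell' sh {} \;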

    Read the article

  • How to facilitate code reviews in a small team for embedded software?

    - by Adam Lewis
    Short Question: Does a cost-effective tool / workflow exist to facilitate code reviews in a small team? More specifically, a small team that relies on post-commit code reviews.
    Background: Our team currently consists of 3 full-time and 1 part-time software engineers, with plans to hire more in the near future. Due to our team size and the volume of projects we all must juggle, the pre-commit workflow that major tools (such as Review Board and Code Collaborator) use is not attainable for us right now. The best we can do at the moment is to perform post-commit reviews before major releases or as time permits. Nearly all of our projects are hosted on RepositoryHosting.com (which I highly recommend) and are a mixture of SVN and Git repositories.
    Current Thoughts: Since I cannot find a tool that fits our needs right now, I am turning to the TRAC instance that is built into our repository host's site. At the moment we use TRAC to file tickets and track milestones, so to me this seems like a natural fit for code review results as well. The direction I am heading in right now is to use a spreadsheet (or several) to log all of the bugs and comments, do some macro magic to get it into a format that works with TRAC's ticket-import method, and use TRAC's ticketing system to create the action items / bug reports automatically. The auto ticket generation is darn near a must-have; adding in bugs and comments one at a time from a web GUI is really painful.
    Secondary Question: If this workflow makes sense, is there a good / standard template to use as a code review log?

    Read the article

  • Which metric/list should be used to evaluate whole software development team?

    - by adt
    The title might seem vague, so let me give you a little history to clarify the question. I have been hired as a consultant for a corporation's small development division (the company also owns a couple of software dev companies). My ex-manager runs a BI team with reporters, analysts and developers. He asked me to evaluate the overall design, the software development process and the code quality. Here is what I found:
    - Lots of copy/paste code everywhere (no reuse)
    - Even though they have everything (TFS, VS Ultimate, etc.), no build process, no CruiseControl.NET / TeamCity...
    - No unit tests
    - Web pages with 3700 lines of code; lots of huge functions (which could be divided into smaller ones)
    - No naming convention in either the DB or the C# code
    - No 3rd-party or open-source projects
    - No IoC
    - No separation of concerns
    - No code quality checks (NDepend, FxCop, nothing)
    - No code reviews
    - No communication within the team
    They claim they wrote an application framework (6 months, 3 people), but I would hardly call it a framework (and of course there are no unit tests; there are some, but all commented out). The framework contains 14 projects, but some projects have 1 file with 20 lines of code. Honestly, what people are doing is fixing bugs all day (which will eventually produce more bugs). They are kind of isolated from the community; some team members don't even know GitHub or Stack Overflow (they have probably landed on them via Google, but they don't know about them).
    So here is the question: is this list OK, or am I being picky? I don't have any grudge against them; I just want to be fair and honest, and I would like to hear your suggestions before I submit this list. And since the list will also be reviewed by the software division's manager, I don't want any heartbreak or anything like that. For example, I would love links to lists like http://www.hanselman.com/altnetgeekcode/ that I can reference. Thanks in advance.

    Read the article

  • Error TF31004 when connecting to TFS2008

    - by Ben
    I'm trying to connect to a TFS 2008 server through Visual Studio 2008 (Tools\Connect to Team Foundation Server) and get this error when trying to add our server:

        TF31004: Team Foundation encountered an unexpected error while connecting to Team Foundation Server . Wait a few minutes and try again. If the problem persists, contact your Team Foundation Server administrator.

    Needless to say, waiting doesn't help. I've tried using the IP address instead of the hostname, but I get the same error. I can log in via a browser; in fact, IE and Chrome both SSO me straight in. The server is only used for testing one of our TFS plugins, so it doesn't get much real use.

    Read the article

  • Test run errors with MSTest in VS2010

    - by Tomas Lycken
    When I run my unit tests, all tests pass, but instead of "Test run succeeded" (or whatever the success message is), I get "Test run error" in the little bar that tells me how many of my tests pass, even though all my tests passed. When I click the text, I'm taken to a page that tells me the following two things happened:

        Warning: conflict during test run deployment: deployment item '[...]\Booking.Web.dll' directly or indirectly referenced by the test container [...]\Booking.Web.Tests.dll cannot be deployed to 'Booking.Web.dll' because otherwise the file '[...]\Booking.Web.dll' would override deployment item '[...]\Booking.Web.dll' directly or indirectly referenced by '[...]\Booking.Web.Tests.dll'

        Error: Cannot initialize the ASP.NET project 'Booking.Web'. Exception was thrown: The website could not be configured correctly; getting ASP.NET process information failed. Requesting 'http://localhost:54131/VSEnterpriseHelper.axd' returned an error: The remote server returned an error: (500) Internal Server Error.

    I don't understand half of what it's complaining about. How do I get rid of these errors? (For reference: Booking.Web is an ASP.NET MVC 2 project, Booking.Web.Tests is a test project, and [...] is the full local path to the projects in my environment; in most of the cases above it points to the /bin/debug/ folder inside the Booking.Web project.)

    Read the article

  • Setup Google Test (gtest) with Eclipse on OS X

    - by ejel
    What is the procedure to set up Google Test to work under Eclipse on Mac OS X? I followed the instructions in the README to compile and install gtest as a framework from Xcode. Now I want to use gtest with Eclipse. Currently it compiles fine but fails during the build. I suppose Eclipse does not use the framework concept the way Xcode does and needs a different linking approach, but I'm not sure which files I should link against during the build.

        g++ -L/usr/local/lib -L/usr/local/lib/libgtest.a -L/Library/Frameworks/gtest.framework -arch i386 -o "Raytracer" ./test/sample_test.o ./src/Raytracer.o
        Undefined symbols:
          "testing::Test::~Test()", referenced from:
              DemoTest_SANITY_Test::~DemoTest_SANITY_Test() in sample_test.o
              DemoTest_SANITY_Test::~DemoTest_SANITY_Test() in sample_test.o
          "testing::internal::AssertHelper::~AssertHelper()", referenced from:
              DemoTest_SANITY_Test::TestBody() in sample_test.o
              DemoTest_SANITY_Test::TestBody() in sample_test.o
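    Those undefined testing:: symbols suggest libgtest is never actually linked: -L only adds search directories, so -L/usr/local/lib/libgtest.a is effectively a no-op, and nothing on the command line names the library itself. Assuming a static libgtest.a was installed into /usr/local/lib alongside the framework, something along these lines should resolve it (libraries placed after the objects that use them):

        # link gtest explicitly; add -lgtest_main only if nothing else defines main()
        g++ -arch i386 -o Raytracer ./test/sample_test.o ./src/Raytracer.o \
            -L/usr/local/lib -lgtest

    If only the framework build exists, -F/Library/Frameworks -framework gtest is the analogous incantation; in Eclipse CDT these settings usually live under Project Properties > C/C++ Build > Settings, in the linker's Libraries and Miscellaneous panes.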

    Read the article

  • Error when logging in with Machinist in Shoulda test

    - by user303747
    I am having some trouble getting the right usage of Machinist and Shoulda in my testing. Here is my test:

        context "on POST method rating" do
          p = Product.make
          u = nil
          setup do
            u = login_as
            post :vote, :rating => 3, :id => p
          end
          should "set rating for product to 3" do
            assert_equal p.get_user_vote(u), 3
          end

    And here are my blueprints:

        Sham.login { Faker::Internet.user_name }
        Sham.name  { Faker::Lorem.words }
        Sham.email { Faker::Internet.email }
        Sham.body  { Faker::Lorem.paragraphs(2) }

        User.blueprint do
          login
          password "testpass"
          password_confirmation { password }
          email
        end

        Product.blueprint do
          name { Sham.name }
          user { User.make }
        end

    And my authentication test helper:

        def login_as(u = nil)
          u ||= User.make()
          @controller.stubs(:current_user).returns(u)
          u
        end

    The error I get is:

        /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/validations.rb:1090:in `save_without_dirty!': Validation failed: Login has already been taken, Email has already been taken (ActiveRecord::RecordInvalid)
        from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/dirty.rb:87:in `save_without_transactions!'
        from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:200:in `save!'
        from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
        from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:182:in `transaction'
        from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:200:in `save!'
        from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
        from /home/jason/moderndarwin/vendor/rails/activerecord/lib/active_record/transactions.rb:200:in `save!'
        from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist/active_record.rb:55:in `make'
        from /home/jason/moderndarwin/test/blueprints.rb:37
        from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist.rb:77:in `generate_attribute_value'
        from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist.rb:46:in `method_missing'
        from /home/jason/moderndarwin/test/blueprints.rb:37
        from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist.rb:20:in `instance_eval'
        from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist.rb:20:in `run'
        from /usr/lib/ruby/gems/1.8/gems/machinist-1.0.6/lib/machinist/active_record.rb:53:in `make'
        from ./test/functional/products_controller_test.rb:25:in `__bind_1269805681_945912'
        from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:293:in `call'
        from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:293:in `merge_block'
        from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:288:in `initialize'
        from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:169:in `new'
        from /home/jason/moderndarwin/vendor/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:169:in `context'
        from ./test/functional/products_controller_test.rb:24

    I can't figure out what I'm doing wrong... I have tested login_as with my auth (Authlogic) in my user_controller testing. Any pointers in the right direction would be much appreciated!
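    One plausible reading of that failure: p = Product.make sits at the context level, so it runs when the test class is loaded, outside the transaction that wraps each test; rows left over from earlier runs then trip the uniqueness validations on login and email. A sketch with record creation moved into setup (hypothetical, untested against this app):

        context "on POST method rating" do
          setup do
            Sham.reset                        # per Machinist's docs, reset generated values
            @product = Product.make           # now runs inside the per-test transaction
            @user    = login_as
            post :vote, :rating => 3, :id => @product
          end

          should "set rating for product to 3" do
            assert_equal 3, @product.get_user_vote(@user)
          end
        end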

    Read the article

  • Anti-Joel Test

    - by Vaibhav Garg
    The Joel Test is a measure of how a team performs with regard to best practices in coding. What questions, given a 'yes' answer, would subtract from the Joel Test score? (Assuming you don't simply negate the current questions on the Joel Test, i.e., "Do you have no source control?") For example: does the company insist on being very process-heavy?

    Read the article

  • Visual Studio Wcf Test Client - entering an Int array

    - by WebDude
    Hi, I've found the Visual Studio WCF Test Client quite useful when it comes to a quick test of my WCF service. This is the test client found at this location relative to your Visual Studio install directory: \Common7\IDE\WcfTestClient.exe. I have a few service calls that require a parameter of type System.Int32[], and I can't seem to figure out what values to enter for this parameter, as I keep receiving the error '[value entered]' is not a valid value for this type. Trying to enter the value 27, I have tried the following, but all fail:

        27
        { 27 }
        new System.Int32[] { 27 }

    Can anyone please help with how to do this?

    Read the article

  • Rails test db doesn't persist record changes

    - by nathan.f77
    I've been trying to solve a problem for a few weeks now. I am running RSpec tests for my Rails app, and they are working fine except for one error that I can't seem to get my head around. I am using MySQL with the InnoDB engine. I have set config.use_transactional_fixtures = true in spec_helper.rb, and I load my test fixtures manually with the command rake spec:db:fixtures:load. The spec is being written for a BackgrounDRb worker, and it is testing that a record can have its state updated (through the state_machine gem). I have a model called Listing. The spec calls the update_sold_items method in a file called listing_worker.rb; this method calls listing.sell for a particular record, which sets the record's 'state' column to 'sold'. So far, this all works fine, but when update_sold_items finishes, my spec fails here:

        listing = Listing.find_by_listing_id(listing_id)
        listing.state.should == "sold"

        expected: "sold", got: "current" (using ==)

    I've been trying to track down why the state change is not persisting, but I am pretty much lost. Here is the result of some debugging code that I placed in the update_sold_items method during the test:

        pp listing.state   # => "current"
        listing.sell!
        listing.save!
        pp listing.state   # => "sold"
        listing.reload
        pp listing.state   # => "current"

    I cannot understand why it saves perfectly fine but then reverts back to the original record whenever I call reload, or Listing.find, etc. Thanks for reading this, and please ask any questions if I haven't given enough information. Thanks for your help, Nathan B
    P.S. I don't have a problem creating new records for other classes and testing those records. It only seems to be a problem when I am updating records that already exist in the database.

    Read the article

  • Ruby Iconv works with irb and ruby debugger but not in a unit test

    - by Mark B
    I'm running Ruby 1.8.7 with Rails 2.3.5 on Ubuntu 10.04 64-bit. I've written a method that should take a string like "École À la Découverte" and output a file-system-safe name like "ecole_a_la_decouverte":

        (Iconv.new('US-ASCII//TRANSLIT', 'utf-8').iconv "École À la Découverte").gsub(/[^\w\s-\—]/,'').gsub(/[^\w]|[_]/,' ').split.join('_').downcase

    When I test this line in my code, the test always fails, saying that "cole_la_dcouverte" is not equal to "ecole_a_la_decouverte". The odd thing is that if I insert a debugger line and use the debugger console, the test passes. Likewise, running this line manually in irb seems to work. Does anyone know what's going on and why this test is failing? My only thought is that including the debugger or irb somehow adds more support for UTF-8, but I'm at a loss as to where to go next. Thanks in advance!
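    One guess worth testing, given the irb-versus-test discrepancy: under Ruby 1.8, regexps operate on raw bytes unless $KCODE is set to UTF-8, and irb and some debugger setups set it for you while a plain test run may not. A one-line experiment for test_helper.rb (hypothesis only):

        $KCODE = 'UTF8'   # make Ruby 1.8 regexps treat strings as UTF-8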

    Read the article

  • No tests found with test runner 'JUnit 4'

    - by lamisse
    Hello, my Java test worked well from Eclipse before; now, when I relaunch the test from the Run menu, I get the following message:

        No tests found with test runner 'JUnit 4'

    In the .classpath file I have all the jar files, and at the end I have:

        <classpathentry exported="true" kind="con" path="org.eclipse.jdt.junit.JUNIT_CONTAINER/4"/>
        <classpathentry kind="output" path="bin"/>
        </classpath>

    Please help!
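    A common cause of this message is that the runner cannot find any @Test-annotated methods, for instance when a class still uses JUnit 3-style naming without extending TestCase, or the test methods are not public. For comparison, a minimal JUnit 4 test that the runner should pick up (hypothetical class name):

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class ExampleTest {
            @Test                          // JUnit 4 discovers tests by this annotation,
            public void addsNumbers() {    // not by the method name
                assertEquals(4, 2 + 2);
            }
        }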

    Read the article
