Search Results

Search found 4783 results on 192 pages for 'tests'.


  • PHPUnit reporting "Aborted" no matter what tests are run

    - by GrumpyCanuck
    Having a weird problem with PHPUnit. We're using PHPUnit as part of a continuous integration environment that contains one app written using Zend Framework and one app written using CodeIgniter. Unit tests run just fine under Zend Framework, but whenever I run the tests for CodeIgniter using fooStack's CIUnit bridge, I always get the same problem at the end:

        PHPUnit 3.4.14 by Sebastian Bergmann.
        ............... .
        Time: 1 second, Memory: 7.00Mb
        OK (16 tests, 14 assertions)
        Aborted

    First off, I do not know what the empty spaces between the dots mean. Secondly, no matter which tests I run (all of them or each one separately) I get the same Aborted message at the very end. The tests themselves do not contain any exit or die statements. When I run the same version of PHPUnit on my laptop (running OS X Snow Leopard and the same version of Zend Server Community Edition) I do not get that Aborted message. The CI machine runs PHP 5.3.2 on Ubuntu, installed using Zend Server Community Edition. Any help with this would be greatly appreciated.


  • How to skip certain tests with Test::Unit

    - by Daniel Abrahamsson
    In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly for that reason I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite. My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement that checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra line to the beginning of each test. There are not many tests that have to interact with the test servers, as I use flexmock in other situations. However, you can't mock yourself away from reality. As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.


  • Is it feasible and useful to auto-generate some code of unit tests?

    - by skiwi
    Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question features a fair chunk of Java code, but the idea can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code.

    The idea: make unit testing less cumbersome by adding ways to auto-generate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better than first creating code and then immediately thereafter the tests. This may even be adapted to fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to write unit tests:

        public class PutMonsterOnFieldAction implements PlayerAction {
            private final int handCardIndex;
            private final int fieldMonsterIndex;

            public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) {
                this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex");
                this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex");
            }

            @Override
            public boolean isActionAllowed(final Player player) {
                Objects.requireNonNull(player, "player");
                Hand hand = player.getHand();
                Field field = player.getField();
                if (handCardIndex >= hand.getCapacity()) {
                    return false;
                }
                if (fieldMonsterIndex >= field.getMonsterCapacity()) {
                    return false;
                }
                if (field.hasMonster(fieldMonsterIndex)) {
                    return false;
                }
                if (!(hand.get(handCardIndex) instanceof MonsterCard)) {
                    return false;
                }
                return true;
            }

            @Override
            public void performAction(final Player player) {
                Objects.requireNonNull(player);
                if (!isActionAllowed(player)) {
                    throw new PlayerActionNotAllowedException();
                }
                Hand hand = player.getHand();
                Field field = player.getField();
                field.setMonster(fieldMonsterIndex, (MonsterCard) hand.play(handCardIndex));
            }
        }

    We can observe the need for the following tests:

        - Constructor test with valid input
        - Constructor test with invalid inputs
        - isActionAllowed test with valid input
        - isActionAllowed test with invalid inputs
        - performAction test with valid input
        - performAction test with invalid inputs

    My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun: you need to set up a number of conditions and then check that the method really returns false. This extends to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (hopefully through the IDE's GUI) that you want to generate tests based on a specific branch.

    The implementation, by example: the user clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that condition holds. (I haven't added the relevant code, as that would clutter the post.) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the >= condition holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the private int capacity of Hand needs to be at least 1 and searches for ways to set it. Fortunately it finds a constructor that takes the capacity as an argument, and uses 1 for it. Some more work needs to be done to successfully construct a Player instance, involving the creation of objects whose constraints can be seen by inspecting the source code. The tool has now found the hand with the least capacity possible and is able to construct it. Now, to invalidate the test, it will need to set handCardIndex = 1. It constructs the test and asserts the result to be false (the value returned by the branch).

    What does the tool need in order to work? To function properly, it needs the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the Javadoc, but that is not always used to document all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. It also needs some basic knowledge of the primitive types, including arrays. And it needs to be able to construct some form of "modification trees": the tool knows that it needs to change a certain variable to a different value in order to get the correct test case, so it needs to list all possible ways to change it, without using reflection obviously.

    What this tool will not replace is the need to create tailored unit tests that check all kinds of conditions when a certain method actually works. It is purely meant to test methods when they violate constraints.

    My questions:

        - Is creating such a tool feasible? Would it ever work, or are there some obvious problems?
        - Would such a tool be useful? Is it even useful to automatically generate these test cases at all? Could it be extended to do even more useful things?
        - Does, by chance, such a project already exist, so that I would be reinventing the wheel?

    If not proven useful, but still possible to make, I will still consider it for fun. If it's considered useful, then I might start an open source project for it, depending on the time available. For people searching for more background information about the Player and Hand classes used in my example, please refer to this repository. At the time of writing, PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.
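    (Illustration, not from the original post.) Below is a minimal, hand-written sketch of the kind of JUnit test such a tool might emit for the "handCardIndex >= hand.getCapacity()" branch. The Hand(int), Field(int) and Player(Hand, Field) constructors are assumptions made for the example and may not match the real repository.

        import static org.junit.Assert.assertFalse;

        import org.junit.Test;

        public class PutMonsterOnFieldActionGeneratedTest {

            @Test
            public void isActionAllowedIsFalseWhenHandCardIndexExceedsHandCapacity() {
                // Smallest hand the tool could construct: capacity 1, so the only valid index is 0.
                Hand hand = new Hand(1);                  // assumed constructor
                Field field = new Field(1);               // assumed constructor
                Player player = new Player(hand, field);  // assumed constructor

                // handCardIndex = 1 violates handCardIndex < hand.getCapacity().
                PutMonsterOnFieldAction action = new PutMonsterOnFieldAction(1, 0);

                // The branch under test must make the action disallowed.
                assertFalse(action.isActionAllowed(player));
            }
        }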


  • How to set locale default_url_options for functional tests (Rails)

    - by insane.dreamer
    In my application_controller I have the following, so that the locale is included with all paths generated by url_for:

        def default_url_options(options = {})
          { :locale => I18n.locale }
        end

    My resource routes then have :path_prefix => "/:locale". This works fine on the site, but when it comes to my functional tests the :locale is not passed with the generated URLs, and therefore they all fail. I can get around it by adding the locale to the URL in my tests, like so: get :new, :locale => 'en'. But I don't want to have to manually add the locale to every functional test. I tried adding the default_url_options def above to test_helper, but it seems to have no effect. Is there any way I can change the default_url_options to include the locale for all my tests? Thanks.


  • Best practices for file system dependencies in unit/integration tests

    - by Olvagor
    I just started writing tests for a lot of code. There's a bunch of classes with dependencies on the file system, that is, they read CSV files, read/write configuration files and so on. Currently the test files are stored in the test directory of the project (it's a Maven2 project), but for several reasons this directory doesn't always exist, so the tests fail. Do you know of best practices for coping with file system dependencies in unit/integration tests? Edit: I'm not looking for an answer to the specific problem described above; that was just an example. I'd prefer general recommendations on how to handle dependencies on the file system, databases, etc.
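    (Not from the post.) One common Maven convention is to keep test fixtures under src/test/resources, which Maven always places on the test classpath, and to load them as classpath resources rather than by filesystem path; a minimal sketch, assuming a hypothetical fixture file fixtures/sample.csv:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;

        import org.junit.Assert;
        import org.junit.Test;

        public class CsvFixtureTest {

            @Test
            public void readsCsvFixtureFromClasspath() throws IOException {
                // Resolved from the test classpath, so it works regardless of the working directory.
                InputStream in = getClass().getResourceAsStream("/fixtures/sample.csv");
                Assert.assertNotNull("fixture missing from test classpath", in);

                BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
                try {
                    Assert.assertNotNull("fixture should not be empty", reader.readLine());
                } finally {
                    reader.close();
                }
            }
        }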


  • JUnit tests for POJOs

    - by Ryan Thames
    I work on a project where we have to create unit tests for all of our simple beans (POJOs). Is there any point in creating a unit test for a POJO if all it consists of is getters and setters? Is it safe to assume POJOs will work about 100% of the time? Duplicate of: Should @Entity Pojos be tested? See also: Is it bad practice to run tests on a DB instead of on fake repositories? Is there a Java unit-test framework that auto-tests getters and setters?
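    (Not from the post.) For reference, the kind of test in question is usually no more than this; a minimal sketch with a hypothetical Person bean:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class PersonTest {

            // Hypothetical POJO with a single property, used only for illustration.
            static class Person {
                private String name;
                public String getName() { return name; }
                public void setName(String name) { this.name = name; }
            }

            @Test
            public void setterValueIsReturnedByGetter() {
                Person person = new Person();
                person.setName("Ada");
                assertEquals("Ada", person.getName());
            }
        }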


  • Typical size of unit tests compared to test code

    - by Frank Schwieterman
    I'm curious what a reasonable / typical value is for the ratio of test code to production code when people are doing TDD. Looking at one component, I have 530 lines of test code for 130 lines of production code. Another component has 1000 lines of test code for 360 lines of production code. So the unit tests require roughly 3x to 5x as much code. This is for JavaScript code. I don't have much tested C# code handy, but I think for another project I was looking at 2x to 3x as much test code as production code. It would seem to me that a lower value, assuming the tests are sufficient, would reflect higher-quality tests. Pure speculation; I just wonder what ratios other people see. I know lines of code is a loose metric, but since I code in the same style for both test and production code (same spacing format, same amount of comments, etc.) the values are comparable.


  • Hudson CI project doesn't run NetBeans JUnit tests of dependent projects

    - by Liron Yahdav
    I have a set of NetBeans Java projects with dependencies between them. I added the project at the top of the dependency tree to Hudson for continuous integration. Everything works fine, except that the unit tests of dependent projects don't get run by Hudson. This is because the Ant scripts that NetBeans creates have dependent projects set up to run the "jar" target and not a target that also runs the unit tests. I could add Ant build steps for each dependent project in Hudson to run the unit tests, but I was hoping there's a simpler solution.


  • Delete or comment out non-working JUnit tests?

    - by Chris Knight
    I'm currently building a CI build script for a legacy application. There are sporadic JUnit tests available, and I will be integrating a JUnit execution of all tests into the CI build. However, I'm wondering what to do with the 100-ish failures I'm encountering in the non-maintained JUnit tests. Do I:

        1) Comment them out, as they appear to have reasonable, if unmaintained, business logic in them, in the hope that someone eventually uncomments them and fixes them;
        2) Delete them, as it's unlikely that anyone will fix them and the commented-out code will only be ignored or be clutter forevermore; or
        3) Track down those who have left this mess in my hands and whack them over the heads with printouts of the code (which, due to long-method smell, will be sufficiently suited to the task) while preaching the benefits of a well-maintained and unit-tested code base?
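    (Not an option mentioned in the post.) A related JUnit 4 mechanism is the @Ignore annotation, which keeps a failing test compiled and visible in reports without running it; a sketch with a hypothetical test:

        import org.junit.Ignore;
        import org.junit.Test;

        public class LegacyCalculationTest {

            // Stays in the build and shows up as "ignored" in CI reports,
            // rather than being deleted or commented out.
            @Ignore("Broken by an old refactor; business logic may still be worth salvaging")
            @Test
            public void interestIsCompoundedMonthly() {
                // original assertions left as they were
            }
        }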


  • NUnit not running Suite tests

    - by Assaf Lavie
    I've created a test suite in NUnit that references several distinct unit test fixtures in various assemblies. I've pretty much used the example code from NUnit's docs:

        namespace NUnit.Tests
        {
            using System;
            using NUnit.Framework;
            using System.Collections;

            public class AllTests
            {
                [Suite]
                public static IEnumerable Suite
                {
                    get
                    {
                        ArrayList suite = new ArrayList();
                        suite.Add(new VisionMap.DotNet.Tests.ManagedInteropTest.DotNetUtilsTest());
                        return suite;
                    }
                }
            }
        }

    My goal is to add several tests to the list above so I can run them all in a batch. But when I try to load the DLL in NUnit's GUI I get this: What am I doing wrong? I'm using NUnit 2.5.0.9122.


  • Maven2 compiles my tests, but doesn't run them

    - by Vincenzo
    I have a simple Maven2 project with tests written for TestNG. When I run mvn test, Maven2 compiles my tests but doesn't run them. I already checked this page: http://maven.apache.org/general.html#test-property-name. That is not my case. Can anybody help? My directory structure:

        pom.xml
        src
            main
                java
                    com ...
            test
                java
                    com ...
        target
            classes        <- .class files go there
            test-classes   <- .class files with tests go there

    This is what I see if I run mvn -X test (end of the log):

        ...
        [INFO] Surefire report directory: <mydir>/target/surefire-reports
        Forking command line: /bin/sh -c cd <mydir> && /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/bin/java -jar /var/folders/+j/+jAx0g2xGA8-Ns9lWNOWgk+++TM/-Tmp-/surefirebooter7645642850235508331.jar /var/folders/+j/+jAx0g2xGA8-Ns9lWNOWgk+++TM/-Tmp-/surefire4544000548243268568tmp /var/folders/+j/+jAx0g2xGA8-Ns9lWNOWgk+++TM/-Tmp-/surefire7481499683857473873tmp
        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running TestSuite
        Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec
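    (Background added for context, not from the post.) With Surefire 2.x, a TestNG test class is normally only picked up if TestNG is a declared test dependency, the class name matches Surefire's default includes (Test*, *Test, *TestCase), and the methods carry TestNG's @Test annotation. A minimal sketch of a class Surefire should run:

        import org.testng.Assert;
        import org.testng.annotations.Test;

        // The class name ends in "Test", so it matches Surefire's default include patterns.
        public class ExampleTest {

            @Test
            public void additionWorks() {
                Assert.assertEquals(2 + 2, 4);
            }
        }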


  • .NET TDD with a Database and ADO.NET Entity Framework - Integration Tests

    - by Brian
    Hello, I'm using the ADO.NET Entity Framework, with an AdventureWorks database attached to my local database server. For unit testing, what approaches have people taken to working with a database? Obviously, the database has to be in a pre-defined state so that the tests have some isolation from each other... so I need to be able to run through the inserts and updates, then roll back either between tests or after the batch of tests is done. Any advice? Thanks.


  • Silverlight Unit Testing Framework running tests in external class library

    - by Jonas Follesø
    I'm currently looking into different options for unit testing Silverlight applications. One of the frameworks available is the Silverlight Unit Test Framework from Microsoft (developed primarily by Jeff Wilcox, http://www.jeff.wilcox.name/2010/05/sl3-utf-bits/). One of the scenarios I'm looking into is running the same tests on both Silverlight 3 (PC) and Windows Phone 7. The Silverlight Unit Test Framework (SLUT) runs on both the PC and the phone. To avoid having to copy or link files, I would like to put my tests into a shared test library that can be loaded either by a WP7 application using SLUT or by a Silverlight 3 application using SLUT. So my question is: will SLUT load unit tests defined in a referenced class library, or only in the executing assembly?


  • Running JUnit tests in parallel?

    - by krosenvold
    I'm using JUnit 4.4 and Maven, and I have a large number of long-running integration tests. When it comes to parallelizing test suites there are a few solutions that allow me to run each test method in a single test class in parallel. But all of these require that I change the tests in one way or another. I really think it would be a much cleaner solution to run X different test classes in X threads in parallel. I have hundreds of tests, so I don't really care about threading individual test classes. Is there any way to do this?
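    (Not from the post, and it requires a newer JUnit than the 4.4 mentioned above.) JUnit's experimental ParallelComputer illustrates the "X test classes in X threads" idea without modifying the tests themselves; a sketch using hypothetical test classes FooIT and BarIT:

        import org.junit.experimental.ParallelComputer;
        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;

        public class ParallelClassRunner {

            public static void main(String[] args) {
                // Each listed test class runs in its own thread; methods within a class stay sequential.
                Result result = JUnitCore.runClasses(
                        ParallelComputer.classes(),
                        FooIT.class,   // hypothetical integration test class
                        BarIT.class);  // hypothetical integration test class
                System.out.println("Failures: " + result.getFailureCount());
            }
        }

    Newer versions of the Maven Surefire plugin expose a similar per-class parallel option, but that is a build-configuration concern rather than a change to the tests.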


  • How to access project files from NUnit tests

    - by Daren Thomas
    I have some tests that I run with ReSharper's "Run All Tests from Solution" feature. One of the classes being tested has a dependency on a file in the same folder as the assembly containing it. This file is copied to the output directory via MSBuild ("Copy To Output Directory" is set to "Copy always"). Problem: the tests are not being run from the normal assembly output directory, but instead from some temporary location in my user profile. Therefore, I don't really know where to look for the file; the test runner does not copy it there. Can I force it to?


  • who wrote 250k tests for webkit?

    - by amwinter
    Assuming a yield of 3 per hour, that's 83,000 hours. Eight hours a day makes 10,500 days; divide by thirty to get 342 mythical man-months. I call them mythical because writing 125 tests per person per week is unreal. Can any wise soul out there on SO shed some light on what sort of mythical men write unreal quantities of tests for large software projects? Thank you. Update: chrisw thinks there are only 20k tests (check out his explanation below). PS: I'd really like to hear from folks who have worked on projects with large test bases.
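    (Added check, not in the original post.) Redoing the back-of-envelope arithmetic with the same assumptions gives numbers in the same ballpark as the post's:

        \frac{250{,}000 \text{ tests}}{3 \text{ tests/hour}} \approx 83{,}300 \text{ hours},\qquad
        \frac{83{,}300 \text{ hours}}{8 \text{ hours/day}} \approx 10{,}400 \text{ days},\qquad
        \frac{10{,}400 \text{ days}}{30 \text{ days/month}} \approx 347 \text{ man-months}.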


  • environment configuration for tests running in NUnit

    - by Frank Schwieterman
    I have some integration tests that hit a web server and verify certain functionality. Depending on the build environment, the server will be at a different address (http://localhost:8080/, http://test-vm/, etc.). I would like to run these tests from a TFS build. I'm wondering what's the appropriate way to configure these tests? Would I just add a setting to the config file? I'm doing that currently. Incidentally, we do have a separate branch per test environment, so I could have a different config file checked in for each environment. I wonder if there is a better way, though? I'd like the build project to be able to tell the tests which server to test. This seems better, because then I don't have to maintain config information on a per-branch basis. I believe I'd be using NUnit for Team Build (http://nunit4teambuild.codeplex.com/) to get NUnit/TFS to play together.


  • Framework/tool for processing C++ unit tests with numerical output

    - by David Claridge
    Hi, I am working on a C++ application that uses computer vision techniques to identify various types of objects in a sequence of images. The (1000+) images have been hand-classified, so we have an XML file for each image containing a description of where the objects are actually located in the images. I would like to know if there is a testing framework that can understand/graph results from tests that are numeric, in this case some measure of the error in the program's classification of the images, rather than just pass/fail style unit tests. We would like to use something like CDash/CTest for running these automated tests, and viewing over time how improvements to the vision algorithms are causing the images to be more correctly classified. Does anyone know of a tool/framework that can do this?


  • JUnit unable to find tests in Eclipse

    - by mikera
    I have a strange issue with JUnit 4 tests in Eclipse 3.5 that I couldn't solve; any hints gratefully received! Initially: I had a test suite working properly, with 100+ tests all configured with JUnit 4 annotations. I'd typically run these by right-clicking on my source folder and selecting "Run as JUnit test". All worked perfectly. Now: when I try to run the tests, all I get is the error "No tests found with test runner 'JUnit 4'". Any idea what is happening? I simply can't work out what could have changed to make this fail. My guess is that it is some configuration issue based on the build path or classpath?
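    (For reference, not from the post.) The JUnit 4 runner only discovers public, non-static, zero-argument methods annotated with @Test inside a public class with a public no-argument constructor; a minimal example it should be able to find:

        import static org.junit.Assert.assertTrue;

        import org.junit.Test;

        public class SanityTest {

            // A public, no-argument method annotated with @Test is all the JUnit 4 runner needs.
            @Test
            public void runnerFindsThisTest() {
                assertTrue(true);
            }
        }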


  • How to find JUnit tests that are using a given Java method directly or indirectly

    - by IT-Worx
    Assume there are a Java project and a JUnit project in an Eclipse workspace, and all the unit tests are located in the JUnit project and depend on the application's Java project. When making changes to a Java method, I need to find the unit tests that use the method directly or indirectly, so that I can run the corresponding tests locally on my PC before checking in to source control. I don't want to run the entire JUnit project since it takes time. I could use Eclipse's call hierarchy to expand caller methods one by one until I find a test method, but for a project with more than 1 million lines of source code, digging down the call hierarchy takes time too. The search scope within the call hierarchy view doesn't seem to help much. I appreciate any help.
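    (Not an answer to the tooling question, just an illustration.) Once the affected tests have been identified, a JUnit 4 suite is one way to run just that subset locally; a sketch with hypothetical test classes:

        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // Groups only the tests known to exercise the changed method, so just these run locally.
        @RunWith(Suite.class)
        @Suite.SuiteClasses({
                OrderServiceTest.class,   // hypothetical
                InvoiceServiceTest.class  // hypothetical
        })
        public class ChangedMethodSuite {
        }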


  • Dev Environment Tests Not 100% Compatible with Staging/Production in Rails

    - by aronchick
    We use a bunch of specific apps/APIs that (unfortunately) differ quite a bit from dev to staging/production. We use tests and continuous integration at each stage, but in dev the tests fail annoyingly (throwing dialogs, etc.; thanks, Windows, for the 64-bit notification!). I hate to write custom code, but are there some best practices for how to allow a subset of testing in Ruby/Rails, or for patching out specific tests when you're running on Windows? Some specific situations: Identify.exe does not support 64-bit Windows and throws a dialog. Sethostname is not supported, and throws an error (at least it's command-line).


  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends:

        Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test.
        Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test.

    The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb:

        smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF INFORMATION SECTION ===
        Model Family:     Seagate Barracuda 7200.8 family
        Device Model:     ST3250823AS
        Serial Number:    3ND1GNBC
        Firmware Version: 3.03
        User Capacity:    250,059,350,016 bytes
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   7
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Sun Apr 25 13:15:34 2010 EDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      (   0) The previous self-test routine completed without error
                                                or no self-test has ever been run.
        Total time to complete Offline
        data collection:                 ( 430) seconds.
        Offline data collection
        capabilities:                    (0x5b) SMART execute Offline immediate.
                                                Auto Offline data collection on/off support.
                                                Suspend Offline collection upon new command.
                                                Offline surface scan supported.
                                                Self-test supported.
                                                No Conveyance Self-test supported.
                                                Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine
        recommended polling time:        (   1) minutes.
        Extended self-test routine
        recommended polling time:        (  84) minutes.

        SMART Attributes Data Structure revision number: 10
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f   047   039   006    Pre-fail  Always       -       168450357
          3 Spin_Up_Time            0x0003   098   098   000    Pre-fail  Always       -       0
          4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       33
          5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       9
          7 Seek_Error_Rate         0x000f   087   060   030    Pre-fail  Always       -       654745480
          9 Power_On_Hours          0x0032   055   055   000    Old_age   Always       -       40141
         10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
         12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       51
        194 Temperature_Celsius     0x0022   037   062   000    Old_age   Always       -       37 (0 17 0 0)
        195 Hardware_ECC_Recovered  0x001a   047   039   000    Old_age   Always       -       168450357
        197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
        198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
        199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
        200 Multi_Zone_Error_Rate   0x0000   100   253   000    Old_age   Offline      -       0
        202 TA_Increase_Count       0x0032   100   253   000    Old_age   Always       -       0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                   Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Short offline       Completed without error        00%      40131        -
        # 2  Extended offline    Completed: read failure        30%      40129        379795511
        # 3  Short offline       Completed without error        00%      40084        -
        # 4  Short offline       Completed without error        00%      40060        -
        # 5  Short offline       Completed without error        00%      40036        -
        # 6  Short offline       Completed without error        00%      40013        -
        # 7  Short offline       Completed without error        00%      39990        -
        # 8  Extended offline    Completed without error        00%      39977        -
        # 9  Short offline       Completed without error        00%      39919        -
        #10  Short offline       Completed without error        00%      39895        -
        #11  Short offline       Completed without error        00%      39872        -
        #12  Short offline       Completed without error        00%      39848        -
        #13  Short offline       Completed without error        00%      39824        -
        #14  Short offline       Completed without error        00%      39801        -
        #15  Extended offline    Completed without error        00%      39789        -
        #16  Short offline       Completed without error        00%      39754        -
        #17  Short offline       Completed without error        00%      39732        -
        #18  Short offline       Completed without error        00%      39707        -
        #19  Short offline       Completed without error        00%      39683        -
        #20  Short offline       Completed without error        00%      39660        -
        #21  Short offline       Completed without error        00%      39636        -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.

    Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?


  • Intel site says VT-x is supported on my CPU, but tests say otherwise

    - by Anshul
    I have a laptop with an Intel 2nd Gen. i7-2729QM (http://ark.intel.com/products/50067/) and the Intel site says that it supports VT-x, but I've downloaded the Intel Processor Identification Utility (http://www.intel.com/support/processors/tools/piu/sb/CS-014921.htm) and the test says that my CPU does not have VT-x. There is no option in my BIOS that allows me to enable/disable VT-x. I researched my laptop model (SAGER NP8170) and most forums say that it's enabled by default and there's no option in the BIOS. So assuming that's true, what gives? I also downloaded another tool called SecurAble from GRC and it also says that my CPU doesn't support VT-x. VirtualBox also says that my CPU does not support virtualization. My mind is boggled by why it says on the Intel site that my CPU supports VT-x but all other tests show otherwise. Anyone know what's going on? Thanks in advance.


  • Host forwarding fails, server is up, domain name tests ambiguous

    - by jayunit100
    I have a domain name registered with http://www.registryrocket.com/. The "main" site, which is called rudolfcode.net, is registered under GoDaddy and forwards to a Heroku site (rudolfcode.herokuapp.com). I have found that the main site, rudolfcode.net, works, but the HostGator forwarding has stopped working (Firefox simply fails when you point to http://www.rudolflabs.com, which is the domain name registered by HostGator). How can I debug this issue? Finally, I have tried to run some DNS tests, and here are the results: I'm not sure what the failures mean, but I'm pretty sure that "Conecting to WWW Home Page" failing is a pretty bad sign! Thanks.

