Search Results

Search found 4935 results on 198 pages for 'organizational unit'.


  • PHPUnit - multiple stubs of same class

    - by keithjgrant
    I'm building unit tests for class Foo, and I'm fairly new to unit testing. A key component of my class is an instance of BarCollection, which contains a number of Bar objects. One method in Foo iterates through the collection and calls a couple of methods on each Bar object in the collection. I want to use stub objects to generate a series of responses for my test class. How do I make the Bar stub class return different values as I iterate? I'm trying to do something along these lines:

        $stubs = array();
        foreach ($array as $value) {
            $barStub->expects($this->any())
                    ->method('GetValue')
                    ->will($this->returnValue($value));
            $stubs[] = $barStub;
        }
        // populate stubs into `Foo`
        // assert results from `Foo->someMethod()`

    So Foo->someMethod() will produce data based on the results it receives from the Bar objects. But this gives me the following error whenever the array is longer than one:

        There was 1 failure:
        1) testMyTest(FooTest) with data set #2 (array(0.5, 0.5))
        Expectation failed for method name is equal to <string:GetValue> when invoked zero or more times.
        Mocked method does not exist.
        /usr/share/php/PHPUnit/Framework/MockObject/Mock.php(193) : eval()'d code:25

    One thought I had was to use ->will($this->returnCallback()) to invoke a callback method, but I don't know how to indicate to the callback which Bar object is making the call (and consequently what response to give). Another idea is to use the onConsecutiveCalls() method, or something like it, to tell my stub to return 1 the first time, 2 the second time, etc., but I'm not sure exactly how to do this. I'm also concerned that if my class ever does anything other than ordered iteration on the collection, I won't have a way to test it.
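
    A likely cause of that error (a guess from the snippet, not a verified diagnosis): the loop reconfigures a single $barStub rather than creating a fresh stub per element. A minimal sketch of the per-value approach, assuming a mockable Bar class and a PHPUnit test case of that era:

        // Inside the test method; each element gets its own stub with a
        // fixed return value, so iteration order no longer matters.
        $stubs = array();
        foreach ($array as $value) {
            $barStub = $this->getMock('Bar');   // fresh stub per value
            $barStub->expects($this->any())
                    ->method('GetValue')
                    ->will($this->returnValue($value));
            $stubs[] = $barStub;
        }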

    Read the article

  • simpletest - Why does setReturnValue() seem to change behaviour depending on whether the test is run in isolation?

    - by JW
    I am using SimpleTest version 1.0.1 for a unit test. I create a new mock object within a test method and on it I do:

        $MockDbAdaptor->setReturnValue('query', 1);

    Now, when I run this in a standalone unit test, my tested object is happy to see 1 returned when query() is called on the mock db adaptor. However, when this exact same test is run as part of my 'all_tests' TestSuite, the test fails. This happens because a call to the mock's query() method does not appear to return any value, thus causing my test subject to complain and trigger an unexpected exception that fails the test. So the behaviour of setReturnValue() seems to change depending on whether the test is run in isolation or not. I can get it to work in both standalone and TestSuite contexts by using this instead:

        $MockDbAdaptor->setReturnValueAt(0, 'query', 1);

    So my immediate problem can be fixed, but it feels like a hack. If I create a new mock within a test method, why is the setReturnValue() behaviour affected by the context in which the test class instance is run? It feels like a bug.

    Read the article

  • Convert a Unit Vector to a Quaternion

    - by Hmm
    So I'm very new to quaternions, but I understand the basics of how to manipulate stuff with them. What I'm currently trying to do is compare a known quaternion to two absolute points in space. I'm hoping I can simply convert the points into a second quaternion, giving me an easy way to compare the two. What I've done so far is to turn the two points into a unit vector. From there, I was hoping I could directly plug the i, j, k components into the imaginary portion of the quaternion, with a scalar of zero. From there I could multiply one quaternion by the other's conjugate, resulting in a third quaternion. This third quaternion could be converted into an axis-angle, giving me the degree by which the original two quaternions differ. Is this thought process correct? So it should just be [0 i j k]. I may need to normalize the quaternion afterwards, but I'm not sure about that. I have a bad feeling that it's not a direct mapping from a vector to a quaternion. I tried looking at converting the unit vector to an axis-angle, but I'm not sure this would work, since I don't know what angle to give as an input.
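
    For what it's worth, a sketch of the algebra behind that embedding (assuming unit input vectors): a vector v = (vx, vy, vz) maps to the pure quaternion

        q = 0 + vx*i + vy*j + vz*k

    which already has unit norm when v does, so no extra normalization is needed. For two such pure unit quaternions p = (0, u) and q = (0, v), the product with the conjugate works out to

        p * conj(q) = (0, u)(0, -v) = (u . v, -(u x v))

    so the scalar part is cos(theta), the angle between the two directions, and the vector part gives the rotation axis. Note, though, that a pure quaternion is not itself a rotation, so comparing it directly against a rotation quaternion is a different operation from comparing two directions.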

    Read the article

  • Visual Studio 2010 Professional - Problem Unit-Testing Web Services

    - by Ben
    Have created a very simple Web Service (asmx) in Visual Studio 2010 Professional, and am trying to use the auto-generated unit test cases. I get something that seems quite familiar on this site:

        The web site could not be configured correctly; getting ASP.NET process information failed.
        Requesting 'http://localhost:81/zfp/VSEnterpriseHelper.axd' returned an error:
        The remote server returned an error: (500) Internal Server Error.

    (See http://stackoverflow.com/questions/260432/500-error-running-visual-studio-asp-net-unit-test.) I have tried:

    1. Running the tests on IIS rather than the ASP.NET Development Server
    2. Adding and then removing the XML fragment in my Web Service's .config file
    3. Giving the MACHINE\ASPNET account Full Control over the local folder

    My current questions:

    1. Why am I being bothered with this instrumentation / code coverage DLL, when this doesn't seem to be something that ships with Visual Studio 2010 Professional? Is there any way I can turn it off?
    2. I'm placing the node under in Web.config - is that the correct node?
    3. Is it possible to bind to a web service without using the webby test attributes? I've seen other people advise making the Web Service as lightweight as possible. I'm trying to call it with jQuery / AJAX / JSON, so being able to debug the actual web service would be really helpful.

    Best wishes, Ben

    Read the article

  • DSL to generate test data

    - by queen3
    There are several ways to generate data for tests (not only unit tests), for example, Object Mother, builders, etc. Another useful approach is to write test data as plain text:

        product: Main; prices: 145, 255; Expire: 10-Apr-2011; qty: 2; includes: Sub
        product: Sub; prices: 145, 255; Expire: 10-Apr-2011; qty: 2

    and then parse it into C# objects. This is easy to use in unit tests (because deeply nested collections can be written on a single line), and it is even more convenient in a FitNesse-like system (because this DSL naturally fits into a wiki), and so on. So I use this approach and write a parser, but it's tedious to write one each time. I'm not a big expert in DSLs/language parsers, but I think they can help here. What would be the right one to use? I have only heard about:

    - DSL (I mean, any DSL)
    - Boo (which I think can do DSLs)
    - ANTLR

    but I don't even know which one to pick or where to start. So the question: is it reasonable to use some kind of DSL to generate test data? What would you suggest? Are there any existing cases?
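
    For a format this regular, even a hand-rolled parser stays small. A minimal C# sketch (the Product shape is hypothetical, and the nested 'includes' relation is left out):

        using System;
        using System.Globalization;
        using System.Linq;

        class Product
        {
            public string Name;
            public decimal[] Prices;
            public DateTime Expire;
            public int Qty;
        }

        static class TestData
        {
            // Parses one "key: value; key: value; ..." line into a Product.
            public static Product Parse(string line)
            {
                var p = new Product();
                foreach (var field in line.Split(';'))
                {
                    var kv = field.Split(new[] { ':' }, 2);
                    if (kv.Length < 2) continue;           // skip malformed fields
                    string key = kv[0].Trim().ToLowerInvariant(), val = kv[1].Trim();
                    switch (key)
                    {
                        case "product": p.Name = val; break;
                        case "prices":
                            p.Prices = val.Split(',')
                                          .Select(s => decimal.Parse(s.Trim())).ToArray();
                            break;
                        case "expire":
                            p.Expire = DateTime.ParseExact(val, "dd-MMM-yyyy",
                                                           CultureInfo.InvariantCulture);
                            break;
                        case "qty": p.Qty = int.Parse(val); break;
                    }
                }
                return p;
            }
        }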

    Read the article

  • Automatic testing of GUI-related private methods

    - by Stein G. Strindhaug
    When it comes to GUI programming (at least for the web), I feel that often the only things that would be useful to unit test are some of the private methods*. While unit testing makes perfect sense for back-end code, I feel it doesn't quite fit the GUI classes. What is the best way to add automatic testing of these?

    * Why I think the only methods useful to test are private: often when I write GUI classes they don't even have any public methods except for the constructor. The public methods, if any, are trivial, and the constructor does most of the job by calling private methods. They receive some data from the server, do a lot of trivial output, feed data to the constructors of other classes contained inside, and add listeners that (more or less directly) call the server... Most of it is pretty trivial (the hardest part is the layout: CSS, IE, etc.), but sometimes I create a private method that does some advanced tricks, which I definitely do not want to be publicly visible (because it's closely coupled to the implementation of the layout, and likely to change), but which is sufficiently complicated to break. These are often called only by the constructor or repeatedly by events in the code, not by any public methods at all. I'd like to have a way to test this type of method, without making it public or resorting to reflection trickery. (BTW: I'm currently using GWT, but I feel this applies to most languages/frameworks I've used when coding for GUIs)
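
    One common compromise, shown here as a plain-Java sketch with hypothetical names (not a GWT-specific technique): pull the tricky computation into a package-private helper, which a test class in the same package can call while the rest of the application cannot.

        public class PriceWidget {
            private final String label;

            public PriceWidget(double amount) {
                this.label = formatPrice(amount);   // constructor does the work
            }

            public String getLabel() { return label; }

            // Package-private rather than private: invisible outside the
            // package, but directly callable from a test in the same package.
            static String formatPrice(double amount) {
                return amount < 0 ? "(" + (-amount) + ")" : String.valueOf(amount);
            }
        }

        // In a test class in the same package:
        //     assertEquals("(5.0)", PriceWidget.formatPrice(-5.0));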

    Read the article

  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy:

        MyApp/
            __init__.py
            Application/
                __init__.py
                Module1
                Module2
                ...
            Domain/
                __init__.py
                Module1
                Module2
                ...
            UI/
                __init__.py
                Module1
                Module2
                ...

    I want to be able to do the following:

    1. Run test code in a module's "if main" when the module imports from other modules in the same directory.
    2. Have one or more test code modules in each subpackage that run unit tests on the modules in the subpackage.
    3. Have a set of unit tests that reside someplace reasonable but outside the subpackages - either in a sibling package, at the top-level package, or outside the top-level package (though all these might end up doing is running the tests in each subpackage).
    4. "Enter" the structure from any of the three subpackage levels, e.g. run code that just uses Domain modules; run code that just uses Application modules, where Application uses code from both Application and Domain modules; and run code from the UI that uses code from both UI and Application. For instance, Application test code would import Application modules but not Domain modules.
    5. After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy.

    I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring - in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing, but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have been having adapting to the new relative import mechanisms.
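
    For the "if main" case specifically, one pattern that works without sys.path edits (a sketch, assuming the layout above; module2 and helper() are hypothetical stand-ins): keep the relative imports and launch the module as a package member with -m.

        # MyApp/Domain/module1.py
        from . import module2            # sibling module in MyApp.Domain

        def compute():
            return module2.helper() + 1

        if __name__ == "__main__":
            # Relative imports only resolve when this file runs as part of
            # the package, i.e. from the directory *containing* MyApp:
            #     python -m MyApp.Domain.module1
            print(compute())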

    Read the article

  • Using ZLib unit to compress files vs using ZipForge

    - by user193655
    There are many questions on zipping in Delphi; anyway, this is not a duplicate. I am using ZipForge for zip/unzip capability in my application. Currently I use 2 features of ZipForge:

    1. zip and unzip (!)
    2. password-protect the archives

    Now I am removing the password from all the archives, so I only need to zip and unzip files. I zip them just to minimize bandwidth when uploading/downloading files from the server. So my idea is to process all files once, unzipping them (with the password) and rezipping them without a password. I have nothing against ZipForge; anyway, it is an extra component - every time I upgrade to the newest Delphi version I have to wait for the new IDE support, and moreover, the more components, the more problems during installation. So since what I do is very simple, I'd like to replace ZipForge with 2 simple functions using the ZLib unit. I found (and tested) the functions here on Torry's. What do you think of using the ZLib unit? Do you see any potential problem that I would not have with ZipForge? Can you comment on speed?
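
    For reference, compression with the stock ZLib unit can be as short as this sketch (assuming a Delphi version whose ZLib unit provides the classic TCompressionStream; note it writes a raw zlib/deflate stream, not a .zip archive, so files compressed with ZipForge still need one last pass through ZipForge before switching formats):

        uses Classes, ZLib;

        procedure CompressFile(const Src, Dst: string);
        var
          InFile, OutFile: TFileStream;
          Z: TCompressionStream;
        begin
          InFile := TFileStream.Create(Src, fmOpenRead);
          OutFile := TFileStream.Create(Dst, fmCreate);
          Z := TCompressionStream.Create(clDefault, OutFile);
          try
            Z.CopyFrom(InFile, 0);  { 0 = copy the entire source stream }
          finally
            Z.Free;
            OutFile.Free;
            InFile.Free;
          end;
        end;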

    Read the article

  • Maven Cobertura: unit test fails but build succeeds

    - by Pavel Drobushevich
    Hi all, I've configured Cobertura code coverage in my pom:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>cobertura-maven-plugin</artifactId>
          <version>2.4</version>
          <configuration>
            <instrumentation>
              <excludes>
                <exclude>**/*Exception.class</exclude>
              </excludes>
            </instrumentation>
            <formats>
              <format>xml</format>
              <format>html</format>
            </formats>
          </configuration>
        </plugin>

    and I start the tests with the following command:

        mvn clean cobertura:cobertura

    But if one of the unit tests fails, Cobertura only logs this information and doesn't mark the build as failed:

        Tests run: 287, Failures: 1, Errors: 1, Skipped: 0
        Flushing results...
        Flushing results done
        Cobertura: Loaded information on 139 classes.
        Cobertura: Saved information on 139 classes.
        [ERROR] There are test failures.
        .................................
        [INFO] BUILD SUCCESS

    How do I configure things so that the build fails when a unit test fails? Thanks in advance.
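
    If memory serves (treat this as an assumption, not verified against the plugin's source), the cobertura:cobertura goal forks its own lifecycle with test failures ignored so the coverage report can still be produced. A common workaround is to let the regular test phase run, and fail, before the coverage goal:

        # unit-test failures break the build in the test phase,
        # before cobertura:cobertura ever runs
        mvn clean test cobertura:cobertura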

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so it definitely is being loaded. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, since the caller shares its memory space, whereas communication between executables is slower. Unfortunately, our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project does rely on code written in another library, which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and the projects that use the SDK be compiled to executables?

    Read the article

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails:

        require 'test_helper'

        class UserTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end
        end

    This results in the following error:

        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
          from <internal:lib/rubygems/custom_require>:29:in `require'
          from user_test.rb:1:in `<main>'

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    The Action Pack gems appear to be properly installed and up to date:

        actionmailer (3.0.3, 2.3.5)
        actionpack (3.0.3, 2.3.5)
        activemodel (3.0.3)
        activerecord (3.0.3, 2.3.5)
        activeresource (3.0.3, 2.3.5)
        activesupport (3.0.3, 2.3.5)

    Ruby is at 1.9.2p0 and Rails is at 3.0.3. A sample dump of my test directory is as follows:

        fixtures/
        functional/
        integration/
        performance/
        unit/
          helpers/
            user_helper_test.rb
          user_test.rb
        test_helper.rb

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment.

    Edit: Xavier Holt's suggestion, explicitly specifying the path to the test_helper, worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above):

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    But as you can see above, Action Pack is all installed and up to date.
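
    Both errors are consistent with the test being launched as a bare script, so nothing puts the test/ directory (or the Rails environment) on the load path. A sketch of the usual invocations, assuming a standard Rails 3 layout:

        # from the application root: put test/ on the load path so that
        # require 'test_helper' (which boots ActiveSupport) resolves
        ruby -Itest test/unit/user_test.rb

        # or let rake wire up the environment for the whole suite
        rake test:units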

    Read the article

  • Seeking suggestions on redesigning the interface

    - by ratkok
    As part of maintaining a large piece of legacy code, we need to change part of the design, mainly to make it more testable (unit testing). One of the issues we need to resolve is the existing interface between components. The interface between two components is a class that contains static methods only. Simplified example:

        class ABInterface {
            static void methodA();
            static void methodB();
            ...
            static void methodZ();
        };

    The interface is used by component A so that different methods can use ABInterface::methodA() in order to prepare some input data and then invoke appropriate functions within component B. Now we are trying to redesign this interface for various reasons:

    - Extending our unit test coverage - we need to resolve this dependency between the components, and stubs/mocks are to be introduced.
    - The interface between these components has diverged from the original design (i.e. a lot of newer functions, used for the inter-component interface, were created outside this interface class).
    - The code is old, has changed a lot over time, and needs to be refactored.

    The change should not be disruptive for the rest of the system. We try to limit leaving too many test-required artifacts in the production code. Performance is very important and there should be no (or very minimal) degradation after the redesign. The code is OO in C++. I am looking for ideas on what approach to take. Any suggestions on how to do this efficiently?
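
    One direction worth sketching (an assumption about fit, not a drop-in design): replace the static facade with an abstract interface that component A receives by reference, so tests can hand it a stub. The cost is one indirect call per invocation, usually negligible next to the work the methods do.

        #include <iostream>

        // Abstract seam between components A and B (sketch).
        class ABInterface {
        public:
            virtual ~ABInterface() = default;
            virtual void methodA() = 0;
        };

        // Production implementation forwards into component B.
        class RealAB : public ABInterface {
        public:
            void methodA() override { /* call into component B */ }
        };

        // Test stub records the interaction instead of touching B.
        class StubAB : public ABInterface {
        public:
            bool called = false;
            void methodA() override { called = true; }
        };

        void componentACode(ABInterface& ab) { ab.methodA(); }

        int main() {
            StubAB stub;
            componentACode(stub);              // unit test wiring
            std::cout << stub.called << '\n';  // prints 1
        }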

    Read the article

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads:

        Calls the message handler with the fatal message msg. If no message
        handler has been installed, the message is printed to stderr. Under
        Windows, the message is sent to the debugger.

        If you are using the default message handler this function will abort
        on Unix systems to create a core dump. On Windows, for debug builds,
        this function will report a _CRT_ERROR enabling you to connect a
        debugger to the application.

        ...

        To suppress the output at runtime, install your own message handler
        with qInstallMsgHandler().

    So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?
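
    If I read Qt's behavior right (an assumption worth verifying against your Qt version): the abort happens inside qFatal() after the handler returns, so installing a handler changes the message but not the abort. One sketch of a workaround is to never return from the fatal branch, e.g. by throwing, which Google Test then reports as a failed test:

        // Inside testMessageOutput(), replace the QtFatalMsg branch:
        #include <stdexcept>   // at the top of main.cpp

            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                throw std::runtime_error(msg);  // don't return, or Qt aborts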

    Read the article

  • "Method used like a type" error in a unit test

    - by Josepth Vodary
    I am trying to unit test a simple factory, but the compiler keeps telling me that I am trying to use a method like a type. My unit test:

        using System;
        using System.Text;
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Home;

        namespace HomeTest
        {
            [TestClass]
            public class TestFactory
            {
                [TestMethod]
                public void DoTestFactory()
                {
                    InventoryType.InventorySelect select = new InventoryType.InventorySelect();
                    select.inventoryTypes.Add("cds");
                    Home.Services.Factory.CreateInventory get = new Home.Services.Factory.CreateInventory();
                    get.InventoryImpl();
                    if (select.Validate() == true)
                        Console.WriteLine("Test Passed");
                    else if (select.Validate() == false)
                        Console.WriteLine("Test Returned False");
                    else
                        Console.WriteLine("Test Failed To Run");
                    Console.ReadLine();
                }
            }
        }

    My factory:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace Home.Services
        {
            public class Factory
            {
                public InventorySvc CreateInventory()
                {
                    return new InventoryImpl();
                }
            }
        }
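
    The error points at the new Home.Services.Factory.CreateInventory() line: CreateInventory is a method on Factory, not a nested type, so it cannot follow new. A sketch of the presumable intent, reusing the question's own types:

        // construct the factory, then call the method on the instance
        Home.Services.Factory factory = new Home.Services.Factory();
        InventorySvc inventory = factory.CreateInventory();  // returns an InventoryImpl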

    Read the article

  • Ability to switch Persistence Unit dynamically within the application (JPA)

    - by MVK
    My application's data access layer is built using Spring and EclipseLink, and I am currently trying to implement the following feature: the ability to switch the current/active persistence unit dynamically for a user. I tried various options and finally ended up doing the following. In persistence.xml, declare multiple PUs. Create a class with as many EntityManagerFactory attributes as there are PUs defined. This will act as a factory and return the appropriate EntityManager based on my logic:

        public class MyEntityManagerFactory {
            @PersistenceUnit(unitName="PU_1")
            private EntityManagerFactory emf1;
            @PersistenceUnit(unitName="PU_2")
            private EntityManagerFactory emf2;

            public EntityManager getEntityManager(int releaseId) {
                // Logic goes here to return the appropriate entityManager
            }
        }

    My spring-beans XML looks like this:

        <!-- First persistence unit -->
        <bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="emFactory1">
            <property name="persistenceUnitName" value="PU_1" />
        </bean>
        <bean class="org.springframework.orm.jpa.JpaTransactionManager" id="transactionManager1">
            <property name="entityManagerFactory" ref="emFactory1"/>
        </bean>
        <tx:annotation-driven transaction-manager="transactionManager1"/>

    The above section is repeated for the second PU (with names like emFactory2, transactionManager2, etc.). I am a JPA newbie and I know that this is not the best solution. I appreciate any assistance in implementing this requirement in a better/more elegant way! Thanks!

    Read the article

  • What's a good way to do testing a plug-in on multiple Windows and Outlook versions?

    - by Andrei
    Hello, we're building a plug-in for Outlook that should work on multiple Windows versions (XP, Vista, 7) and also with different Outlook versions (2003, 2007, 2010). The testing problem I am facing right now is that I can't figure out a good/convenient/thorough way to test the application on multiple Windows and Outlook versions. At the moment, I have a VirtualBox host which runs many virtual machines with different Windows and Outlook versions. So I would have one virtual machine with Windows 7 testing Outlook 2010, another with Windows 7 testing Outlook 2007, one with Windows Vista and Outlook 2010, and so on, going through some of the possible combinations. It kind of gets the job done, although it is cumbersome and takes a long time to test. Some of the testing included in the application is unit testing, but this is also rather tied to the machine I test it on (Windows 7 with Outlook 2010). For example, I was using ManagementObject recently, which worked fine on my system (and thus passed the unit test for that method); however, using that object threw an exception on another person's system, which crashed the application. I work in Visual Studio 2010 Ultimate. The questions: Is there a more elegant way to make the testing process more streamlined and more efficient? Any other testing methods you recommend? How would you deal with this problem? Thanks! Looking forward to your replies.

    Read the article

  • How can I split abstract testcases in JUnit?

    - by Willi Schönborn
    I have an abstract testcase "AbstractATest" for an interface "A". It has several test methods (@Test) and one abstract method:

        protected abstract A unit();

    which provides the unit under testing. Now I have multiple implementations of "A", e.g. "DefaultA", "ConcurrentA", etc. My problem: the testcase is huge (~1500 loc) and it's growing, so I wanted to split it into multiple testcases. How can I organize/structure this in JUnit 4 without needing a concrete testcase for every implementation/abstract-testcase pair? I want, e.g., "AInitializeTest", "AExecuteTest" and "AStopTest", each being abstract and containing multiple tests. But for my concrete "ConcurrentA", I only want to have one concrete testcase, "ConcurrentATest". I hope my "problem" is clear.

    EDIT: Looks like my description was not that clear. Is it possible to pass a reference to a test? I know parameterized tests, but these require static methods, which is not applicable to my setup. Subclasses of an abstract testcase decide about the parameter.
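
    A sketch of one arrangement (assuming the abstract classes AInitializeTest and AExecuteTest from the question exist): it doesn't remove the per-pair subclass entirely, but shrinks each one to a one-line inner class so each implementation still ships as a single testcase file.

        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // One file per implementation; the per-topic subclasses become
        // inner classes instead of separate top-level testcases.
        @RunWith(Suite.class)
        @Suite.SuiteClasses({
            ConcurrentATest.Init.class,
            ConcurrentATest.Execute.class
        })
        public class ConcurrentATest {
            public static class Init extends AInitializeTest {
                @Override protected A unit() { return new ConcurrentA(); }
            }
            public static class Execute extends AExecuteTest {
                @Override protected A unit() { return new ConcurrentA(); }
            }
        }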

    Read the article

  • Null exception when filling a query string with a mocking framework

    - by user564101
    There is a simple controller whose constructor reads a query string:

        public class ProductController : Controller
        {
            private string productName;

            public ProductController()
            {
                productName = Request.QueryString["productname"];
            }

            public ActionResult Index()
            {
                ViewData["Message"] = productName;
                return View();
            }
        }

    I also have a function in a unit test that creates an instance of this controller, and I fill the query string with a Mock object like below:

        [TestClass]
        public class ProductControllerTest
        {
            [TestMethod]
            public void test()
            {
                // Arrange
                var querystring = new System.Collections.Specialized.NameValueCollection
                {
                    { "productname", "sampleproduct" }
                };
                var mock = new Mock<ControllerContext>();
                mock.SetupGet(p => p.HttpContext.Request.QueryString).Returns(querystring);
                var controller = new ProductController();
                controller.ControllerContext = mock.Object;

                // Act
                var result = controller.Index() as ViewResult;

                // Assert
                Assert.AreEqual("Index", result.ViewName);
            }
        }

    Unfortunately, Request.QueryString["productname"] is null in the constructor of ProductController when I run the unit test. Is there any way to fill a query string via mocking and read it in the constructor of a controller?
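
    A likely explanation, consistent with the test above: ControllerContext is assigned only after new ProductController() returns, so Request is still null while the constructor runs. A sketch that reads the value lazily instead of in the constructor:

        public class ProductController : Controller
        {
            // evaluated per use, after the test has attached its
            // mocked ControllerContext
            private string ProductName
            {
                get { return Request.QueryString["productname"]; }
            }

            public ActionResult Index()
            {
                ViewData["Message"] = ProductName;
                return View();
            }
        }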

    Read the article

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program? What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well, as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?

    An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids, and then iterates over the records and writes them as a string to an outfile. To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation. What I would do in order to test the "real" application: make the http request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was treated as more of an anti-pattern.

    Read the article

  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch, so that if a texture is used twice, it will be bound to a different TIU for the second sampler which requires it. Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example:

    Fragment shader:

        ...
        uniform sampler2D samp1;
        uniform sampler2D samp2;

        void main() { ... }

    Main program:

        ...
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex_id);
        glUniform1i(samp1_location, 0);
        glUniform1i(samp2_location, 0);
        ...

    I don't see any reason why this shouldn't work, but what if the shader program also included a vertex shader like this?

    Vertex shader:

        ...
        uniform sampler2D samp1;

        void main() { ... }

    In this case, OpenGL is supposed to treat both instances of samp1 as the same variable and expose a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, but this would seem to contradict that. In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected:

    1. 1 TIU used for 2 sampler uniforms in the same shader stage
    2. 1 TIU used for 1 sampler uniform which is used in 2 shader stages
    3. 1 TIU used for 2 sampler uniforms, each occurring in a different stage

    However, I don't know if this is behavior that I should expect on all hardware/drivers, or if there are performance implications.

    Read the article

  • Single hardware unit to protect web servers and implement smart publishing

    - by Maxim V. Pavlov
    Thus far we've been using the combination of Forefront TMG 2010 as an edge firewall + intrusion prevention system + web site publishing mechanism in the data center to work with a few web server machines. Since we develop on ASP.NET, we are an IIS and, in general, Microsoft crowd. Since TMG is being deprecated, we need to come up with a hardware alternative to protect and serve our data center web cloud. Could you please advise a hardware or virtual appliance solution that can provide routing, flood prevention and smart web-site publishing (one IP - many web sites, based on a domain name filter) all in one? Even if it is hard to configure, as long as it covers all these features, we will invest in learning it and eventually replace TMG.

    Read the article

  • Is my Power Supply Unit fried?

    - by Rob
    So my laptop just started smoking where I plug my charger in. I run a netbook, the Acer Aspire One. My charger won't charge it: when I plug the charger into the netbook, the charging light that indicates the netbook is receiving power turns off. Is there anything I can do to troubleshoot where the problem is, and what should I do now? Is just my charger fried, and should I replace it? Or could there also be a problem with the netbook itself, and do I need to look for other problems?

    Read the article

  • what does the 'm' unit in munin mean?

    - by nbv4
    I'm using munin as a tool for monitoring my servers. On some of the graphs, the units are marked with an 'm'. For instance, my Apache accesses graph is labeled 100m, 200m, 300m along the y-axis. What does the 'm' mean? I understand 'M' (caps) is mega, as in megabytes; 'k' is kilo; 'G' is giga; but what about 'm'? At first I thought it was million, but there's no way Apache is serving 100 million accesses, even per decade.

    Read the article

