Search Results

Search found 26146 results on 1046 pages for 'white box testing'.

Page 108/1046

  • Best practice to test a web application, regarding domain name and integration with external service

    - by ycseattle
    I have run into these problems several times and have never found a comfortable solution. Let's say my website has the domain name MyDomain.com. When I run the tests on the test machine (a continuous integration server), I modify the HOSTS file on that machine so that MyDomain.com is mapped to the local machine instead of the real production server. This doesn't work well in many situations. For example, my application creates subdomain names like user1.MyDomain.com dynamically, which makes it difficult to keep the testing flexible. Another problem is that my web application interacts with Amazon S3, and sometimes with other services like Amazon Simple Queue Service. I do want to include these interactions in my tests, but I have never been happy with my approach to mixing testing and production on the Amazon services. Could somebody offer some tips on these issues? I would like to make my testing framework clean and flexible. I am sure this is a common question for all web applications and there must be a mature way to deal with it. Thanks!
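
    To make the subdomain pain point concrete, here is a minimal sketch of the HOSTS-file approach described above (the domain is the question's placeholder, not a real host). Plain hosts entries cannot express wildcards, so every dynamically created subdomain would need its own line, which is exactly why this setup becomes inflexible:

        # /etc/hosts (or %SystemRoot%\System32\drivers\etc\hosts) on the CI machine
        127.0.0.1    MyDomain.com
        127.0.0.1    user1.MyDomain.com    # each dynamic subdomain needs an explicit entry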

    Read the article

  • How do I implement / build / create an 'in-memory database' for my unit tests?

    - by Michel
    Hi all, I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer and so hit the database every time. So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it? The main question is: I think I have to fake the database layer, but doesn't that mean I end up creating a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests :) At the end of this question I'll give an explanation of the situation I got into on the project I just started on, and I was wondering if this was the way to go. Michel. The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using the Entity Framework, and this works very simply. And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist etc. the data. It seems weird that there is so much work in the fake layer and so little in the real layer. I hope this all makes sense :)
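
    For what it's worth, "in-memory database" in this context often just means a fake repository that keeps its rows in a List<T> rather than in SQL Server, so there is no second database engine to rebuild. A minimal sketch of that idea, with IUserRepository, User and the method names as illustrative assumptions rather than anything from the question:

        using System.Collections.Generic;
        using System.Linq;

        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface IUserRepository
        {
            void Add(User user);
            User GetById(int id);
        }

        // The "in-memory database": a fake repository backed by a List<T>,
        // registered with Unity for the unit tests instead of the EF-backed one.
        public class InMemoryUserRepository : IUserRepository
        {
            private readonly List<User> _users = new List<User>();

            public void Add(User user)
            {
                _users.Add(user);
            }

            public User GetById(int id)
            {
                return _users.FirstOrDefault(u => u.Id == id);
            }
        }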

    Read the article

  • Accessing Web.config directly in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in Web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HttpContext. From everything that I have read so far, the HttpContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file:

        private System.Configuration.Configuration WebConfig
        {
            get
            {
                ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
                // NullReferenceException occurs on this line.
                fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config");
                System.Configuration.Configuration config =
                    ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
                return config;
            }
        }

    Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve?
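
    Since HttpContext.Current is null outside a real request, one workaround is to let the test setup supply the physical path to the web project instead of asking HttpContext for it. A minimal sketch under that assumption (the webRoot parameter and TestConfig class are hypothetical and would be filled in by the NUnit setup):

        using System.Configuration;
        using System.IO;

        public static class TestConfig
        {
            public static Configuration OpenWebConfig(string webRoot)
            {
                // webRoot is the physical folder of the web project, supplied by the test
                // setup rather than resolved through HttpContext.Current.Server.MapPath().
                var fileMap = new ExeConfigurationFileMap();
                fileMap.ExeConfigFilename = Path.Combine(webRoot, "Web.config");
                return ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
            }
        }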

    Read the article

  • Code promotion: Enforcing the rules

    - by jbarker7
    So here is our problem: we have a small team of developers, each with their own way of doing things. I am trying to formalize a process in which we are required to promote our code in the following order: local sandbox, Dev, UAT, Staging, Live. Developers develop and test as they go on their own sandbox; Dev is its own box that we would use for continuous integration; UAT is another site in IIS on the dev box, which uses our dev database. We then promote to Staging, which is a site in IIS on the Live box using live data (just like Live, hence staging). Then, finally, we promote to Live. Here are a few of my questions: 1.) Does this seem to be best practice? If not, what needs to be done differently? 2.) How do I enforce the rules on the developers? Developers often skip steps in order to save time; this should not be tolerated, and it would be great if it could be physically enforced. 3.) How do I enforce these rules with the business group? The business group just wants to get features out FAST. Do we promote only on certain days? Thanks! Josh

    Read the article

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to: type one command to run the test suite for ANY project I find on GitHub; run autotest for ANY GitHub project so I can fork and make TESTABLE contributions; and build gems from the ground up with Autotest and Shoulda. For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: what is your, the SO reader and GitHub project committer, test environment setup using autotest, so that whenever you git clone a gem, you can run the tests and autotest-develop it if desired? What are the people who are writing the Paperclip tests and Authlogic tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads: "Calls the message handler with the fatal message msg. If no message handler has been installed, the message is printed to stderr. Under Windows, the message is sent to the debugger. If you are using the default message handler this function will abort on Unix systems to create a core dump. On Windows, for debug builds, this function will report a _CRT_ERROR enabling you to connect a debugger to the application. ... To suppress the output at runtime, install your own message handler with qInstallMsgHandler()." So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?
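
    For context on why the handler alone does not help: qFatal() still calls abort() after a custom handler returns, so logging the message cannot keep the test run alive. One commonly used workaround, shown here only as a sketch (throwing through Qt's C-style call path is not officially supported, so treat it as an assumption to verify), is to throw from the handler on fatal messages and let the test catch the exception:

        #include <cstdio>
        #include <stdexcept>
        #include <QtGlobal>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            if (type == QtFatalMsg) {
                // Escape before qFatal() reaches abort(); the test (or a fixture)
                // can catch std::runtime_error and report a normal failure instead.
                throw std::runtime_error(msg);
            }
            fprintf(stderr, "%s\n", msg);
        }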

    Read the article

  • Finding patterns of failure in a Unit Test

    - by Pekka
    I'm new to unit testing, and I'm only getting into the routine of building test suites. I have what is going to be a rather large project that I want to build tests for from the start. I'm trying to figure out general strategies and patterns for building test suites. When you look at a class, many tests come to you obviously due to the nature of the class. Say for a "user account" class with basic CRUD operations, related to a database table, we will want to test, well, the CRUD: creating an object and seeing whether it exists; querying its properties; changing some properties; changing some properties to incorrect values; and deleting it again. As for how to break things, there are "fail" tests common to most CRUD classes, like:

    - invalid input data types
    - a number as the ID key that exceeds the range of the chosen data type
    - input in an incorrect character encoding
    - input that is too long

    And so on and so on. For a unit test concerned with file operations, the list of "breaking things" could be:

    - invalid characters in the file name
    - file name too long
    - file name uses an incorrect protocol or path

    I'm pretty sure similar patterns, applicable beyond the unit test one is currently working on, can be found for most units that are being tested. Now my questions are: Am I correct in seeing such "breaking patterns"? Or am I getting something completely wrong about unit testing, and if I did it right, this wouldn't be an issue at all? Is unit testing as a process of finding as many ways to break the unit as possible the right way to go? If I am correct: Are there existing definitions, lists, or cheat sheets for such patterns? Are there any provisions (mainly in PHPUnit, as that's the framework I'm working in) to automate such patterns? Is there any assistance, in the form of checklists or software, to aid in writing complete tests?
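
    On the automation question, PHPUnit's data providers are one built-in way to reuse such a "breaking pattern" across tests: one provider method lists the bad inputs, and every test annotated with it runs once per input. A minimal sketch (UserAccount and its load() method are hypothetical stand-ins for the class under test):

        <?php
        class UserAccountTest extends PHPUnit_Framework_TestCase
        {
            // The reusable "breaking pattern": one row per invalid ID.
            public function invalidIdProvider()
            {
                return array(
                    'wrong type'   => array('not-a-number'),
                    'out of range' => array(PHP_INT_MAX),
                    'negative'     => array(-1),
                );
            }

            /**
             * @dataProvider invalidIdProvider
             */
            public function testLoadRejectsInvalidId($badId)
            {
                $this->setExpectedException('InvalidArgumentException');
                $account = new UserAccount(); // hypothetical class under test
                $account->load($badId);
            }
        }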

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run through all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It is currently reaching 8GB for the ~650 tests we have, and the process has slowed significantly as we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level, and the time to run all of the tests has not increased significantly there. Is there a way to configure the build service to free up memory with each Test or each TestClass? With the way things are currently running, the build process gets very slow when we start to run out of memory on the machine. Edit: I found the MSTest invocation in the build log, ran it manually, and saw the same runaway memory behavior. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation of MSTest, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory was freed up frequently. It seems there must be something wrong or different about the build server that is causing it to behave differently, but I'm stumped as to where to look. I've looked at qtagent.exe.config, mstest.exe.config, and the versions of both executables. What else might affect this?

    Read the article

  • Is it necessary to burn-in RAM for server-class systems?

    - by ewwhite
    When using server-class systems with ECC RAM, is it necessary or even useful to burn in the memory DIMMs prior to deployment? I've encountered an environment where all server RAM is put through a lengthy multi-day burn-in/stress-testing process. This has delayed system deployments on occasion and adds an extra step to the hardware lead time. The server hardware is primarily Supermicro, so the RAM is sourced from a variety of vendors, not directly from the manufacturer as with a Dell PowerEdge or HP ProLiant. Is this process useful? In my past experience, I simply used vendor RAM out of the box. Isn't that what the POST memory tests are for? I've encountered and responded to ECC errors long before a DIMM actually failed, and the ECC thresholds were usually the trigger for a warranty replacement. Do you burn in your RAM? If so, what method do you use to perform the tests? Has the burn-in process resulted in any additional platform stability? Has it identified any pre-deployment problems?

    Read the article

  • How can I write automated tests for iptables?

    - by Phil Frost
    I am configuring a Linux router with iptables. I want to write acceptance tests for the configuration that assert things like: traffic from some guy on the internet is not forwarded, and TCP to port 80 on the webserver in the DMZ from hosts on the corporate LAN is forwarded. An ancient FAQ alludes to an iptables -C option which allows one to ask something like, "given a packet from X, to Y, on port Z, would it be accepted or dropped?" Although the FAQ suggests it works like this, for iptables (but maybe not for the ipchains used in its examples) the -C option seems not to simulate a test packet running through all the rules, but rather to check for the existence of an exactly matching rule. This has little value as a test. I want to assert that the rules have the desired effect, not just that they exist. I've considered creating yet more test VMs and a virtual network, then probing with tools like nmap for effects. However, I'm avoiding this solution due to the complexity of creating all those additional virtual machines, which is really quite a heavy way to generate some test traffic. It would also be nice to have an automated testing methodology which can also work on a real server in production. How else might I solve this problem? Is there some mechanism I might use to generate or simulate arbitrary traffic, then know whether it was (or would be) dropped or accepted by iptables?
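
    One lightweight direction, sketched here only as an assumption about how a test could observe rule behavior without extra VMs: zero the per-rule packet counters, generate the traffic of interest from an existing host, then check which rule's counters moved. This still needs something to emit the traffic, but it avoids a dedicated virtual network:

        # Sketch of a counter-based check (chain and traffic source are illustrative).
        iptables -Z FORWARD                        # zero the packet/byte counters in FORWARD
        # ... generate the test traffic here (e.g. a single TCP SYN from the LAN host) ...
        iptables -L FORWARD -v -n --line-numbers   # see which rule's counters incremented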

    Read the article

  • Test server on a local network with XAMPP

    - by hopscotch1978
    Hi, I'm not very proficient with networks and could use some help. I've got a Win 7 desktop with XAMPP which acts as my local dev machine. I've configured a virtual host on the desktop which I'm able to access fine. If I'm understanding things correctly, the virtual host uses port 80 (<VirtualHost 127.0.0.1:80>). I've just tried to configure a separate Win XP laptop on the local wireless network to connect to the main desktop for testing purposes. I've added the IP address and virtual host name to my Hosts file on the laptop. My virtual host is imaginatively named "virtualhost1". When I type this into my laptop browser, it connects correctly to the main desktop and I get the XAMPP welcome screen. But I can't seem to get to the actual site, just the XAMPP welcome screen. It kind of jumps the browser to http://virtualhost1/xampp/. I think it's a port issue of some sort but I have no idea how to resolve it. I would get the same XAMPP welcome screen on my desktop if I omitted ":80" from the virtual host declaration. On my main desktop, typing "virtualhost1" to the browser address bar gives me the site correctly, not the XAMPP welcome screen. Help would be appreciated. Thank you.
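
    For what it's worth, a bare name-based virtual host sketch is shown below; the paths and names are examples, not taken from the question. Two things tend to matter when another machine connects: the vhost should listen on an address the LAN request actually arrives on (so *:80 rather than 127.0.0.1:80), and it needs a ServerName matching "virtualhost1" so Apache picks this vhost instead of falling back to the default XAMPP document root:

        # httpd-vhosts.conf (Apache 2.2-era syntax, as shipped with XAMPP at the time)
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName virtualhost1
            DocumentRoot "C:/xampp/htdocs/mysite"   # example path, adjust to the real project
        </VirtualHost>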

    Read the article

  • Apache Bench reports different result with same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up. 1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41ms, but document length: 0 bytes and html transferred: 0 bytes, with a transfer rate of 7.9Kb/s. 2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83ms along with the accurate document length and html transferred totals. The APC cache status reports identical hit counts from both tests, and Apache Bench reports no errors in either case. Overall, there are no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
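
    One way to see what each run is actually measuring is ab's verbosity flag: at level 3 and above it prints the HTTP response code for every request, so a stream of redirects or empty responses in the /index.php case would show up immediately. A short sketch using the question's own URL placeholder:

        ab -n 100 -v 3 http://mysite.com/index.php   # -v 3 prints the response code per request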

    Read the article

  • Power surge PC damage: How can I test all components of my PC without access to a second computer?

    - by Doug T.
    Ever since we had some crazy power surges last week, my 64-bit Windows 7 PC has been acting strange. My USB network adapter disconnects from the wireless and can't detect the signal, and I have to disable and re-enable the adapter to detect it again. Also, my wife has reported that the PC has rebooted a few times while I'm not sitting at it. Today I finally caught the reboot while I was using the PC. I got this blue screen of death: Stop Code 0x00000109, "Modification of system code or a critical data structure was detected." I followed the advice at the linked article and ran a memory test. I used memtest86 and it's already found around 300,000 errors out of 8 gigs of RAM. Now I'm worried: what are the odds this is isolated to just my memory and not a system-wide problem? Isn't there a good chance that many other components are fried? More importantly, how can I test those other components? Are there tools similar to memtest I can use to test my motherboard, video card, or power supply? If these are vendor specific, is it typical for vendors to provide testing tools?

    Read the article

  • Nginx / uWsgi / Django site can handle more traffic with rewrite URL

    - by Ludo
    Hi there. I'm running a Django app, using uWSGI behind Nginx. I've been doing some performance tuning and load testing using ApacheBench and have discovered something unexpected which I wonder if someone could explain for me. In my Nginx config I have a rewrite directive which catches lots of different URL permutations and then forwards them to the canonical URL I wish to use; e.g. it traps www.mysite.com/whatever and www.mysite.co.uk/whatever and forwards them all to http://mysite.com/whatever. If I load test against any of the URLs listed with a redirect (i.e. NOT the canonical URL which they are eventually forwarded to), it can serve 15000 concurrent connections without breaking a sweat. If I load test against the canonical URL, which the above test I would have expected got forwarded to anyway, it can't handle nearly as much: it will drop about 4000 of the 15000 requests, and can only handle about 9000 reliably. This is the command line I'm using to test:

        ab -c15000 -n15000 http://www.mysite.com/somepath/

    and

        ab -c15000 -n15000 http://mysite.com/somepath/

    I've tried this several times - it makes no difference which order I do them in. This doesn't make sense to me - I can understand why the requests involving a redirect may not handle quite so many concurrent connections, but it's happening the other way round. Can anyone explain? I'd really prefer it if the canonical URL was the one which could handle more traffic. I'll post my Nginx config below. Thanks loads for any help!

        server {
            server_name www.somesite.com somesite.net www.somesite.net somesite.co.uk www.somesite.co.uk;
            rewrite ^(.*) http://somesite.com$1 permanent;
        }

        server {
            root /home/django/domains/somesite.com/live/somesite/;
            server_name somesite.com somesite-live.myserver.somesite.com;
            access_log /home/django/domains/somesite.com/live/log/nginx.log;

            location / {
                uwsgi_pass unix:////tmp/somesite-live.sock;
                include uwsgi_params;
            }
            location /media {
                try_files $uri $uri/ /index.html;
            }
            location /site_media {
                try_files $uri $uri/ /index.html;
            }
            location = /favicon.ico {
                empty_gif;
            }
        }

    Read the article

  • Coldfusion 8 Application Crashes Under Heavy Load

    - by KM01
    Hello, we have a CF8 app that runs for 20-25 minutes before crashing under heavy load (~1200 users). This load is generated by our load testing tool: 1200 users ramped up in 5 minutes (the approximate behavior of our users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g; the Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks involving sessions:

        "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable),
            which is held by "scheduler-1"
        "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20,
            a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions),
            which is held by "jrpp-167"
        "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable),
            which is held by "scheduler-1"

    We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little bit in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps and saw threads locking a monitor, for example:

        "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000]
            java.lang.Thread.State: RUNNABLE
            at java.util.HashMap.getEntry(HashMap.java:347)
            at java.util.HashMap.containsKey(HashMap.java:335)
            at java.util.HashSet.contains(HashSet.java:184)
            at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124)
            at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598)
            at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510)
            at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250)
            at coldfusion.util.FastHashtable.put(FastHashtable.java:43)
            - locked <0x6f7e1a78> (a coldfusion.runtime.Struct)
            at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027)
            at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117)
            at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377)

    As you can see in the last line above, there were several references to CFMs and CFCs, and those lines have "cflock" tags which were scoped to "application". We (the dev team) then changed them to be scoped to a "name". After more load tests, there is no locking going on and there are no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and they are not being taxed; we even watched the server stats with server monitor, top, prstat, ran sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or admin. I am just running the load test, analyzing the reports, threads etc., sharing the results with the dev and admin teams, and trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting it? All thoughts and pointers welcome. Thank you for your time! KM

    Read the article

  • White Paper/Case Study on ICONICS’ Use of StreamInsight for its Energy AnalytiX® Solution

    - by Roman Schindlauer
    A couple of days ago, we released a new StreamInsight white paper/case study on TechNet and MSDN. The paper is joint work with ICONICS and discusses how ICONICS is using StreamInsight technology for its Energy AnalytiX® solution. The paper is available for download here in the Technical Articles section of the StreamInsight documentation. Today, businesses and organizations need to pay more and more attention to energy usage, as customers and the general public are becoming increasingly concerned about a respectful and sustainable use of resources. Organizations therefore need to carefully manage their use of energy and provide better visibility into their energy consumption. In this paper, we discuss how software solutions can help address these challenges. Besides providing some background on the drivers behind energy management, the paper discusses how organizations manage their use of energy with current product and service offerings from Microsoft and ICONICS. In the main body of the paper, a case study explains in depth how ICONICS Energy AnalytiX® is using Microsoft data platform components such as SQL Server StreamInsight to deliver market leading energy management solutions. Regards, The StreamInsight Team

    Read the article

  • Bless doesn't fix white boot screen boot delay for single-boot Xubuntu 14.04 on Macbook 4,1

    - by elephant
    I still have a 30-second delay on the white boot-up screen before Xubuntu loads after trying various combinations of bless --device as recommended here: https://help.ubuntu.com/community/MactelSupportTeam/AppleIntelInstallation#Avoid_long_EFI_wait_before_GRUB I wonder if anyone has experienced this before, or can point me to some good steps for troubleshooting this issue? I have cycled my macbook dozens of times, it would be great to be able to boot quicker. I am single-booting Xubuntu 14.04 (no Mac OSX partitions or any other OS, just a GRUB partition at sda1, a main partition at sda2, and a swap at the end of the drive). Suggestions very appreciated.

    Read the article

  • Is there a White / Blank Canvas E-Commerce Platform to Integrate into Existing Site? [closed]

    - by beta208
    Possible Duplicate: Which Ecommerce Script Should I Use? Our website is built, and we're interested in adding a store to the site. Essentially, there is a global header and a global footer, and in between is a white expandable div. We'd like our store to fit between the header and footer (and preferably be 960px wide). Do you know of any store platform built to live between the header/footer for situations like this? We really want a full store, not just PayPal buy buttons. We'd like it to have a shop backend (CMS-like) with full tracking, etc. It can be paid or unpaid, and preferably hosted by us, but either might work (if an iframe or alternative works securely?). It would need to feature over 100 items. If authorize.net is supported, that is a plus.

    Read the article

  • How to fix Ubuntu 12.04.3 boot to black screen full of errors in white text, after upgrading, on a Dell Inspiron 1501

    - by Ibuntu
    I am running a Dell Inspiron 1501. I use Linux only: no Microsoft or Apple operating systems (or really anything closed-source). I've only been using Linux for a little over a year, but I'm starting to gain a comfortable level of familiarity with the system and terminology. I've been having some issues with Quantal Quetzal and Raring Ringtail, especially with older hardware, so I opted to install Ubuntu 12.04.3 Precise Pangolin on the Inspiron 1501. I checked my MD5 sum after downloading my ISO and all was good. I have in fact used this ISO/DVD to install Precise Pangolin successfully on a few other systems (some of which are even older than this laptop). The install goes fine. The wireless card doesn't work out of the box, but this is a known issue which is fairly easy to fix. So, the first thing I did was open up a terminal and run sudo apt-get update && sudo apt-get upgrade, which, part way through, crashed (I assume lightdm and possibly X) and took me to a black screen filled with white lines of text that were either errors or just the output of commands. The reason I say that is because I was unable to glean any useful information from the output on the screen. I did take a picture, however, and will post a link. After that, every time I boot the system it goes right to that black screen posting all the error messages or output in white text. I never get a purple Ubuntu splash, so from what I can tell after reading this wiki article, https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen, that means that after the kernel is selected, it is unable to correctly apply the settings it needs. If the purple splash never shows, the framebuffer was never set correctly, right? This leads me to believe that it could be a kernel issue. The wiki suggested trying to pinpoint the issue by rolling back kernels until I find one that works. Is this my best option? I think I'm going to give it a try anyway and will let everyone know if I am able to solve the issue this way. I have since done a few reinstalls and some troubleshooting, including a couple of hours scouring the net for anyone with any kind of similar issue. Most of the issues I could find involved getting a black screen after login, and none of them said anything about any information output on this black screen. My reinstalls have taught me that there is no issue updating, but as soon as I run sudo apt-get upgrade my system goes to the black screen, and every time I boot it up it does the same thing. The only way to fix it is by reinstalling, and I never get any ability to log in. After a hard power-off of the laptop (because I cannot use Ctrl+Alt+Del to reboot), when it boots again it goes to the GRUB boot menu and I can select between regular boot, recovery mode and the two memtest options. I never tried the memtest options, but the other two both lead to the same black screen. Some people having a black/blank screen issue claim to have fixed it by using 12.10 or 13.04, but I believe they were having a different issue where they got a black/blank screen after logging in. I think I will still give those images a try, but mostly figured I would just wait another day or two for 13.10. Other things I figured I would try, taken from the following three articles:

    - "After logging in, there's a black screen and my cursor, nothing else! in Ubuntu 12.10"
    - "Black Screen on Login After Upgrading to 12.04"
    - "I can't get to the login screen"

    include opening a terminal using Ctrl+Alt+F1 and trying a variety of resets of Unity, X settings, and lightdm (or switching to gdm); but I doubt this will work, or that I will even be able to access a terminal. I'm pretty sure the whole system is stuck after it loads the last line on the black screen. I will try these things and post more information when I have it. Hopefully someone has an idea in the meantime, and I will keep checking back trying to find a solution. Thank you. Here are 3 different pictures of the error message; I had to take them with my phone: http://ubuntuone.com/album/0TBBkxmVajJIQQtoN9mVdN

    Read the article

  • Black screen white cursor and can't boot from live disc that I installed from just yesterday (Ubuntu 12.04)

    - by Luke
    So, I've decided to change to Ubuntu from Windows 7 after reformatting. The install goes smoothly, I set everything up, and it works for a day. It froze when I had a full-screen video playing, so I rebooted, and now I can't get past the black screen with the flashing white underscore cursor. If I wait a while, I get "Reboot and Select proper Boot device or Insert Boot Media in selected Boot device and press any key". I've tried that, removed all boot options but the DVD drive, and even tried my Windows 7 boot CD as well. Nothing boots, so I can't do anything. This is on an Asus A52N laptop, and all I can access now is the BIOS (version K52N 217), as far as I can tell.

    Read the article

  • Bug? Flash of white when changing orientation on iOS Safari [migrated]

    - by Baumr
    What causes the flash of white to the right of a responsive design when changing orientation from portrait to landscape on iOS? Try it on iOS6 Safari: Websites like this don't do it: http://html5boilerplate.com But this one does: http://www.initializr.com Something to do with re-processing (CPU lag) to fit a wider screen? It doesn't happen in Chrome for iOS6... Update: I just removed all img and from my testing site, but it still happens. This seems to happen with a lot of different websites out there. Is it a bug with their code, or a Safari for iOS bug? Others are completely immune to it...

    Read the article

  • Kendo UI Combo Box Reset Value

    - by ciantrius
    I'm using the Kendo UI ComboBoxes in cascade mode to build up a filter that I wish to apply. How do I clear/reset the value of a Kendo UI ComboBox? I've tried:

        $("#comboBox").data("kendoComboBox").val('');
        $("#comboBox").data("kendoComboBox").select('');
        $("#comboBox").data("kendoComboBox").select(null);

    all to no avail. The project is an MVC4 app using the Razor engine, and the code is basically the same as the Kendo UI example.
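
    For reference, the widget's own API exposes value() and text() setters (rather than jQuery's val()), and clearing both is the combination that usually resets the input; a small sketch, not verified against this specific Kendo UI release:

        var combo = $("#comboBox").data("kendoComboBox");
        combo.value("");   // clear the underlying selected value
        combo.text("");    // clear the text displayed in the input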

    Read the article

  • Autofac Unit Testing using RegisterControllers()

    - by Kane
    I am having problems using Autofac 2.1.13 and writing the unit tests for my ASP.NET MVC 2 application. I can't seem to resolve controllers when using the RegisterControllers() method. I have tried using the Resolve<T>() and ControllerBuilder.Current.GetControllerFactory().CreateController() methods, but to no avail. I am sure that I've missed something simple here, so can anyone assist? This was my first attempt at resolving the HomeController, but it does not work:

        ContainerBuilder builder = new ContainerBuilder();
        builder.RegisterControllers(typeof(HomeController).Assembly);
        IContainer container = builder.Build();

        // Throws: "A first chance exception of type
        // 'Autofac.Core.Registration.ComponentNotRegisteredException' occurred in Autofac.dll"
        var homeController = container.Resolve<HomeController>();

    Similarly, this does not work either:

        ContainerBuilder builder = new ContainerBuilder();
        builder.RegisterControllers(typeof(HomeController).Assembly);
        IContainer container = builder.Build();
        var containerProvider = new ContainerProvider(container);
        ControllerBuilder.Current.SetControllerFactory(new AutofacControllerFactory(containerProvider));

        var request = new Mock<HttpRequestBase>(MockBehavior.Loose);
        request.Setup(r => r.Path).Returns("Path");
        var httpContext = new Mock<HttpContextBase>(MockBehavior.Loose);
        httpContext.SetupGet(c => c.Request).Returns(request.Object);

        ControllerBuilder.Current.GetControllerFactory().CreateController(
            new RequestContext(httpContext.Object, new RouteData()), "home");

    Any assistance would be greatly appreciated. I should note that if I register my controllers without using the RegisterControllers() method, my unit tests work. My question would seem to be limited specifically to using the RegisterControllers() method.
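
    One workaround that matches the observation in the last sentence, sketched under the assumption that HomeController's own dependencies are registered: add an explicit self-registration for the controller type alongside RegisterControllers(), so the test's Resolve<HomeController>() call has a registration to hit.

        var builder = new ContainerBuilder();
        builder.RegisterControllers(typeof(HomeController).Assembly);
        builder.RegisterType<HomeController>();   // self-registration used only by the tests
        IContainer container = builder.Build();

        var homeController = container.Resolve<HomeController>();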

    Read the article
