Search Results


  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-shaped development process. We have requirements, architecture, design, implementation, integration tests, system tests and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), the test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't that too 'dangerous'? I'm afraid it will give the project managers a false sense of better quality because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have built over several days). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just upfront, it's not a duplicate of Should programmers help testers in designing tests?, which talks about test preparation rather than test execution, where we avoid involving developers.)

    Read the article

  • Snow Leopard Hangs at Login Window

    - by jessecurry
    I've had an issue for the past few months, but I rarely restart so it hasn't caused too much trouble. Basically, when I start up my Mac (iMac10,1 - 3.06GHz Intel Core 2 Duo, OS X 10.6.3) everything proceeds as usual until I reach the login window. The login window displays normally, but keyboard and mouse input seem to be ignored. This condition persists for around 5 minutes, at which point everything goes back to normal. While the login window is frozen, my second monitor appears entirely blue; it receives a background as soon as the login window becomes responsive. If I start up while holding SHIFT the problem still occurs, but the freeze is much shorter. Looking through my logs I see no activity during the time that the login window is frozen. I've attempted to repair disk permissions and gone through every possible maintenance option in Cocktail.

    Read the article

  • How to enable the flash player plugin in IE8?

    - by Nik Reiman
    I updated to IE8 the other day on my Vista laptop, mostly because Windows Update kept bugging me about it. As a Chrome user, I don't really care much about IE8, but I do have to use it occasionally to test website compatibility. At any rate, Flash seems to be completely deactivated, and I don't see a place to re-enable it anywhere in the preferences. I've done a bit of googling on the issue, but have only found information about how to disable Flash, not how to actually enable it. I have the Flash 10 debug player installed, and it works fine in Chrome and Firefox. What could be the matter?

    Read the article

  • How to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class for data manipulation (i.e. moving files, chmodding files, etc.) and in moveFile() I have multiple levels of validation to pinpoint where a moveFile() fails (i.e. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail without tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've checked for everything that can go wrong before copying. Code snippet (bad code on the fifth line...):

        // if the change permissions flag is set, change the file permissions
        if($chmod !== null) {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if($mod_result === false
                || $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif') {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matches my testing file, force the failure. I know it is very bad to put testing code into the code that will run on the production server, but I'm not sure how else to do it. Note: I am on PHP 5.2, symfony, using lime_test(). EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ...) is returned.
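    A minimal sketch of one common way out (class and method names hypothetical, PHP 5.2-compatible): route the native copy() call through a small overridable method, so a test subclass can force exactly that step to fail without touching the earlier validations or shipping test paths in production code.

        class DataManipulator
        {
            // Production code calls this seam instead of copy() directly.
            protected function doCopy($source, $destination)
            {
                return copy($source, $destination);
            }

            public function moveFile($source, $destination)
            {
                // ... the existing readability/writability validations go here ...
                if ($this->doCopy($source, $destination) === false) {
                    return array('success' => false, 'type' => 'Copy failed');
                }
                unlink($source);
                return array('success' => true);
            }
        }

        // In the lime test, a subclass forces only the copy step to fail:
        class FailingCopyManipulator extends DataManipulator
        {
            protected function doCopy($source, $destination)
            {
                return false; // simulate copy() failing after all prior checks pass
            }
        }

        $t = new lime_test(1);
        $dm = new FailingCopyManipulator();
        $result = $dm->moveFile('/tmp/in.qif', '/tmp/out.qif');
        $t->is($result['success'], false, 'a failed copy is reported');

    The same seam approach works for chmod() and unlink(), and the hard-coded path check can then come out of the production method.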

    Read the article

  • 'Buy the app' landing page implementations

    - by benwad
    My site (using Django) has an app that I'm trying to push. I currently have a piece of middleware that redirects the user to a page advertising the app if they're accessing the site on an iPhone, then sets a cookie so that the user isn't bugged by the message on every visit. This works fine; however, checking the page with the mobile Googlebot checker shows that the Googlebot gets stuck in the redirect (since it doesn't store cookies) and therefore won't index the proper content. So I'm trying to think of an alternative implementation that won't hurt the site's Google ranking and won't have any other adverse effects. I've considered a couple of options:

    - Redirect (the current solution), but don't redirect if the user agent matches the Googlebot's UA string. This would be ideal; however, I'm not sure whether Google likes its bot being treated differently from other users, and I'm afraid the site's ranking may somehow be penalised if I go ahead with this.
    - Use a JavaScript popup instead of a redirect. This would make sure the Googlebot finds the content it needs; however, I envision this approach causing compatibility issues with the myriad mobile devices/browsers out there, and it may affect the page load time.

    How valid are these options, and is there a better way of implementing this feature? I've tried researching this topic but surprisingly can't find any reputable-looking blog posts that explore it. EDIT: I posted this on SF because it seemed unsuitable for SO, but if there's another site that would be better for this issue then I'd be happy to move the question elsewhere.
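    A minimal sketch of the first option (Django 1.x-style middleware; class name, cookie name, and URL are hypothetical):

        from django.http import HttpResponseRedirect

        GOOGLEBOT_MARKERS = ('Googlebot', 'Googlebot-Mobile')

        class AppPromoMiddleware(object):
            """Send iPhone visitors to the app promo page once, but never bots."""

            def process_request(self, request):
                ua = request.META.get('HTTP_USER_AGENT', '')
                if 'iPhone' not in ua or any(m in ua for m in GOOGLEBOT_MARKERS):
                    return None  # bots and non-iPhone users get the normal page
                if request.COOKIES.get('seen_app_promo'):
                    return None  # promo already shown to this visitor
                response = HttpResponseRedirect('/get-the-app/')
                response.set_cookie('seen_app_promo', '1', max_age=30 * 24 * 3600)
                return response

    It would be registered in MIDDLEWARE_CLASSES in settings.py, as with any Django 1.x middleware.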

    Read the article

  • Improving Windows Authentication performance on IIS

    - by flalar
    We're struggling with performance issues on an ASP.NET MVC site that uses Windows Authentication. Response time is very slow on the first request to the site, when the user is being authenticated. Further, every time the Authorization header is sent from the browser, the response time increases by many seconds. The same issue occurs for both executed files and static content like CSS and JS. Access to the application is restricted to users within a certain role, and we are now planning to allow access to static files for all authenticated users to see if that helps. The authentication method in use is NTLM. How should we go about pinpointing why authentication degrades performance so drastically?
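    A minimal sketch of the static-files change being considered (standard ASP.NET authorization syntax; the Content path is hypothetical): carve out the static folder so it requires authentication but no particular role.

        <!-- in web.config; assumes static files live under /Content -->
        <location path="Content">
          <system.web>
            <authorization>
              <deny users="?" />    <!-- still deny anonymous -->
              <allow users="*" />   <!-- any authenticated user, no role required -->
            </authorization>
          </system.web>
        </location>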

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g. search queries, backlinks, etc.) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried that Google fetch is not getting the correct title tags and meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google correctly indexes all our pages and can move between the HTTPS and HTTP pages without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?
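    For what it's worth, the 301 above is only the first hop: the title and meta tags live on the HTTPS page that the Location header points to, so a crawler has to follow the redirect to see them. A quick hedged check of both hops from a shell (curl -I prints only the response headers):

        # first hop: should show 301 plus the Location header
        curl -I http://mysite.com/
        # second hop: should return 200 with the real homepage
        curl -I https://mysite.com/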

    Read the article

  • PHP and MySQL related Problem

    - by Tareq
    Hi friends, I have a local LAN in my office. Recently I designed a new software system for my office using PHP and MySQL. My boss wants to see the reports online. My problem is that my network connection to my office often fails, but I have to enter data all the time. So now I want to run two instances of my software: one on the LAN and one uploaded to my server. My question is: how can I easily keep both databases up to date at all times? Please help me with this issue. If you need more info, please feel free to ask me.
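    One common approach for a setup like this is MySQL replication; a hedged sketch follows (names and coordinates are placeholders), with the office LAN server as master and the online server as a slave. Note that a two-way setup where both sides accept writes needs considerably more care (auto-increment offsets, conflict handling):

        # my.cnf on the LAN (master) server
        [mysqld]
        server-id     = 1
        log-bin       = mysql-bin
        binlog-do-db  = office_app   # hypothetical database name

        # my.cnf on the online (slave) server
        [mysqld]
        server-id     = 2

        -- on the slave, using the coordinates from SHOW MASTER STATUS on the master:
        CHANGE MASTER TO MASTER_HOST='office.example.com', MASTER_USER='repl',
          MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
        START SLAVE;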

    Read the article

  • Random compositing lag

    - by user1020567
    My laptop specs: 512 MB of RAM, of which 64 MB is shared with an integrated GPU (ATI Radeon Xpress 200M), and a 1.6 GHz Intel Celeron M single-core processor. I've spent months trying to figure out why compositing and effects sometimes lag on any distro I try. Now I've come to realise that no matter what drivers I try (the default ones work for me on pretty much any Linux), the compositing lag is random. When I used Ubuntu 10.10, for example, sometimes window compositing would lag and sometimes it wouldn't. The PC is able to render those effects, so hardware is not the problem. It's completely random and unpredictable - sometimes when I turn on the computer the effects lag horribly, and sometimes it's completely smooth. I've also checked startup items and there don't seem to be any unnecessary entries. I also tried building my own OS with Arch Linux and the problem persists there, so I can only assume it's a driver issue of some sort. By default there are lots of drivers supplied with Linux distributions. Could it be that they're in the way? The ones I need are ati/radeon (or both? What's the difference between them?) and there seem to be a lot of others... What should I do?

    Read the article

  • Automatically reboot Windows 8 if no internet activity

    - by GrapeCamel
    I have a media server located in a very inconvenient part of my house. Occasionally I will have to reset my router or it will reset itself. The issue is the PC loses connectivity for some reason, and I am forced to walk outside, around the house, into the basement, over a bunch of toys and weights and boxes, to push a button to reboot it. I would love to have it check itself every 5-10 minutes and auto reboot if it is unable to ping a given address/IP. Any ideas how to accomplish this?
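    A hedged sketch of one way to do it (the target address, script path, and interval are placeholders): a tiny batch file that reboots when a ping fails, scheduled to run every 10 minutes.

        @echo off
        rem C:\scripts\watchdog.cmd - reboot if a known host can't be reached
        ping -n 2 8.8.8.8 >nul
        if errorlevel 1 shutdown /r /t 60 /c "No network connectivity - rebooting"

    Registered with Task Scheduler from an elevated prompt:

        schtasks /create /tn "NetWatchdog" /tr C:\scripts\watchdog.cmd /sc minute /mo 10 /ru SYSTEM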

    Read the article

  • Wireless range extender throughput extremely slow

    - by Alan B
    I've got a Belkin 54G router connected to the internet, and a Belkin range extender, model F5D7132. I can get the range extender connected to the parent router's SSID no problem, in repeater mode as opposed to access point mode. My Windows 7 laptop connects to the extender, which has a different SSID, with the full 5 bars. The issue is that when going through the extender, internet performance is murderously slow - even fetching the config pages of the extender or router is bad. When I connect directly to the router, all is well.

    Read the article

  • Dell XPS M1330 - power cable pulled by accident and now it won't turn on

    - by jim
    I have a problem similar to others posted on this site: when I plug in my laptop adapter, the green light comes on as expected, but when I plug it into the laptop the light goes out. In my case, I know it's not the adapter, because I have two and they both behave the same way. I'm quite certain the problem is a short in the laptop. I was using the laptop today when the power cord was pulled out by accident, and now I'm in this predicament. How or what do I check on the laptop to isolate the problem? Thanks!

    Read the article

  • How can I redirect all files in a directory that doesn't conform to a certain filename structure?

    - by user18842
    I have a website where a previous developer had updated several webpages. The issue is that the developer made each new webpage with a new filename and deleted the old ones. I've worked with .htaccess redirects for a few months now and have some understanding of their usage; however, I am stumped by this task. The old pages were named like so:

        www.domain.tld/subdir/file.html

    The new pages are named:

        www.domain.tld/subdir/file-new-name.html

    The first word of each new filename is the exact name of the old file, and all new filenames share the same last two words:

        www.domain.tld/subdir/file1-new-name.html
        www.domain.tld/subdir/file2-new-name.html
        www.domain.tld/subdir/file3-new-name.html
        etc.

    We also need to be able to access the URL www.domain.tld/subdir/ itself. The new files have been indexed by Google (the old URLs cause 404s and need to be redirected to the new ones so that Google will be friendly), and the client wants to keep the new filenames as they are more descriptive. I've attempted the redirect in many different ways without success, but I'll show the one that stumps me the most:

        RewriteBase /
        RewriteCond %{THE_REQUEST} !^subdir/.*\-new\-name\.html
        RewriteCond %{THE_REQUEST} !^subdir/$
        RewriteRule ^subdir/(.*)\.html$ http://www.domain.tld/subdir/$1\-new\-name\.html [R=301,NC]

    When visiting www.domain.tld/subdir/file1.html in the browser, this causes a 403 Forbidden error with a URL like so:

        www.domain.tld/subdir/file1-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name.html

    I'm certain it's probably something simple that I'm overlooking; can someone please help me get a proper redirect? Thanks so much in advance! EDIT: I've also got all the old filenames saved in a separate document in case I need them, set up like the following example: (file(1|2|3|4|5)|page(1|2|3|4|5)|a(l(l|lowed|ter)|ccept)
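    A likely culprit, for what it's worth: %{THE_REQUEST} holds the entire HTTP request line (e.g. GET /subdir/file1.html HTTP/1.1), so a pattern anchored at ^subdir/ can never match, both conditions always pass, and the rule keeps re-applying to names that already end in -new-name.html. A hedged sketch that matches against the URI instead:

        RewriteEngine On
        RewriteBase /
        # leave already-renamed files alone
        RewriteCond %{REQUEST_URI} !-new-name\.html$
        RewriteRule ^subdir/(.+)\.html$ http://www.domain.tld/subdir/$1-new-name.html [R=301,NC,L]

    The (.+) also leaves www.domain.tld/subdir/ itself untouched, since the pattern requires a filename ending in .html.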

    Read the article

  • Unable to change IP address for eth0 without restart in Ubuntu

    - by Rodnower
    I have Ubuntu 12.04.1 installed. I tried to change the IP address of the interface eth0 in /etc/network/interfaces from 192.168.1.3 to 192.168.1.4:

        auto lo
        iface lo inet loopback
        pre-up iptables-restore < /etc/iptables.up.rules

        auto eth0
        iface eth0 inet static
            address 192.168.1.4
            gateway 192.168.1.1
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255

    sudo service networking status reports networking stop/waiting, and when I issue sudo service networking restart I get this response:

        stop: Unknown instance:

    And the IP remains 192.168.1.3:

        eth0      Link encap:Ethernet  HWaddr 00:1e:33:71:cd:a4
                  inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::21e:33ff:fe71:cda4/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:3861 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:3291 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:3423285 (3.4 MB)  TX bytes:521854 (521.8 KB)
                  Interrupt:45 Base address:0x4000

    Only after a restart does the IP change. Any ideas?
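    On 12.04, networking is an Upstart job, which is why restart reports an unknown instance. The usual way to pick up a changed /etc/network/interfaces without rebooting is to bounce the interface itself (a sketch; run it from a local console if possible, since connectivity drops briefly):

        sudo ifdown eth0 && sudo ifup eth0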

    Read the article

  • Having extreme issues getting Compiz working on Ubuntu 11.10 (32-bit)

    - by Josh Hornell
    I have been working very hard the past few days to get Compiz configured and working correctly, but I have been running into a lot of issues. I first installed the CompizConfig Settings Manager and tried different features such as the desktop cube, and couldn't get any of them to work. Then I read that I may not have the right graphics card drivers installed (Nvidia GT 540M). So I went into the Additional Drivers tool, and it shows that 'no proprietary drivers are in use on this system', which struck me as a bit odd: when I very first installed Ubuntu it showed my Nvidia drivers as installed and active, until I downloaded and installed the updates to Ubuntu; since then it has shown empty. I then tried to install my graphics card drivers manually via this article: How do I install the latest Nvidia drivers via the Additional Drivers tool?. I rebooted with no issue, but when I went back into the CompizConfig Settings Manager I still couldn't get anything to work, and the Additional Drivers tool still showed no drivers installed. I feel like I've tried everything I can think of, and any help would be much appreciated!

    Read the article

  • Migrating Gmail to Office 365

    - by user218699
    Good morning, I have been setting up Office 365 for my organization. We are currently using Gmail. I have synced our local Active Directory server with Office 365, as well as our domains. The problem I am having has to do with migrating mailboxes from Gmail to Office 365. I have been using this article to walk me through the process: http://technet.microsoft.com/en-us/library/dn568114.aspx The issue arises when I begin to sync the mailboxes. Currently I have been trying to sync my own mailbox as a test. The synchronization process has been going on for about 15 hours (for just one mailbox) with no errors or any information given by Office 365, other than the "Syncing" status on the migration page in the Exchange Admin Center. Is syncing a single mailbox supposed to take this long, or have I missed a step? Thanks!
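    For what it's worth, a hedged way to see whether items are actually moving (run from a PowerShell session connected to Exchange Online; the identity is a placeholder):

        Get-MigrationUserStatistics -Identity someuser@yourdomain.com | Format-List

    Large mailboxes can legitimately sit in Syncing for many hours over IMAP, so a climbing synced-item count is usually a better signal than the status label alone.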

    Read the article

  • svnrepo + trac hosting

    - by Shikhar
    Does anyone know of a good, economical svn + trac hosting site? Specific requirements: 1) trac hooks should be in place, so that commit messages are reflected in trac issues. 2) It should have the emailTotracScript or MailToTracPlugin installed, so that an issue can be reported via email. If it's located in Asia Pacific that would be great, as the time delay from the US is very high. I am already using sourcerepo.com and it's very good; the only shortcoming is that they don't have emailtotrac and the time delay is significant. Any other inputs would be helpful. TIA

    Read the article

  • Weblogic Class-Path Dependencies EAR

    - by user18287
    I am deploying an EAR to a WebLogic node with many jars defined in the bootstrap (startWeblogicServer.bat) classpath. The problem is that my EAR and the bootstrap contain different versions of the same jars; not only that, but certain jars contain extracted third-party libraries which also differ in version from the WebLogic bootstrap jars, causing all kinds of classpath errors. I know you can set preferred jars in the EAR application XML, but this can be very tedious to resolve for jars which include extracted third-party libraries, in terms of understanding all the dependencies. Is there a correct approach that I need to be taking here? Am I thinking about this in the wrong way? Any help would be greatly appreciated! So far prefer-web-inf-classes has been recommended, but it won't work because I'm not deploying a WAR; prefer-application-packages is what we are currently using, but it still has the issue described above... Any more advice out there? Thanks!
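    For reference, a minimal sketch of the prefer-application-packages approach in META-INF/weblogic-application.xml (package names hypothetical; namespace as in WebLogic 10.x-era docs). Each listed package, including the packages of any extracted third-party classes, is loaded from the EAR instead of the server classpath:

        <?xml version="1.0" encoding="UTF-8"?>
        <weblogic-application xmlns="http://www.bea.com/ns/weblogic/weblogic-application">
          <prefer-application-packages>
            <!-- load these from the EAR's libraries, not WebLogic's bootstrap classpath -->
            <package-name>org.apache.commons.logging.*</package-name>
            <package-name>org.apache.xerces.*</package-name>
          </prefer-application-packages>
        </weblogic-application>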

    Read the article

  • Sudden slow read & write speed on all IO

    - by user23392
    I have a custom-built rig with two storage drives: for the OS, a Western Digital 1.0 TB hard drive (64 MB cache); for everything else, a Corsair Performance 3 128 GB SSD (expected read speed: 400 MB/s). The system was incredibly fast for a couple of months; then one day I was playing a game and it started to get buggy (some sounds and objects disappearing). I stopped the game, but the system seemed unstable so I had to shut it down. The next morning I couldn't start it up; it said something about a corrupt device. I formatted both disks and installed a fresh copy of Windows. All I can say is that since that day the system has never been the same: it takes 10 minutes to boot up (the icons and desktop slowly appear), but once it's done the slowness isn't as noticeable. Here's my benchmark on the HDD (read speed - write speed): [benchmark image] And the SSD: [benchmark image] Does anyone know what the issue could be?

    Read the article

  • linux: upload / download difference on network shares

    - by Batsu
    I have a Red Hat Enterprise Linux 6 machine (with SELinux) which shows a significant difference in speed between download and upload (the latter significantly slower) of files shared over the LAN. The bottleneck seems to be the output of the Linux machine, since I get a rate of around 1 Mb/s when:

    - WinXP machines download files shared (using samba) by the RHEL machine
    - uploading files from the RHEL machine to a WinXP machine's shared folder

    while:

    - uploading from the XP machines to the Linux machine's shares
    - downloading the XP machines' shares on the RHEL machine
    - any share between Windows machines

    all run smoothly (around 50 Mb/s). Since the upload from the RHEL machine to a WinXP share is slow too, I would exclude an issue in the samba configuration. What could possibly determine this limit in the upload speed? Update: iptables doesn't show any output rule, and disabling it makes no noticeable difference, so I would rule it out too.
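    One way to separate a network-level limit from a samba/SELinux-level one (a sketch; iperf needs to be installed on both ends): measure raw TCP throughput in both directions and compare it with the file-copy rates above.

        # on the RHEL machine
        iperf -s

        # on a WinXP machine; -r repeats the test in the reverse direction
        iperf -c rhel-host -r

    If the raw TCP numbers show the same asymmetry, the problem is below samba (driver, duplex negotiation, switch port); if they are symmetric, it points back at samba or SELinux.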

    Read the article

  • 2 folders in Sys/Class/Backlight?

    - by zebrapie
    ISSUE: Backlight brightness does not change. More detail: brightness will not change using either 'System Settings - Screen' or the FN keys (the brightness bar shows and moves, but screen brightness does not change). I noticed a post in this thread (http://ubuntuforums.org/showthread.php?t=1866283) about having multiple folders in /sys/class/backlight - I HAVE TWO FOLDERS TOO: 'intel_backlight' and 'acpi_video0'. Using the function keys alters the value in acpi_video0's 'brightness' file, but doesn't actually alter the brightness of the screen. If I add 'backlight=vendor' in Grub, my function keys then edit the value in the intel_backlight brightness file - but again the actual screen brightness doesn't change. Computer: Fujitsu Siemens Pi2515, Intel integrated graphics, no HDD partition. Already tried:

    - Editing grub to contain: acpi_osi=Linux acpi_backlight=vendor
    - http://ubuntuguide.net/change-screen-brightness-with-fn-key-in-ubuntu-11-0410-10
    - sudo apt-get install acpi
    - sudo setpci -s 00:02.0 F4.B=20
    - Brightness does not adjust in fallback mode either.
    - Reinstalling the OS, using Linux Mint (same problem).
    - Upgrading and downgrading the BIOS.

    Many thanks for reading - I understand this problem may need a bit of a Linux pro to sort out. If anyone's up for the challenge, I'll spend any amount of time being walked through this and posting results. Don't want to give up here!
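    A quick diagnostic worth adding to the list above (a sketch; run as root, and note the valid range differs per interface): write values into each of the two folders directly and see which one, if either, actually moves the panel.

        # each interface advertises its own scale
        cat /sys/class/backlight/intel_backlight/max_brightness
        cat /sys/class/backlight/acpi_video0/max_brightness

        # try a mid-range value on each (the numbers here are only examples)
        echo 400 | sudo tee /sys/class/backlight/intel_backlight/brightness
        echo 5   | sudo tee /sys/class/backlight/acpi_video0/brightness

    If neither write changes the panel, the problem sits below the backlight interfaces (BIOS/ACPI), which would fit the vendor-switch results described above.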

    Read the article

  • How to write comments to explain the "why" behind the callback function when the function and parameter names are insufficient for that?

    - by snowmantw
    How should I approach writing comments for callback functions? I want to explain the "why" behind the function when the function and parameter names are insufficient to explain what's going on. I have always wondered why comments like this can be so common in the documentation of libraries in dynamic languages:

        /**
         * cb: callback   // where's the arguments & effects?
         */
        func foo( cb )

    Maybe the common attitude is "you can look into the source code on your own after all", which pushes people into leaving minimalist comments like this. But it seems like there should be a better way to comment callback functions. I've tried to comment callbacks the Haskell way:

        /**
         * cb: Int -> Char
         */
        func foo(cb)

    And to be fair, it's usually neat enough. But it gets into trouble when I need to pass some complex structure, partly due to the lack of a type system:

        /**
         * cb: Int -> { err: String -> (), success: () -> Char }   // too long...
         */
        func foo(cb)

    Or I have tried this too:

        /**
         * cb: Int -> { err: String -> (),
         *              success: () -> Char }   // better ?
         */
        func bar(cb)

    The problem is that you can put the structure somewhere else, but then you must give it a name to reference it, and a structure you name only to use immediately looks redundant:

        // Somewhere else...
        // ResultCallback: { err: String -> (), success: () -> Char }

        /**
         * cb: Int -> ResultCallback   // better ??
         */
        func foo(cb)

    And it bothers me that if I follow a Javadoc-like commenting style, it still seems incomplete:

        /**
         * @param cb {Function} yeah, it's a function, but you told me nothing about it...
         * @param err {Function} where should I put this callback's argument ??
         *                       Not to mention the err's own arguments...
         */
        func foo(cb)

    These examples are JavaScript-like, with generic function and parameter names, but I've encountered similar problems in other dynamic languages which allow complex callbacks.
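    For what it's worth, JSDoc 3 has a tag aimed at exactly this problem: @callback declares a named callback type that @param can then reference, so the name carries the signature instead of cluttering each use. A sketch, recasting the err/success shape above into node-style error-first form for brevity:

        /**
         * @callback ResultCallback
         * @param {?string} err     error message, or null on success
         * @param {string}  [chr]   single-character result, present on success
         */

        /**
         * @param {number} id
         * @param {ResultCallback} cb  invoked once the lookup finishes
         */
        function foo(id, cb) { /* ... */ }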

    Read the article

  • Is SEO affected negatively by having densely encoded identifiers of content in URLs?

    - by casperOne
    This isn't about where to put the ID of a piece of unique content in URLs, but more about densely packing the URL (or, does it just not matter?). Take for example a hypothetical post in a blog:

        http://tempuri.org/123456789/seo-friendly-title

    The ID that uniquely identifies this is 123456789. This corresponds to a look-up and is the direct key in the underlying data store. However, I could encode that in, say, hexadecimal, like so:

        http://tempuri.org/75bcd15/seo-friendly-title

    And that would be shorter. One could take it even further and have more compact encodings; since URLs are case-sensitive, one could imagine an encoding that uses numbers, lowercase and uppercase letters, for a base of 62 (26 upper case + 26 lower case + 10 digits):

        0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz

    For a resulting URL of:

        http://tempuri.org/8M0kX/seo-friendly-title

    The question is: does densely packing the ID of the content (the requirement is that an ID is mandatory for look-ups) have a negative impact on SEO (and dare I ask, might it have any positive impact), or is it just not worth the time? Note that this is not for a URL-shortening service, so saving space in the URL for browser-limitation purposes is not an issue.
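    A minimal sketch of the base-62 encoding described (Python purely for illustration), which reproduces both example IDs in the question:

        ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

        def encode_base62(n):
            """Encode a non-negative integer using the 62-character alphabet above."""
            if n == 0:
                return ALPHABET[0]
            digits = []
            while n:
                n, rem = divmod(n, 62)
                digits.append(ALPHABET[rem])
            return "".join(reversed(digits))

        print(hex(123456789))            # 0x75bcd15 -> the hexadecimal form above
        print(encode_base62(123456789))  # 8M0kX     -> the base-62 form above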

    Read the article

  • OOF (Out of Office) is not working for remote users (Outlook Anywhere)

    - by Doughecka
    I'm not sure how long this issue has been happening, but recently a few of the remote sales users were going to a sales meeting and wanted to set their Out of Office; however, in Outlook 2010 they get this error message: "Your automatic reply settings cannot be displayed because the server is currently unavailable". When I run the Exchange Remote Connectivity Analyzer, Autodiscover completes fine, but the next step fails:

        Exception details:
        Message: The request failed. The remote server returned an error: (403) Forbidden.
        Type: Microsoft.Exchange.WebServices.Data.ServiceRequestException
        Stack trace:
        at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)
        at Microsoft.Exchange.WebServices.Data.MultiResponseServiceRequest`1.Execute()
        at Microsoft.Exchange.WebServices.Data.ExchangeService.BindToFolder[TFolder](FolderId folderId, PropertySet propertySet)
        at Microsoft.Exchange.Tools.ExRca.Tests.EnsureEmptyFolderTest.PerformTestReally()

        Exception details:
        Message: The remote server returned an error: (403) Forbidden.
        Type: System.Net.WebException
        Stack trace:
        at System.Net.HttpWebRequest.GetResponse()
        at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.GetResponse()
        at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)

    I've done some research but have yet to find a working fix... it seems like some permissions are messed up in IIS, but I haven't figured out which.
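    A hedged first check from the Exchange Management Shell (on-premises Exchange assumed, as in the question): Outlook 2010 retrieves OOF settings over EWS discovered via Autodiscover, so confirming the EWS virtual directory's URLs and authentication settings is a reasonable starting point for a 403.

        Get-WebServicesVirtualDirectory | Format-List Name,InternalUrl,ExternalUrl,*Authentication*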

    Read the article

  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is developed with a modified version of Scrum. If you are not familiar with Scrum, it's an alternative to the more traditional waterfall model, where a series of features is worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF, and we use Visual C# 2010 Express edition as an IDE. If we work on a sprint and add a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such: we just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal - it took 3 or 4 days, and the devs simply fixed up any bugs found in the regression phase - but now, as the app grows larger and incorporates more and more features, the regression is stretching out for weeks. I am interested in any methods people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried in a commercial environment, or to research the possibility of UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either: how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance.
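    On the unit-testing question, a minimal sketch of what one looks like in C# with NUnit (the class and behaviour are hypothetical). Unit tests target non-UI logic; driving the UI itself is what automation APIs are for - WPF supports Microsoft's UI Automation via the System.Windows.Automation namespace.

        using NUnit.Framework;

        [TestFixture]
        public class OrderTests
        {
            [Test]
            public void Total_IncludesEveryLineItem()
            {
                // Arrange: a hypothetical domain class from the app under test
                var order = new Order();
                order.AddItem("widget", quantity: 2, unitPrice: 9.99m);
                order.AddItem("gadget", quantity: 1, unitPrice: 5.00m);

                // Act + Assert: pure logic, no UI involved
                Assert.AreEqual(24.98m, order.Total);
            }
        }

    Tests like this run in seconds, so the logic-level part of the regression pass shrinks to a button press, leaving manual effort for the UI-level checks.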

    Read the article
