Search Results

Search found 11010 results on 441 pages for 'testing strategies'.

Page 37 of 441

  • Off Page SEO Strategies - 5 Top Tips to Improve Yours

    The hard truth of business in the 21st century is that, almost regardless of the type of business you do, you'll be doing some of it using the World Wide Web. Whether you have fully embraced the Web's potential or are still only a reluctant convert, you know you cannot escape its influence on your enterprise.

    Read the article

  • 7 Ethical SEO Strategies

    Back when the concept of the internet was still new, many of its users abused it. They made millions of dollars scamming people out of their precious savings, and they passed off things that others had made as their own to look legitimate.

    Read the article

  • Basic SEO Strategies - How to Get Your Website at the Top of the Search Results

    Search Engine Optimisation means making changes to your website so that it can be crawled more easily by search engines and found in search listings; these changes can increase targeted visitors to your site. Search engines use robots to crawl websites, and those robots can only read text content. Your results are displayed based on how relevant your content is to the keyword being searched for in the search engine.

    Read the article

  • SEO Strategies - When All Else Fails - Build Inbound Links

    There are times, when you are trying to break ground in a niche with very high competition, that you just seem to be getting nowhere fast. You have tweaked every page element you can think of to maximise the optimisation of every page on your website, you have built your Sitemap and submitted it to all the search engines, and still you have not moved up the ranks or made any progress.

    Read the article

  • Unit Testing Myths and Practices

    We all understand the value of Unit Testing, but how come so few organisations maintain unit tests for their in-house applications? We can no longer pretend that unit testing is a universal panacea for ensuring less-buggy applications. Instead, we should be prepared to actively justify the use of unit tests, and be more savvy about where in the development cycle the unit test resources should be most effectively used.

    Read the article

  • How to Optimize Your Website With SEO Placement Strategies

    The best way to win at SEO is to use a combination of different techniques. Compelling content, unique keywords and link exchange are methods that have proven effective in optimizing your website. The next step is to combine all of these into a frequent, if not daily, activity.

    Read the article

  • Prepare the Best SEO Strategies

    SEO, or Search Engine Optimization, is the process of ranking your website at the top of the search engines. The most important part of it is developing an SEO strategy that suits your particular website.

    Read the article

  • Oracle Application Testing Suite 12.1 (Japanese announcement)

    - by user773457
    Japanese-language announcement of Oracle Application Testing Suite 12.1: http://japanmediacentre.oracle.com/content/detail.aspx?ReleaseID=1692. The announcement mentions JD Edwards EnterpriseOne, browser and platform support (IE9, Linux), Oracle Enterprise Manager, Oracle Cloud, SQL SELECT statements, and ATS-Tech.

    Read the article

  • Unit test complex classes with many private methods

    - by Simon G
    Hi, I've got a class with one public method and many private methods which are run depending on what parameters are passed to the public method, so my code looks something like this:

        public class SomeComplexClass
        {
            IRepository _repository;

            public SomeComplexClass() : this(new Repository()) { }

            public SomeComplexClass(IRepository repository)
            {
                _repository = repository;
            }

            public List<int> SomeComplexCalcualation(int option)
            {
                var list = new List<int>();
                if (option == 1)
                    list = CalculateOptionOne();
                else if (option == 2)
                    list = CalculateOptionTwo();
                else if (option == 3)
                    list = CalculateOptionThree();
                else if (option == 4)
                    list = CalculateOptionFour();
                else if (option == 5)
                    list = CalculateOptionFive();
                return list;
            }

            private List<int> CalculateOptionOne()
            {
                // Some calculation
            }

            private List<int> CalculateOptionTwo()
            {
                // Some calculation
            }

            private List<int> CalculateOptionThree()
            {
                // Some calculation
            }

            private List<int> CalculateOptionFour()
            {
                // Some calculation
            }

            private List<int> CalculateOptionFive()
            {
                // Some calculation
            }
        }

    I've thought of a few ways to test this class, but all of them seem overly complex or expose the methods more than I would like. The options so far are:

    - Set all the private methods to internal and use [assembly: InternalsVisibleTo()].
    - Separate out all the private methods into a separate class and create an interface.
    - Make all the methods virtual and, in my tests, create a new class that inherits from this class and overrides the methods.

    Are there any other options for testing the above class that would be better than what I've listed? If you would pick one of the ones I've listed, can you explain why? Thanks

    Read the article

  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about tests and decided to try to write some tests for an extremely simple Django app for a start. The app has two views (a list view and a detail view) and a model with four fields:

        class News(models.Model):
            title = models.CharField(max_length=250)
            content = models.TextField()
            pub_date = models.DateTimeField(default=datetime.datetime.now)
            slug = models.SlugField(unique=True)

    I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following that you could point me to? My tests.py (it contains 11 tests):

        # -*- coding: utf-8 -*-
        from django.test import TestCase
        from django.test.client import Client
        from django.core.urlresolvers import reverse
        import datetime
        from someproject.myapp.models import News

        class viewTest(TestCase):
            def setUp(self):
                self.test_title = u'Test title: bareksc'
                self.test_content = u'This is a content 156'
                self.test_slug = u'test-title-bareksc'
                self.test_pub_date = datetime.datetime.today()
                self.test_item = News.objects.create(
                    title=self.test_title,
                    content=self.test_content,
                    slug=self.test_slug,
                    pub_date=self.test_pub_date,
                )
                client = Client()
                self.response_detail = client.get(self.test_item.get_absolute_url())
                self.response_index = client.get(reverse('the-list-view'))

            def test_detail_status_code(self):
                """ HTTP status code for the detail view """
                self.failUnlessEqual(self.response_detail.status_code, 200)

            def test_list_status_code(self):
                """ HTTP status code for the list view """
                self.failUnlessEqual(self.response_index.status_code, 200)

            def test_list_numer_of_items(self):
                self.failUnlessEqual(len(self.response_index.context['object_list']), 1)

            def test_detail_title(self):
                self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title)

            def test_list_title(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title)

            def test_detail_content(self):
                self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content)

            def test_list_content(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content)

            def test_detail_slug(self):
                self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug)

            def test_list_slug(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug)

            def test_detail_template(self):
                self.assertContains(self.response_detail, self.test_title)
                self.assertContains(self.response_detail, self.test_content)

            def test_list_template(self):
                self.assertContains(self.response_index, self.test_title)

    Read the article

  • How to change the date/time in Python for all modules?

    - by Felix Schwarz
    When I write business logic, my code often depends on the current time. For example, there is the algorithm which looks at each unfinished order and checks whether an invoice should be sent (which depends on the number of days since the job was finished). In these cases creating an invoice is not triggered by an explicit user action but by a background job. This creates a problem for me when it comes to testing: I can test invoice creation itself easily, but it is hard to create an order in a test and check that the background job identifies the correct orders at the correct time. So far I have found two solutions:

    - In the test setup, calculate the job dates relative to the current date. Downside: the code becomes quite complicated because there are no explicit dates written anymore. Sometimes the business logic is pretty complex for edge cases, so it becomes hard to debug due to all these relative dates.
    - Use my own date/time accessor functions throughout my code. In the test I just set a current date and all modules get this date, so I can easily simulate an order created in February and check that the invoice is created in April. Downside: third-party modules do not use this mechanism, so it's really hard to integrate and test these.

    The second approach has been far more successful for me, so I'm looking for a way to set the time that Python's datetime and time modules return. Setting the date is usually enough; I don't need to set the current hour or second (even though this would be nice). Is there such a utility? Is there an (internal) Python API that I can use?
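
    A minimal sketch of the second approach (an application-owned clock) is shown below; the names set_today, reset_today and today are hypothetical illustrations, not code from the question. Third-party libraries such as freezegun take the other route and monkey-patch datetime itself, which also covers code that does not go through your own accessor.

        # Hypothetical "application clock": the only place production code
        # asks for today's date.
        import datetime

        _fixed_today = None  # None means "use the real clock"

        def set_today(value):
            """Pin today() to a fixed date; intended for tests only."""
            global _fixed_today
            _fixed_today = value

        def reset_today():
            global _fixed_today
            _fixed_today = None

        def today():
            """Production code calls this instead of datetime.date.today()."""
            return _fixed_today if _fixed_today is not None else datetime.date.today()

        # In a test: simulate an order opened in February and billed in April.
        set_today(datetime.date(2010, 2, 1))
        order_created_on = today()               # order logic would record this date

        set_today(datetime.date(2010, 4, 5))
        days_open = (today() - order_created_on).days
        assert days_open == 63                   # e.g. rule: send invoice after 60 days

        reset_today()

    Because production code only ever calls today(), tests can pin the clock to explicit dates instead of computing everything relative to the real current date.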

    Read the article

  • What's the use of writing tests matching configuration-like code line by line?

    - by Pascal Van Hecke
    Hi, I have been wondering about the usefulness of writing tests that match code one-to-one. Just an example: in Rails, you can define 7 RESTful routes in one line in routes.rb using:

        resources :products

    BDD/TDD prescribes that you test first and then write code. In order to test the full effect of this line, devs come up with macros, e.g. for shoulda: http://kconrails.com/2010/01/27/route-testing-with-shoulda-in-ruby-on-rails/

        class RoutingTest < ActionController::TestCase
          # simple
          should_map_resources :products
        end

    I'm not trying to pick on the guy that wrote the macros; this is just an example of a pattern that I see all over Rails. I'm just wondering what the use of it is... in the end you're just duplicating code, and the only thing you test is that Rails works. You could as well write a tool that transforms your test macros into actual code... When I ask around, people answer: "the tests should document your code, so yes it makes sense to write them, even if it's just one line corresponding to one line". What are your thoughts?

    Read the article

  • How can I split abstract testcases in JUnit?

    - by Willi Schönborn
    I have an abstract testcase "AbstractATest" for an interface "A". It has several test methods (@Test) and one abstract method:

        protected abstract A unit();

    which provides the unit under test. Now I have multiple implementations of "A", e.g. "DefaultA", "ConcurrentA", etc.

    My problem: the testcase is huge (~1500 LOC) and it's growing, so I wanted to split it into multiple testcases. How can I organize/structure this in JUnit 4 without needing a concrete testcase for every combination of implementation and abstract testcase? I want e.g. "AInitializeTest", "AExecuteTest" and "AStopTest", each being abstract and containing multiple tests. But for my concrete "ConcurrentA", I only want to have one concrete testcase, "ConcurrentATest". I hope my "problem" is clear.

    EDIT: Looks like my description was not that clear. Is it possible to pass a reference to a test? I know parameterized tests, but these require static methods, which is not applicable to my setup. Subclasses of an abstract testcase decide about the parameter.

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation: you can run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish and *BOOM*, in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents and VS 2012 agents.

    Introduction
    The slide deck "Leveraging Azure for Performance Testing" (from Avanade) should help you understand the high-level details of what we are trying to achieve.

    Scenario 1 – Running a Test Rig in Windows Azure
    To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment & configuration of Windows Azure Worker Roles for the Test Controller and Test Agent
    - Automate deployment & configuration of the SQL database on the Test Controller Worker Role
    - Scale Test Agents on demand
    - Create a Web Performance Test and a simple Load Test
    - Manage Test Controllers right from Visual Studio on the on-premise developer workstation
    - View the results of the Load Test
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 test rigs

    Scenario 2 – The scaled-out Test Rig and sharing data using SQL Azure
    A scaled-out version of this implementation would involve multiple test rigs running in the cloud. In this scenario I will show you how to sync the load test database from these distributed test rigs into one SQL Azure database using Azure Sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store.
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure load test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 test rigs

    The Ingredients
    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios:
    1. Windows Azure Subscription
    2. Windows Azure Storage – Blob Storage
    3. Windows Azure Compute – Worker Role
    4. SQL Azure Database
    5. SQL Data Sync
    6. Windows Azure Connect – Endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough
    To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure Blob storage. SQL Express, the test controller and the test agent expose various switches which we can take advantage of, including the quiet-install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we create a virtual network amongst the machines enabling two-way communication. All of the above can be done programmatically; let's see step by step how…

    Scenario 1 video walkthrough: Leveraging Windows Azure for Performance Testing.
    Scenario 2: work in progress, watch this space for more…

    Solution
    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS Preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.

    Conclusion
    Other posts and resources available here. Possibilities…. Endless!

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker right? Now I'm no unit testing ninja, and not really a WCF ninja either, but I had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip top. Who does? It's not the environment I want to test, just the logic I've written, to ensure there aren't any side effects. So, for the K.I.S.S. method: assuming that you're using a WCF service library (you are using service libraries, correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data.

    The service contract
    We'll use a very basic service contract, just for getting and updating an entity. I've used the default "CompositeType" that is in the template, handy only for examples like this. I've added an Id property and overridden ToString and Equals.

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            CompositeType GetCompositeType(int id);

            [OperationContract]
            CompositeType SaveCompositeType(CompositeType item);

            [OperationContract]
            CompositeTypeCollection GetAllCompositeTypes();
        }

    The implementation
    When I implement the service, I want to be able to send known data into it so I don't have to fuss around with database access or the like. To do this, I first have to create an interface for my data access:

        public interface IMyServiceDataManager
        {
            CompositeType GetCompositeType(int id);
            CompositeType SaveCompositeType(CompositeType item);
            CompositeTypeCollection GetAllCompositeTypes();
        }

    For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn't matter. That's the point. So here's what our service looks like in its most basic form:

        public CompositeType GetCompositeType(int id)
        {
            // sanity checks
            if (id == 0)
                throw new ArgumentException("id cannot be zero.");

            return _dataManager.GetCompositeType(id);
        }

        public CompositeType SaveCompositeType(CompositeType item)
        {
            return _dataManager.SaveCompositeType(item);
        }

        public CompositeTypeCollection GetAllCompositeTypes()
        {
            return _dataManager.GetAllCompositeTypes();
        }

    But what about the data manager? The constructor takes care of that. I don't want to expose any testing ability in release (or the ability for someone to swap out my data manager), so this is what we get:

        IMyServiceDataManager _dataManager;

        public MyService()
        {
            _dataManager = new MyServiceDataManager();
        }

        #if DEBUG
        public MyService(IMyServiceDataManager dataManager)
        {
            _dataManager = dataManager;
        }
        #endif

    The Stub
    Now it's time for the rubber to meet the road… Like most guys that ever talk about unit testing, here's a sample painted in *very* broad strokes. The important part, however, is that within the test project I've created a bunk (unit testing purists would say stub, I believe) object that implements my IMyServiceDataManager so that I can deal with known data. Here it is:

        internal class FakeMyServiceDataManager : IMyServiceDataManager
        {
            internal FakeMyServiceDataManager()
            {
                Collection = new CompositeTypeCollection();
                Collection.AddRange(new CompositeTypeCollection
                {
                    new CompositeType { Id = 1, BoolValue = true, StringValue = "foo 1", },
                    new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", },
                    new CompositeType { Id = 3, BoolValue = true, StringValue = "foo 3", },
                });
            }

            CompositeTypeCollection Collection { get; set; }

            #region IMyServiceDataManager Members

            public CompositeType GetCompositeType(int id)
            {
                if (id <= 0) return null;
                return Collection.SingleOrDefault(m => m.Id == id);
            }

            public CompositeType SaveCompositeType(CompositeType item)
            {
                var existing = Collection.SingleOrDefault(m => m.Id == item.Id);
                if (null != existing)
                {
                    Collection.Remove(existing);
                }

                if (item.Id == 0)
                {
                    item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1;
                }

                Collection.Add(item);
                return item;
            }

            public CompositeTypeCollection GetAllCompositeTypes()
            {
                return Collection;
            }

            #endregion
        }

    So it's tough to see in this example why any of this is necessary, but in a real-world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that, between refactorings etc., it doesn't send sparking cogs all about or let the blue smoke out. Here's a simple test that brings it all home; remember, broad strokes:

        [TestMethod]
        public void MyService_GetCompositeType_ExpectedValues()
        {
            FakeMyServiceDataManager fake = new FakeMyServiceDataManager();
            MyService service = new MyService(fake);

            CompositeType expected = fake.GetCompositeType(1);
            CompositeType actual = service.GetCompositeType(1);

            Assert.AreEqual<CompositeType>(expected, actual,
                "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual);
        }

    Summary
    That's really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn't really feel like it. This speaks volumes to my not-yet-ninja unit testing prowess.

    Read the article
