Search Results

Search found 13452 results on 539 pages for 'django testing'.

Page 26 of 539

  • Storing dynamic fields in Django forms

    - by hekevintran
    Django's form library has a formset feature that lets you process dynamically added forms. For example, if your application keeps a list of bookmarks, you could use a formset to process multiple forms, each representing one bookmark. But what if you want to dynamically add a field to a single form? An example would be a survey creation page where you can add an unlimited number of questions. How do you handle this in Django?
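
    A minimal sketch of one common approach (not taken from the question itself): add the extra fields in the form's __init__ based on runtime data. The SurveyForm class, the questions argument, and the field names are all invented for illustration.

        from django import forms

        class SurveyForm(forms.Form):
            title = forms.CharField(max_length=100)

            def __init__(self, *args, **kwargs):
                # `questions` is whatever runtime data drives the extra fields,
                # e.g. a list of question texts loaded from the database.
                questions = kwargs.pop('questions', [])
                super(SurveyForm, self).__init__(*args, **kwargs)
                for i, text in enumerate(questions):
                    self.fields['question_%d' % i] = forms.CharField(
                        label=text, required=False)

        # Usage sketch:
        #   form = SurveyForm(request.POST or None,
        #                     questions=['How did you hear about us?', 'Any comments?'])
        #   if form.is_valid():
        #       answers = dict((k, v) for k, v in form.cleaned_data.items()
        #                      if k.startswith('question_'))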

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer? [closed]

    - by Superbest
    I don't have much formal training in programming, and I have learned most things by looking up solutions on the internet to practical problems. There are some areas that I think would be valuable to learn, but which ended up being both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them.

    Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium-sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources).

    I have found this tutorial, which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as:

    - What parts of my application, and what sorts of things, aren't necessarily worth testing?
    - How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope?
    - What is a good naming convention for test methods? (Apparently the name of the method is the only way I will be able to tell, from a glance at the test results table, what works in my program and what doesn't.)
    - Is it bad to have many asserts in one test method? Apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing.
    - If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder?
    - If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset, or how do I simulate it?
    - Closely related to the previous point: if a class's main operation happens in a state that the program reaches only after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing all the other code that brings it to that state along with it?
    - In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data.)
    - How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas?
    - Is there anything like design patterns concerning testing specifically?

    I guess the main themes in what I'd like to learn more about are (1) the overarching principles that should be followed (or at least considered) in every testing effort, and (2) popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored: not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things quite helpful in keeping my codebases sane.

    I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, tens of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. And speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors.

    Summary: What are some resources for learning about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and amending discrepancies if it isn't.
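
    As a language-neutral illustration of two of the rules of thumb asked about here (descriptive test names, and keeping each test to one logical assertion so a failure points at a single cause), below is a small sketch using Python's unittest; the same shape carries over to MSTest or any other xUnit-style framework. The Account class is invented purely for the example.

        import unittest

        class Account(object):
            # Tiny invented class so the tests below have something to exercise.
            def __init__(self, balance=0):
                self.balance = balance

            def deposit(self, amount):
                if amount <= 0:
                    raise ValueError("deposit must be positive")
                self.balance += amount

        class AccountDepositTests(unittest.TestCase):
            # Name pattern: <unit>_<scenario>_<expected outcome>, so the results
            # table alone tells you what broke.
            def test_deposit_of_positive_amount_increases_balance(self):
                account = Account(balance=10)
                account.deposit(5)
                self.assertEqual(account.balance, 15)

            def test_deposit_of_zero_raises_value_error(self):
                account = Account()
                self.assertRaises(ValueError, account.deposit, 0)

        if __name__ == '__main__':
            unittest.main()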

    Read the article

  • Cookieless Django - Django with no cookies

    - by phoebebright
    As I'm writing a Django site for government bodies, I'm not going to be able to use cookies. I found this snippet, http://djangosnippets.org/snippets/1540/, but it's currently not allowing users to log in. Before I start debugging, I wondered if anyone else has solved this problem, with this snippet or in any other way?
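
    For orientation only, here is a rough sketch (not the snippet from the question) of the usual cookieless approach: carry the session key in the URL and hand it to Django's session engine from a middleware. The class name and the query parameter are assumptions; the hard part in practice is rewriting every link and redirect so the key is carried forward, which is typically where login flows break.

        from importlib import import_module
        from django.conf import settings

        class QueryStringSessionMiddleware(object):
            # Old-style middleware sketch: build request.session from a
            # ?session_key=... parameter instead of the session cookie.
            def process_request(self, request):
                session_key = request.GET.get('session_key') or None
                engine = import_module(settings.SESSION_ENGINE)
                request.session = engine.SessionStore(session_key)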

    Read the article

  • Django OpenID django-openid-auth Login Error.

    - by gramware
    I get the following error when attempting to use django-openid-auth: "OpenID discovery error: No usable OpenID services found for *******@gmail.com". I have followed the instructions that come with it, though it seems there is something I am missing. The installation is on my localhost.

    Read the article

  • No Module named django.core

    - by Sirish Kumar
    Hi, I have updated to the latest Django version, 1.0.2, after uninstalling my old Django version. But now when I run django-admin.py I get the following error. How can I resolve this?

        Traceback (most recent call last):
          File "C:\Python25\Lib\site-packages\django\bin\django-admin.py", line 2, in <module>
            from django.core import management
        ImportError: No module named django.core
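
    A quick diagnostic sketch, added for illustration rather than taken from the question: run the following with the same interpreter to see which Django installation, if any, it can actually import. One common cause of this traceback is a leftover django-admin.py from the old install being picked up while the django package itself is missing or only half-removed.

        import sys

        try:
            import django
            print("django imported from: " + django.__file__)
        except ImportError:
            print("No django package importable from this interpreter.")
            print("sys.path searched:")
            for entry in sys.path:
                print("  " + entry)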

    Read the article

  • Django Testing: Faking User Creation

    - by Ygam
    I want to write this test better:

        def test_profile_created(self):
            self.client.post(reverse('registration_register'), data={
                'username': 'ygam',
                'email': '[email protected]',
                'password1': 'ygam',
                'password2': 'ygam'
            })
            """ Test if a profile is created on save """
            user = User.objects.get(username='ygam')
            self.assertTrue(UserProfile.objects.filter(user=user).exists())

    I just came upon this code in the django-registration tests that does not actually "create" the user:

        def test_registration_signal(self):
            def receiver(sender, **kwargs):
                self.failUnless('user' in kwargs)
                self.assertEqual(kwargs['user'].username, 'bob')
                self.failUnless('request' in kwargs)
                self.failUnless(isinstance(kwargs['request'], WSGIRequest))
                received_signals.append(kwargs.get('signal'))

            received_signals = []
            signals.user_registered.connect(receiver, sender=self.backend.__class__)
            self.backend.register(_mock_request(), username='bob',
                                  email='[email protected]', password1='secret')
            self.assertEqual(len(received_signals), 1)
            self.assertEqual(received_signals, [signals.user_registered])

    However, he used a custom function for this, "_mock_request":

        class _MockRequestClient(Client):
            def request(self, **request):
                environ = {
                    'HTTP_COOKIE': self.cookies,
                    'PATH_INFO': '/',
                    'QUERY_STRING': '',
                    'REMOTE_ADDR': '127.0.0.1',
                    'REQUEST_METHOD': 'GET',
                    'SCRIPT_NAME': '',
                    'SERVER_NAME': 'testserver',
                    'SERVER_PORT': '80',
                    'SERVER_PROTOCOL': 'HTTP/1.1',
                    'wsgi.version': (1, 0),
                    'wsgi.url_scheme': 'http',
                    'wsgi.errors': self.errors,
                    'wsgi.multiprocess': True,
                    'wsgi.multithread': False,
                    'wsgi.run_once': False,
                    'wsgi.input': None,
                }
                environ.update(self.defaults)
                environ.update(request)
                request = WSGIRequest(environ)
                # We have to manually add a session since we'll be bypassing
                # the middleware chain.
                session_middleware = SessionMiddleware()
                session_middleware.process_request(request)
                return request

        def _mock_request():
            return _MockRequestClient().request()

    However, it may be too long a function for my needs. I want to be able to somehow "fake" the account creation. I don't have much experience with mocks and stubs, so any help would do. Thanks!
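
    One shorter alternative, sketched below for illustration rather than as an answer from the thread: build the request with django.test.RequestFactory instead of a hand-rolled client, and stub out real user creation with the mock library so nothing touches the database. The register() call and assertions are placeholders based on the question; everything else is an assumption.

        from django.test import TestCase, RequestFactory
        try:
            from unittest.mock import patch, MagicMock   # Python 3.3+
        except ImportError:
            from mock import patch, MagicMock            # third-party 'mock' package

        class FakeUserCreationTest(TestCase):
            def test_register_without_touching_the_db(self):
                # RequestFactory builds a WSGIRequest directly, without running
                # middleware or resolving URLs, so it replaces _mock_request().
                request = RequestFactory().post('/accounts/register/')
                request.session = MagicMock()  # fake session, if the backend needs one

                # Replace real user creation with a stand-in.
                with patch('django.contrib.auth.models.User.objects.create_user') as create_user:
                    create_user.return_value = MagicMock(username='ygam')
                    # Call the code under test here, for example:
                    #   self.backend.register(request, username='ygam', ...)
                    # and then assert against the mock instead of the database:
                    #   self.assertTrue(create_user.called)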

    Read the article

  • Testing sample code in python modules

    - by Andrew Walker
    I'm in the process of writing a Python module that includes some samples. These samples aren't unit tests, and they are too long and complex to be doctests. I'm interested in best practices for automatically checking that these samples run. My current project layout is pretty standard, except that there is an extra top-level makefile with build, install, unittest, coverage and profile targets, which delegate responsibility to setup.py and nose as required.

        projectname/
            Makefile
            README
            setup.py
            samples/
                foo-sample
                foobar-sample
            projectname/
                __init__.py
                foo.py
                bar.py
            tests/
                test-foo.py
                test-bar.py

    I've considered adding a sampletest module, or adding nose.tools.istest decorators to the entry-point functions of the samples, but for a small number of samples these solutions sound a bit ugly. This question is similar to http://stackoverflow.com/questions/301365/automatically-unit-test-example-code, but I assume Python best practices will differ from C#'s.
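
    One approach, sketched here for illustration under the layout from the question: a single test module that runs every script in samples/ as a subprocess and fails if any of them exits non-zero. Path handling and naming are assumptions.

        import os
        import subprocess
        import sys
        import unittest

        SAMPLES_DIR = os.path.join(os.path.dirname(__file__), '..', 'samples')

        class SampleScriptsTest(unittest.TestCase):
            def test_all_samples_run_cleanly(self):
                for name in sorted(os.listdir(SAMPLES_DIR)):
                    path = os.path.join(SAMPLES_DIR, name)
                    if not os.path.isfile(path):
                        continue
                    proc = subprocess.Popen([sys.executable, path],
                                            stdout=subprocess.PIPE,
                                            stderr=subprocess.PIPE)
                    out, err = proc.communicate()
                    self.assertEqual(proc.returncode, 0,
                                     "%s exited non-zero:\n%s" % (name, err))

        if __name__ == '__main__':
            unittest.main()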

    Read the article

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered:

    - The software development process (requirements, design, coding, testing, maintenance)
    - Testing roles (who performs each step in the process)
    - Testing methods (white box and black box)
    - Testing levels (unit testing, integration testing, etc.)
    - Testing process (Agile, waterfall, spiral)
    - Testing tools (simulators, fixtures, and reporting software)
    - Testing of embedded systems

    The goal here is to find an easy-to-read book that summarizes the best practices for ensuring software quality in an embedded system. It seems most texts cover the testing of application software, where it is simpler to generate automated test cases or run a debugger. A book that provided solutions for improving quality in a system where the tests must be performed manually, and therefore minimized, would be ideal.

    Read the article

  • Learning about tests for junior programmers

    - by RHaguiuda
    I'm not sure if it's okay to ask this on Stack Overflow. I've been reading a lot about tests, unit tests, test frameworks, mocks and so on, but as a junior programmer I don't know anything about tests, not even where to start! Can anyone explain to young programmers about tests: how they're run, where and what to test, and what unit testing, integration testing and automated tests are? How much should be tested? And more importantly: how much testing is enough? I believe this would be very helpful. If possible, indicate a few books about these subjects too. Thanks.
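
    As an illustration of the most basic idea being asked about, here is what a unit test looks like in Python's unittest (the add() function is invented for the example; every xUnit-style framework follows the same arrange/act/assert shape):

        import unittest

        def add(a, b):
            # The "unit" under test: one small, self-contained piece of code.
            return a + b

        class AddTests(unittest.TestCase):
            def test_adds_two_positive_numbers(self):
                self.assertEqual(add(2, 3), 5)

            def test_adds_a_negative_number(self):
                self.assertEqual(add(2, -3), -1)

        if __name__ == '__main__':
            unittest.main()   # runs both tests and reports pass/fail for each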

    Read the article

  • Testing complex entities

    - by Carlos
    I've got a C# form with various controls on it. The form controls an ongoing process, and there are many, many aspects that need to be right for the program to run correctly. Each part can be unit tested (for instance, loading some coefficients, drawing some diagnostics), but I often run into problems that are best described with an example: "If I click here, then here, then change this, then re-open the form, then click here, it crashes or produces an error." I've tried my best to use common code-organisation ideas (inheritance, DRY, separation of concerns), but there never seems to be a way to test every single path, and inevitably a form with several controls will have a huge number of ways to execute. What can I read (preferably online) that addresses this kind of issue, and is there a (non-generic) term for it? This isn't a specific problem I'm having, but one that creeps up on me, especially with WinForms.

    Read the article

  • How to get started with testing (jMock)

    - by London
    Hello, I'm trying to learn how to write tests. I'm also learning Java. I was told I should learn/use/practice jMock, and I've found some articles online that help to a certain extent, like:

        http://www.theserverside.com/news/1365050/Using-JMock-in-Test-Driven-Development
        http://jeantessier.com/SoftwareEngineering/Mocking.html#jMock

    Most of the articles I found were about test-driven development: write tests first, then write code to make the tests pass. I'm not looking for that at the moment; I'm trying to write tests for already existing code with jMock. The official documentation is vague, to say the least, and just too hard for me. Does anybody have a better way to learn this? Good books/links/tutorials would help me a lot. Thank you.

    EDIT - more concrete question: from the article http://jeantessier.com/SoftwareEngineering/Mocking.html#jMock I tried to mock this simple class:

        import java.util.Map;

        public class Cache {
            private Map<Integer, String> underlyingStorage;

            public Cache(Map<Integer, String> underlyingStorage) {
                this.underlyingStorage = underlyingStorage;
            }

            public String get(int key) {
                return underlyingStorage.get(key);
            }

            public void add(int key, String value) {
                underlyingStorage.put(key, value);
            }

            public void remove(int key) {
                underlyingStorage.remove(key);
            }

            public int size() {
                return underlyingStorage.size();
            }

            public void clear() {
                underlyingStorage.clear();
            }
        }

    Here is how I tried to create a test/mock:

        public class CacheTest extends TestCase {
            private Mockery context;
            private Map mockMap;
            private Cache cache;

            @Override
            @Before
            public void setUp() {
                context = new Mockery() {
                    {
                        setImposteriser(ClassImposteriser.INSTANCE);
                    }
                };
                mockMap = context.mock(Map.class);
                cache = new Cache(mockMap);
            }

            public void testCache() {
                context.checking(new Expectations() {{
                    atLeast(1).of(mockMap).size();
                    will(returnValue(int.class));
                }});
            }
        }

    It passes the test but basically does nothing. What I wanted was to create a map and check its size, and, you know, work through some variations to get a grip on this. I understand better through examples, so what else could I test here? Any other exercises would help me a lot. Thanks.
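
    Not jMock, but the same idea shown in Python with unittest.mock, as a language-neutral illustration of what a useful mock-based test asserts: that Cache delegates to the underlying map, rather than re-testing the map itself. The Python Cache class below is a cut-down translation of the one in the question, written purely for the example.

        from unittest import TestCase, main
        from unittest.mock import MagicMock

        class Cache(object):
            def __init__(self, underlying_storage):
                self._storage = underlying_storage

            def get(self, key):
                return self._storage.get(key)

        class CacheTest(TestCase):
            def test_get_delegates_to_underlying_storage(self):
                storage = MagicMock()
                storage.get.return_value = 'hello'
                cache = Cache(storage)

                self.assertEqual(cache.get(42), 'hello')   # value came from the mock
                storage.get.assert_called_once_with(42)    # and delegation happened

        if __name__ == '__main__':
            main()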

    Read the article

  • Simulating Ajax failures for QA testing

    - by womp
    Our first ASP.Net MVC/jQuery product is about to go to QA, and we're looking for a way for our QA guys to easily simulate bad Ajax requests (without modifying the application code). A typical integration/UI test plan might be:

    - Load page, click button "DoStuff"
    - "DoStuff" fails
    - Attempt button "DoStuff" again
    - "DoStuff" succeeds
    - Verify application state

    This is a simple test case; there will be cases with multiple failures and successes interspersed. Aside from "unplug your network cable", I'm looking for an easy way for our guys to simulate intermittent bad server responses. I'm open to any ideas, so I won't go into too many details about our application setup or dependencies. How have you handled this?
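
    One tool-agnostic option, sketched here for illustration (not something proposed in the question): point the browser at a small local proxy that forwards requests to the real server but turns a configurable fraction of responses into HTTP 500s. The sketch uses Python's standard library, handles only GET for brevity, and uses placeholder host/port values.

        import random
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        BACKEND = 'http://localhost:8080'   # the real application under test
        FAILURE_RATE = 0.3                  # roughly 30% of requests fail

        class FlakyProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                if random.random() < FAILURE_RATE:
                    self.send_response(500)
                    self.end_headers()
                    self.wfile.write(b'Injected failure for QA testing')
                    return
                with urllib.request.urlopen(BACKEND + self.path) as upstream:
                    body = upstream.read()
                self.send_response(upstream.status)
                self.send_header('Content-Type',
                                 upstream.headers.get('Content-Type', 'text/html'))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == '__main__':
            HTTPServer(('localhost', 8000), FlakyProxy).serve_forever()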

    Read the article

  • JRuby-friendly method for parallel-testing Rails app

    - by Toby Hede
    I am looking for a system to parallelise a large suite of tests in a Ruby on Rails app (using RSpec and Cucumber) that works under JRuby. Cucumber is actually not too bad, but the full RSpec suite currently takes nearly 20 minutes to run. The systems I can find (hydra, parallel-test) look like they use forking, which isn't the ideal solution for the JRuby environment.

    Read the article

  • Tools for website/web application load testing?

    - by zarko.susnjar
    Before going into production, our client demands actual numbers for how many users our web application can handle. We have all kinds of features implemented, including asset management (file uploads/downloads), document import/export, various statistics, web services, etc. Which tool (or set of tools) could do this?

    Application details:
    - XHTML/jQuery
    - ColdFusion 8
    - SQL Server 2008
    - Windows Server 2008

    Read the article

  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite. The Oracle Application Testing Suite (OATS) is built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite is comprised of:

    1. Oracle Load Testing, for scalability, performance, and load testing
    2. Oracle Functional Testing, for automated functional and regression testing
    3. Oracle Test Manager, for test process management, test execution, and defect tracking

    Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads

    Read the article

  • Is there a name for a testing method where you compare a set of very different designs?

    - by DVK
    "A/B testing" is defined as "a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates". The point here, of course, is to know which small single-variable changes are more optimal, with the goal of finding the local optimum. However, one can also envision a somewhat related but different scenario for testing the response rate of major re-designs: take a baseline control design, take one or more completely different designs, and run test samples on those redesigns to compare response rates. As a practical but contrived example, imagine testing a set of designs for the same website, one being minimalist "googly" design, one being cluttered "Amazony" design, and one being an artsy "designy" design (e.g. maximum use of design elements unlike Google but minimal simultaneously presented information, like Google but unlike Amazon) Is there an official name for such testing? It's definitely not A/B testing, since the main component of it (finding local optimum by testing single-variable small changes that can be attributed to response shift) is not present. This is more about trying to compare a set of local optimums, and compare to see which one works better as a global optimum. It's not a multivriable, A/B/N or any other such testing since you don't really have specific variables that can be attributed, just different designs.

    Read the article

  • Unit test SHA256 wrapper queries

    - by Sam Leach
    I am just beginning to write unit tests, so please bear with me. I have the following SHA256 wrapper:

        public static string SHA256(string plainText)
        {
            StringBuilder sb = new StringBuilder();
            SHA256CryptoServiceProvider provider = new SHA256CryptoServiceProvider();
            var hashedBytes = provider.ComputeHash(Encoding.UTF8.GetBytes(plainText));
            for (int i = 0; i < hashedBytes.Length; i++)
            {
                sb.Append(hashedBytes[i].ToString("x2").ToLower());
            }
            return sb.ToString();
        }

    Do I want to be testing it? If so, what do you recommend? My thought process is as follows:

    - What logic is there here? The answer is my for loop and ToString("x2"), so from my understanding that is the part I want to be testing.
    - I can assume Encoding.UTF8.GetBytes(plainText) works. Correct assumption?
    - I can assume SHA256CryptoServiceProvider.ComputeHash() works. Correct assumption?
    - I want to be testing only my logic, which in this case is limited to the printing of the hex-encoded hash. Correct?

    Thanks.
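
    One common answer, illustrated here with a Python sketch rather than C#: test the wrapper as a whole against well-known SHA-256 test vectors, which exercises the hex-encoding loop without trying to separate "my logic" from the framework calls. The sha256_hex function is a stand-in for the wrapper above.

        import hashlib
        import unittest

        def sha256_hex(plain_text):
            # Python stand-in for the C# wrapper: UTF-8 encode, hash, lowercase hex.
            return hashlib.sha256(plain_text.encode('utf-8')).hexdigest()

        class Sha256WrapperTest(unittest.TestCase):
            def test_known_vectors(self):
                # Widely published SHA-256 test vectors.
                self.assertEqual(
                    sha256_hex('abc'),
                    'ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad')
                self.assertEqual(
                    sha256_hex(''),
                    'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855')

            def test_output_shape(self):
                digest = sha256_hex('anything')
                self.assertEqual(len(digest), 64)          # 32 bytes rendered as hex
                self.assertEqual(digest, digest.lower())   # wrapper promises lowercase

        if __name__ == '__main__':
            unittest.main()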

    Read the article

  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end via JSON. Likewise, the app also spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app: most of the real code resides in the models, where the data translation happens. What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from the SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data, so that my tests can anticipate the kind of data that the app should be returning when tests run.

    Read the article

  • Django - need to split a table across multiple locations [closed]

    - by MikeRand
    Hi all, I have a Django project to track our company's restructuring projects. Here's the very simple model:

        class Project(models.Model):
            code = models.CharField(max_length=30)
            description = models.CharField(max_length=60)

        class Employee(models.Model):
            project = models.ForeignKey(Project)
            employee_id = models.IntegerField()
            country_code = models.CharField(max_length=3)
            severance = models.IntegerField()

    Due to regulations in some European countries, I'm not allowed to keep employee-level severance information in a database that sits on a box outside of that country. In Django, how do I manage the need to have my Employee table split across multiple databases, based on an Employee attribute (i.e. country_code), in a way that doesn't impact anything else in the project (e.g. views, templates, admin)? Thanks, Mike
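
    For orientation, here is a heavily hedged sketch (not an answer from the thread) of the direction Django's multi-database support points in: define one DATABASES entry per country and add a database router that steers Employee reads and writes by country_code. The alias names, the country mapping, and the reliance on the 'instance' hint are all assumptions, and real per-row sharding has sharp edges (cross-database ForeignKeys and cross-country queries are not handled for you).

        # settings.py would list this router, e.g.
        #   DATABASE_ROUTERS = ['myproject.routers.EmployeeByCountryRouter']

        class EmployeeByCountryRouter(object):
            country_dbs = {'DEU': 'db_de', 'FRA': 'db_fr'}   # invented aliases

            def _db_for_instance(self, model, hints):
                if model.__name__ != 'Employee':
                    return None                 # other models fall through to 'default'
                instance = hints.get('instance')
                if instance is not None:
                    return self.country_dbs.get(instance.country_code, 'default')
                return None

            def db_for_read(self, model, **hints):
                return self._db_for_instance(model, hints)

            def db_for_write(self, model, **hints):
                return self._db_for_instance(model, hints)

            def allow_relation(self, obj1, obj2, **hints):
                return True                     # permissive, for the sketch only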

    Read the article

  • Use the Django ORM in a standalone script (again)

    - by Rishabh Manocha
    I'm trying to use the Django ORM in some standalone screen-scraping scripts. I know this question has been asked before, but I'm unable to figure out a good solution for my particular problem. I have a Django project with defined models. What I would like to do is use these models and the ORM in my scraping script. My directory structure is something like this:

        project/
            scrape/                # scraping scripts
                ...
                test.py
            web/
                django_project/
                    settings.py
                    ...            # Django files

    I tried doing the following in project/scrape/test.py:

        print os.path.join(os.path.abspath('..'), 'web', 'django_project')
        sys.path.append(os.path.join(os.path.abspath('..'), 'web', 'django_project'))
        print sys.path
        print "-------"
        os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'
        #print os.environ
        from django_project.myapp.models import MyModel
        print MyModel.objects.count()

    However, I get an ImportError when I try to run test.py:

        Traceback (most recent call last):
          File "test.py", line 12, in <module>
            from django_project.myapp.models import MyModel
        ImportError: No module named django_project.myapp.models

    One solution I found to work around this problem is to create a symbolic link to ../web/govcheck in the scrape folder:

        :scrape rmanocha$ ln -s ../web/govcheck ./govcheck

    With this, I can then run test.py just fine. However, this seems like a hack and, more importantly, is not very portable (I will have to create this symbolic link everywhere I run this code). So, I was wondering if anyone has any better solutions for my problem?
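
    A sketch of the usual fix, added here for illustration under the layout shown in the question: put the parent of the django_project package (the web directory) on sys.path, because "import django_project.myapp.models" needs to locate the package django_project itself, not its contents. Module names are the question's; on recent Django versions you would also call django.setup() after setting DJANGO_SETTINGS_MODULE.

        import os
        import sys

        SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
        WEB_DIR = os.path.abspath(os.path.join(SCRIPT_DIR, '..', 'web'))  # contains django_project/

        sys.path.insert(0, WEB_DIR)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'

        # import django; django.setup()   # needed on Django 1.7+ only

        from django_project.myapp.models import MyModel

        print(MyModel.objects.count())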

    Read the article
