Search Results

Search found 13452 results on 539 pages for 'django testing'.


  • Why don't these class attributes register?

    - by slypete
    I have a factory method that generates django form classes like so: def get_indicator_form(indicator, patient): class IndicatorForm(forms.Form): #These don't work! indicator_id = forms.IntegerField(initial=indicator.id, widget=forms.HiddenInput()) patient_id = forms.IntegerField(initial=patient.id, widget=forms.HiddenInput()) def __init__(self, *args, **kwargs): forms.Form.__init__(self, *args, **kwargs) self.indicator = indicator self.patient = patient #These do! setattr(IndicatorForm, 'indicator_id', forms.IntegerField(initial=indicator.id, widget=forms.HiddenInput())) setattr(IndicatorForm, 'patient_id', forms.IntegerField(initial=patient.id, widget=forms.HiddenInput())) for field in indicator.indicatorfield_set.all(): setattr(IndicatorForm, field.name, copy(field.get_field_type())) return type('IndicatorForm', (forms.Form,), dict(IndicatorForm.__dict__)) I'm trying to understand why the top form field declarations don't work, but the setattr method below does work. I'm fairly new to python, so I suspect it's some language feature that I'm misunderstanding. Can you help me understand why the field declarations at the top of the class don't add the fields to the class? In a possibly related note, when these classes are instantiated, instance.media returns nothing even though some fields have widgets with associated media. Thanks, Pete
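    A pattern that is often suggested for this, sketched below on the assumption that the indicator/patient objects behave as in the question: fields declared in the class body are moved by Django's form metaclass into base_fields rather than kept as plain class attributes, which is usually what makes hand-rolled type()/__dict__ reconstruction behave unexpectedly. Adding dynamic fields to base_fields and returning the class directly sidesteps the problem.

    ```python
    # Minimal sketch, not the poster's exact code; the indicator/patient APIs are assumed.
    from django import forms

    def get_indicator_form(indicator, patient):
        class IndicatorForm(forms.Form):
            indicator_id = forms.IntegerField(initial=indicator.id,
                                              widget=forms.HiddenInput())
            patient_id = forms.IntegerField(initial=patient.id,
                                            widget=forms.HiddenInput())

        # The metaclass has collected the declared fields into base_fields,
        # so extra per-indicator fields are attached by updating that dict.
        for field in indicator.indicatorfield_set.all():
            IndicatorForm.base_fields[field.name] = field.get_field_type()
        return IndicatorForm
    ```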


  • Search over multiple fields

    - by schneck
    Hi there, I think I don't understand django-haystack properly: I have a data model containing several fields, and I would like to have two of them searched: class UserProfile(models.Model): user = models.ForeignKey(User, unique=True, default=None) twitter_account = models.CharField(max_length=50, blank=False) My search index settings: class UserProfileIndex(SearchIndex): text = CharField(document=True, model_attr='user') twitter_account = CharField(model_attr='twitter_account') def get_queryset(self): """Used when the entire index for model is updated.""" return UserProfile.objects.all() But when I perform a search, only the field "username" is searched; "twitter_account" is ignored. When I select the search results via dbshell, the objects contain the correct values for "user" and "twitter_account", but the result page shows a "no results": {% if query %} <h3>Results</h3> {% for result in page.object_list %} <p> <a href="{{ result.object.get_absolute_url }}">{{ result.object.id }}</a> </p> {% empty %} <p>No results</p> {% endfor %} {% endif %} Any ideas?
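    A sketch of the usual fix, assuming django-haystack and the models above: a default haystack query only matches against the document field (text here), so anything that should be searchable has to be folded into it, either through use_template or a prepare_text hook like the one below.

    ```python
    # Sketch only; the UserProfile import path is a placeholder.
    from haystack.indexes import SearchIndex, CharField
    from myapp.models import UserProfile

    class UserProfileIndex(SearchIndex):
        text = CharField(document=True)
        twitter_account = CharField(model_attr='twitter_account')

        def prepare_text(self, obj):
            # Everything that should be findable goes into the document field.
            return "%s %s" % (obj.user.username, obj.twitter_account)

        def get_queryset(self):
            return UserProfile.objects.all()
    ```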


  • Cache for everybody except staff members.

    - by Oli
    I have a django site where I want to stick an "admin bar" along the top of every non-admin page for staff members. It would contain useful things like page editing tools, etc. The problem comes from me using the @cache_page decorator on lots of pages. If a normal user hits a page, the cached version comes up without the admin bar (even for admin users) and if an admin hits the page first, normal users see the admin bar. I could tediously step through the templates, adding regional cache blocks but there are a lot of templates, and life is altogether too short. Ideally, there would be a way of telling the caching to ignore cache get/set requests from admin users... But I don't know how to best implement that. How would you tackle this problem?
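    One approach that avoids per-template cache blocks, sketched below as a minimal example rather than a drop-in utility: wrap cache_page in a decorator that bypasses the cache entirely for staff, so staff requests never read or populate the cached copy while anonymous traffic keeps the full-page cache.

    ```python
    from functools import wraps
    from django.views.decorators.cache import cache_page

    def cache_page_unless_staff(timeout):
        """Cache the page for ordinary visitors; staff always get a fresh render."""
        def decorator(view_func):
            cached_view = cache_page(timeout)(view_func)

            @wraps(view_func)
            def wrapper(request, *args, **kwargs):
                if request.user.is_staff:      # AnonymousUser.is_staff is False
                    return view_func(request, *args, **kwargs)
                return cached_view(request, *args, **kwargs)
            return wrapper
        return decorator

    # usage: decorate views with @cache_page_unless_staff(60 * 15)
    # instead of @cache_page(60 * 15)
    ```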


  • How do I prevent a ManyToManyField('self') from linking an object to itself?

    - by dyve
    Consider this model (simplified for this question): class SecretAgentName(models.Model): name = models.CharField(max_length=100) aliases = ManyToManyField('self') I have three names, "James Bond", "007" and "Jason Bourne". "James Bond" and "007" are aliases of each other. This works exactly like I want it to, except for the fact that every instance can also be an alias of itself. This I want to prevent. So, there can be many SecretAgentNames, all can be aliases of each other as long as "James Bond" does not show up as an alias for "James Bond". Can I prevent this in the model definition? If not, can I prevent it anywhere else, preferably so that the Django Admin understands it?
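    There is no declarative option on ManyToManyField for this, but one place to enforce it is an m2m_changed signal handler (Django 1.2+), sketched below; for a friendlier message in the Django admin the same check can also live in a custom form's clean method.

    ```python
    # Sketch; the SecretAgentName import path is a placeholder.
    from django.core.exceptions import ValidationError
    from django.db.models.signals import m2m_changed
    from django.dispatch import receiver
    from myapp.models import SecretAgentName

    @receiver(m2m_changed, sender=SecretAgentName.aliases.through)
    def forbid_self_alias(sender, instance, action, pk_set, **kwargs):
        # Abort the write if the name is about to be added as its own alias.
        if action == 'pre_add' and pk_set and instance.pk in pk_set:
            raise ValidationError("A name cannot be an alias of itself.")
    ```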


  • Removing a result from Queryset

    - by Enrico
    Is there a simple way to discard/remove the last result in a queryset without affecting the db? I am trying to paginate results in Django, but don't know the total number of objects for a given query. I was planning on using next/previous or older/newer links, so I only need to know if this is the first and/or last page. First is easy to check. To check for the last page I can compare the number of results with the pagesize or make a second query. The first method fails to detect the last page when the number of results in the last set equals the pagesize (ie 100 records get broken into 10 pages with the last page containing exactly 10 results) and I would like to avoid making a second query. My current thought is that I should fetch pagesize + 1 results from the db. If the queryset length equals 11, I know this is not the last page and I want to discard the last result in the queryset before passing the queryset to the template.
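    Nothing needs to be removed from the queryset itself: slice off pagesize + 1 rows, use the extra row purely as a "there is a next page" flag, and hand only the first pagesize items to the template. A minimal sketch:

    ```python
    PAGE_SIZE = 10

    def get_page(queryset, page_number):
        start = (page_number - 1) * PAGE_SIZE
        # Slicing is lazy, so only PAGE_SIZE + 1 rows are fetched from the database.
        rows = list(queryset[start:start + PAGE_SIZE + 1])
        has_next = len(rows) > PAGE_SIZE
        return rows[:PAGE_SIZE], has_next
    ```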


  • Querying distinct values through a related model

    - by matheus.emm
    Hi! I have a simple one-to-many (models.ForeignKey) relationship between two of my model classes: class TeacherAssignment(models.Model): # ... some fields year = models.CharField(max_length=4) class LessonPlan(models.Model): teacher_assignment = models.ForeignKey(TeacherAssignment) # ... other fields I'd like to query my database to get the set of distinct years of TeacherAssignments related to at least one LessonPlan. I'm able to get this set using Django query API if I ignore the relation to LessonPlan: class TeacherAssignment(models.Model): # ... model's fields def get_years(self): year_values = self.objects.all().values_list('year').distinct().order_by('-year') return [yv[0] for yv in year_values if len(yv[0]) == 4] Unfortunately I don't know how to express the condition that the TeacherAssignment must be related to at least one LessonPlan. Any ideas how I'd be able to write the query? Thanks in advance.
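    The "related to at least one LessonPlan" condition can be expressed as a reverse-relation lookup, so the whole thing stays in one query. A sketch, assuming the models above:

    ```python
    from myapp.models import TeacherAssignment  # placeholder import path

    years = (TeacherAssignment.objects
             .filter(lessonplan__isnull=False)   # at least one related LessonPlan
             .values_list('year', flat=True)
             .distinct()
             .order_by('-year'))
    ```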


  • Passing session data to ModelForm inside of ModelAdmin

    - by theactiveactor
    I'm trying to initialize the form attribute for MyModelAdmin class inside an instance method, as follows: class MyModelAdmin(admin.ModelAdmin): def queryset(self, request): MyModelAdmin.form = MyModelForm(request.user) My goal is to customize the editing form of MyModelForm based on the current session. When I try this however, I keep getting an error (shown below). Is this the proper place to pass session data to ModelForm? If so, then what may be causing this error? TypeError at ... Exception Type: TypeError Exception Value: issubclass() arg 1 must be a class Exception Location: /usr/lib/pymodules/python2.6/django/forms/models.py in new, line 185
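    The issubclass() error comes from assigning a form instance where the admin expects a form class. The usual workaround is to build a class on the fly in get_form so it can close over the request; a sketch using the names from the question:

    ```python
    from django.contrib import admin

    class MyModelAdmin(admin.ModelAdmin):
        def get_form(self, request, obj=None, **kwargs):
            base = super(MyModelAdmin, self).get_form(request, obj, **kwargs)

            class RequestAwareForm(base):
                def __init__(self, *args, **inner_kwargs):
                    super(RequestAwareForm, self).__init__(*args, **inner_kwargs)
                    # Session/user data is now available on every form instance.
                    self.current_user = request.user

            return RequestAwareForm
    ```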


  • Is it ok to hardcode dynamic links in a permanent view?

    - by meder
    Let's say I wanted to showcase 2-3 clickable buttons on my homepage which will be there permanently. These are links to the css, html, and javascript tag listing pages. Is it fine to just hardcode href=/tags/css and href=/tags/html right in my django templates/view? I won't change them for at least a year or so, meaning I don't think I need to add a column to the tags table to distinguish them - is this common or should I try to make it somewhat dynamic? These tags are in a table but so are 1000 other tags.


  • Saving related model objects

    - by iHeartDucks
    I have two related models (one to many) in my django app and When I do something like this ObjBlog = Blog() objBlog.name = 'test blog' objEntry1 = Entry() objEntry1.title = 'Entry one' objEntry2 = Entry() objEntry2.title = 'Entry Two' objBlog.entry_set.add(objEntry1) objBlog.entry_set.add(objEntry2) I get an error which says "null value in column and it violates the foreign key not null constraint". None of my model objects have been saved. Do I have to save the "objBlog" before I could set the entries? I was hoping I could call the save method on objBlog to save it all. NOTE: I am not creating a blog engine and this is just an example.
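    Yes: the parent row needs a primary key before entries can reference it, which is why the NOT NULL constraint fires. A minimal sketch of the usual ordering, assuming the ForeignKey on Entry is named blog:

    ```python
    from myapp.models import Blog, Entry  # placeholder import path

    blog = Blog(name='test blog')
    blog.save()                            # parent needs a primary key first

    entry1 = Entry(title='Entry one', blog=blog)
    entry1.save()
    entry2 = Entry(title='Entry Two', blog=blog)
    entry2.save()

    # or, once the blog is saved: blog.entry_set.create(title='Entry one')
    ```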


  • Select distinct users with referrals

    - by Mark
    I have a bunch of Users. Since Django doesn't really let me extend the default User model, they each have Profiles. The Profiles have a referred_by field (a FK to User). I'm trying to get a list of Users with >= 1 referral. Here's what I've got so far: Profile.objects.filter(referred_by__isnull=False).values_list('referred_by', flat=True) Which gives me a list of IDs of the users who have referrals... but I need it to be distinct, and I want the User object, not their ID. Or better yet, it would be nice if it could return the number of referrals a user has. Any ideas?
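    annotate() can return both things at once: distinct users and their referral counts. A sketch, assuming Profile.referred_by was declared with related_name='referrals' (some explicit related_name is needed anyway, since two ForeignKeys to User would otherwise clash):

    ```python
    from django.contrib.auth.models import User
    from django.db.models import Count

    referrers = (User.objects
                 .annotate(referral_count=Count('referrals'))
                 .filter(referral_count__gte=1))

    # each user in referrers now carries a .referral_count attribute
    ```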


  • Overwrite queryset which builds filter sidebar

    - by cw
    Hi, I'm writing a hockey database/manager. So I have the following models: class Team(models.Model): name = models.CharField(max_length=60) class Game(models.Model): home_team = models.ForeignKey(Team,related_name='home_team') away_team = models.ForeignKey(Team,related_name='away_team') class SeasonStats(models.Model): team = models.ForeignKey(Team) Ok, so my problem is the following. There are a lot of teams, but Stats are just managed for my Club. So if I use "list_display" in the admin backend, I'd like to modify/overwrite the queryset which builds the sidebar for filtering, to just display our home teams as a filter option. Is this somehow possible in Django? I already made a custom form like this class SeasonPlayerStatsAdminForm(forms.ModelForm): team = forms.ModelChoiceField(Team.objects.filter(club__home=True)) So now just the filtering is missing. Any ideas?
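    On Django 1.4 or later the sidebar choices can be restricted with a custom SimpleListFilter rather than by overriding the built-in filter machinery; a sketch reusing the club__home lookup from the form above:

    ```python
    from django.contrib import admin
    from myapp.models import Team, SeasonStats  # placeholder import path

    class HomeTeamListFilter(admin.SimpleListFilter):
        title = 'team'
        parameter_name = 'team'

        def lookups(self, request, model_admin):
            # Only the club's own teams appear as sidebar filter options.
            return [(t.pk, t.name) for t in Team.objects.filter(club__home=True)]

        def queryset(self, request, queryset):
            if self.value():
                return queryset.filter(team__pk=self.value())
            return queryset

    class SeasonStatsAdmin(admin.ModelAdmin):
        list_filter = (HomeTeamListFilter,)

    admin.site.register(SeasonStats, SeasonStatsAdmin)
    ```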


  • Enable export to XML via HTTP on a large number of models with child relations

    - by Vasil
    I've a large number of models (120+) and I would like to let users of my application export all of the data from them in XML format. I looked at django-piston, but I would like to do this with minimum code. Basically I'd like to have something like this: GET /export/applabel/ModelName/ would stream all instances of ModelName in applabel together with its tree of related objects. I'd like to do this without writing code for each model. What would be the best way to do this?
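    Django's serialization framework can do this generically, so a single view covers all 120+ models; a sketch is below. Note that serialize() does not follow relations on its own, so streaming the full tree of related objects would still need a custom serializer or explicit traversal.

    ```python
    from django.core import serializers
    from django.db.models import get_model   # newer Django: django.apps.apps.get_model
    from django.http import HttpResponse

    def export_model(request, applabel, model_name):
        model = get_model(applabel, model_name)
        xml = serializers.serialize('xml', model.objects.all())
        return HttpResponse(xml, content_type='application/xml')
    ```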


  • Errors in Decimal Calcs within def clean method?

    - by allanhenderson
    I'm attempting a few simple calculations in a def clean method following validation (basically spitting out a euro conversion of retrieved uk product price on the fly). I keep getting a TypeError. Full error reads: Cannot convert {'product': , 'invoice': , 'order_discount': Decimal("0.00"), 'order_price': {...}, 'order_adjust': None, 'order_value': None, 'DELETE': False, 'id': 92, 'quantity': 8} to Decimal so I guess django is passing through the entire cleaned_data form to Decimal method. Not sure where I'm going wrong - the code I'm working with is: def clean_order_price(self): cleaned_data = self.cleaned_data data = self.data order_price = cleaned_data.get("order_price") if not order_price: try: existing_price = ProductCostPrice.objects.get(supplier=data['supplier'], product_id=cleaned_data['product'], is_latest=True) except ProductCostPrice.DoesNotExist: existing_price = None if not existing_price: raise forms.ValidationError('No match found, please enter new price') else: if data['invoice_type'] == 1: return existing_price.cost_price_gross elif data['invoice_type'] == 2: exchange = EuroExchangeRate.objects.latest('exchange_date') calc = exchange.exchange_rate * float(existing_price.cost_price_gross) calc = Decimal(str(calc)) return calc return cleaned_data If the invoice is of type 2 (a euro invoice) then the system should grab the latest exchange rate and apply that to the matching UK pound price pulled through to get euro result. Should performing a decimal conversion be a problem within def clean method? Thanks
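    The traceback is the clue: a clean_<field> method must return the cleaned value for that single field, and the final return cleaned_data hands the whole dict back, which Django then tries to coerce into a Decimal. Keeping the arithmetic in Decimal also avoids the float round-trip. A stripped-down sketch with hypothetical stand-in values:

    ```python
    from decimal import Decimal

    # Stand-in values for the objects looked up inside clean_order_price()
    exchange_rate = Decimal('0.85')        # e.g. EuroExchangeRate.exchange_rate
    cost_price_gross = Decimal('120.50')   # e.g. ProductCostPrice.cost_price_gross

    def cleaned_order_price(order_price):
        """Return the single field's value; never the whole cleaned_data dict."""
        if order_price:
            return order_price
        # Decimal * Decimal keeps precision; no float() round-trip needed.
        return exchange_rate * cost_price_gross
    ```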


  • Elegant solution for multiple forms on single page

    - by NFicano
    I'm building a web application (in Django) that will accept a search criteria and display a report - once the user is satisfied with the results, save both the criteria and a reference to these objects back to the database. The problem I'm having is finding an elegant solution for having 2 forms: Display (GET) the results of their criteria. Enter in some descriptions, and save (POST) everything back to the database. I'm leaning towards AJAX for the GET stuff and a POST for the save, but I wanted to make sure there wasn't a more elegant solution first.
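    One way to keep the two concerns apart, sketched with hypothetical form and URL names: bind a criteria form to GET for displaying the report (which also works fine behind an AJAX call) and a second form to POST for saving the criteria plus descriptions.

    ```python
    from django.shortcuts import redirect, render
    from myapp.forms import CriteriaForm, SaveReportForm  # hypothetical forms

    def report_view(request):
        if request.method == 'POST':
            save_form = SaveReportForm(request.POST)
            if save_form.is_valid():
                save_form.save()
                return redirect('report-list')        # hypothetical URL name
        else:
            save_form = SaveReportForm()

        criteria_form = CriteriaForm(request.GET or None)
        results = criteria_form.run() if criteria_form.is_valid() else None
        return render(request, 'report.html', {'criteria_form': criteria_form,
                                               'save_form': save_form,
                                               'results': results})
    ```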


  • Indexing a method return (depending on Internationalization)

    - by Hedde
    Consider a django model with an IntegerField with some choices, e.g. COLORS = ( (0, _(u"Blue"), (1, _(u"Red"), (2, _(u"Yellow"), ) class Foo(models.Model): # ...other fields... color = models.PositiveIntegerField(choices=COLOR, verbose_name=_(u"color")) My current (haystack) index: class FooIndex(SearchIndex): text = CharField(document=True, use_template=True) color = CharField(model_attr='color') def prepare_color(self, obj): return obj.get_color_display() site.register(Product, ProductIndex) This obviously only works for keyword "yellow", but not for any (available) translations. Question: What would be a good way to solve this problem? (indexing method returns based on the active language) What I have tried: I created a function that runs a loop over every available language (from settings) appending any translation to a list, evaluating this against the query, pre-search. If any colors are matched it converts them backwards into their numeric representation to evaluate against obj.color, but this feels wrong.
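    Rather than translating the query at search time, the index can store the label in every configured language so a match works whichever language is active. A sketch of the prepare hook (translation.override needs Django 1.4+; older versions can use activate/deactivate):

    ```python
    from django.conf import settings
    from django.utils import translation
    from haystack.indexes import SearchIndex

    class FooIndex(SearchIndex):
        # ... text and color fields as in the question ...

        def prepare_color(self, obj):
            labels = []
            for code, _name in settings.LANGUAGES:
                with translation.override(code):
                    # Force the lazy label to a string while this language is active.
                    labels.append(u'%s' % obj.get_color_display())
            return ' '.join(labels)
    ```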


  • Not quite nested inlines?

    - by Lynden Shields
    Not quite sure what to call this, it's not quite nested inlines, but is probably related. I have a 3 level hierarchy of objects, A one-to-many B one-to-many C. Therefore, every C implicitly also belongs to an A. class A(models.Model): stuff = models.CharField("Stuff", max_length=50) class B(models.Model): a = models.ForeignKey(A) class C(models.Model): b = models.ForeignKey(B) I would like all C's that belong to an A to be listed on the admin page for A in an in-line. They do not have to show which B they belong to on the same page. Is this possible or is it the same problem as nested inlines anyway? If it's possible, how do I do it? I'm using django 1.3
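    True inlines need a direct ForeignKey to the parent, so editing C inline on A's page runs into the same wall as nested inlines. If a read-only listing is enough, a ModelAdmin method exposed through readonly_fields will get the Cs onto A's change page; a sketch:

    ```python
    from django.contrib import admin
    from myapp.models import A, C   # placeholder import path

    class AAdmin(admin.ModelAdmin):
        readonly_fields = ('related_cs',)

        def related_cs(self, obj):
            if obj is None or obj.pk is None:   # nothing to show on the add page
                return ""
            # Every C reachable through any B that points at this A.
            return ", ".join(str(c) for c in C.objects.filter(b__a=obj))
        related_cs.short_description = "Related C objects"

    admin.site.register(A, AAdmin)
    ```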


  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2, In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, Let’s start by understanding the components of Azure we’ll be making use of followed by manually putting them together to create the test rig, so… let’s get down dirty start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute and to enable communication between Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (How will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure connect. A very useful video explaining everything you wanted to know about Windows Azure connect.  Azure Compute Windows Azure compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role, we’ll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server... Virtual Machine Size CPU Cores Memory Cost Per Hour Extra Small Shared 768 MB $0.04 Small 1 1.75 GB $0.12 Medium 2 3.50 GB $0.24 Large 4 7.00 GB $0.48 Extra Large 8 14.00 GB $0.96   You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration Machine Role Comments VM – 1 Domain Controller for Playpit.com On Premise VM – 2 TFS, Test Controller On Premise VM – 3 Test Agent Cloud   In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller Installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog, Brian also has some great hands on Labs on TFS 2010 that you may want to explore. I. Lets start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools Open Visual Studio and create a new Windows Azure Project using the Cloud Template                   Choose the Worker Role for reasons explained in the earlier post         The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (The compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.                   
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
From the view menu bar select the Windows Azure Activity Log window.       Now you should be able to see the deployment progress in real time.             In the Windows Azure Management Portal, you should also be able to see the progress of creation of a new Worker Role.       Once the deployment is complete you should be able to RDP (go to run prompt type mstsc and in the pop up the machine name) in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the connect module, you can connect to the Test Agent machine using Windows Azure Management Portal and troubleshoot the reason for failure, you will be able to log in with the user name and password you specified in the config file for the keys ‘RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword (just that enter the password unencrypted)’, fix it or manually join the machine to the domain. Once you have managed to Join the Test Agent VM to the Domain move to the next step.      So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, the machine is connected to the domain and the connect service is successfully running. If yes, give your self a pat on the back, you are 80% mission accomplished!         Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles and click on Test Rig, click Edit Group, the edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check the ‘Allow connections between endpoints in the group’ with this you will enable to communication between test controller and test agents and test agents/test agents. Click Save.      Now, you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller Log in to the Worker Role Test Agent VM that you have just successfully deployed, make sure you log in with the domain administrator account. Download the All Agents software from MSDN, ‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’, extract the iso and navigate to where you have extracted the iso. In my case, i have extracted the iso to “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing TFS Test Agent on the VM, refer to the walkthrough on MSDN.       Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run as a different user. i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block something's otherwise).        In the run options, you can select ‘service’ you do not need to run the agent as interactive un less you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, i would never do that, i would create a separate test user service account for this purpose. 
But for the blog post, we are using the most powerful user so that any policies or restrictions don’t block you.        Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve and proceed. As per my experience, you may run in to either a permission or a firewall blocking communication issue.        And now the moment of truth! Go to VM –2 open up Visual Studio and from the Test Menu select Manage Test Controller       Mission Accomplished! You should be able to see the Test Agent that you have just configured here,         VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate, you can follow the links and videos below, Blog Posts: - Part 1 – Performance Testing using Visual Studio 2010 Ultimate - Part 2 – Performance Testing using Visual Studio 2010 Ultimate - Part 3 – Performance Testing using Visual Studio 2010 Ultimate Videos: - Test Tools Configuration & Settings in Visual Studio - Why & How to Record Web Performance Tests in Visual Studio Ultimate - Goal Driven Load Testing using Visual Studio Ultimate Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig, create a new Test settings file, and change the Test Execution method to ‘Remote Execution’ and select the test controller you have configured the Worker Role Test Agent against in our case VM – 2 So, go on, fire off a test run and see the results of the test being executed on the Azur-ed Test Rig. Review and What’s next? A quick recap of the benefits of running the Test Rig in the cloud and what i will be covering in the next blog post AND I would love to hear your feedback! Advantages Utilizing the power of Azure compute to run a heavy virtual user load. Benefiting from the Azure flexibility, destroy Test Agents when not in use, takes < 25 minutes to spin up a new Test Agent. Most important test Network Latency, (network latency and speed of connection are two different things – usually network latency is very hard to test), by placing the Test Agents in Microsoft Data centres around the globe, one can actually test the lag in transferring the bytes not because of a slow connection but because the page has been requested from the other side of the globe. Next Steps The process of spinning up the Test Agents in windows Azure is not 100% automated. I am working on the Worker process and power shell scripts to make the role deployment, unattended install of test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. See you in Part III.   Share this post : CodeProject


  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, wasn't really sure if this should go here or on Stack Overflow - admins, please move if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of our web layer to accelerate the site and reduce calls to the backend. However, we have some concerns and I was wondering how most people approach them: A/B Testing - How do you test two "versions" of each page and compare? I mean, how does Varnish know which page to serve up? If and how do you save separate versions of each page? Feature rollout - how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic.. and then later increase that to 20%? How do you handle code deployments? Do you purge your entire Varnish cache every deployment? (We have deployments on a daily basis). Or do you just let it slowly expire (using TTL)? Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.


  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration to us; in the past, we've already commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing* ... While it feels comfortable to think our system is perfectly secure (and it was surely comfortable to show those reports to our clients when they performed their due diligence work), I've got a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (Security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!) If hackers can hack into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could hack into our small business, right? Does someone have any experience in hiring an "ethical hacker"? How to find one? How much would it cost? *The only recommendation they made to us was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt: –  Basic email check test ALTER PROCEDURE [MyChecks].[test Check Email Addresses] AS BEGIN SET NOCOUNT ON         Declare @Output VarChar(max)     Set @Output = ”       SELECT  @Output = @Output + Email +Char(13) + Char(10) FROM dbo.Email WHERE email NOT LIKE ‘%_@__%.__%’       If @Output > ”         Begin             Set @Output = Char(13) + Char(10)                           + @Output             EXEC tSQLt.Fail@Output         End   END;   Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control,  it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: a green tick to indicate success!

But of course, what we also need to do is check that the test is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before and you'll see the following: great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.)

The next stage is to check this new test into source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

Checking that the Tests are Running as Integration Tests

After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to the database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process – i.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up.

The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: sadly, a successful build!

To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

<!-- tSQLt tests -->
<!-- Optional -->
<!-- To run tSQLt tests in source control for the database, enter true. -->
<enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!!

The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19
21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
21-Jun-2013 11:35:19 (sqlCI target) ->
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change into SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: it worked!

Summary

This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes; how can we now deploy them to target sites?

    Read the article

  • Unit Testing using InternalsVisibleToAttribute requires compiling with /out:filename.ext?

    - by Will Marcouiller
    In my most recent question: Unit Testing Best Practice? / C# InternalsVisibleTo() attribute for VBNET 2.0 while testing?, I was asking about InternalsVisibleToAttribute. I have read the documentation on how to use it, and everything is fine and understood. However, I can't instantiate my class Groupe from my Testing project. I want to be able to instantiate my internal class in my wrapper assembly, from my testing assembly. Any help is appreciated!

    EDIT #1

    Here's the compile-time error I get when I do try to instantiate my type:

    Erreur 2 'Carra.Exemples.Blocs.ActiveDirectory.Groupe' n'est pas accessible dans ce contexte, car il est 'Private'. C:\Open\Projects\Exemples\Src\Carra.Exemples.Blocs.ActiveDirectory\Carra.Exemples.Blocs.ActiveDirectory.Tests\GroupeTests.vb 9 18 Carra.Exemples.Blocs.ActiveDirectory.Tests

    (This says that my type is not accessible in this context, because it is private.) But it's Friend (internal)!

    EDIT #2

    Here's a piece of code as suggested for the Groupe class, implementing the public interface IGroupe:

    #Region "Importations"
    Imports System.DirectoryServices
    Imports System.Runtime.CompilerServices
    #End Region

    <Assembly: InternalsVisibleTo("Carra.Exemples.Blocs.ActiveDirectory.Tests")>

    Friend Class Groupe
        Implements IGroupe

    #Region "Membres privés"
        Private _classe As String = "group"
        Private _domaine As String
        Private _membres As CustomSet(Of IUtilisateur)
        Private _groupeNatif As DirectoryEntry
    #End Region

    #Region "Constructeurs"
        Friend Sub New()
            _membres = New CustomSet(Of IUtilisateur)()
            _groupeNatif = New DirectoryEntry()
        End Sub

        Friend Sub New(ByVal domaine As String)
            If (String.IsNullOrEmpty(domaine)) Then Throw New ArgumentNullException()

            _domaine = domaine
            _membres = New CustomSet(Of IUtilisateur)()
            _groupeNatif = New DirectoryEntry(domaine)
        End Sub

        Friend Sub New(ByVal groupeNatif As DirectoryEntry)
            _groupeNatif = groupeNatif
            _domaine = _groupeNatif.Path
            _membres = New CustomSet(Of IUtilisateur)()
        End Sub
    #End Region

    And the code trying to use it:

    #Region "Importations"
    Imports NUnit.Framework
    Imports Carra.Exemples.Blocs.ActiveDirectory.Tests
    #End Region

    <TestFixture()> _
    Public Class GroupeTests
        <Test()> _
        Public Sub CreerDefaut()
            Dim g As Groupe = New Groupe()
            Assert.IsNotNull(g)
            Assert.IsInstanceOf(Groupe, g)
        End Sub
    End Class

    EDIT #3

    Damn! I have just noticed that I wasn't importing the assembly in my importation region. Nope, didn't solve anything =( Thanks!

    Read the article

  • Any suggestions for good automated web load testing tool?

    - by fmunkert
    What are some good automated tools for load testing (stress testing) web applications, that do not use record and replay of HTTP network packets? I am aware that there are numerous load testing tools on the market that record and replay HTTP network packets. But these are unsuitable for my purpose, because of this: The HTTP packet format changes very often in our application (e.g. when we optimize an AJAX call). We do not want to adapt all test scripts just because there is a slight change in HTTP packet format. Our test team shall not need to know any internals about our application to write their test scripts. A tool that replays HTTP packets, however, requires the team to know the format of HTTP requests and responses, such that they can adapt details of the replayed HTTP packets (e.g. user name). The automated load testing tool I am looking for should be able to let the test team write "black box" test scripts such as: Invoke web page at URL http://... . First, enter XXX into text field XXX. Then, press button XXX. Wait until response has been received from web server. Verify that text field XXX now contains the text XXX. The tool should be able to simulate up to several 1000 users, and it should be compatible with web applications using ASP.NET and AJAX.

    Read the article

  • Don’t just P2V that server for Testing!

    - by Jonathan Kehayias
    If you use virtualization in your company, at some point in time you might be tempted to perform a Physical-To-Virtual conversion, also known as P2V.  The ability to create a complete working copy of a physical server as a virtual machine is really useful for migrating to a virtualized datacenter, but it can also wreak havoc in your environment if you use it to generate a copy of a server for testing and the only change you make is the name of the server.  The Problem: Consider that you...(read more)

    Read the article
