Search Results

Search found 7122 results on 285 pages for 'continuous forms'.

Page 4/285

  • Integration tests in Continuous Integration environment: Database and filesystem state

    - by dario_ramos
    I'm trying to implement automated integration tests for my application. It's a very complex monster. You could say that its database and part of the filesystem are part of its state, because it saves image files to the hard drive and references to those in the DB. The software needs all of those, in a coherent state, to work properly. Back to writing tests: to run any relevant test, I need some image files in the filesystem and certain records filled in the database. I thought of putting all of these in a separate folder called TestEnvironmentData in the repository and retrieving them from the Continuous Integration server (TeamCity), but a colleague said the repo is quite full as it is, and that I should set up a special directory, and databases, only on the Continuous Integration server. I don't like that because the tests' success would depend on my manually maintaining stuff on the server, and restoring the initial state before every test becomes cumbersome. What do you guys do when you need to write integration tests for an app like this? The main goal is having an automated test harness to approach a large-scale refactoring. There's lots of spaghetti code and the app's current architecture is hardly unit-testable; that's why I decided on integration tests first. Any alternative approach is welcome.
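
    For illustration, one common pattern is to keep a small, versioned baseline (a few representative images plus a trimmed database dump) in the repository and have a test fixture restore both before every test, so nothing depends on state maintained by hand on the CI server. The Python/pytest sketch below shows the idea only; the TestEnvironmentData folder name comes from the question, while IMAGE_DIR and restore_database are hypothetical placeholders for the app's real image folder and database restore step.

        import shutil
        from pathlib import Path

        import pytest

        TEST_DATA = Path(__file__).parent / "TestEnvironmentData"  # baseline kept in the repo
        IMAGE_DIR = Path("/srv/myapp/images")                      # hypothetical app image folder

        @pytest.fixture(autouse=True)
        def known_state():
            # Restore a known filesystem state before every test.
            if IMAGE_DIR.exists():
                shutil.rmtree(IMAGE_DIR)
            shutil.copytree(TEST_DATA / "images", IMAGE_DIR)
            # Restore a known database state; the helper below is a stub for whatever
            # mechanism fits (restore a dump, re-run schema + seed scripts, etc.).
            restore_database(TEST_DATA / "baseline.sql")
            yield

        def restore_database(dump_file: Path) -> None:
            # Placeholder: e.g. drop and recreate the schema, then load dump_file
            # with the database's own command-line tools.
            ...

    Keeping the baseline deliberately small usually answers the "repo is quite full" objection while still letting every build start from a coherent, reproducible state.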

    Read the article

  • django: _init_ def work but does not update to class in django form

    - by tgngo
    Hi experts, this is my form: class IPTrackerSearchForm(forms.Form): keyword = forms.CharField(max_length=100, widget=forms.TextInput(attrs={'size':'50'})) search_in = forms.ChoiceField(required=False, choices=ANY_CHOICE + MODULE_SEARCH_IN_CHOICES) product = forms.CharField(max_length=64,widget=forms.TextInput(attrs={'size':'50'})) family = forms.CharField(max_length=64,widget=forms.TextInput(attrs={'size':'50'})) temp_result = Merlin.objects.values('build').distinct() result = [(value['build'], value['build']) for value in temp_result] build = forms.ChoiceField(choices=ANY_CHOICE + result) circuit_name = forms.CharField(max_length=256,widget=forms.TextInput(attrs={'size':'50'})) parameterization = forms.CharField(max_length=1024,widget=forms.TextInput(attrs={'size':'50'})) metric = forms.CharField(max_length=64,widget=forms.TextInput(attrs={'size':'50'})) show_in_one_page = forms.BooleanField(required=False, label="Show filtered result in one page", widget=forms.CheckboxInput(attrs={'class':'checkbox'})) def __init__(self, *args, **kwargs): super(IPTrackerSearchForm, self).__init__(*args, **kwargs) temp_result = Merlin.objects.values('build').distinct() self.result = [(value['build'], value['build']) for value in temp_result] self.build = forms.ChoiceField(choices=ANY_CHOICE + self.result) print self.result The intention is that each time I refresh the webpage, when there is a new record in the "build" column in the database, the drop-down box "build" should be updated here, but it never updates unless I restart the server. I use print and can see that __init__ detects the new record, but it is not reflected in the build field of the class. Many thanks
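
    For what it's worth, the usual fix for this pattern is to update the existing field's choices via self.fields inside __init__, rather than assigning a brand-new field to self (self.build = ... never replaces the class-level field the form actually renders, which is why the choices stay frozen at import time). A minimal sketch, reusing ANY_CHOICE and the Merlin model from the question (assumed importable from the asker's project):

        from django import forms

        # ANY_CHOICE and Merlin come from the asker's code and are assumed importable here.

        class IPTrackerSearchForm(forms.Form):
            build = forms.ChoiceField(choices=[])  # placeholder; real choices set per request

            def __init__(self, *args, **kwargs):
                super(IPTrackerSearchForm, self).__init__(*args, **kwargs)
                builds = Merlin.objects.values_list('build', flat=True).distinct()
                self.fields['build'].choices = list(ANY_CHOICE) + [(b, b) for b in builds]

    Because the queryset is evaluated inside __init__, it runs on every request, so newly added builds show up without restarting the server.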

    Read the article

  • What’s the ROI of Continuous Integration?

    - by Liggy
    Currently, our organization does not practice Continuous Integration. In order for us to get a CI server up and running, I will need to produce a document demonstrating the return on the investment. Aside from cost savings by finding and fixing bugs early, I'm curious about other benefits/savings that I could stick into this document.

    Read the article

  • What's the workflow of Continuous Integration With Hudson?

    - by Satoru.Logic
    Hi, all. I was referred to Hudson today. I have heard about continuous integration before, but I have no idea what the heck a CI server is. Hudson is really easy to install on Ubuntu, and in several minutes I managed to set up an instance of it. But I don't quite understand the workflow of a CI server, or how I am supposed to use it. Please tell me if you have experience with CI, thanks in advance.

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt: -- Basic email check test ALTER PROCEDURE [MyChecks].[test Check Email Addresses] AS BEGIN SET NOCOUNT ON Declare @Output VarChar(max) Set @Output = '' SELECT @Output = @Output + Email + Char(13) + Char(10) FROM dbo.Email WHERE email NOT LIKE '%_@__%.__%' If @Output > '' Begin Set @Output = Char(13) + Char(10) + @Output EXEC tSQLt.Fail @Output End END; Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client): Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • django image upload forms

    - by gramware
    I am having problems with django forms and image uploads. I have googled, read the documentations and even questions ere, but cant figure out the issue. Here are my files my models class UserProfile(User): """user with app settings. """ DESIGNATION_CHOICES=( ('ADM', 'Administrator'), ('OFF', 'Club Official'), ('MEM', 'Ordinary Member'), ) onames = models.CharField(max_length=30, blank=True) phoneNumber = models.CharField(max_length=15) regNo = models.CharField(max_length=15) designation = models.CharField(max_length=3,choices=DESIGNATION_CHOICES) image = models.ImageField(max_length=100,upload_to='photos/%Y/%m/%d', blank=True, null=True) course = models.CharField(max_length=30, blank=True, null=True) timezone = models.CharField(max_length=50, default='Africa/Nairobi') smsCom = models.BooleanField() mailCom = models.BooleanField() fbCom = models.BooleanField() objects = UserManager() #def __unicode__(self): # return '%s %s ' % (User.Username, User.is_staff) def get_absolute_url(self): return u'%s%s/%s' % (settings.MEDIA_URL, settings.ATTACHMENT_FOLDER, self.id) def get_download_url(self): return u'%s%s/%s' % (settings.MEDIA_URL, settings.ATTACHMENT_FOLDER, self.name) ... class reports(models.Model): repID = models.AutoField(primary_key=True) repSubject = models.CharField(max_length=100) repRecepients = models.ManyToManyField(UserProfile) repPoster = models.ForeignKey(UserProfile,related_name='repposter') repDescription = models.TextField() repPubAccess = models.BooleanField() repDate = models.DateField() report = models.FileField(max_length=200,upload_to='files/%Y/%m/%d' ) deleted = models.BooleanField() def __unicode__(self): return u'%s ' % (self.repSubject) my forms from django import forms from django.http import HttpResponse from cms.models import * from django.contrib.sessions.models import Session from django.forms.extras.widgets import SelectDateWidget class UserProfileForm(forms.ModelForm): class Meta: model= UserProfile exclude = ('designation','password','is_staff', 'is_active','is_superuser','last_login','date_joined','user_permissions','groups') ... class reportsForm(forms.ModelForm): repPoster = forms.ModelChoiceField(queryset=UserProfile.objects.all(), widget=forms.HiddenInput()) repDescription = forms.CharField(widget=forms.Textarea(attrs={'cols':'50', 'rows':'5'}),label='Enter Report Description here') repDate = forms.DateField(widget=SelectDateWidget()) class Meta: model = reports exclude = ('deleted') my views @login_required def reports_media(request): user = UserProfile.objects.get(pk=request.session['_auth_user_id']) if request.user.is_staff== True: repmedform = reportsForm(request.POST, request.FILES) if repmedform.is_valid(): repmedform.save() repmedform = reportsForm(initial = {'repPoster':user.id,}) else: repmedform = reportsForm(initial = {'repPoster':user.id,}) return render_to_response('staffrepmedia.html', {'repfrm':repmedform, 'rep_media': reports.objects.all()}) else: return render_to_response('reports_&_media.html', {'rep_media': reports.objects.all()}) ... 
@login_required def settingchng(request): user = UserProfile.objects.get(pk=request.session['_auth_user_id']) form = UserProfileForm(instance = user) if request.method == 'POST': form = UserProfileForm(request.POST, request.FILES, instance = user) if form.is_valid(): form.save() return HttpResponseRedirect('/settings/') else: form = UserProfileForm(instance = user) if request.user.is_staff== True: return render_to_response('staffsettingschange.html', {'form': form}) else: return render_to_response('settingschange.html', {'form': form}) ... @login_required def useradd(request): if request.method == 'POST': form = UserAddForm(request.POST,request.FILES ) if form.is_valid(): password = request.POST['password'] request.POST['password'] = set_password(password) form.save() else: form = UserAddForm() return render_to_response('staffadduser.html', {'form':form}) Example of my templates {% if form.errors %} <ol> {% for field in form %} <H3 class="title"> <p class="error"> {% if field.errors %}<li>{{ field.errors|striptags }}</li>{% endif %}</p> </H3> {% endfor %} </ol> {% endif %} <form method="post" id="form" action="" enctype="multipart/form-data" class="infotabs accfrm"> {{ repfrm.as_p }} <input type="submit" value="Submit" /> </form>

    Read the article

  • Continuous integration with ClearCase and long-updating snapshot views

    - by Yulia Rogovaya
    Hi, I need to set up a continuous integration system. We use ClearCase version control and only snapshot views due to platform restrictions. I have tried setting up Hudson and Luntbuild. They both show the same behaviour. In a view, we have lots of libraries that are used for build but are strictly read-only. The CI system executes cleartool lshistory and finds a change in the VCS. After that, it executes cleartool setcs, which causes update of the view. This can take about half an hour, which is very undesirable for CI. Why wouldn't it update only the changed elements, which were previously obtained by cleartool lshistory? Is there a CI system that can do this?

    Read the article

  • How to share code with continuous integration

    - by alchemical
    I've just started working in a continuous integration environment (TeamCity). I understand the basic idea of not getting so abstracted out in your code that you are never able to build it to test functionality, etc. However, when there is deep coding going on, occasionally it will take me several days to get buildable code--but in the interim other team members may need to see my code. If I check the code in, it breaks the build. However, if I don't check it in, my team members are unable to see the most recent work. I'm wondering how this situation is best dealt with.
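
    Beyond short-lived feature branches (which TeamCity can build separately), a common way to keep committing daily without breaking the build is to check in small, compilable increments behind a feature toggle, so teammates always see the latest code while the unfinished path stays dormant. A minimal sketch of the idea in Python; the flag and function names are made up for illustration:

        # Feature-toggle sketch: the unfinished implementation is committed and compiled,
        # but never executed until the flag is flipped, so the build and tests stay green.
        FEATURES = {"new_report_engine": False}  # flip to True when the work is finished

        def legacy_report(data):
            return sorted(data)                  # existing, tested behaviour

        def new_report(data):
            raise NotImplementedError("work in progress")  # safe to commit: never called yet

        def build_report(data):
            if FEATURES["new_report_engine"]:
                return new_report(data)
            return legacy_report(data)

        print(build_report([3, 1, 2]))           # -> [1, 2, 3], untouched by the in-progress code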

    Read the article

  • Continuous Deployment with a C#/ASP.NET website?

    - by Amber Shah
    I have a website in C#/ASP.NET that is currently in development. When we are in production, I would like to do releases frequently over the course of the day, as we fix bugs and add features (like this: http://toni.org/2010/05/19/in-praise-of-continuous-deployment-the-wordpress-com-story/). If you upload a new version of the site or even change a single file, it kicks out the users that are currently logged in and makes them start over any forms and such. Is there a secret to being able to do deployments without interfering with users for .NET sites?

    Read the article

  • Incremental build with continuous integration server

    - by altern
    Do any of the continuous integration servers support incremental builds or a filtering mechanism? For example, I want to configure some kind of filtering (as I call it) so that committing a file to a specific folder will not trigger a full (clean) build, but only an incremental build. By 'incremental build' I mean a process that puts only the committed files in the required place, so the whole application does not need to be rebuilt from scratch. Working with images is a good example of the case where we need such filtering and thus incremental builds: why do we need to rebuild the whole application if only images have changed? What we need to do is just place the images in the dedicated place on the server.
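
    Most CI servers get partway there with path-based trigger rules (TeamCity's VCS trigger rules, Jenkins' included/excluded regions, and similar), so commits touching only an assets folder can skip the full build. The remaining piece is a cheap "deploy only what changed" step; the Python sketch below illustrates that idea by hashing files and copying only the ones that differ. The folder paths are placeholders, not something a particular CI server prescribes.

        import hashlib
        import shutil
        from pathlib import Path

        def digest(path: Path) -> str:
            return hashlib.sha256(path.read_bytes()).hexdigest()

        def sync_changed(src: Path, dest: Path) -> list:
            """Copy only files whose content differs between src and dest."""
            copied = []
            for f in src.rglob("*"):
                if not f.is_file():
                    continue
                target = dest / f.relative_to(src)
                if not target.exists() or digest(target) != digest(f):
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(f, target)
                    copied.append(target)
            return copied

        if __name__ == "__main__":
            changed = sync_changed(Path("images"), Path("/srv/site/images"))  # hypothetical paths
            print(f"{len(changed)} image(s) deployed")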

    Read the article

  • OFM 11g: Implementing OAM SSO with Forms

    - by olaf.heimburger
    There is some confusion about the integration of OFM 11g Forms with Oracle Access Manager 11g (OAM). Some say this does not work, some say it works, but.... Actually, having implemented it many times I belong to the later group. Here is how. Caveat Before you start installing anything, take a step back and consider your current implementation and what you really need and want to achieve. The current integration of Forms 11g with OAM 11g does not support self-service account creation and password resets from the Forms application. If you really need this, you must use the existing Oracle AS 10.1.4.3 infrastructure. On the other hand, if your user population is pretty stable, you can enjoy the latest Forms 11g with OAM 11g. Assumptions The whole process should be done in one day. I assume that all domains and instances are started during setup, if you need to restart them on demand or purpose, be sure to have proper start/stop scripts, I don't mention them. Preparation It goes without saying, that you always should do a proper backup before you change anything on your production environment. With proper backup, I also mean a tested and verified restore process. If you dared to test it before, do it now. It pays off. Requirements For OAM 11g to work properly you need a LDAP repository. For the integration of Forms 11g you need an Oracle Internet Directory (OID) configured with the Oracle AS SSO LDAP extensions. For better support I usually give the latest version a try, in this case OID 11g is a good choice.During the Installation and Integration steps we use an upgrade wizard that needs the old OID configuration on the same host but in a different ORACLE_HOME. Installation vs Configuration With OFM 11g Oracle introduced a clear separation between Installation of the binaries (the software) and the Configuration of the instances (the runtime). This is really great as you can install all the software and create new instances when needed. In the following we adhere to this scheme and install the software first and then configure the instances later. Installation Steps The Oracle documentation contains all the necessary steps for the installation of all pieces of software. But some hints help to avoid traps and pitfalls. Step 1 The Database Start the installation with the database. It is quite obvious but we need an Oracle database for all the other steps. If you have one at hand, fine. If not, just install at least a Oracle 10.2.0.4 version. This database can be on a different host. Step 2 The Repository Creation Utility The next step should be to run the Repository Creation Utility (RCU). This is a client application that just needs to connect to your database. It can be run on any host that can reach the database and is a Windows or Linux 32-bit machine. When you run it, be sure to install the OID schema and the OAM schema. If you miss one of these, you can run the RCU again to install the missing schema. Step 3 The Foundation With OFM 11g Oracle started to use WebLogic Server 11g (WLS) as its foundation for all OFM 11g installation. We therefore install it first. Depending on your operating system, it might be possible, that no native installer is available. My approach to this dilemma is to use the WLS Generic Installer for all my installations. It does not include a JDK either but if you have both for your platform you are ready to go. Step 3a The JDK To make things interesting, Oracle currently has two JDKs in its portfolio. The Sun JDK and the JRockit JDK. 
Both are available for a number of platforms. If you are lucky and both are available for your platform, install both in a separate directory (and not one of your ORACLE_HOMEs) each, You can use the later as you like. Step 3b Install WLS for OID and OAM With the JDK installed, we start the generic installer with java -jar wls_generic.jar.STOP! Before you do this, check the version first. It should be 1.6.0_18 or later and not the GCC one (Some Linux distros have it installed by default). To verify the version, issue a java -version command and make sure that the output does not contain the text gcj and the version matches. If this does not work, use an absolute path like /opt/java/jdk1.6.0_23/bin/java to start the installer. The installer allows you to specify a path to install the software into, say /opt/oracle/iam/11.1.1.3 for the OID and OAM installation. We will call this IAM_HOME. Step 4 Install OID Now we are ready to install OID. Start the OID installer (in the Disk1 directory) and just select the installation only step. This will install the software only and does not configure the instance. Use the IAM_HOME as the target directory. Step 5 Install SOA Suite The IAM 11g Suite uses the BPEL component of the SOA Suite 11g for its workflows. This is a pretty closed environment and not to be used for SCA Composites. We install the SOA Suite in $IAM_HOME/soa. The installer only installs the binaries. Configuration will be done later. Step 6 Install OAM Once the installation of OID and SOA is done, we are ready to install the OAM software in the same IAM_HOME. Make sure to install the OAM binaries in a directory different from the one you used during the OID and SOA installation. As before, we only install the software, the instance will be created later. Step 7 Backup the Installation At this point, I normally do a backup (or snapshot in a virtual image) of the installation. Good when you need to go back to this point. Step 8 Configure OID The software is installed and now we need instances to run it. This process is called configuration. For OID use the config.sh found in $IAM_HOME/oid/bin to start the configuration wizard. Normally this runs smoothly. If you encounter some issues check the Oracle Support site for help. This configuration will also start the OID instance. Step 9 Install the Oracle AS SSO Schema Before we install the Forms software we need to install the Oracle AS SSO Schema into the database and OID. This is a rather dangerous procedure, but fully documented in the IAM Installation Guide, Chapter 10. You should finish this in one go, do not reboot your host during the whole procedure. As a precaution, you should make a backup of the OID instance before you start the procedure. Once the backup is ready, read the chapter, including every note, carefully. You can avoid a number of issues by following all the steps and will succeed with a working solution. Step 10 Configure OAM Reached this step? Great. You are ready to create an OAM instance. Use the $IAM_HOME/iam/common/binconfig.sh for this. This will open the WLS Domain Creation Wizard and asks for the libraries to be installed. You should at least select the OAM with Database repository item. The configuration will also start the OAM instance. Step 11 Install WLS for Forms 11g It is quite tempting to install everything in one ORACLE_HOME. Unfortunately this does not work for all OFM packages. Therefore we do another WLS installation in another ORACLE_HOME. The same considerations as in step 3b apply. 
We call this one FORMS_HOME. Step 12 Install Forms In the FORMS_HOME we now install the binaries for the Forms 11g software. Again, this is a install only step. Configuration starts with the next step. Step 13 Configure Forms To configure Forms 11g we start the Configuration Wizard (config.sh) in FORMS_HOME/bin. This wizard should create a new WebLogic Domain and an OHS instance! Do not extend existing domains or instances! Forms should run in its own instances! When all information is supplied, the wizard will create the domain and instance and starts them automatically.Step 14 Setup your Forms SSO EnvironmentOnce you have implemented and tested your Forms 11g instance, you can configured it for SSO. Yes, this requires the old Oracle AS SSO solution, OIDDAS for creating and assigning users and SSO to setup your partner applications. In this step you should consider to create every user necessary for use within the environment. When done, do not forget to test it. Step 15 Migrate the SSO Repository Since the final goal is to get rid of the old SSO implementation we need to migrate the old SSO repository into the new OID structure. Additionally, this step will also migrate all partner application configurations into OAM 11g. Quite convenient. To do this step, you have to start the upgrade agent (ua or ua.bat or ua.cmd) on the operating system level in $IAM_HOME/bin. Once finished, this wizard will create new osso.conf files for each partner application in $IAM_HOME/upgrade/temp/oam/.Note: At the time of this writing, this step only works if everything is on the same host (ie. OID, OAM, etc.). This restriction might be lifted in later releases. Step 16 Change your OHS sso.conf and shut down OC4J_SECURITY In Step 14 we verified that SSO for our Forms environment works fine. Now, we are shutting the old system done and reconfigure the OHS that acts as the Forms entry point. First we go to the OHS configuration directory and rename the old osso.conf  to osso.conf.10g. Now we change the moduleconf/mod_osso.conf  to point to the new osso.conf file. Copy the new osso.conf  file from $IAM_HOME/upgrade/temp/oam/ to the OHS configuration directory. Restart OHS, test forms by using the same forms links. OAM should now kick in and show the login dialog to ask for your user credentials.Done. Now your Forms environment is successfully integrated with OAM 11g.Enjoy. What's Next? This rather lengthy setup is just the foundation for your growing environment of OAM 11g protections. In the next entry we will show that Forms 11g and ADF Faces 11g can use the same OAM installation and provide real single sign-on. References Nearly everything is documented. Use the documentation! Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1 Oracle® Fusion Middleware Installation Guide for Oracle Identity Management 11gR1, Chapter 11-14 Oracle® Fusion Middleware Administrator's Guide for Oracle Access Manager 11gR1, Appendix B Oracle® Fusion Middleware Upgrade Guide for Oracle Identity Management 11gR1, Chapter 10   

    Read the article

  • "Pretty" Continuous Integration for Python

    - by dbr
    This is a slightly.. vain question, but BuildBot's output isn't particularly nice to look at.. For example, compared to.. phpUnderControl Hudson CruiseControl.rb ..and others, BuildBot looks rather.. archaic I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to setup than BuildBot, and produced more info) Basically: is there any Continuous Integration systems aimed at python, that produce lots of shiney graphs and the likes? Update: After trying a few alternatives, I think I'll stick with Hudson. Integrity was nice and simple, but quite limited. I think Buildbot is better suited to having numerous build-slaves, rather than everything running on a single machine like I was using it. Setting Hudson up for a Python project was pretty simple: Download Hudson from https://hudson.dev.java.net/ Run it with java -jar hudson.war Open the web interface on the default address of http://localhost:8080 Go to Manage Hudson, Plugins, click "Update" or similar Install the Git plugin (I had to set the git path in the Hudson global preferences) Create a new project, enter the repository, SCM polling intervals and so on Install nosetests via easy_install if it's not already In the a build step, add nosetests --with-xunit --verbose Check "Publish JUnit test result report" and set "Test report XMLs" to **/nosetests.xml That's all that's required. You can setup email notifications, and the plugins are worth a look. A few I'm currently using for Python projects: SLOCCount plugin to count lines of code (and graph it!) - you need to install sloccount separately Violations to parse the PyLint output (you can setup warning thresholds, graph the number of violations over each build) Cobertura can parse the coverage.py output. Nosetest can gather coverage while running your tests, using nosetests --with-coverage (this writes the output to **/coverage.xml)

    Read the article

  • Drawing a continuous rectangle

    - by JAYMIN
    Hi, I am currently working on Visual C++ 2008 Express Edition. In my project I have a PictureBox which contains an image, and I have to draw a rectangle to enable the user to select a part of the image. I have used the "MouseDown" event of the PictureBox and the code below to draw a rectangle: Void pictureBox1_MouseDown(System::Object^ sender, Windows::Forms::MouseEventArgs^ e) { Graphics^ g = pictureBox1->CreateGraphics(); Pen^ pen = gcnew Pen(Color::Blue); g->DrawRectangle(pen, e->X, e->Y, width, ht); } In "DrawRectangle" the arguments "width" and "ht" are static, so the above code results in a fixed-size rectangle drawn at the point where the user presses the mouse button on the image in the PictureBox. I want to allow the user to drag the cursor and draw a rectangle of whatever size he wishes. Please help me with this. Thanks.

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real-time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet alone since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent a dictionary attack on the passphrase), and of course use IVs (to prevent statistical analysis of packets). However I am concerned about a few things: Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will result in making it easier to crack the key). Also, since the encryption key is derived from a passphrase, this will make the key space much smaller (I know the salt will help, but I have to send the salt through the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data, and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is, what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...Flawed? ...Overkill? ... Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
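
    As a point of reference, the usual modern advice for this situation is either DTLS (the datagram cousin of TLS) or per-packet authenticated encryption such as AES-GCM, combined with a slow key-derivation function for the passphrase, since CBC without a MAC also leaves packets malleable. Below is a minimal per-packet sketch in Python using the cryptography package; the packet-counter nonce scheme and the parameter choices are illustrative assumptions, not a vetted protocol.

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def derive_key(passphrase: bytes, salt: bytes) -> bytes:
            # Slow KDF to make passphrase guessing expensive; the salt itself can be public.
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16, salt=salt, iterations=600_000)
            return kdf.derive(passphrase)

        def seal(key: bytes, counter: int, payload: bytes) -> bytes:
            # 96-bit nonce from a per-sender counter: must never repeat for the same key.
            nonce = counter.to_bytes(12, "big")
            return nonce + AESGCM(key).encrypt(nonce, payload, None)

        def open_packet(key: bytes, datagram: bytes) -> bytes:
            nonce, ciphertext = datagram[:12], datagram[12:]
            return AESGCM(key).decrypt(nonce, ciphertext, None)  # raises if tampered with

        salt = os.urandom(16)
        key = derive_key(b"correct horse battery staple", salt)
        packet = seal(key, 1, b"\x00\x01\x00\x2a")   # a couple of small integer values
        assert open_packet(key, packet) == b"\x00\x01\x00\x2a"

    GCM authenticates every packet as well as encrypting it, and in the end a key derived this way is only as strong as the passphrase, so a long passphrase (or a pre-shared random key) matters more than the cipher details.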

    Read the article

  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have unit tests that utilize MSTest (Visual Studio Unit Tests). What solutions exist to implement a continuous integration machine (i.e. build machine)? The requirements for this are: a build should be kicked off when necessary (i.e. code has changed in the repositories we care about); before the actual build, the latest version of the source code must be acquired from the repository we are building from; the build must build the entire product; the build must build all unit tests; the build must execute all unit tests; a summary of success/failure must be sent out after the build has finished, and this must include information about the build itself but also about which unit tests failed and which ones succeeded; the summary must contain which changesets were in this build that were not yet in the previous successful (!) build; the system must be configurable so that it can build from multiple branches (/repositories). Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done? Thanks

    Read the article

  • At which point is a continuous integration server interesting?

    - by Cedric Martin
    I've been reading a bit about CI servers like Jenkins and I'm wondering: at which point is it useful? Because surely for a tiny project where you'd have only 5 classes and 10 unit tests, there's no real need. Here we've got about 1500 unit tests and they pass (on old Core 2 Duo workstations) in about 90 seconds (because they're really testing "units" and hence are very fast). The rule we have is that we cannot commit code when a test fails. So each developer launches all the tests to prevent regression. Obviously, because all the developers always launch all the tests, we catch errors due to conflicting changes as soon as one developer pulls the change of another (when any). It's still not very clear to me: should I set up a CI server like Jenkins? What would it bring? Is it just useful for the speed gain? (not an issue in our case) Is it useful because old builds can be recreated? (but we can do this too with Mercurial, by checking out old revs) Basically I understand it can be useful but I fail to see exactly why. Any explanation taking into account the points I raised above would be most welcome.

    Read the article

  • Continuous Form, how to add/update records with external connection

    - by Mohgeroth
    EDIT After some more research I found that I cannot use a continuous form with an unbound form since it can only reference a single record at a time. Given that I've altered my question... I have a sample form that pulls out data to enter into a table as an intermediary. Initially the form is unbound and I open connections to two main recordsets. I set the listbox's recordset equal to one of them and the forms recordset equal to the other. The problem is that I cannot add records or update existing ones. Attempting to key into the fields does nothing almost as if the field was locked (Which it is not). Settings of the recordsets are OpenKeyset and LockPessimistic. Tables are not linked, they come from an outside access database seperate from this project and must remain that way. I am using an adodb connection to get the data. Could the separation of the data from the project be causing this? Sample Code from the Form Option Compare Database Option Explicit Private conn As CRobbers_Connections Private exception As CError_Trapping Private mClient_Translations As ADODB.Recordset Private mUnmatched_Clients As ADODB.Recordset Private mExcluded_Clients As ADODB.Recordset //Construction Private Sub Form_Open(Cancel As Integer) Set conn = New CRobbers_Connections Set exception = New CError_Trapping Set mClient_Translations = New ADODB.Recordset Set mUnmatched_Clients = New ADODB.Recordset Set mExcluded_Clients = New ADODB.Recordset mClient_Translations.Open "SELECT * FROM Client_Translation" _ , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic mUnmatched_Clients.Open "SELECT DISTINCT(a.Client) as Client" _ & " FROM Master_Projections a " _ & " WHERE Client NOT IN ( " _ & " SELECT DISTINCT ClientID " _ & " FROM Client_Translation);" _ , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic mExcluded_Clients.Open "SELECT * FROM Clients_Excluded" _ , conn.RBRS_Conn, adOpenKeyset, adLockPessimistic End Sub //Add new record to the client translations Private Sub cmdAddNew_Click() If lstUnconfirmed <> "" Then AddRecord End If End Sub Private Function AddRecord() With mClient_Translations .AddNew .Fields("ClientID") = Me.lstUnconfirmed .Fields("ClientAbbr") = Me.txtTmpShort .Fields("ClientName") = Me.txtTmpLong .Update End With UpdateRecords End Function Private Function UpdateRecords() Me.lstUnconfirmed.Requery End Function //Load events (After construction) Private Sub Form_Load() Set lstUnconfirmed.Recordset = mUnmatched_Clients //Link recordset into listbox Set Me.Recordset = mClient_Translations End Sub //Destruction method Private Sub Form_Close() Set conn = Nothing Set exception = Nothing Set lstUnconfirmed.Recordset = Nothing Set Me.Recordset = Nothing Set mUnmatched_Clients = Nothing Set mExcluded_Clients = Nothing Set mClient_Translations = Nothing End Sub

    Read the article

  • Oracle Forms Migration to ADF - Webinar vom ORACLE Partner PITSS

    - by Thomas Leopold
      Tuesday, February 22, 2011 5:00 PM - 6:00 PM CET Free Webinar Re-Engineering Legacy Oracle Forms Migration from Forms to ADF - A Case Study Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration. Learn about various migration options, challenges and best practices to protect your current investment in Oracle Forms. PL/SQL is without match for what it does: manipulating data in the database. If you blindly migrate all your PL/SQL to Java you will, in all probability, end up with less maintainable and less efficient code. Instead you should consider which code it best left as PL/SQL..." Grant Ronald - "Migrating Oracle Forms to Fusion: Myth or Magic Bullet?" Re-Engineering existing business logic is mandatory for your legacy Forms application to take advantage of the new Software architectures like ADF. The PITSS.CON solution combines the deep understanding of Oracle Forms and Reports applications architecture with powerful re-engineering capabilities that allows the developer community to protect the investment in the existing Forms applications and to concentrate on fine-tuning and customization of the modernized functionality rather than manually recreating every module and business logic from bottom up. Registration: https://www2.gotomeeting.com/register/971702250   PITSS GmbHKönigsdorferstrasse 25D-82515 WolfratshausenDo not forget to check out these Free Webinars in March! Thursday, March 3, 2011 Upgrade and Modernize Your Application to Forms 11g Registration/Information Tuesday, March 15, 2011 Shaping the Future for your Oracle Forms Application:Forms 11g, ADF, APEX Registration/Information Tuesday, March 29, 2011 Oracle Forms Modernization to APEX Registration/Information Registration is limited, so sign up  today!Presented By:        Grant Ronald, Senior Group Product Manager,Oracle       Magdalena Serban, Product Manager,PITSS   Contact Us:            PITSS in Americas +1 248.740.0935 [email protected] www.pitssamerica.com       PITSS in Europe +49 (0) 717287 5200 [email protected] www.pitss.com   White Paper:      From Oracle Forms to Oracle ADF and JEE     © Copyright 2010 PITSS GmbH, Wolfratshausen, Stuttgart, München; Managing Directors: Dipl.-Ing. Andreas Gaede, Michael Kilimann, Dipl.-Ing. Dirk Fleischmann Commercial Register: HRB 125471 at District Court Munich. All rights reserved. Any duplication or further treatment in any medium, in parts or as a whole, requires a written agreement. If you do not want to receive invitations for events, meetings and seminars from us, then please click here.

    Read the article

  • Free Book from Microsoft - Testing for Continuous Delivery with Visual Studio 2012

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/10/16/free-book-from-microsoft---testing-for-continuous-delivery-with.aspxAt  http://msdn.microsoft.com/en-us/library/jj159345.aspx, Microsoft have made available a free e-book - Testing for Continuous Delivery with Visual Studio 2012 "As more software projects adopt a continuous delivery cycle, testing threatens to be the bottleneck in the process. Agile development frequently revisits each part of the source code, but every change requires a re-test of the product. While the skills of the manual tester are vital, purely manual testing can't keep up. Visual Studio 2012 provides many features that remove roadblocks in the testing and debugging process and also help speed up and automate re-testing."

    Read the article

  • Numerical stability in continuous physics simulation

    - by Panda Pajama
    Pretty much all of the game development I have been involved with runs afoul of simulating a physical world in discrete time steps. This is of course very simple, but hardly elegant (not to mention mathematically inaccurate). It also has severe disadvantages when large values are involved (either very large speeds, or very large time intervals). I'm trying to make a continuous physics simulation, just for learning, which goes like this: time = get_time() while true do new_time = get_time() update_world(new_time - time) render() time = new_time end And update_world() is a continuous physical simulation. Meaning that for example, for an accelerated object, instead of doing object.x = object.x + object.vx * timestep object.vx = object.vx + object.ax * timestep -- timestep is fixed I'm doing something like object.x = object.x + object.vx * deltatime + object.ax * ((deltatime ^ 2) / 2) object.vx = object.vx + object.ax * deltatime However, I'm having a hard time with the numerical stability of my solutions, especially for very large time intervals (think of simulating a physical world for hundreds of thousands of virtual years). Depending on the framerate, I get wildly different solutions. How can I improve the numerical stability of my continuous physical simulations?
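
    One thing worth separating out here is integrator error versus floating-point error: for constant acceleration, the closed-form update in the question is exact at any step size, whereas the plain Euler update accumulates error that grows with the step. A small Python sketch (illustrative numbers only) that makes the difference visible:

        def euler_step(x, v, a, dt):
            # First-order update, as in the question's first snippet.
            return x + v * dt, v + a * dt

        def closed_form_step(x, v, a, dt):
            # Exact update for constant acceleration, as in the question's second snippet.
            return x + v * dt + 0.5 * a * dt * dt, v + a * dt

        def simulate(step, total_time, steps):
            x, v, a = 0.0, 0.0, 9.81
            dt = total_time / steps
            for _ in range(steps):
                x, v = step(x, v, a, dt)
            return x

        exact = 0.5 * 9.81 * 1000.0 ** 2          # analytic position after 1000 s from rest
        print(simulate(euler_step, 1000.0, 10), exact)        # few, huge steps: large error
        print(simulate(closed_form_step, 1000.0, 10), exact)  # matches regardless of step size

    For forces that change over time (springs, orbiting bodies) there is generally no closed form, and the usual way to keep long simulations stable is a symplectic integrator such as semi-implicit Euler or velocity Verlet with a clamped maximum dt, rather than ever-larger single steps.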

    Read the article
