Search Results

Search found 28784 results on 1152 pages for 'start'.


  • Media Player Problem

    - by kostas_menu
    button.setOnClickListener(new View.OnClickListener() { public void onClick(View v){ if(mp2.isPlaying()==true) {mp2.stop(); mp.start(); } else mp.start(); } }); button2.setOnClickListener(new View.OnClickListener() { public void onClick(View v){ if(mp.isPlaying()==true) {mp.stop();mp2.start();} else mp2.start(); } }); I press the first button and the first song plays. I press the second button, the first song stops and the second begins. But then, when I press the first button again, the second song stops but the first song does not play... please help!
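
    A likely cause is the MediaPlayer state machine: stop() puts a player into the Stopped state, and start() is only valid again after prepare() (or prepareAsync()) has been called. The first switch works because both players are still prepared; once one has been stopped, starting it again without re-preparing fails. Below is a minimal sketch of one way around this, assuming mp and mp2 were created with MediaPlayer.create(); the helper name switchTo() is made up for illustration.

        // Hypothetical helper: silence the other player and start the chosen one.
        // pause()/seekTo(0) keeps the player prepared, unlike stop(), so no
        // re-prepare is needed; if you prefer stop(), call prepare() before start().
        private void switchTo(MediaPlayer play, MediaPlayer other) {
            if (other.isPlaying()) {
                other.pause();
                other.seekTo(0); // rewind so the next start begins at the top
            }
            if (!play.isPlaying()) {
                play.start();
            }
        }

        // Usage in the click listeners:
        button.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) { switchTo(mp, mp2); }
        });
        button2.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) { switchTo(mp2, mp); }
        });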

    Read the article

  • Windows Service is not working

    - by prateeksaluja20
    I made a Windows service in Visual Studio 2008 in C#. Inside the service I wrote only a single line of code: try { System.Diagnostics.Process.Start(@"E:\Users\Sk\Desktop\category.txt"); } catch { } Then I added the project installer, changed the serviceProcessInstaller1 Account property to Local System, and changed the serviceInstaller1 StartType property to Automatic. I built the project and the build was successful. After that I added a setup project, added the primary project output, and added the custom action "Primary output from DemoWindowsService (Active)". The setup built successfully. I installed it, went to Services and started the service. The service started properly, but it was not performing the task. I checked that the path is correct, and I also tried System.Diagnostics.Process.Start(@"E:\Windows\system32\notepad.exe"), but the result is the same. I have tried a lot but cannot find the answer, so please help me solve this problem.

    Read the article

  • Replace for loop with formula

    - by hamax
    I have this loop that runs in O(end - start) and I would like to replace it with something O(1). If "width" were not decreasing, it would be pretty simple. for (int i = start; i <= end; i++, width--) if (i % 3 > 0) // 1 or 2, but not 0 z += width; start, end and width have positive values.
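
    One way to get an O(1) answer is to sum the whole decreasing series width, width-1, ... over all iterations and then subtract the terms that land on multiples of 3. Here is a sketch of that idea; the method name closedFormZ is made up, and it assumes the same start, end and width as the loop above.

        // Closed-form equivalent of:
        //   for (int i = start; i <= end; i++, width--)
        //       if (i % 3 > 0) z += width;
        static long closedFormZ(long start, long end, long width) {
            if (end < start) return 0;
            long n = end - start + 1;                    // number of iterations
            long total = n * width - n * (n - 1) / 2;    // width + (width-1) + ... for every i

            long first3 = start + (3 - start % 3) % 3;   // first multiple of 3 in [start, end]
            if (first3 > end) return total;              // no multiples of 3 to subtract
            long k = (end - first3) / 3 + 1;             // how many multiples of 3
            long w0 = width - (first3 - start);          // value added at i == first3
            long skipped = k * w0 - 3 * k * (k - 1) / 2; // w0 + (w0-3) + (w0-6) + ...
            return total - skipped;
        }

    As a quick check, start = 1, end = 3, width = 10 gives 10 + 9 = 19 in the loop, and the formula returns 27 - 8 = 19 as well.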

    Read the article

  • Run Calculator in my App

    - by user
    I want to start the Android calculator activity IN my activity. I can start an activity of my own app inside my app, but if I try to start an activity of the calculator in my app, logcat throws this: FATAL EXCEPTION: main java.lang.RuntimeException: Unable to start activity ComponentInfo{}: java.lang.SecurityException: Requesting code from com.android.calculator2 (with uid 10016) to be run in process (with uid 10044) Caused by: java.lang.SecurityException: Requesting code from com.android.calculator2 (with uid 10016) to be run in process (with uid 10044) How can I reach my goal of starting the calculator activity in my activity? Thank you very much.
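
    The SecurityException says exactly what Android is refusing to do: it will not load com.android.calculator2's code into your own process, because the two apps are signed differently and run under different uids. What you can do is launch the calculator as its own task with an explicit Intent, as in the sketch below. The component name shown is the stock AOSP calculator and may differ on other devices, so treat it as an assumption.

        // Launch the system calculator as a separate task (it cannot be
        // embedded in, or run inside, your own activity/process).
        Intent intent = new Intent(Intent.ACTION_MAIN);
        intent.addCategory(Intent.CATEGORY_LAUNCHER);
        intent.setClassName("com.android.calculator2",
                            "com.android.calculator2.Calculator");
        try {
            startActivity(intent);
        } catch (ActivityNotFoundException e) {
            // No calculator with that component name on this device.
        }

    If the requirement really is to show a calculator inside your own activity, the usual route is to implement or bundle a calculator UI yourself rather than reuse another app's activity.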

    Read the article

  • Threaded Python port scanner

    - by Amnite
    I am having issues with a port scanner I'm editing to use threads. This is the basics for the original code: for i in range(0, 2000): s = socket(AF_INET, SOCK_STREAM) result = s.connect_ex((TargetIP, i)) if(result == 0) : c = "Port %d: OPEN\n" % (i,) s.close() This takes approx 33 minutes to complete. So I thought I'd thread it to make it run a little faster. This is my first threading project so it's nothing too extreme, but I've ran the following code for about an hour and get no exceptions yet no output. Am I just doing the threading wrong or what? import threading from socket import * import time a = 0 b = 0 c = "" d = "" def ScanLow(): global a global c for i in range(0, 1000): s = socket(AF_INET, SOCK_STREAM) result = s.connect_ex((TargetIP, i)) if(result == 0) : c = "Port %d: OPEN\n" % (i,) s.close() a += 1 def ScanHigh(): global b global d for i in range(1001, 2000): s = socket(AF_INET, SOCK_STREAM) result = s.connect_ex((TargetIP, i)) if(result == 0) : d = "Port %d: OPEN\n" % (i,) s.close() b += 1 Target = raw_input("Enter Host To Scan:") TargetIP = gethostbyname(Target) print "Start Scan On Host ", TargetIP Start = time.time() threading.Thread(target = ScanLow).start() threading.Thread(target = ScanHigh).start() e = a + b while e < 2000: f = raw_input() End = time.time() - Start print c print d print End g = raw_input()

    Read the article

  • Guidance regarding website development

    - by mehrozdurrani
    Dear friends, I want to start building a website, yet I don't have much experience with it. I graduated 6 months back, so you can fairly estimate my calibre for making a professional website. What I need is some bold guidance from all of you: how to start, or what sources I can use to learn a proper procedure for developing a site. Best Regards, Mehroz

    Read the article

  • Java: How to get the positions of all matches in a String?

    - by user692704
    I have a text document and a query (the query could be more than one word). I want to find the positions of all occurrences of the query in the document. I thought of documentText.indexOf(query) and of using a regular expression, but I could not make them work. I ended up with the following method: First, I created a data type called QueryOccurrence public class QueryOccurrence implements Serializable{ public QueryOccurrence(){} private int start; private int end; public QueryOccurrence(int nameStart,int nameEnd,String nameText){ start=nameStart; end=nameEnd; } public int getStart(){ return start; } public int getEnd(){ return end; } public void SetStart(int i){ start=i; } public void SetEnd(int i){ end=i; } } Then, I used this data type in the following method: public static List<QueryOccurrence> FindQueryPositions(String documentText, String query){ // Normalize does the following: lower case, trim, and remove punctuation String normalizedQuery = Normalize.Normalize(query); String normalizedDocument = Normalize.Normalize(documentText); String[] documentWords = normalizedDocument.split(" "); String[] queryArray = normalizedQuery.split(" "); List<QueryOccurrence> foundQueries = new ArrayList<QueryOccurrence>(); QueryOccurrence foundQuery = new QueryOccurrence(); int index = 0; for (String word : documentWords) { if (word.equals(queryArray[0])){ foundQuery.SetStart(index); } if (word.equals(queryArray[queryArray.length-1])){ foundQuery.SetEnd(index); if((foundQuery.getEnd()-foundQuery.getStart())+1==queryArray.length){ //add the found query to the list foundQueries.add(foundQuery); //flush the foundQuery variable to use it again foundQuery= new QueryOccurrence(); } } index++; } return foundQueries; } This method returns a list of all occurrences of the query in the document, each one with its position. Could you suggest any easier and faster way to accomplish this task? Thanks
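
    A simpler route that skips the word-splitting bookkeeping is to walk the text with String.indexOf, which already reports character offsets; the sketch below shows the idea, assuming you run it on the normalized text like the method above. A regular expression does the same job if you wrap the query in Pattern.quote so it is matched literally: while (matcher.find()) gives you matcher.start() and matcher.end() for each hit.

        import java.util.ArrayList;
        import java.util.List;

        class QueryPositions {
            // Record every occurrence of `query` in `text` as a [start, end) offset pair.
            static List<int[]> findAll(String text, String query) {
                List<int[]> positions = new ArrayList<int[]>();
                if (query.isEmpty()) return positions;
                int from = 0;
                while (true) {
                    int at = text.indexOf(query, from);
                    if (at < 0) break;
                    positions.add(new int[] { at, at + query.length() });
                    from = at + query.length(); // use at + 1 instead to allow overlapping matches
                }
                return positions;
            }
        }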

    Read the article

  • How do I unbind another jQuery function on .click()?

    - by Mike Barwick
    I have this script that runs to fix my menu bar to the browser on scroll. Nothing really needs to change here (works as it should). However, you may need it... var div = $('#wizMenuWrap'); var editor = $('#main_wrapper'); var start = $(div).offset().top; $(function fixedPackage(){ $.event.add(window, "scroll", function() { var p = $(window).scrollTop(); $(div).css('position',((p)>start) ? 'fixed' : 'static'); $(div).css('top',((p)>start) ? '0px' : ''); //Adds TOP margin to #main_wrapper (required) $(editor).css('position',((p)>start) ? 'relative' : 'static'); $(editor).css('top',((p)>start) ? '88px' : ''); }); }); Now for the issue at hand. I have another script function that calls a modal pop-up (which again works as it should). However, it's not slick from a UI perspective when I scroll the page while the modal is open. So I want to disable the script above when the modal script below is called. In other words, when I click to open the modal pop-up, the script above shouldn't work. $(function () { var setUp = $('.setupButton'); // SHOWS SPECIFIED VIEW $(setUp).click(function () { $('#setupPanel').modal('show'); //PREVENTS PACKAGE SELECT FIXED POSITION ON SCROLL $(setUp).unbind('click',fixedPackage); }); }) As you can see above, I tried to unbind the scroll function (the first code snippet), but this is not correct. These two scripts are in two separate js libraries.

    Read the article

  • Why are my threads being terminated?

    - by Sephy
    Hi, I'm trying to repeat calls to methods through 3 different threads. But after I start my threads, during the next iteration of my loop, they are all terminated so nothing is executed... The code is as follows: public static void main(String[] args) { main = new Main(); pollingThread.start(); } static Thread pollingThread = new Thread() { @Override public void run() { while (isRunning) { main.poll(); // test the state of the threads try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } } }; }; public void poll() { if (clientThread == null) { clientThread = new Thread(new Runnable() { @Override public void run() { //create some objects } }); clientThread.start(); } else if (clientThread.isAlive()) { // do some treatment } if (gestionnaireThread == null) { gestionnaireThread = new Thread(new Runnable() { @Override public void run() { //create some objects }; }); gestionnaireThread.start(); } else if (gestionnaireThread.isAlive()) { // do some treatment } if (marchandThread == null) { marchandThread = new Thread(new Runnable() { @Override public void run() { // create some objects }; }); marchandThread.start(); } else if (marchandThread.isAlive()) { // do some treatment } } And for some reason, when I test the state of my different threads, they appear as runnable and then at the 2nd iteration, they are all terminated... What am I doing wrong? I actually have no error, but the threads are terminated and so my loop keeps looping and telling me the threads are terminated.... Thanks for any help.
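
    A thread reaches the TERMINATED state as soon as its run() method returns, so if run() only creates some objects and then falls off the end, seeing the thread terminated on the next poll is expected behaviour rather than an error. If the worker is supposed to keep doing something, its run() needs its own loop (or the repeated work should be handed to an ExecutorService). A minimal sketch of the looping version:

        // A thread stays alive only while run() is executing; loop inside run()
        // and exit cleanly when the thread is interrupted.
        Thread clientThread = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    // ... do one cycle of the client work here ...
                    try {
                        Thread.sleep(1000); // pace the loop
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // restore the flag and stop
                        break;
                    }
                }
            }
        });
        clientThread.start();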

    Read the article

  • java.util.EmptyStackException on JIT/Warmup

    - by infectedrhythms
    I'm using a 3rd-party lib in my application that throws a java.util.EmptyStackException. This only happens during VM JIT/warm-up:
    1. Start the application.
    2. Start the stress test with no ramp-up: java.util.EmptyStackException is thrown.
    3. Keep the application running and redo the stress test: no exception is thrown.
    4. Shut down the application.
    5. Start the application.
    6. Start the stress test with ramp-up: no exception is thrown.
    I could keep reproducing this over and over. Anyone have any ideas on how I can trace this so I can give more info to the vendor? Or why it could even be happening? Thanks
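
    One way to gather more detail for the vendor is to capture the full stack trace and the thread it happens on at the moment the exception is raised, either by wrapping the failing call or, if it escapes on a thread you don't own, by installing a default uncaught-exception handler. A sketch, where callThirdPartyLib() is a made-up stand-in for whatever vendor call is failing:

        // Log the full trace and thread name at the point of failure, then rethrow
        // so the application's behaviour is otherwise unchanged.
        try {
            callThirdPartyLib(); // hypothetical: the vendor call that blows up
        } catch (java.util.EmptyStackException e) {
            System.err.println("EmptyStackException on thread " + Thread.currentThread().getName());
            e.printStackTrace();
            throw e;
        }

        // Fallback when the exception surfaces on a thread created by the library:
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread t, Throwable ex) {
                System.err.println("Uncaught exception on " + t.getName());
                ex.printStackTrace();
            }
        });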

    Read the article

  • Generics not so generic!

    - by Aymen
    Hi, I tried to implement a generic binary search algorithm in Scala. Here it is: type Ord ={ def <(x:Any):Boolean def >(x:Any):Boolean } def binSearch[T <: Ord ](x:T,start:Int,end:Int,t:Array[T]):Boolean = { if (start > end) return false val pos = (start + end ) / 2 if(t(pos)==x) true else if (t(pos) < x) binSearch(x,pos+1,end,t) else binSearch(x,start,pos-1,t) } Everything is OK until I try to actually use it: binSearch(3,0,4,Array(1,2,5,6)) The compiler complains that Int is not an Ord. What should I do to solve this? Thanks

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; -- ================ -- Singleton lookup -- ================ ; -- Single value equality seek in a unique index -- Scan count = 0 when STATISTIS IO is ON -- Check the XML SHOWPLAN SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32 ; -- =========== -- Range Scans -- =========== ; -- Query 1 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col <= 5 ORDER BY E.key_col ASC ; -- Query 2 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col > 95 ORDER BY E.key_col DESC ; -- Query 3 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15 ORDER BY E.key_col ASC ; -- Query 4 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 20 AND E.key_col < 25 ORDER BY E.key_col DESC ; -- Final query (singleton + range = 2 range scans) SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 10 OR E.key_col BETWEEN 15 AND 20 ORDER BY E.key_col ASC ; -- === TIDY UP === DROP TABLE dbo.Example; © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2, In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, Let’s start by understanding the components of Azure we’ll be making use of followed by manually putting them together to create the test rig, so… let’s get down dirty start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute and to enable communication between Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (How will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure connect. A very useful video explaining everything you wanted to know about Windows Azure connect.  Azure Compute Windows Azure compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role, we’ll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server... Virtual Machine Size CPU Cores Memory Cost Per Hour Extra Small Shared 768 MB $0.04 Small 1 1.75 GB $0.12 Medium 2 3.50 GB $0.24 Large 4 7.00 GB $0.48 Extra Large 8 14.00 GB $0.96   You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration Machine Role Comments VM – 1 Domain Controller for Playpit.com On Premise VM – 2 TFS, Test Controller On Premise VM – 3 Test Agent Cloud   In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller Installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog, Brian also has some great hands on Labs on TFS 2010 that you may want to explore. I. Lets start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools Open Visual Studio and create a new Windows Azure Project using the Cloud Template                   Choose the Worker Role for reasons explained in the earlier post         The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (The compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.                   
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
From the view menu bar select the Windows Azure Activity Log window.       Now you should be able to see the deployment progress in real time.             In the Windows Azure Management Portal, you should also be able to see the progress of creation of a new Worker Role.       Once the deployment is complete you should be able to RDP (go to run prompt type mstsc and in the pop up the machine name) in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the connect module, you can connect to the Test Agent machine using Windows Azure Management Portal and troubleshoot the reason for failure, you will be able to log in with the user name and password you specified in the config file for the keys ‘RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword (just that enter the password unencrypted)’, fix it or manually join the machine to the domain. Once you have managed to Join the Test Agent VM to the Domain move to the next step.      So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, the machine is connected to the domain and the connect service is successfully running. If yes, give your self a pat on the back, you are 80% mission accomplished!         Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles and click on Test Rig, click Edit Group, the edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check the ‘Allow connections between endpoints in the group’ with this you will enable to communication between test controller and test agents and test agents/test agents. Click Save.      Now, you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller Log in to the Worker Role Test Agent VM that you have just successfully deployed, make sure you log in with the domain administrator account. Download the All Agents software from MSDN, ‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’, extract the iso and navigate to where you have extracted the iso. In my case, i have extracted the iso to “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing TFS Test Agent on the VM, refer to the walkthrough on MSDN.       Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run as a different user. i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block something's otherwise).        In the run options, you can select ‘service’ you do not need to run the agent as interactive un less you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, i would never do that, i would create a separate test user service account for this purpose. 
But for the blog post, we are using the most powerful user so that any policies or restrictions don’t block you.        Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve and proceed. As per my experience, you may run in to either a permission or a firewall blocking communication issue.        And now the moment of truth! Go to VM –2 open up Visual Studio and from the Test Menu select Manage Test Controller       Mission Accomplished! You should be able to see the Test Agent that you have just configured here,         VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate, you can follow the links and videos below, Blog Posts: - Part 1 – Performance Testing using Visual Studio 2010 Ultimate - Part 2 – Performance Testing using Visual Studio 2010 Ultimate - Part 3 – Performance Testing using Visual Studio 2010 Ultimate Videos: - Test Tools Configuration & Settings in Visual Studio - Why & How to Record Web Performance Tests in Visual Studio Ultimate - Goal Driven Load Testing using Visual Studio Ultimate Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig, create a new Test settings file, and change the Test Execution method to ‘Remote Execution’ and select the test controller you have configured the Worker Role Test Agent against in our case VM – 2 So, go on, fire off a test run and see the results of the test being executed on the Azur-ed Test Rig. Review and What’s next? A quick recap of the benefits of running the Test Rig in the cloud and what i will be covering in the next blog post AND I would love to hear your feedback! Advantages Utilizing the power of Azure compute to run a heavy virtual user load. Benefiting from the Azure flexibility, destroy Test Agents when not in use, takes < 25 minutes to spin up a new Test Agent. Most important test Network Latency, (network latency and speed of connection are two different things – usually network latency is very hard to test), by placing the Test Agents in Microsoft Data centres around the globe, one can actually test the lag in transferring the bytes not because of a slow connection but because the page has been requested from the other side of the globe. Next Steps The process of spinning up the Test Agents in windows Azure is not 100% automated. I am working on the Worker process and power shell scripts to make the role deployment, unattended install of test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. See you in Part III.   Share this post : CodeProject

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle is providing a large range of interesting solutions to ensure High Availability of the database. Dataguard, RAC or even both configurations (as recommended by Oracle for a Maximum Available Architecture - MAA) are the most frequently found and used solutions. But, when it comes to protecting a system with an Active/Passive architecture with failover capabilities, people often thinks to other expensive third party cluster systems. Oracle Clusterware technology, which comes along at no extra-cost with Oracle Database or Oracle Unbreakable Linux, is - in the knowing of most people - often linked to Oracle RAC and therefore, is seldom used to implement failover solutions. Oracle Clusterware 11gR2  (a part of Oracle 11gR2 Grid Infrastructure)  provides a comprehensive framework to setup automatic failover configurations. It is actually possible to make "failover-able'", and then to protect, almost any kind of application (from the simple xclock to the most complex Application Server). Quoting Oracle: “Oracle Clusterware is a portable cluster software that allows clustering of single servers so that they cooperate as a single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC). In addition Oracle Clusterware enables the protection of any Oracle application or any other kind of application within a cluster.” In the next couple of lines, I will try to present the different steps to achieve this goal : Have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2, Oracle Grid Infrastructure 11gR2 on a Linux system and that ASM is not a problem for you (as I am using it as a shared storage). If not, please have a look at Oracle Documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional on my system. Unfortunately, there can always be a typo or a mistake. This blog entry does not replace a course around the Clusterware Framework. I just hope it will let you see how powerful it is and that it will give you the whilst to go further with it...  Note : This entry has been revised (rev.2) following comments from Philip Newlan. Prerequisite 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with an Enterprise Kernel. Shared Storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN Oracle 11gR2 Database (11.2.0.1) Oracle 11gR2 Grid Infrastructure (11.2.0.1)   Step 1 - Install the software Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA) Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster) Use ASM_CRS to store Voting Disk and OCR. Use SCAN. Install Oracle Database Standalone binaries on both nodes. Use asmca to check/mount the disk groups on 2 nodes Use dbca to create and configure a database on the primary node Let's name it DB11G. Copy the pfile, password file to the second node. Create adump directoty on the second node.   Step 2 - Setup the resource to be protected After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (ex: kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command : crsctl status resource that shows and ora.dba11g.db entry. 
Let's save the definition of this resource, for future use : mkdir -p /crs/11.2.0/HA_scripts chown oracle:oinstall /crs/11.2.0/HA_scripts crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So, let's remove it from the OCR definitions, we don't need it ! srvctl stop database -d DB11G srvctl remove database -d DB11G Instead of it, we need to create a new resource of a more general type : cluster_resource. Here are the steps to achieve this : Create an action script :  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh #!/bin/bash export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 export ORACLE_SID=DB11G case $1 in 'start')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   startup EOF   RET=0   ;; 'stop')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown immediate EOF   RET=0   ;; 'clean')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown abort    ##for i in `ps -ef | grep -i $ORACLE_SID | awk '{print $2}' ` ;do kill -9 $i; done EOF   RET=0   ;; 'check')    ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`    if [ $ok = 0 ]; then      RET=1    else      RET=0    fi    ;; '*')      RET=0   ;; esac if [ $RET -eq 0 ]; then    exit 0 else    exit 1 fi   This script must provide, at least, methods to start, stop, clean and check the database. It is self-explaining and contains nothing special. Just be aware that it must be runnable (+x), it runs as Oracle user (because of the ACL property - see later) and needs to know about the environment. Also make sure it exists on every node of the cluster. Moreover, as of 11.2, the clean method is mandatory. It must provide the “last gasp clean up”, for example, a shutdown abort or a kill –9 of all the remaining processes. chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh scp  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh   oracle@OELCluster02:/crs/11.2.0/HA_scripts Create a new resource file, based on the information we got from previous  myResource.txt . Name it myNewResource.txt. myResource.txt  is shown below. As we can see, it defines an ora.database.type resource, named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource. 
NAME=ora.db11g.db TYPE=ora.database.type ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_FAILURE_TEMPLATE= ACTION_SCRIPT= ACTIVE_PLACEMENT=1 AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX% AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=1 CHECK_TIMEOUT=600 CLUSTER_DATABASE=false DB_UNIQUE_NAME=DB11G DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) DEGREE=1 DESCRIPTION=Oracle Database resource ENABLED=1 FAILOVER_DELAY=0 FAILURE_INTERVAL=60 FAILURE_THRESHOLD=1 GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump GEN_USR_ORA_INST_NAME= GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G HOSTING_MEMBERS= INSTANCE_FAILOVER=0 LOAD=1 LOGGING_LEVEL=1 MANAGEMENT_POLICY=AUTOMATIC NLS_LANG= NOT_RESTARTING_TEMPLATE= OFFLINE_CHECK_INTERVAL=0 ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 PLACEMENT=restricted PROFILE_CHANGE_TEMPLATE= RESTART_ATTEMPTS=2 ROLE=PRIMARY SCRIPT_TIMEOUT=60 SERVER_POOLS=ora.DB11G SPFILE=+DTA/DB11G/spfileDB11G.ora START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STATE_CHANGE_TEMPLATE= STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h USR_ORA_DB_NAME=DB11G USR_ORA_DOMAIN=haroland USR_ORA_ENV= USR_ORA_FLAGS= USR_ORA_INST_NAME=DB11G USR_ORA_OPEN_MODE=open USR_ORA_OPI=false USR_ORA_STOP_MODE=immediate VERSION=11.2.0.1.0 I removed database type related entries from myResource.txt and modified some other to produce the following myNewResource.txt. Notice the NAME property that should not have the ora. prefix Notice the TYPE property that is not ora.database.type but cluster_resource. Notice the definition of ACTION_SCRIPT. Notice the HOSTING_MEMBERS that enumerates the members of the cluster (as returned by the olsnodes command). NAME=DB11G.db TYPE=cluster_resource DESCRIPTION=Oracle Database resource ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh PLACEMENT=restricted ACTIVE_PLACEMENT=0 AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=10 DEGREE=1 ENABLED=1 HOSTING_MEMBERS=oelcluster01 oelcluster02 LOGGING_LEVEL=1 RESTART_ATTEMPTS=1 START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h Register the resource. Take care of the resource type. It needs to be a cluster_resource and not a ora.database.type resource (Oracle recommendation) .   crsctl add resource DB11G.db  -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt Step 3 - Start the resource crsctl start resource DB11G.db This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster. Step 4 - Test this We will test the setup using 2 methods. crsctl relocate resource DB11G.db This command calls the ACTION_SCRIPT  (on the two nodes)  to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but, this time we can use a more "MS$ like" method :Turn off the server on which the database is running. After short delay, you should observe that the database is relocated on node 1. 
Conclusion Once the software installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy : Create an executable action script on every node of the cluster. Create a resource file. Create/Register the resource with OCR using the resource file. Start the resource. This solution is a very interesting alternative to licensable third party solutions. References Clusterware 11gR2 documentation Oracle Clusterware Resource Reference Clusterware for Unbreakable Linux Using Oracle Clusterware to Protect A Single Instance Oracle Database 11gR1 (to have an idea of complexity) Oracle Clusterware on OTN   Gilles Haro Technical Expert - Core Technology, Oracle Consulting   

    Read the article

  • 8-Puzzle Solution executes infinitely [migrated]

    - by Ashwin
    I am looking for a solution to the 8-puzzle problem using the A* algorithm. I found this project on the internet. Please see the files - proj1 and EightPuzzle. proj1 contains the entry point for the program (the main() function) and EightPuzzle describes a particular state of the puzzle; each state is an object of the 8-puzzle. I feel there is nothing wrong with the logic, but it loops forever for the two inputs I have tried: {8,2,7,5,1,6,3,0,4} and {3,1,6,8,4,5,7,2,0}. Both are valid input states. What is wrong with the code? Note: for better viewing, copy the code into Notepad++ or another text editor that recognizes Java source files, because there are a lot of comments in the code. Since A* requires a heuristic, they have provided the option of using the Manhattan distance or a heuristic that counts the number of misplaced tiles. And to ensure that the state with the lowest estimated cost is expanded first, they have implemented a PriorityQueue; the compareTo() function is implemented in the EightPuzzle class. The input to the program can be changed by changing the value of p1d in the main() function of the proj1 class. The reason I say a solution exists for my two inputs above is that the applet here solves them (please make sure you select 8-puzzle from the options in the applet). EDIT: I gave this input {0,5,7,6,8,1,2,4,3}. It took about 10 seconds and gave a result with 26 moves, but the applet gave a result with 24 moves in 0.0001 seconds with A*. For quick reference I have pasted the two classes without the comments: EightPuzzle import java.util.*; public class EightPuzzle implements Comparable <Object> { int[] puzzle = new int[9]; int h_n= 0; int hueristic_type = 0; int g_n = 0; int f_n = 0; EightPuzzle parent = null; public EightPuzzle(int[] p, int h_type, int cost) { this.puzzle = p; this.hueristic_type = h_type; this.h_n = (h_type == 1) ?
h1(p) : h2(p); this.g_n = cost; this.f_n = h_n + g_n; } public int getF_n() { return f_n; } public void setParent(EightPuzzle input) { this.parent = input; } public EightPuzzle getParent() { return this.parent; } public int inversions() { /* * Definition: For any other configuration besides the goal, * whenever a tile with a greater number on it precedes a * tile with a smaller number, the two tiles are said to be inverted */ int inversion = 0; for(int i = 0; i < this.puzzle.length; i++ ) { for(int j = 0; j < i; j++) { if(this.puzzle[i] != 0 && this.puzzle[j] != 0) { if(this.puzzle[i] < this.puzzle[j]) inversion++; } } } return inversion; } public int h1(int[] list) // h1 = the number of misplaced tiles { int gn = 0; for(int i = 0; i < list.length; i++) { if(list[i] != i && list[i] != 0) gn++; } return gn; } public LinkedList<EightPuzzle> getChildren() { LinkedList<EightPuzzle> children = new LinkedList<EightPuzzle>(); int loc = 0; int temparray[] = new int[this.puzzle.length]; EightPuzzle rightP, upP, downP, leftP; while(this.puzzle[loc] != 0) { loc++; } if(loc % 3 == 0){ temparray = this.puzzle.clone(); temparray[loc] = temparray[loc + 1]; temparray[loc + 1] = 0; rightP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); rightP.setParent(this); children.add(rightP); }else if(loc % 3 == 1){ //add one child swaps with right temparray = this.puzzle.clone(); temparray[loc] = temparray[loc + 1]; temparray[loc + 1] = 0; rightP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); rightP.setParent(this); children.add(rightP); //add one child swaps with left temparray = this.puzzle.clone(); temparray[loc] = temparray[loc - 1]; temparray[loc - 1] = 0; leftP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); leftP.setParent(this); children.add(leftP); }else if(loc % 3 == 2){ // add one child swaps with left temparray = this.puzzle.clone(); temparray[loc] = temparray[loc - 1]; temparray[loc - 1] = 0; leftP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); leftP.setParent(this); children.add(leftP); } if(loc / 3 == 0){ //add one child swaps with lower temparray = this.puzzle.clone(); temparray[loc] = temparray[loc + 3]; temparray[loc + 3] = 0; downP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); downP.setParent(this); children.add(downP); }else if(loc / 3 == 1 ){ //add one child, swap with upper temparray = this.puzzle.clone(); temparray[loc] = temparray[loc - 3]; temparray[loc - 3] = 0; upP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); upP.setParent(this); children.add(upP); //add one child, swap with lower temparray = this.puzzle.clone(); temparray[loc] = temparray[loc + 3]; temparray[loc + 3] = 0; downP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); downP.setParent(this); children.add(downP); }else if (loc / 3 == 2 ){ //add one child, swap with upper temparray = this.puzzle.clone(); temparray[loc] = temparray[loc - 3]; temparray[loc - 3] = 0; upP = new EightPuzzle(temparray, this.hueristic_type, this.g_n + 1); upP.setParent(this); children.add(upP); } return children; } public int h2(int[] list) // h2 = the sum of the distances of the tiles from their goal positions // for each item find its goal position // calculate how many positions it needs to move to get into that position { int gn = 0; int row = 0; int col = 0; for(int i = 0; i < list.length; i++) { if(list[i] != 0) { row = list[i] / 3; col = list[i] % 3; row = Math.abs(row - (i / 3)); col = Math.abs(col - (i % 3)); gn += row; gn += 
col; } } return gn; } public String toString() { String x = ""; for(int i = 0; i < this.puzzle.length; i++){ x += puzzle[i] + " "; if((i + 1) % 3 == 0) x += "\n"; } return x; } public int compareTo(Object input) { if (this.f_n < ((EightPuzzle) input).getF_n()) return -1; else if (this.f_n > ((EightPuzzle) input).getF_n()) return 1; return 0; } public boolean equals(EightPuzzle test){ if(this.f_n != test.getF_n()) return false; for(int i = 0 ; i < this.puzzle.length; i++) { if(this.puzzle[i] != test.puzzle[i]) return false; } return true; } public boolean mapEquals(EightPuzzle test){ for(int i = 0 ; i < this.puzzle.length; i++) { if(this.puzzle[i] != test.puzzle[i]) return false; } return true; } } proj1 import java.util.*; public class proj1 { /** * @param args */ public static void main(String[] args) { int[] p1d = {1, 4, 2, 3, 0, 5, 6, 7, 8}; int hueristic = 2; EightPuzzle start = new EightPuzzle(p1d, hueristic, 0); int[] win = { 0, 1, 2, 3, 4, 5, 6, 7, 8}; EightPuzzle goal = new EightPuzzle(win, hueristic, 0); astar(start, goal); } public static void astar(EightPuzzle start, EightPuzzle goal) { if(start.inversions() % 2 == 1) { System.out.println("Unsolvable"); return; } // function A*(start,goal) // closedset := the empty set // The set of nodes already evaluated. LinkedList<EightPuzzle> closedset = new LinkedList<EightPuzzle>(); // openset := set containing the initial node // The set of tentative nodes to be evaluated. priority queue PriorityQueue<EightPuzzle> openset = new PriorityQueue<EightPuzzle>(); openset.add(start); while(openset.size() > 0){ // x := the node in openset having the lowest f_score[] value EightPuzzle x = openset.peek(); // if x = goal if(x.mapEquals(goal)) { // return reconstruct_path(came_from, came_from[goal]) Stack<EightPuzzle> toDisplay = reconstruct(x); System.out.println("Printing solution... "); System.out.println(start.toString()); print(toDisplay); return; } // remove x from openset // add x to closedset closedset.add(openset.poll()); LinkedList <EightPuzzle> neighbor = x.getChildren(); // foreach y in neighbor_nodes(x) while(neighbor.size() > 0) { EightPuzzle y = neighbor.removeFirst(); // if y in closedset if(closedset.contains(y)){ // continue continue; } // tentative_g_score := g_score[x] + dist_between(x,y) // // if y not in openset if(!closedset.contains(y)){ // add y to openset openset.add(y); // } // } // } } public static void print(Stack<EightPuzzle> x) { while(!x.isEmpty()) { EightPuzzle temp = x.pop(); System.out.println(temp.toString()); } } public static Stack<EightPuzzle> reconstruct(EightPuzzle winner) { Stack<EightPuzzle> correctOutput = new Stack<EightPuzzle>(); while(winner.getParent() != null) { correctOutput.add(winner); winner = winner.getParent(); } return correctOutput; } }
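    For what it's worth, a frequent cause of this kind of endless loop in A* implementations is that set membership is tested against the wrong set, or through an equals() that mixes the f-score into the comparison, so already-expanded boards keep getting pushed back onto the open list. Below is a minimal sketch of the usual guard, tracking expanded boards purely by their tile configuration; it reuses the EightPuzzle class and the print()/reconstruct() helpers from the posted code, but it is an illustration of the technique rather than a verified fix for this exact program:

    import java.util.*;

    // Sketch only: assumes the EightPuzzle class and proj1 helpers posted above.
    public static void astarOnce(EightPuzzle start, EightPuzzle goal) {
        Set<String> expanded = new HashSet<String>();
        PriorityQueue<EightPuzzle> openset = new PriorityQueue<EightPuzzle>();
        openset.add(start);
        while (!openset.isEmpty()) {
            EightPuzzle x = openset.poll();             // cheapest f_n first
            if (x.mapEquals(goal)) {
                print(reconstruct(x));                  // reuse the helpers from proj1
                return;
            }
            String key = Arrays.toString(x.puzzle);     // identity = board contents only
            if (!expanded.add(key)) continue;           // already expanded once: skip duplicates
            for (EightPuzzle y : x.getChildren()) {
                if (!expanded.contains(Arrays.toString(y.puzzle))) {
                    openset.add(y);                     // duplicates in the queue are absorbed by the guard above
                }
            }
        }
    }

    With a consistent heuristic such as the Manhattan distance, expanding each configuration at most once like this keeps the search finite and still returns an optimal solution.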

    Read the article

  • Clusterware 11gR2 &ndash; Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides many interesting ways to ensure High Availability. Data Guard configurations, RAC configurations, or even both (as recommended for a Maximum Availability Architecture - MAA) are the most frequently found. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, one often thinks of expensive third-party cluster systems. Oracle Clusterware technology, which comes free with Oracle Database, is, in the minds of most people, tied to Oracle RAC and is therefore rarely used to implement failover solutions. 11gR2 Clusterware, which is part of Oracle Grid Infrastructure, provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and therefore to protect, almost every kind of application (from xclock to a more complex application server). In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (I am using it as shared storage). If not, please have a look at the Oracle documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional; unfortunately, there can always be a typo or a mistake. This blog entry is not a course on the Clusterware framework. I just hope it will let you see how powerful it is and give you the will to go further with it…   Prerequisite 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with the Enterprise Kernel. Shared Storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN. Oracle 11gR2 Database (11.2.0.1) Oracle 11gR2 Grid Infrastructure (11.2.0.1)   Step 1 - Install the software Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA). Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the Voting Disk and OCR. Use SCAN. Install the Oracle Database standalone binaries on both nodes. Use asmca to check/mount the disk groups on the 2 nodes. Use dbca to create and configure a database on the primary node; let's name it DB11G. Copy the pfile and password file to the second node. Create the adump directory on the second node.   Step 2 - Set up the resource to be protected After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (e.g. kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command crsctl status resource, which shows an ora.db11g.db entry. Let's save the definition of this resource for future use: mkdir -p /crs/11.2.0/HA_scripts chown oracle:oinstall /crs/11.2.0/HA_scripts crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions, we don't need it: srvctl stop database -d DB11G srvctl remove database -d DB11G Instead of it, we need to create a new resource of a more general type: cluster_resource. 
Here are the steps to achieve this : Create an action script :  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh #!/bin/bash export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 export ORACLE_SID=DB11G case $1 in 'start')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   startup EOF   RET=0   ;; 'stop')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown immediate EOF   RET=0   ;; 'check')    ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`    if [ $ok = 0 ]; then      RET=1    else      RET=0    fi    ;; '*')      RET=0   ;; esac if [ $RET -eq 0 ]; then    exit 0 else    exit 1 fi   This script must provide, at least, methods to start, stop and check the database. It is self-explaining and contains nothing special. Just be aware that it is run as Oracle user (because of the ACL property – see later) and needs to know about the environment. It also needs to be present on every node of the cluster. chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh scp  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh   oracle@OELCluster02:/crs/11.2.0/HA_scripts Create a new resource file, based on the information we got from previous  myResource.txt . Name it myNewResource.txt. myResource.txt  is shown below. As we can see, it defines an ora.database.type resource, named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource. NAME=ora.db11g.db TYPE=ora.database.type ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_FAILURE_TEMPLATE= ACTION_SCRIPT= ACTIVE_PLACEMENT=1 AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX% AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=1 CHECK_TIMEOUT=600 CLUSTER_DATABASE=false DB_UNIQUE_NAME=DB11G DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) DEGREE=1 DESCRIPTION=Oracle Database resource ENABLED=1 FAILOVER_DELAY=0 FAILURE_INTERVAL=60 FAILURE_THRESHOLD=1 GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump GEN_USR_ORA_INST_NAME= GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G HOSTING_MEMBERS= INSTANCE_FAILOVER=0 LOAD=1 LOGGING_LEVEL=1 MANAGEMENT_POLICY=AUTOMATIC NLS_LANG= NOT_RESTARTING_TEMPLATE= OFFLINE_CHECK_INTERVAL=0 ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 PLACEMENT=restricted PROFILE_CHANGE_TEMPLATE= RESTART_ATTEMPTS=2 ROLE=PRIMARY SCRIPT_TIMEOUT=60 SERVER_POOLS=ora.DB11G SPFILE=+DTA/DB11G/spfileDB11G.ora START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STATE_CHANGE_TEMPLATE= STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h USR_ORA_DB_NAME=DB11G USR_ORA_DOMAIN=haroland USR_ORA_ENV= USR_ORA_FLAGS= USR_ORA_INST_NAME=DB11G USR_ORA_OPEN_MODE=open USR_ORA_OPI=false USR_ORA_STOP_MODE=immediate VERSION=11.2.0.1.0 I removed database type related entries from myResource.txt and modified some other to produce the following myNewResource.txt. Notice the NAME property that should not have the ora. prefix Notice the TYPE property that is not ora.database.type but cluster_resource. Notice the definition of ACTION_SCRIPT. Notice the HOSTING_MEMBERS that enumerates the members of the cluster (as returned by the olsnodes command). 
NAME=DB11G.db TYPE=cluster_resource DESCRIPTION=Oracle Database resource ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh PLACEMENT=restricted ACTIVE_PLACEMENT=0 AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=10 DEGREE=1 ENABLED=1 HOSTING_MEMBERS=oelcluster01 oelcluster02 LOGGING_LEVEL=1 RESTART_ATTEMPTS=1 START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h Register the resource. Take care of the resource type. It needs to be a cluster_resource and not a ora.database.type resource (Oracle recommendation) .   crsctl add resource DB11G.db  -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt Step 3 - Start the resource crsctl start resource DB11G.db This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster. Step 4 - Test this We will test the setup using 2 methods. crsctl relocate resource DB11G.db This command calls the ACTION_SCRIPT  (on the two nodes)  to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but, this time we can use a more “MS$ like” method :Turn off the server on which the database is running. After short delay, you should observe that the database is relocated on node 1. Conclusion Once the software installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy : Create an executable action script on every node of the cluster. Create a resource file. Create/Register the resource with OCR using the resource file. Start the resource. This solution is a very interesting alternative to licensable third party solutions.   References Clusterware 11gR2 documentation Oracle Clusterware Resource Reference   Gilles Haro Technical Expert - Core Technology, Oracle Consulting   

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is Database as a Service, and EM12c R4 provides a significant update to it. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard) Additional Storage Options for Snap Clone (includes support for the database feature CloneDB) Improved Rapid Start Kits Extensible Metering and Chargeback Miscellaneous Enhancements 1. Comprehensive Database Service Catalog Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have become widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: Service Catalogs: Defining Standardized Database Service High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA] EM12c has come with an out-of-the-box service catalog and self-service portal since release 1. For customers, it provides the following benefits: present a collection of standardized database service definitions, define standardized pools of hardware and software for provisioning, role-based access to cater to different classes of users, automated procedures to provision the predefined database definitions, set up chargeback plans based on service tiers and database configuration sizes, etc. Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration: Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites). The standby databases can be single instance, RAC, or RAC One Node databases. Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software. The standby databases can be in either mount or read-only (requires the Active Data Guard option) mode. All database versions 10g to 12c are supported (as certified with EM 12c). All 3 protection modes can be used - Maximum Availability, Maximum Performance, Maximum Protection. Log apply can be set to sync or async along with the required apply lag. The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination of primary and standby topologies, all supported in EM 12cR4: primary SI with no standby; primary SI with an SI standby; primary RAC with no standby; primary RAC with an SI standby; primary RAC with a RAC standby; primary RON with no standby; primary RON with a RON standby - where RON = RAC One Node, which is supported via custom post-scripts in the service template. A sample service catalog would look like the image below. 
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to setup this sample service catalog in EM12c. 2. Additional Storage Options for Snap Clone In my previous blog posts, i have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and Time, two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued with the dual solution approach of Hardware and Software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. While in the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus delivering the benefits of database thin cloning, without requiring you to drastically changing the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature, it was first introduced in 11.2.0.2 patchset, it has over the years become more stable and mature. CloneDB leverages a combination of Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on cloneDB, i highly recommend reading the following sources: Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2 Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB The advantages of the new CloneDB integration with EM12c Snap Clone are: Space and time savings Ease of setup - no additional software is required other than the Oracle database binary Works on all platforms Reduce the dependence on storage administrators Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal Uses dNFS to delivers better performance, availability, and scalability over kernel NFS Complete lifecycle of the clones managed by EM12c - performance, configuration, etc 3. Improved Rapid Start Kits DBaaS deployments tend to be complex and its setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to setup Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). 
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. Rapid start kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to setup PDBaaS end-to-end. This kit works for both Oracle's engineered systems like Exadata, SuperCluster, etc and also on commodity hardware. One can draw parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup It can be run from this default location or from any server which has emcli client installed For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py The database_cloud_setup.py script takes two inputs: Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in case of Exadata, as the boundary is well know via the Exadata system target available in EM. Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the xml files have been prepared, invoke the script as follows for PDBaaS: emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml          The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request for databases from the portal. More information available in the Rapid Start Kit chapter in Cloud Administration Guide.  4. Extensible Metering and Chargeback  Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customer, partners, system integrators, etc to : Extend chargeback to any target type managed in EM Promote any metric in EM as a chargeback entity Extend list of charge items via metric or configuration extensions Model abstract entities like no. of backup requests, job executions, support requests, etc  A slew of emcli verbs have also been added that allows administrators to create, edit, delete, import/export charge plans, and assign cost centers all via the command line. More information available in the Chargeback API chapter in Cloud Administration Guide. 5. Miscellaneous Enhancements There are other miscellaneous, yet important, enhancements that are worth a mention. 
    Most of these have been asked for by customers like you. They are: Custom naming of DB services - self-service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM. 'Create like' of service templates - creating variants of a service template is now only a click away; this is vital when you publish service templates to represent different database sizes or service levels. Profile viewer - view the details of a profile (datafiles, control files, snapshot ids, export/import files, etc.) prior to selecting it in a service template. Cleanup automation for failed and successful requests - a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool, and as an extension you can also delete successful requests. Improved delete user workflow - allows administrators to reassign cloud resources to another user or delete all of them. Support for multiple tablespaces for Schema as a Service - in addition to multiple schemas, users can also specify multiple tablespaces per request. I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)

    Read the article

  • Understanding the GoldenGate Replicat HANDLECOLLISIONS parameter

    - by Liu Maclean(???)
    HANDLECOLLISIONS?????goldengate????????REPLICAT??,???????????????????,???????????????????????????,??????????????????????????reperror????????discard??,????????????????,??????(????error mapping????,???????discard??),??????????????;?????????????????,????????? ??HANDLECOLLISIONS?????: target??delete??(missing delete),??????????discardfile target??update??(missing update) ????????=» update???INSERT ,???????????? ?????????=» ??????????discardfile ????????????target??,???replicat???UPDATE?????????????? ??1 target??delete??(missing delete) : C:\Users\ML>sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 18 13:38:03 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> conn sender/oracle Connected. SQL> create table handlec(t1 int primary key,t2 int); Table created. SQL> insert into handlec values(1,2); 1 row created. SQL> insert into handlec values(3,2); 1 row created. SQL> insert into handlec values(4,2); 1 row created. SQL> commit; Commit complete. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 3 2 4 2 target : SQL> conn receiver/oracle Connected. SQL> create table handlec(t1 int primary key,t2 int); Table created. SQL> insert into handlec values(1,2); 1 row created. SQL> commit; SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 SQL> GGSCI (XIANGBLI-CN) 1> alter extract load2 , begin now EXTRACT altered. GGSCI (XIANGBLI-CN) 4> alter replicat rep2, begin now REPLICAT altered. GGSCI (XIANGBLI-CN) 13> add trandata sender.* Logging of supplemental redo data enabled for table SENDER.HANDLEC. Logging of supplemental redo log data is already enabled for table SENDER.TV. GGSCI (XIANGBLI-CN) 14> start mgr MGR is already running. GGSCI (XIANGBLI-CN) 15> start er * Sending START request to MANAGER ... EXTRACT LOAD2 starting Sending START request to MANAGER ... REPLICAT REP2 starting GGSCI (XIANGBLI-CN) 16> info all Program Status Group Lag at Chkpt Time Since Chkpt MANAGER RUNNING EXTRACT RUNNING LOAD2 00:00:00 00:00:01 REPLICAT RUNNING REP2 00:00:00 00:00:08 ***SOURCE?????TARGET????? SQL> delete handlec where t1=3; 1 row deleted. SQL> commit; Commit complete. ??SQL error 1403??,REPLICAT ABORT 2012-09-18 13:45:48 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL ). 2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3. 2012-09-18 13:45:48 WARNING OGG-01154 SQL error 1403 mapping SENDER.HANDLEC to RECEIVER.HANDLEC OCI Error ORA-01403: no data found, SQL . 2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3. 
Source Context : SourceModule : [er.errors] SourceID : [er/errors.cpp] SourceFunction : [take_rep_err_action] SourceLine : [623] ThreadBacktrace : [8] elements : [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]] : [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]] : [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]] : [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]] : [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]] : [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]] 2012-09-18 13:45:48 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC. *********************************************************************** * ** Run Time Statistics ** * *********************************************************************** Last record for the last committed transaction is the following: ___________________________________________________________________ Trail name : D:\ogg\V34342-01\ex\ze000003 Hdr-Ind : E (x45) Partition : . (x04) UndoFlag : . (x00) BeforeAfter: B (x42) RecLength : 9 (x0009) IO Time : 2012-09-18 13:45:38.000000 IOType : 3 (x03) OrigNode : 255 (xff) TransInd : . (x03) FormatType : R (x52) SyskeyLen : 0 (x00) Incomplete : . (x00) AuditRBA : 44 AuditPos : 3337232 Continued : N (x00) RecCount : 1 (x01) 2012-09-18 13:45:38.000000 Delete Len 9 RBA 1091 Name: SENDER.HANDLEC ___________________________________________________________________ Reading D:\ogg\V34342-01\ex\ze000003, current RBA 1091, 0 records Report at 2012-09-18 13:45:48 (activity since 2012-09-18 13:45:48) From Table SENDER.HANDLEC to RECEIVER.HANDLEC: # inserts: 0 # updates: 0 # deletes: 0 # discards: 1 Last log location read: FILE: D:\ogg\V34342-01\ex\ze000003 SEQNO: 3 RBA: 1091 TIMESTAMP: 2012-09-18 13:45:38.000000 EOF: NO READERR: 0 2012-09-18 13:45:48 ERROR OGG-01668 PROCESS ABENDING. 2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE1.TRC closed. 2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE2.TRC closed. CACHE OBJECT MANAGER statistics CACHE MANAGER VM USAGE vm current = 0 vm anon queues = 0 vm anon in use = 0 vm file = 0 vm used max = 0 ==> CACHE BALANCED CACHE CONFIGURATION cache size = 2G cache force paging = 3.41G buffer min = 64K buffer highwater = 8M pageout eligible size = 8M ================================================================================ ??skiptransaction???????? GGSCI (XIANGBLI-CN) 18> start rep2 skiptransaction Sending START request to MANAGER ... REPLICAT REP2 starting ??2 target??update??(missing update),???????? : ???????, ??source????????? SQL> update handlec set t1=5 where t1=4; 1 row updated. SQL> commit; Commit complete. ???target ????(miss update)??????? Database error 1403+OGG-01296 2012-09-18 13:49:30 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "RECEIVER"."HANDLEC" SET "T1" = :a1 WHERE "T1" = :b0>). 2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3. 2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3. 
Source Context : SourceModule : [er.errors] SourceID : [er/errors.cpp] SourceFunction : [take_rep_err_action] SourceLine : [623] ThreadBacktrace : [8] elements : [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]] : [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]] : [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]] : [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]] : [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]] : [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]] 2012-09-18 13:49:30 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC. ??HANDLECOLLISIONS?,rep??????????discard?? GGSCI (XIANGBLI-CN) 23> view params rep2 replicat rep2 userid receiver , password oracle trace ./rep_trace1.trc trace2 ./rep_trace2.trc ASSUMETARGETDEFS HANDLECOLLISIONS map sender.*, target receiver.*; GGSCI (XIANGBLI-CN) 18> start rep2 SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 5 ????T1=5 T2 NULL?????? ,??update?????????????,??replicat??????????????update????????????????,?????T2 ?NULL ,????????????EXTRACT??PKUPDATE??? ????????FETCHOPTIONS FETCHPKUPDATECOLS ????????EXTRACT?????,???EXTRACT? ????extract???????????? ??????: SQL> conn receiver/oracle Connected. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 5 20 200 SQL> delete handlec where t1=5; 1 row deleted. SQL> commit; Commit complete. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 20 200 SQL> conn sender/oracle Connected. SQL> update handlec set t1=t1+1000 where t1=5; 1 row updated. SQL> commit; Commit complete. SQL> conn receiver/oracle Connected. SQL> SQL> SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 20 200 1005 2 ???????FETCHOPTIONS FETCHPKUPDATECOLS??????redo image???trail?,????primary key?????HANDLECOLLISIONS????target??????????? ??3 ????????????target??,???replicat???UPDATE??????????????: *** TARGET SQL> conn receiver/oracle Connected. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 9 5 target????? t1=10 t2=9??? ,????source???(10,100)??? >>SOURCE SQL> insert into handlec values(10,100); 1 row created. SQL> commit; >>TARGET SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 5 ???????source?insert??,???target???????????????HANDLECOLLISIONS?REPLICAT???UPDATE??????COLUMNS ?? HANDLECOLLISIONS?????goldengate????????REPLICAT??,???????????????????,???????????????????????????,??????????????????????????reperror????????discard??,????????????????,??????,??????????????;?????????????????,????????? ??HANDLECOLLISIONS?????: target??delete??(missing delete),??????????discardfile target??update??(missing update) ????????=» update???INSERT ,???????????? ?????????=» ??????????discardfile ????????????target??,???replicat???UPDATE?????????????? ?:???????????Insert/Delete??,????????????????Replicat?????abend,????? ???????????,??target??HANDLECOLLISIONS??update??,?????INSERT??????,???????????????,FETCHOPTIONS FETCHPKUPDATECOLS??????redo image???trail?,????primary key?????HANDLECOLLISIONS????target??????????? 
    You can also use the send command to toggle HANDLECOLLISIONS on a running Replicat without editing the parameter file: GGSCI (XIANGBLI-CN) 29> send rep2, NOHANDLECOLLISIONS Sending NOHANDLECOLLISIONS request to REPLICAT REP2 ... REP2 NOHANDLECOLLISIONS set for 1 tables and 0 wildcard entries
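    HANDLECOLLISIONS does not have to be global for the whole Replicat either: it can be enabled per MAP statement, which is convenient when only some tables need collision handling during an initial-load catch-up window. A sketch of such a parameter file, reusing the schema and table names from the test above (treat it as an illustration, not a drop-in config):

    replicat rep2
    userid receiver, password oracle
    ASSUMETARGETDEFS
    MAP sender.handlec, TARGET receiver.handlec, HANDLECOLLISIONS;
    MAP sender.tv, TARGET receiver.tv;

    Once the target has caught up, the per-MAP option can be removed (or switched off at runtime with send, as shown above) so that real collisions surface as errors again instead of being silently resolved.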

    Read the article

  • WDS 2008 R2 DHCP Error

    - by scampbell
    Im having a problem where I get the error 'An error occurred while obtaining an IP address from the DHCP server. Please check to ensure that there is an operational DHCP server on this network segment' when booting from a standard WDS boot.wim image taken from a Windows 7 DVD. I am using Server 2008 R2 and am adding the drivers to the boot using WDS, but also have the problem if the drivers are injected beforehand using DISM. When the error occurs I can shift + F10 and IPCONFIG and see it HAS picked up an internal IP from DHCP. Seems maybe it is timing out before it gets the IP? DHCP server is not on the WDS box but is in the same subnet. As per some fixes I have read I enabled RSTP on my switches but that didnt help. I have included the end of setupact.log to see if any of you have any ideas. Seems to be failing but as I say, the network IS initialized as I can see the internal IP assigned by DHCP when running IPCONFIG. I dont suppose theres any way of increasing the timeout? Thanks. 2011-04-11 17:26:31, Info [0x0b0022] WDS StartNetworking: Trying to start networking. 2011-04-11 17:26:31, Info WDS Network service dhcp not running or could not be queried: 264d00 1 1 2011-04-11 17:26:31, Info WDS Network service lmhosts not running or could not be queried: 264e18 1 1 2011-04-11 17:26:31, Info WDS Network service lanmanworkstation not running or could not be queried: 264d00 1 1 2011-04-11 17:26:31, Info WDS Network service bfe not running or could not be queried: 264e18 1 1 2011-04-11 17:26:31, Info WDS Network service ikeext not running or could not be queried: 264d00 1 1 2011-04-11 17:26:31, Info WDS Network service mpssvc not running or could not be queried: 264e18 1 1 2011-04-11 17:27:24, Info WDS Installing device pci\ven_14e4&dev_1691&subsys_04aa1028 X:\WINDOWS\INF\oem37.inf succeeded 2011-04-11 17:27:25, Info WDS No computer name specified, generating a random name. 2011-04-11 17:27:25, Info WDS Renaming computer to MININT-VN2P876. 2011-04-11 17:27:25, Info WDS Acquired profiling mutex 2011-04-11 17:27:25, Info WDS Service winmgmt disable: 0x00000000 2011-04-11 17:27:25, Info WDS Service winmgmt stop: 0x00000000 2011-04-11 17:27:25, Info WDS Service winmgmt enable: 0x00000000 2011-04-11 17:27:25, Info WDS Released profiling mutex 2011-04-11 17:27:25, Info WDS Acquired profiling mutex 2011-04-11 17:27:25, Info WDS Install MS_MSCLIENT: 0x0004a020 2011-04-11 17:27:25, Info WDS Install MS_NETBIOS: 0x0004a020 2011-04-11 17:27:25, Info WDS Install MS_SMB: 0x0004a020 2011-04-11 17:27:25, Info WDS Install MS_TCPIP6: 0x0004a020 2011-04-11 17:27:26, Info WDS Install MS_TCPIP: 0x0004a020 2011-04-11 17:27:26, Info WDS Service dhcp start: 0x00000000 2011-04-11 17:27:26, Info WDS Service lmhosts start: 0x00000000 2011-04-11 17:27:26, Info WDS Service ikeext start: 0x00000000 2011-04-11 17:27:26, Info WDS Service mpssvc start: 0x00000000 2011-04-11 17:27:26, Info WDS Released profiling mutex 2011-04-11 17:27:26, Info WDS Spent 967ms installing network components 2011-04-11 17:27:28, Info WDS Spent 2247ms installing network drivers 2011-04-11 17:27:38, Info WDS QueryAdapterStatus: no operational adapters found. 2011-04-11 17:27:38, Info WDS Spent 10140ms confirming network initialization; status 0x80004005 2011-04-11 17:27:38, Info WDS WaitForNetworkToInitialize failed; ignoring error 2011-04-11 17:27:38, Info WDS GetNetworkingInfo: WpeNetworkStatus returned [0x0]. Flags set: 2011-04-11 17:27:38, Error [0x0b003f] WDS StartNetworking: Failed to start networking. 
Error code [0x800704C6].[gle=0x000000cb] 2011-04-11 17:27:38, Info [0x0640ae] IBSLIB PublishMessage: Publishing message [WdsClient: An error occurred while obtaining an IP address from the DHCP server. Please check to ensure that there is an operational DHCP server on this network segment.]

    Read the article

  • APACHE2.2/WIN2003(32-bit)/PHP: How do I configure Apache to Run Background PHP Processes on Win 2003

    - by Captain Obvious
    I have a script, testforeground.php, that kicks off a background script, testbackground.php, then returns while the background script continues to run until it's finished. Both the foreground and background scripts write to the output file correctly when I run the foreground script from the command line using php-cgi: C:\>php-cgi testforeground.php The above command starts a php-cgi.exe process, then a php-win.exe process, then closes the php-cgi.exe almost immediately, while the php-win.exe continues until it's finished. The same script runs correctly but does not have permission to write to the output file when I run it from the command line using plain php: C:\>php testforeground.php AND when I run the same script from the browser, instead of php-cgi.exe, a single cmd.exe process opens and closes almost instantly, only the foreground script writes to the output file, and it doesn't appear that the 2nd process starts: http://XXX/testforeground.php Here is the server info: OS: Win 2003 32-bit HTTP: Apache 2.2.11 PHP: 5.2.13 Loaded Modules: core mod_win32 mpm_winnt http_core mod_so mod_actions mod_alias mod_asis mod_auth_basic mod_authn_default mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_dir mod_env mod_include mod_isapi mod_log_config mod_mime mod_negotiation mod_setenvif mod_userdir mod_php5 Here's the foreground script: <?php ini_set("display_errors",1); error_reporting(E_ALL); echo "<pre>loading page</pre>"; function run_background_process() { file_put_contents("0testprocesses.txt","foreground start time = " . time() . "\n"); echo "<pre> foreground start time = " . time() . "</pre>"; $command = "start /B \"{$_SERVER['CMS_PHP_HOMEPATH']}\php-cgi.exe\" {$_SERVER['CMS_HOMEPATH']}/testbackground.php"; $rp = popen($command, 'r'); if(isset($rp)) { pclose($rp); } echo "<pre> foreground end time = " . time() . "</pre>"; file_put_contents("0testprocesses.txt","foreground end time = " . time() . "\n", FILE_APPEND); return true; } echo "<pre>calling run_background_process</pre>"; $output = run_background_process(); echo "<pre>output = $output</pre>"; echo "<pre>end of page</pre>"; ?> And the background script: <?php $start = "background start time = " . time() . "\n"; file_put_contents("0testprocesses.txt",$start, FILE_APPEND); sleep(10); $end = "background end time = " . time() . "\n"; file_put_contents("0testprocesses.txt", $end, FILE_APPEND); ?> I've confirmed that the above scripts work correctly using Apache 2.2.3 on Linux. I'm sure I just need to change some Apache and/or PHP config settings, but I'm not sure which ones. I've been muddling over this for too long already, so any help would be appreciated.
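    Not an answer to the Apache configuration itself, but while debugging it can help to rule out the cmd.exe/start /B layer: the same background launch can be done through the WScript.Shell COM object, which tends to behave the same whether PHP runs from the CLI or under the Apache service. A sketch only, assuming the COM extension is available in this PHP 5.2 build and reusing the same site-specific CMS_* server variables:

    <?php
    // Hypothetical variant of run_background_process() using COM instead of popen("start /B ...")
    function run_background_process_com()
    {
        $cmd = "\"{$_SERVER['CMS_PHP_HOMEPATH']}\\php-cgi.exe\" {$_SERVER['CMS_HOMEPATH']}/testbackground.php";
        $shell = new COM("WScript.Shell");
        $shell->Run($cmd, 0, false);   // 0 = hidden window, false = do not wait for the child to finish
        return true;
    }
    ?>

    If this version writes to the output file when invoked through the browser, the problem is likely in how start /B inherits the Apache service environment rather than in the scripts themselves.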

    Read the article

  • batch file to deploy files

    - by Martin Michalak
    hi I have created batch file which pulls info from *.txt file and deploy code from the source to destination: SET Source=%1 if exist %Source% ( ECHO Source for WEB exists ) else ( ECHO Wrong build%Source% doesn't exist GOTO Menu ) SET Server=%2 SET AppPool=%3 SET Destination=%4 SET Folder=%5 SET ENV=%6 SET AppName=%7 SET Envlog=%8 ECHO Deployment of WEB > %Envlog% %Date% %Time% echo. @ECHO Stopping App Pools @ECHO Stopping App Pools >> %Envlog% %Date% %Time% D:\ICTTools\PSEXEC.EXE -d \\%Server% cmd.exe /c c:\windows\system32\inetsrv\appcmd STOP apppool /apppool.name:%AppPool% echo. @ECHO App Pools will be stopped in the background @ECHO App Pools will be stopped in the background >> %Envlog% %Date% %Time% Pause echo. IF EXIST "%Destination%" ( ECHO Deleting %AppName% %Folder% RMDIR %Destination% /s /q ECHO Destination Folder %Folder% Deleted ECHO Destination Folder %Folder% Deleted >> %Envlog% %Date% %Time% ) else ( ECHO Destination Folder %Destination% does not exist, please check ECHO Destination Folder %Destination% does not exist, please check >> %Envlog% %Date% %Time% Pause ) echo. @ECHO Starting Robocopy for %AppName% @ECHO Starting Robocopy for %AppName% >> %Envlog% %Date% %Time% echo. START /WAIT /MIN ROBOCOPY.EXE %Source% %Destination% *.* /S /NP /R:3 /W:5 /LOG:"Logs\Robo%AppName%%ENV%.log" D:\Tools\Windiff\windiff.exe %Source% %Destination% echo. @ECHO Finished with Robocopy @ECHO Finished with Robocopy >> %Envlog% %Date% %Time% echo. @ECHO Checking if App pools stopped: @ECHO Checking if App pools stopped: >> %Envlog% %Date% %Time% D:\ICTTools\PSEXEC.EXE \\%Server% c:\windows\system32\inetsrv\appcmd LIST apppool /apppool.name:%AppPool% @echo off set /p ask=All app pools stopped? (y/n) if %ask%==y (echo Great, please continue with deployemnt) else echo Before continuing please check why app pools did not stop @echo App pools stopped?: %ask% >> %Envlog% %Date% %Time% DEL %Source%\web.config echo. @ECHO Production Config check if exist "%Destination%\%ENV%-Web.config" ( echo. ECHO The Application production configuration file does exist. ECHO The Application production configuration file does exist. >> %Envlog% %Date% %Time% COPY %Destination%\%ENV%-Web.config web.config echo. ECHO Production %ENV%-Web.config has been renamed to web.config ECHO Production %ENV%-Web.config has been renamed to web.config >> %Envlog% %Date% %Time% ) else ( ECHO The Application production configuration file is missing in Production %AppName% ECHO The Application production configuration file is missing in Production %AppName% >> %Envlog% %Date% %Time% explorer %Destination% Pause ) echo. @ECHO Confirm that configs were renamed correclty, if yes please hit any key to START APP Pools @ECHO Confirm that configs were renamed correclty, if yes please hit any key to START APP Pools >> %Envlog% %Date% %Time% Pause echo. @ECHO Start %AppName% Application Pool >> %Envlog% %Date% %Time% D:\ICTTools\PSEXEC.EXE \\%Server% c:\windows\system32\inetsrv\appcmd START apppool /apppool.name:%AppPool% @echo off set /p ask=All app pools started? (y/n) if %ask%==y (echo Great, please continue with deployemnt) else echo Before continuing please check why app pools did not start @echo App pools started?: %ask% >> %Envlog% %Date% %Time% Pause echo. @ECHO Build Version for %AppName% @ECHO Build Version for %AppName% >> %Envlog% %Date% %Time% type %Destination%\buildinfo.xml echo. ECHO ............................................... @ECHO ...........Deployment Compelted................ 
    @ECHO ...........Deployment Compelted................>> %Envlog% %Date% %Time% ECHO ............................................... Here are my issues: Let's say I am running the script for 3 servers; then, for each instance: 1) For all three servers I am deleting the destination folder, even though the destination folder is always the same. The code should only delete it on the first instance (when the code is deployed to the first server); better yet, I would prefer the script to check whether the code in the source and destination is the same and delete the folder only if it differs. 2) Then, based on 1: a) deleting web.config and renaming should only happen if the code in the destination is new; b) Robocopy should not overwrite files if they are the same - I think there is an /XO option to do that (see the sketch after this question). Any idea how to achieve that? :)
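    On 2b: robocopy already skips files it classifies as Same (identical size and time stamp), and /XO additionally skips files that are newer at the destination, so there is usually no need to delete the folder just to avoid rewriting unchanged files. A rough sketch for running the copy only once per deployment run; /MIR and /XO are documented robocopy switches, but the CodeDeployed guard variable is only an illustration and assumes the three per-server calls happen inside the same parent cmd session:

    REM Copy the shared destination only once per deployment run, and let robocopy
    REM decide what actually needs to change instead of deleting the folder first.
    IF NOT DEFINED CodeDeployed (
        START /WAIT /MIN ROBOCOPY.EXE %Source% %Destination% *.* /MIR /XO /NP /R:3 /W:5 /LOG:"Logs\Robo%AppName%%ENV%.log"
        SET CodeDeployed=1
    )

    Because robocopy reports exit code 0 when nothing was copied, the web.config delete/rename step can likewise be wrapped in an ERRORLEVEL check so it only runs when new code actually arrived.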

    Read the article

  • Touch gestures in IE not working without explorer.exe being run once

    - by Michael
    Edit: Rephrasing my question: Upon further troubleshooting, I can conclude that: Touch gestures (dragging, pinch to zoom, touch-and-hold right click) in Internet Explorer start to work when: The system has been running for ~2 minutes. This coincides with the delayed start of services. Explorer.exe is being run, then killed. I assume Explorer.exe starts some services? The services with delayed start are as follows: Security Center Software Protection Windows Defender, Search and Update Windows Font Cache Service Microsoft .NET Framework NGEN v4.0.30319_X64 and X86 I see no connection between these services and touch gestures, but just in case, I manually tried starting these services, but without luck. What else happens delayed after system boot, which also happens when explorer is started? Old question: Details: Internet Explorer 9 and Windows 7 Professional, running on a HP TouchSmart (touch screen PC). It is going to be a kiosk PC (running a custom GUI for displaying websites). Scenario 1: When running Internet Explorer as a normal program in Windows 7, touch functions work perfectly. I can scroll the website by dragging it with my finger, I can pinch zoom and I can touch-and-hold right click. I now change the default shell in Windows to Internet Explorer (ie. IE starts instead of explorer.exe). Internet Explorer of course starts up when logging in. However, touch functions are reduced to basic clicking (no dragging, no pinch zooming, no touch-and-hold right click). Then I manually start explorer.exe, and the touch functions work again! And here is the weird part: When I kill explorer.exe, the touch functions keeps working - even if I close IE and start a new instance. Scenario 2: The exact same, but instead of changing the default shell to Internet Explorer, I change it to my own program, which uses an embedded Internet Explorer ("WebBrowser"). Same thing happens. What I've tried: Autorun programs: When explorer.exe launches, it launches all the autorun programs. There are no relevant programs being run by explorer, but just in case, I have manually started all the autorun programs, so that it is identical (but without explorer.exe) to a normal login. It still does not work (until I launch explorer.exe). Specifically TabTip.exe, TabTip32.exe and wisptis.exe are all running. All services are also started. To sum it up Running explorer.exe once changes something in the touch capabilities of Internet Explorer. It doesn't matter if explorer.exe is running - as long as it has been run once. Does anyone know what causes this behavior? Or how I can circumvent it neatly?

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, which make fairly simple (non-costly) queries to a PostgreSQL database and return data in JSON format; these services are monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and those are responding perfectly fine, so I think I can rule out any wrongdoing from Nginx.
    Here is my God configuration:
    # God configuration
    APP_ROOT = File.expand_path '../', File.dirname(__FILE__)
    God.watch do |w|
      w.name = "app_name"
      w.interval = 30.seconds # default
      w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D"
      # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing
      w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`"
      w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`"
      w.start_grace = 10.seconds
      w.restart_grace = 10.seconds
      w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid"
      # User under which to run the process
      w.uid = 'web'
      w.gid = 'web'
      # Cleanup the pid file (this is needed for processes running as a daemon)
      w.behavior(:clean_pid_file)
      # Conditions under which to start the process
      w.start_if do |start|
        start.condition(:process_running) do |c|
          c.interval = 5.seconds
          c.running = false
        end
      end
      # Conditions under which to restart the process
      w.restart_if do |restart|
        restart.condition(:memory_usage) do |c|
          c.above = 150.megabytes
          c.times = [3, 5] # 3 out of 5 intervals
        end
        restart.condition(:cpu_usage) do |c|
          c.above = 50.percent
          c.times = 5
        end
      end
      w.lifecycle do |on|
        on.condition(:flapping) do |c|
          c.to_state = [:start, :restart]
          c.times = 5
          c.within = 5.minute
          c.transition = :unmonitored
          c.retry_in = 10.minutes
          c.retry_times = 5
          c.retry_within = 2.hours
        end
      end
    end
    Here is my Unicorn configuration:
    # Unicorn configuration file
    APP_ROOT = File.expand_path '../', File.dirname(__FILE__)
    worker_processes 8
    preload_app true
    pid "#{APP_ROOT}/tmp/unicorn.pid"
    listen 8001
    stderr_path "#{APP_ROOT}/log/unicorn.stderr.log"
    stdout_path "#{APP_ROOT}/log/unicorn.stdout.log"
    before_fork do |server, worker|
      old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin"
      if File.exists?(old_pid) && server.pid != old_pid
        begin
          Process.kill("QUIT", File.read(old_pid).to_i)
        rescue Errno::ENOENT, Errno::ESRCH
          # someone else did our job for us
        end
      end
    end
    I have checked the God status logs, but it appears CPU and memory usage are never out of bounds. I also have something to kill high-memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're few and far between, whereas I was at around 60-100 a second before this trouble seemed to arrive. The log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at other times it is very slow for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.
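    One way to see where the time is going, offered here as a sketch rather than something from the original post: log per-request timings on both sides of the proxy. On the Nginx side, a custom log_format with $request_time and $upstream_response_time shows whether requests queue in Nginx or stall inside Unicorn. On the application side, a small piece of Rack middleware can flag slow requests; because stderr_path is set in the Unicorn configuration above, its output would land in unicorn.stderr.log. The threshold and message format below are arbitrary.

    # slow_request_logger.rb - sketch: log any request slower than a threshold (in seconds)
    class SlowRequestLogger
      def initialize(app, threshold = 0.5)
        @app = app
        @threshold = threshold
      end

      def call(env)
        started = Time.now
        status, headers, body = @app.call(env)
        elapsed = Time.now - started
        if elapsed > @threshold
          $stderr.puts "[slow] #{env['REQUEST_METHOD']} #{env['PATH_INFO']} took #{'%.3f' % elapsed}s (status #{status})"
        end
        [status, headers, body]
      end
    end

    # In config.ru, before the Sinatra application:
    #   require './slow_request_logger'
    #   use SlowRequestLogger, 0.5
    #   run Sinatra::Application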

    Read the article

  • Memcached Debugging/Server Logs: Monitor the Memcached Servers?

    - by user1179459
    I have a chat engine which is based on Memcached variables, putting them into arrays and reading them at the other end via jQuery. This works fine 95% of the time; however, when the server load is high, Memcached (I presume it is Memcached) crashes and the browser gets stuck. I don't think it's a jQuery issue, since this only happens when the server load is very high. I need a way to monitor the Memcached servers, or somehow write a log file showing where the failures/errors come from... Any idea how I can do this? Or any idea why the Memcached servers fail?
    I run Memcached as follows:
    $GLOBALS['MemCached'] = FALSE;
    $GLOBALS['MemCached'] = new Memcache;
    $GLOBALS['MemCached']->pconnect('localhost', 11211);
    My memcached config is as follows:
    #! /bin/sh
    #
    # chkconfig: - 55 45
    # description: The memcached daemon is a network memory cache service.
    # processname: memcached
    # config: /etc/sysconfig/memcached
    # pidfile: /var/run/memcached/memcached.pid

    # Standard LSB functions
    #. /lib/lsb/init-functions

    # Source function library.
    . /etc/init.d/functions

    PORT=11211
    USER=memcached
    MAXCONN=1024
    CACHESIZE=128
    OPTIONS=""

    if [ -f /etc/sysconfig/memcached ]; then
        . /etc/sysconfig/memcached
    fi

    # Check that networking is up.
    . /etc/sysconfig/network

    if [ "$NETWORKING" = "no" ]
    then
        exit 0
    fi

    RETVAL=0
    prog="memcached"
    pidfile=${PIDFILE-/var/run/memcached/memcached.pid}
    lockfile=${LOCKFILE-/var/lock/subsys/memcached}

    start () {
        echo -n $"Starting $prog: "
        # Ensure that /var/run/memcached has proper permissions
        if [ "`stat -c %U /var/run/memcached`" != "$USER" ]; then
            chown $USER /var/run/memcached
        fi
        daemon --pidfile ${pidfile} memcached -d -p $PORT -u $USER -m $CACHESIZE -c $MAXCONN -P ${pidfile} $OPTIONS
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && touch ${lockfile}
    }

    stop () {
        echo -n $"Stopping $prog: "
        killproc -p ${pidfile} /usr/bin/memcached
        RETVAL=$?
        echo
        if [ $RETVAL -eq 0 ] ; then
            rm -f ${lockfile} ${pidfile}
        fi
    }

    restart () {
        stop
        start
    }

    # See how we were called.
    case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;
        status)
            status -p ${pidfile} memcached
            RETVAL=$?
            ;;
        restart|reload|force-reload)
            restart
            ;;
        condrestart|try-restart)
            [ -f ${lockfile} ] && restart || :
            ;;
        *)
            echo $"Usage: $0 {start|stop|status|restart|reload|force-reload|condrestart|try-restart}"
            RETVAL=2
            ;;
    esac

    exit $RETVAL
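    Two things that usually help here, neither of which is in the original post, so treat them as suggestions. On the server side, running memcached -vv in the foreground during a load test prints every command and error the daemon handles, and sending the stats command (for example with: echo stats | nc localhost 11211) shows counters such as evictions and listen_disabled_num; the latter increments whenever the -c connection limit (1024 in the init script above) is hit, which is a common cause of everything hanging under load. On the PHP side, the Memcache calls can be wrapped so failures are written to a log file instead of failing silently. A minimal sketch, with a placeholder log path and helper name:

    <?php
    // Sketch: log Memcache failures to a file instead of failing silently.
    // The log path and the mc_get() helper are placeholders, not from the original code.
    define('MC_LOG', '/var/log/app/memcache-errors.log');

    $GLOBALS['MemCached'] = new Memcache;
    if (!@$GLOBALS['MemCached']->pconnect('localhost', 11211)) {
        error_log(date('c') . " pconnect to localhost:11211 failed\n", 3, MC_LOG);
    }

    function mc_get($key)
    {
        $value = @$GLOBALS['MemCached']->get($key);
        if ($value === false) {
            // false can also mean "key not set"; log it anyway while debugging
            error_log(date('c') . " get('$key') returned false\n", 3, MC_LOG);
        }
        return $value;
    }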

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >