Search Results

Search found 35007 results on 1401 pages for 'test management'.

Page 251 of 1401

  • How can I test a CRON job with PHP?

    - by alex
    This is the first time I've ever used a CRON job. I'm using it to parse external data that is automatically FTP'd to a subdirectory on our site. I have created a controller and model which handle the data. I can access the URL fine in my browser and it works (however, I will be restricting this soon). My problem is: how can I test whether the cron is working? I've added this to my controller for a quick and dirty log:

        $file = 'test.txt';
        $contents = '';
        if (file_exists($file)) {
            $contents = file_get_contents($file);
        }
        $contents .= date('m-d-Y') . ' --- ' . PHP_SAPI . "\n\n";
        file_put_contents($file, $contents);

    But so far I have only got requests logged from myself from the browser, despite having my CRON running every minute:

        03-18-2010 --- cgi-fcgi
        03-18-2010 --- cgi-fcgi

    I've set it up using cPanel with the command `index.php properties/update/` (the second portion is what I use to access the page in my browser). So how can I test that this is working properly, and have I stuffed anything up? Note: I'm using Kohana 3. Many thanks
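
    Two things are worth checking here. The cron entry as quoted has no interpreter, so `index.php properties/update/` is unlikely to execute at all; a cPanel cron command normally needs the PHP binary and an absolute script path. Also, when cron does run the script, its working directory is not the web root, so a relative 'test.txt' ends up somewhere unexpected. A minimal sketch of a log line that survives both problems (file names here are placeholders):

        <?php
        // Log to an absolute path so web and cron invocations hit the same file;
        // PHP_SAPI distinguishes browser hits (cgi-fcgi) from cron hits (cli).
        $file = __DIR__ . '/cron-test.log';
        $line = date('m-d-Y H:i:s') . ' --- ' . PHP_SAPI . "\n";
        file_put_contents($file, $line, FILE_APPEND | LOCK_EX);

    With a crontab command along the lines of `php /home/USER/public_html/index.php properties/update/` (USER being a placeholder), a new line should appear every minute, carrying a different SAPI name than the browser requests.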

    Read the article

  • How to test for existence of a script-scoped variable in PowerShell?

    - by Damian Powell
    Is it possible to test for the existence of a script-scoped variable in PowerShell? I've been using the PowerShell Community Extensions (PSCX), but I've noticed that if you import the module while Set-PSDebug -Strict is set, an error is produced:

        The variable '$SCRIPT:helpCache' cannot be retrieved because it has not been set.
        At C:\Users\...\Modules\Pscx\Modules\GetHelp\Pscx.GetHelp.psm1:5 char:24

    While investigating how I might fix this, I found this piece of code in Pscx.GetHelp.psm1:

        #requires -version 2.0
        param([string[]]$PreCacheList)
        if ((!$SCRIPT:helpCache) -or $RefreshCache) {
            $SCRIPT:helpCache = @{}
        }

    This is pretty straightforward code: if the cache doesn't exist or needs to be refreshed, create a new, empty cache. The problem is that evaluating $SCRIPT:helpCache while Set-PSDebug -Strict is in force causes the error, because the variable hasn't been defined yet. Ideally we could use a Test-Variable cmdlet, but no such thing exists! I thought about looking in the variable: provider, but I don't know how to determine the scope of a variable. So my question is: how can I test for the existence of a variable while Set-PSDebug -Strict is in force, without causing an error?
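
    One way to probe for the variable without dereferencing it is to go through the variable: provider, which accepts scope qualifiers; a minimal sketch:

        # Test-Path against the variable: drive does not evaluate the variable,
        # so it stays friendly to Set-PSDebug -Strict.
        if (-not (Test-Path Variable:script:helpCache)) {
            $SCRIPT:helpCache = @{}
        }

        # An alternative probe via Get-Variable, swallowing the lookup error:
        $exists = [bool](Get-Variable helpCache -Scope Script -ErrorAction SilentlyContinue)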

    Read the article

  • How do you test a new email filtering system?

    - by Zoredache
    What method do you use to test or evaluate a potential new email filtering system before setting it up on your production network? I am particularly interested in methods that are appropriate for small or medium sized organizations with a single mail server and without the resources to build a duplicate of their email system.
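
    One low-cost approach is to stand the candidate filter up on a spare box or VM and drive test messages at it over SMTP before any MX record changes. A rough sketch using the swaks SMTP testing tool (host names are placeholders):

        # Send a clean control message straight at the candidate filter.
        swaks --server filter.example.test --port 25 \
              --from probe@example.test --to postmaster@example.test \
              --header "Subject: clean control message"

        # Repeat with a message whose body contains SpamAssassin's GTUBE test
        # string to confirm known spam is caught, all without touching the
        # production mail flow.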

    Read the article

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor can be garbage collected after my code has finished using it. I have tried the suggestion in the answers to the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way?

    Edit: I currently have the following test, based on Paul Stovell's answer, which succeeds:

        [TestMethod]
        public void ReleaseTest()
        {
            WindsorContainer container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);
            Assert.AreEqual(0, ReleaseTester.refCount);
            var weakRef = new WeakReference(container.Resolve<ReleaseTester>());
            Assert.AreEqual(1, ReleaseTester.refCount);
            GC.Collect();
            GC.WaitForPendingFinalizers();
            Assert.AreEqual(0, ReleaseTester.refCount, "Component not released");
        }

        private class ReleaseTester
        {
            public static int refCount = 0;

            public ReleaseTester() { refCount++; }

            ~ReleaseTester() { refCount--; }
        }

    Am I right in assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy?
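
    Independent of Windsor, the general shape of such a test is a WeakReference assertion; a minimal sketch (MSTest assumed, as in the question; names are invented):

        using System;
        using System.Runtime.CompilerServices;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class CollectabilityTests
        {
            // Allocate in a separate, non-inlined method so the JIT does not
            // keep a hidden local in the caller's frame rooting the instance.
            [MethodImpl(MethodImplOptions.NoInlining)]
            private static WeakReference Allocate()
            {
                return new WeakReference(new object());
            }

            [TestMethod]
            public void InstanceCanBeCollected()
            {
                WeakReference weakRef = Allocate();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect(); // second pass sweeps objects kept alive for finalization
                Assert.IsFalse(weakRef.IsAlive, "instance was still rooted");
            }
        }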

    Read the article

  • Why does this asp.net mvc unit test fail?

    - by Brian McCord
    I have this unit test:

        [TestMethod]
        public void Delete_Post_Passes_With_State_4()
        {
            //Arrange
            ViewResult result = stateController.Delete( 4 ) as ViewResult;
            var model = (State)result.ViewData.Model;

            //Act
            RedirectToRouteResult redirectResult = stateController.Delete( model ) as RedirectToRouteResult;
            var newresult = stateController.Delete( 4 ) as ViewResult;
            var newmodel = (State)newresult.ViewData.Model;

            //Assert
            Assert.AreEqual( redirectResult.RouteValues["action"], "Index" );
            Assert.IsNull( newmodel );
        }

    Here are the two controller actions that handle deleting:

        //
        // GET: /State/Delete/5
        public ActionResult Delete(int id)
        {
            var x = _stateService.GetById( id );
            return View(x);
        }

        //
        // POST: /State/Delete/5
        [HttpPost]
        public ActionResult Delete(State model)
        {
            try
            {
                if( model == null )
                {
                    return View( model );
                }
                _stateService.Delete( model );
                return RedirectToAction("Index");
            }
            catch
            {
                return View( model );
            }
        }

    What I can't figure out is why this test fails. I have verified that the record actually gets deleted from the list. If I set a breakpoint in the Delete method on the line:

        var x = _stateService.GetById( id );

    GetById does indeed return null, just as it should, but by the time execution gets back to the newresult variable in the test, ViewData.Model is the deleted model. What am I doing wrong?

    Read the article

  • Can ScalaCheck/Specs warnings safely be ignored when using SBT with ScalaTest?

    - by pdbartlett
    I have a simple FunSuite-based ScalaTest:

        package pdbartlett.hello_sbt

        import org.scalatest.FunSuite

        class SanityTest extends FunSuite {

          test("a simple test") {
            assert(true)
          }

          test("a very slightly more complicated test - purposely fails") {
            assert(42 === (6 * 9))
          }
        }

    which I'm running with the following SBT project config:

        import sbt._

        class HelloSbtProject(info: ProjectInfo) extends DefaultProject(info) {
          // Dummy action, just to show config working OK.
          lazy val solveQ = task { println("42"); None }

          // Managed dependencies
          val scalatest = "org.scalatest" % "scalatest" % "1.0" % "test"
        }

    However, when I run `sbt test` I get the following warnings:

        ...
        [info] == test-compile ==
        [info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
        [info] Compiling test sources...
        [info] Nothing to compile.
        [warn] Could not load superclass 'org.scalacheck.Properties' : java.lang.ClassNotFoundException: org.scalacheck.Properties
        [warn] Could not load superclass 'org.specs.Specification' : java.lang.ClassNotFoundException: org.specs.Specification
        [warn] Could not load superclass 'org.specs.Specification' : java.lang.ClassNotFoundException: org.specs.Specification
        [info]   Post-analysis: 3 classes.
        [info] == test-compile ==

    For the moment I'm assuming these are just "noise" (caused by the unified test interface?) and that I can safely ignore them. But it is slightly annoying to some inner OCD part of me (though not so annoying that I'm prepared to add dependencies for the other frameworks). Is this a correct assumption, or are there subtle errors in my test/config code? If it is safe to ignore, is there any other way to suppress these warnings, or do people routinely include all three frameworks so they can pick and choose the best approach for different tests? TIA, Paul. (Added: Scala v2.7.7 and SBT v0.7.4)
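
    The warnings themselves are harmless: sbt's unified test interface inspects every compiled test class against each test framework it knows about, and merely reports the framework base classes it cannot load. For anyone who does decide to silence the probes by depending on all three frameworks, a sketch of the extra test-scoped dependencies for HelloSbtProject (group IDs and version numbers are era-appropriate guesses, not verified against Scala 2.7.7):

        // Added alongside the existing scalatest val in HelloSbtProject.
        val scalacheck = "org.scala-tools.testing" % "scalacheck" % "1.6"   % "test"
        val specs      = "org.scala-tools.testing" % "specs"      % "1.6.2" % "test"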

    Read the article

  • How can I get Opera speed-dial and password management features in other browsers?

    - by Howard Guo
    I heavily rely on Opera's Speed Dial and password management features. The lack of these two features is really stopping me from switching to another web browser such as Chrome or Firefox. Opera's password management has two unique characteristics which I rely on heavily:

    1. It saves passwords on all pages, apparently even when the page's metadata asks not to save passwords.
    2. It offers a keyboard shortcut and button to automatically fill in the username/password and all other fields in a login form, then automatically submit the form.

    How can I get those functions in other browsers? Thank you!

    Read the article

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about, and versed in, Ruby testing. By the end of the day (6am PST?) I would like to be able to:

    1. Type a single command to run the test suite for ANY project I find on GitHub.
    2. Run autotest for ANY GitHub project, so I can fork and make TESTABLE contributions.
    3. Build gems from the ground up with autotest and Shoulda.

    For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are:

    1. What is your, the SO reader and GitHub project committer, test environment setup using autotest, so that whenever you git clone a gem, you can run the tests and autotest-develop it if desired?
    2. What are the guys who are writing the Paperclip tests and Authlogic tests doing? What is their setup?

    Thanks for the insight. Looking for answers that will make me a more effective tester.
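
    For what it's worth, a rough sketch of a generic per-clone routine from this era (the repository URL is a placeholder; autotest ships in the ZenTest gem):

        # Install the watchers once.
        gem install ZenTest shoulda

        # Per project: clone, discover the test task, run it, then watch.
        git clone git://github.com/USER/PROJECT.git
        cd PROJECT
        rake -T | grep -i 'test\|spec'   # see which task the Rakefile defines
        rake test                        # or `rake spec`, whichever exists
        autotest                         # picks up conventional test/ and spec/ layouts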

    Read the article

  • jquery append() method on empty XML element

    - by Anthony
    This could just be a syntax error, but I'm trying to create a Document object from scratch, starting with document.implementation.createDocument() and then using jQuery's append() method to add the elements. But it's not appending:

        var myDoc = document.implementation.createDocument("", 'stuff', null);
        $("stuff", myDoc).attr("test", "tested");
        $("stuff", myDoc).append("<test>A</test>");
        $("<test>B</test>").appendTo("stuff", myDoc);
        var s = new XMLSerializer();
        alert(s.serializeToString(myDoc));

    This should output:

        <stuff test="tested">
            <test>A</test>
            <test>B</test>
        </stuff>

    But instead it outputs:

        <stuff test="tested" />

    So the selector seems to be working, just not the method. My only guess is that the method doesn't account for the fact that elements are empty (<stuff />) until they have children. But that's just a guess.
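
    A likely culprit is that $("<test>A</test>") creates its nodes in the HTML page's document, and a node cannot generally be appended into a different document (myDoc) without being adopted first. A sketch of a workaround that builds the children with the XML document's own factory methods:

        // Create elements via myDoc itself so they belong to the right document.
        var myDoc = document.implementation.createDocument("", "stuff", null);
        myDoc.documentElement.setAttribute("test", "tested");

        var child = myDoc.createElement("test");
        child.appendChild(myDoc.createTextNode("A"));
        myDoc.documentElement.appendChild(child);

        var s = new XMLSerializer();
        alert(s.serializeToString(myDoc)); // <stuff test="tested"><test>A</test></stuff>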

    Read the article

  • How to test if a user has SELECTED a file to upload ?

    - by Tristan
    Hello, on a page I have:

        if (!empty($_FILES['logo']['name'])) {
            $dossier = 'upload/';
            $fichier = basename($_FILES['logo']['name']);
            $taille_maxi = 100000;
            $taille = filesize($_FILES['logo']['tmp_name']);
            $extensions = array('.png', '.jpg', '.jpeg');
            $extension = strrchr($_FILES['logo']['name'], '.');

            if (!in_array($extension, $extensions)) {
                $erreur = 'ERROR you must upload the right type';
            }
            if ($taille > $taille_maxi) {
                $erreur = 'too heavy';
            }
            if (!empty($erreur)) {
                .......................
            }
        }

    The problem is that if the user wants to edit their information WITHOUT uploading a logo, it raises the error 'ERROR you must upload the right type'. So, if the user didn't put anything in the file input box, I don't want to enter these validation tests. I tested both:

        if (!empty($_FILES['logo']['name']))
        if (isset($_FILES['logo']['name']))

    but neither seems to work. Any ideas? Edit: maybe I wasn't so clear. I don't want to test whether he uploaded a logo; I want to test IF he selected a file to upload, because right now, if he doesn't select a file, PHP raises an error telling him he must upload the right format. Thanks.
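
    The per-field error code in $_FILES distinguishes "no file selected" from a real upload; a minimal sketch of guarding the validation with it:

        <?php
        // UPLOAD_ERR_NO_FILE means the file input was left empty, so the
        // extension and size checks are skipped entirely in that case.
        if (isset($_FILES['logo'])
            && $_FILES['logo']['error'] !== UPLOAD_ERR_NO_FILE) {
            // A file was submitted: run the validation here, ideally after
            // also confirming $_FILES['logo']['error'] === UPLOAD_ERR_OK.
        }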

    Read the article

  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1. You cannot specify the exact ordering of the inserts, because DbUnit likes to group your inserts by table name and not by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because those foreign key constraints will get fired in production while your tests won't be aware of them!

    2. They seem hell-bent on forcing you to use an XML namespace to define your XML, and I honestly can't be bothered to do this. I like data.xml without any namespace. It works. But they are so hell-bent on deprecating it.

    3. Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, this process gets bloated too once the data grows in size and things get entangled. There has got to be a better way to split your test data into chunks without having to copy/paste a lot of it across all of your tests.

    4. Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.

    Read the article

  • Python ctypes in_dll string assignment

    - by ackdesha
    I could use some help assigning to a global C variable in a DLL using ctypes. The following is an example of what I'm trying. test.c contains:

        #include <stdio.h>

        char name[60];

        void test(void)
        {
            printf("Name is %s\n", name);
        }

    On Windows (Cygwin) I build a DLL (Test.dll) as follows:

        gcc -g -c -Wall test.c
        gcc -Wall -mrtd -mno-cygwin -shared -Wl,--add-stdcall-alias -o Test.dll test.o

    When I try to modify the name variable and then call the C test function through the ctypes interface, I get the following:

        >>> from ctypes import *
        >>> dll = windll.Test
        >>> dll
        <WinDLL 'Test', handle ... at ...>
        >>> f = c_char_p.in_dll(dll, 'name')
        >>> f
        c_char_p(None)
        >>> f.value = 'foo'
        >>> f
        c_char_p('foo')
        >>> dll.test()
        Name is Name is 48+? 13

    Why does the test function print garbage in this case?
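
    What is very likely happening: name is a 60-byte array inside the DLL, but c_char_p.in_dll treats those bytes as a pointer, and assigning f.value only repoints the Python-side c_char_p; nothing is ever copied into the DLL. A sketch of the usual fix, mapping the symbol with its real type:

        # Map the DLL's 'char name[60]' as an array, not as a pointer.
        from ctypes import c_char, windll

        dll = windll.Test
        name = (c_char * 60).in_dll(dll, 'name')
        name.value = 'foo'   # copies the bytes (NUL-terminated) into the DLL's buffer
        dll.test()           # should now print: Name is foo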

    Read the article

  • How do I setup model associations in an RSpec test?

    - by Eric M.
    I've pastied the specs I've written for the posts/show.html.erb view in an application I'm writing as a means to learn RSpec. I am still learning about mocks and stubbing. This question is specific to the "should list all related comments" spec. What I want is to test that the show view displays a post's comments. But what I'm not sure about is how to set up this test and then have the test iterate through with should contain('xyz') statements. Any hints? Other suggestions are also appreciated! Thanks.

    Edit: some more information. I have a named_scope applied to comments in my view (I know, I did this a bit backwards in this case), so the view calls @post.comments.approved_is(true). The code pastied responds with the error "undefined method `approved_is' for #", which makes sense since I told it to stub comments and return a comment. I'm still not sure, however, how to chain the stubs so that @post.comments.approved_is(true) will return an array of comments.
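
    rspec-mocks has a stub_chain helper for exactly this shape; a sketch of how the setup might look (RSpec 1.x/Webrat-era view-spec syntax assumed; the comment's body attribute is invented, the rest follows the question's names):

        describe "posts/show.html.erb" do
          before(:each) do
            comment = mock_model(Comment, :body => 'xyz')
            @post = mock_model(Post)
            # Each link in @post.comments.approved_is(...) is stubbed in turn.
            @post.stub_chain(:comments, :approved_is).and_return([comment])
            assigns[:post] = @post
          end

          it "lists all related comments" do
            render 'posts/show'
            response.should contain('xyz')
          end
        end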

    Read the article

  • What are good examples of perfectly acceptable approaches to development that are NOT test driven development (TDD)?

    - by markbruns
    The TDD cycle is test, code, refactor, (repeat), and then ship. TDD implies development that is driven by testing: specifically, understanding requirements and then writing tests first, before writing code. My natural inclination is a philosophical bias in favor of TDD; I would like to be convinced that there are other approaches that work as well or even better, which is why I have asked this question. What are examples of perfectly acceptable approaches that are NOT test-driven development?

    I can think of plenty of approaches that are not TDD but could be a lot more trouble than they are worth. It's not a moral judgement; it's just that they cost more than they are worth. The following are examples of things that might be OK as learning exercises, but approaches I'd find NOT acceptable in serious production (and NOT TDD):

    1. Inspecting quality into your product. Focusing effort on developing a proficiency in testing/QA can be problematic, especially if you don't work on the requirements and development side first. Symptoms include bug triaging, where the developers have so many different bugs to deal with that a form of triage becomes necessary; each development cycle gets worse and worse, programmers work more and more hours, sleep less and less, and struggle to keep going in a death march until they are consumed.

    2. Superstition: believing in things that you don't understand. This would involve borrowing code that you believe has been proven or tested from somewhere, e.g. legacy code, a magic code-starter wizard, or an open source project, and then hacking up a storm of modifications: sliding Facebook Connect into your user interface, inventing new magic features on the fly (e.g. a mashup using the Twitter API, the Google Maps API, and maybe the Zappos API), showing off your cool new "product" to a few people, and then writing up a simple "specification" and list of "test cases" and turning them over to Mechanical Turk for testing.

    Read the article

  • Can't declare an abstract method private....

    - by Zombies
    I want to do this, yet I can't. Here are my scenario and rationale. I have an abstract class for test cases that has an abstract method called test(). The test() method is to be defined by the subclass; it is to be implemented with logic for a certain application, such as CRMAppTestCase extends CompanyTestCase. I don't want the test() method to be invoked directly; I want the super class to call test(), while the subclass calls a method which wraps it (and does other work too, such as setting the current date-time right before the test is executed). Example code:

        public abstract class CompanyTestCase {

            // I wish this would compile, but an abstract method cannot be declared private
            private abstract void test();

            public TestCaseResult performTest() {
                // do some work which must be done and should be invoked whenever
                // this method is called (it would be improper to expect the caller
                // to perform initialization)
                TestCaseResult result = new TestCaseResult();
                result.setBeginTime(new Date());
                long time = System.currentTimeMillis();
                test(); // invoke the test logic
                result.setDuration(System.currentTimeMillis() - time);
                return result;
            }
        }

    Then to extend this...

        public class CRMAppTestCase extends CompanyTestCase {
            public void test() {
                // test logic here
            }
        }

    Then to call it...

        TestCaseResult result = new CRMAppTestCase().performTest();
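
    A private abstract method can never compile, since a subclass could not see it in order to implement it. The closest the language offers is protected abstract plus a final template method, sketched here against the question's TestCaseResult class (assumed to exist as shown):

        import java.util.Date;

        public abstract class CompanyTestCase {

            // Visible to subclasses for overriding, but not callable from
            // unrelated classes in other packages; the final performTest()
            // keeps the timing/bookkeeping sequence fixed.
            protected abstract void test();

            public final TestCaseResult performTest() {
                TestCaseResult result = new TestCaseResult();
                result.setBeginTime(new Date());
                long time = System.currentTimeMillis();
                test();
                result.setDuration(System.currentTimeMillis() - time);
                return result;
            }
        }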

    Read the article

  • Bugzilla not sending emails, even to the test file?

    - by donutdan4114
    I have installed and set up Bugzilla 3 for my domain. Everything is working properly except for the email functionality. The server uses Postfix, which works for my PHP application and from the command line. In Bugzilla I have tried setting mail_delivery_method to 'test', and nothing shows up in data/mailer.testfile; it is completely blank... I have no idea where to go from here. Any ideas on what to try next?

    Read the article

  • AIX Grid Control 10.2.0.5 Communication and Monitoring Issue since 31-DEC-2010

    - by jayatheertha.rao(at)oracle.com
    Detailed symptoms for Oracle Management Server (OMS) 10.2.0.5 on AIX

    Oracle Management Service 10.2.0.5 instances on AIX 5L remain active and functional, but the OMS instances fail to communicate with the Grid Control Management Agents. An SSLPeerUnverifiedException will be reported in the file $ORACLE_HOME/sysman/log/emoms.trc when the OMS attempts to connect to an Agent:

        javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
        at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificateChain(DashoA12275)
        at oracle.sysman.emSDK.emd.comm.EMDClient.authenticateHTTPConnection(EMDClient.java:2002)
        at oracle.sysman.emSDK.emd.comm.EMDClient.getConnection(EMDClient.java:1877)
        at oracle.sysman.emSDK.emd.comm.EMDClient.getConnection(EMDClient.java:1810)
        at oracle.sysman.emSDK.emd.comm.EMDClient.verifyHttpConnection(EMDClient.java:2540)
        at oracle.sysman.emSDK.emd.comm.EMDClient.getResponseForRequest(EMDClient.java:2323)
        at oracle.sysman.emSDK.emd.comm.EMDClient.getUploadManagerStatus(EMDClient.java:4853)
        at oracle.sysman.eml.admin.rep.emdConfig.EmdConfigTargetsData.getEmdUploadData(EmdConfigTargetsData.java:1640)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    This error may be reported when:

    - Accessing the Agent home page in Grid Control
    - Setting preferred credentials for a target monitored by the Agent
    - Managing metrics for a target monitored by the Agent

    Jobs scheduled to be run by the Agents can become non-responsive. The OMS log file $ORACLE_HOME/sysman/log/emoms.trc can show:

        2010-12-31 00:06:58,204 [JobWorker 430:Thread-34] DEBUG emSDK.comm getStreamResponse.4015 - oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
        oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
        at oracle.sysman.emSDK.emd.comm.EMDClient.getStreamResponse_(EMDClient.java:4088)
        at oracle.sysman.emSDK.emd.comm.EMDClient.getStreamResponse(EMDClient.java:4009)
        at oracle.sysman.emSDK.emd.comm.EMDClient.remoteOperation(EMDClient.java:3404)
        at oracle.sysman.emdrep.jobs.CommandManager.requestRemoteCommand(CommandManager.java:765)
        at oracle.sysman.emdrep.jobs.commands.RemoteOp.executeCommand(RemoteOp.java:434)
        at oracle.sysman.emdrep.jobs.commands.RemoteOp.executeCommand(RemoteOp.java:491)
        at oracle.sysman.emdrep.jobs.BaseJobWorker.runStep(BaseJobWorker.java:614)
        at oracle.sysman.emdrep.jobs.BaseJobWorker.doOneOperation(BaseJobWorker.java:738)
        at oracle.sysman.emdrep.jobs.JobWorker.doOneOperation(JobWorker.java:306)
        at oracle.sysman.emdrep.jobs.JobWorker.run(JobWorker.java:288)
        at oracle.sysman.util.threadPoolManager.WorkerThread.run(Worker.java:261)

    Detailed symptoms for Grid Control Management Agent 10.2.0.5 on AIX

    Beginning 31-DEC-2010 00:00:00, 10.2.0.5 Management Agents running on the AIX 5L operating system will fail to monitor Oracle Application Server targets. As a result, the Availability Status for the Oracle Application Server targets will be in the "Metric Error" state. NOTE: the 10.2.0.5.0 Agents experience these errors regardless of the version/platform of the OMS.

    The following metric error is seen in the console for the Oracle Application Server targets monitored by a Grid Control Management Agent 10.2.0.5 installed on AIX and experiencing a Root Certificate Authority issue:

        Message oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated

    The Grid Control Management Agent log file $ORACLE_HOME/sysman/log/emagentfetchlet.log (or $ORACLE_HOME/hostname/sysman/log/emagentfetchlet.log for a clustered Agent) includes the following errors:

        2010-12-31 00:01:03,626 [nmefmgr_getJNIFetchlet] ERROR ias.ResponseMetric getResponseMetric.154 - Unable to compute application server status
        oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
        at oracle.sysman.ias.ias.ResponseMetric.getResponseMetric(ResponseMetric.java:108)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:618)
        at oracle.sysman.emd.fetchlets.JavaWrapperFetchlet.getMetric(JavaWrapperFetchlet.java:217)
        at oracle.sysman.emd.fetchlets.FetchletWrapper.getMetric(FetchletWrapper.java:382)

    Beginning 31-DEC-2010, 10.2.0.5 Management Agents on the AIX 5L platform will also fail to secure or re-secure with the Oracle Management Service (OMS), which causes installation of 10.2.0.5 Agents on the AIX 5L platform to fail. The "emctl secure agent" command fails with the following error, written to the $ORACLE_HOME/sysman/log/secure.log file (or $ORACLE_HOME/hostname/sysman/log/secure.log for a clustered Agent):

        2011-01-03 21:06:11,941 [main] ERROR agent.SecureAgentCmd main.207 - Failed to secure the Agent:
        javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
        at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificateChain(DashoA6275)
        at oracle.sysman.emctl.secure.agent.SecureAgentCmd.checkUpload(SecureAgentCmd.java:478)
        at oracle.sysman.emctl.secure.agent.SecureAgentCmd.secureAgent(SecureAgentCmd.java:249)
        at oracle.sysman.emctl.secure.agent.SecureAgentCmd.main(SecureAgentCmd.java:200)

    For the solution, refer to "AIX Grid Control 10.2.0.5 SSL Communication and Monitoring Issue since 31-DEC-2010" (Doc ID 1275070.1).

    Read the article

  • Delegation of Solaris Zone Administration

    - by darrenm
    In Solaris 11, zone delegation is a built-in feature. The zones system now uses fine-grained RBAC authorisations to allow delegation of management of distinct zones, rather than all zones, which is what the 'Zone Management' RBAC profile did in Solaris 10.

    The data for this can be stored with the zone, or you can create RBAC profiles (which can even be stored in NIS or LDAP) granting access to specific lists of zones to administrators. For example, let's say we have zones named zoneA through zoneF and three admins: alice, bob and carl. We want to grant a subset of zone management to each of them. We can do that either by adding the admin resource to the appropriate zones via zonecfg(1M), or with RBAC data directly. First, an example of storing the data with the zone:

        # zonecfg -z zoneA
        zonecfg:zoneA> add admin
        zonecfg:zoneA:admin> set user=alice
        zonecfg:zoneA:admin> set auths=manage
        zonecfg:zoneA:admin> end
        zonecfg:zoneA> commit
        zonecfg:zoneA> exit

    Now the alternative method of storing this directly in the RBAC database, shown for all our admins and zones:

        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneA alice
        # usermod -A +solaris.zone.login/zoneB alice

        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneB bob
        # usermod -A +solaris.zone.manage/zoneC bob

        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneC carl
        # usermod -A +solaris.zone.manage/zoneD carl
        # usermod -A +solaris.zone.manage/zoneE carl
        # usermod -A +solaris.zone.manage/zoneF carl

    In the above, alice can manage only zoneA, bob can manage zoneB and zoneC, and carl can manage zoneC through zoneF. The user alice can also log in on the console of zoneB, but she can't perform the operations that require the solaris.zone.manage authorisation on it.

    If you have a large number of zones and/or admins, or you just want a layer of abstraction, you can collect the authorisation lists into an RBAC profile and grant that to the admins. For example, let's create RBAC profiles for the things that alice and carl can do:

        # profiles -p 'Zone Group 1'
        profiles:Zone Group 1> set desc="Zone Group 1"
        profiles:Zone Group 1> add profile="Zone Management"
        profiles:Zone Group 1> add auths=solaris.zone.manage/zoneA
        profiles:Zone Group 1> add auths=solaris.zone.login/zoneB
        profiles:Zone Group 1> commit
        profiles:Zone Group 1> exit

        # profiles -p 'Zone Group 3'
        profiles:Zone Group 3> set desc="Zone Group 3"
        profiles:Zone Group 3> add profile="Zone Management"
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneD
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneE
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneF
        profiles:Zone Group 3> commit
        profiles:Zone Group 3> exit

    Now, instead of granting carl and alice the 'Zone Management' profile and the authorisations directly, we can just give them the appropriate profile:

        # usermod -P +'Zone Group 3' carl
        # usermod -P +'Zone Group 1' alice

    If we wanted to store the profile data and the profiles granted to the users in LDAP, we would just add '-S ldap' to the profiles and usermod commands. For a documentation overview, see the description of the "admin" resource in zonecfg(1M), profiles(1) and usermod(1M).

    Read the article

  • List of available whitepapers as at 04 May 2010

    - by Anthony Shorten
    The following whitepapers are available, from My Oracle Support, for any Oracle Utilities Application Framework based product:

    - 559880.1, ConfigLab Design Guidelines: outlines how to design and implement a ConfigLab solution.
    - 560367.1, Technical Best Practices for Oracle Utilities Application Framework Based Products: summarizes common technical best practices used by partners, implementation teams and customers.
    - 560382.1, Performance Troubleshooting Guideline Series: a set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are:
        - Concepts: general concepts and performance troubleshooting processes.
        - Client Troubleshooting: general troubleshooting of the browser client, with common issues and resolutions.
        - Network Troubleshooting: general troubleshooting of the network, with common issues and resolutions.
        - Web Application Server Troubleshooting: general troubleshooting of the web application server, with common issues and resolutions.
        - Server Troubleshooting: general troubleshooting of the operating system, with common issues and resolutions.
        - Database Troubleshooting: general troubleshooting of the database, with common issues and resolutions.
        - Batch Troubleshooting: general troubleshooting of the background processing component of the product, with common issues and resolutions.
    - 560401.1, Software Configuration Management Series: a set of whitepapers on managing customization (code and data) using the tools provided with the framework. The individual whitepapers are:
        - Concepts: general concepts and introduction.
        - Environment Management: principles and techniques for creating and managing environments.
        - Version Management: integration of version control and version management of configuration items.
        - Release Management: packaging configuration items into a release.
        - Distribution: distribution and installation of releases across environments.
        - Change Management: generic change management processes for product implementations.
        - Status Accounting: status reporting techniques using product facilities.
        - Defect Management: generic defect management processes for product implementations.
        - Implementing Single Fixes: discussion of the single fix architecture and how to use it in an implementation.
        - Implementing Service Packs: discussion of service packs and how to use them in an implementation.
        - Implementing Upgrades: discussion of the upgrade process and common techniques for minimizing the impact of upgrades.
    - 773473.1, Oracle Utilities Application Framework Security Overview: summarizes the security facilities in the framework. Updated for OUAF 4.0.1.
    - 774783.1, LDAP Integration for Oracle Utilities Application Framework based products: summarizes how to integrate an external LDAP based security repository with the framework.
    - 789060.1, Oracle Utilities Application Framework Integration Overview: summarizes the various common integration techniques used with the product (with case studies).
    - 799912.1, Single Sign On Integration for Oracle Utilities Application Framework based products: outlines a generic process for integrating an SSO product with the framework.
    - 807068.1, Oracle Utilities Application Framework Architecture Guidelines: outlines the different variations of architecture that can be considered; each variation includes advice on configuration and other considerations.
    - 836362.1, Batch Best Practices for Oracle Utilities Application Framework based products: outlines the common and best practices implemented by sites all over the world. Updated for OUAF 4.0.1.
    - 856854.1, Technical Best Practices V1 Addendum: addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products containing only V1.x specific advice.
    - 942074.1, XAI Best Practices: outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework. Updated for OUAF 4.0.1.
    - 970785.1, Oracle Identity Manager Integration Overview: outlines the principles of the prebuilt integration between Oracle Utilities Application Framework based products and Oracle Identity Manager, used to provision user and user group security information.
    - 1068958.1, Production Environment Configuration Guidelines (New!): outlines common production level settings for the products.

    Read the article

  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new world record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers. Running the Siebel PSPP 8.1.1.4 workload, which includes Siebel Call Center and Order Management System, the SPARC T4 servers demonstrate the impressive throughput of the SPARC T4 processor by achieving 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users, at a rate of 239,748 Business Transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.

    Performance Landscape

    Systems: 1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) for Web; 3 x SPARC T4-2 (2 x SPARC T4, 2.85 GHz) for App/Gateway; 1 x SPARC T4-1 (1 x SPARC T4, 2.85 GHz) for DB
    Transactions/hour: 239,748 (Call Center 197,128 + Order Management 42,620)
    Users: 29,000 (Call Center 20,300 + Order Management 8,700)
    Response times: Call Center 0.165 sec; Order Management 0.925 sec

    Configuration Summary

    Web server: 1 x SPARC T4-1 server, 1 x SPARC T4 processor at 2.85 GHz, 128 GB memory, Oracle Solaris 10 8/11, iPlanet Web Server 7.
    Application servers: 3 x SPARC T4-2 servers, each with 2 x SPARC T4 processors at 2.85 GHz, 256 GB memory, 3 x 300 GB SAS internal disks, Oracle Solaris 10 8/11, Siebel CRM 8.1.1.5 SIA.
    Database server: 1 x SPARC T4-1 server, 1 x SPARC T4 processor at 2.85 GHz, 128 GB memory, Oracle Solaris 11 11/11, Oracle Database 11g Release 2 (11.2.0.2).
    Storage: 1 x Sun Storage F5100 Flash Array with 80 x 24 GB flash modules.

    Benchmark Description

    The Siebel 8.1 PSPP benchmark includes Call Center and Order Management:

    - Siebel Financial Services Call Center provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. Use cases tested: "Incoming Call Creates Opportunity, Quote and Order" and "Incoming Call Creates Service Request". Three complex business transactions were executed simultaneously for a specific number of concurrent users; the ratios of these three scenarios were 30%, 40% and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 10, 13 and 35 seconds respectively.

    - Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle, and can be tightly integrated with back-office applications, allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. Use cases tested: "Order & Order Items Creation" and "Order Updates". Two complex Order Management transactions were executed simultaneously, concurrently with the three Call Center scenarios above; the ratio of these two scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 20 and 67 seconds respectively.

    Key Points and Best Practices

    No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.

    See Also: Siebel White Papers; SPARC T4-1 Server (oracle.com, OTN); SPARC T4-2 Server (oracle.com, OTN); Siebel CRM (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article
