Search Results


  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: mission accomplished, and it was fun! Having used the resources listed in my previous article about learning content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump start videos on Channel 9, and the various authors at Pluralsight.

    Local Prometric testing centre

    Back in November I chose a local testing centre that was the easiest to reach from my office, despite the horrible traffic you might experience here on the island. It was actually not the closest one, but given their website, their awards as a Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI my priority. Boy, would I come to regret that decision this morning...

    The official Prometric exam guide asks every attendee to show up at least 30 minutes prior to the scheduled time of the test. That should have been the easy part, but unfortunately, due to heavier traffic than usual, I arrived only 20 minutes ahead of time. Not too bad, but there was more to come. The building, called 'le Hub', is nicely renovated and provides the right environment for an IT group of companies like FRCI; I think they currently have 5 independent IT departments there. Even the handling at the reception was straightforward and welcoming, and put me at ease. But then... first shock: "We don't have any exam registration for today." Hm, that's nice... here's my mail confirmation from Prometric. First attack successfully handled, and the lady went off again to check their records. Next shock: a couple of minutes later, another guy tried to explain to me that "the staff of the testing centre is already on vacation and the centre is officially closed." Are you kidding me? Here's the official confirmation from Prometric, and I don't find it funny that I took a day off only to hear this kind of blubbering nonsense. I had thought I'd be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally came back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I was allowed to sit down and take the exam.

    Exam details

    Well, you know the rules: signing an NDA doesn't allow me to give you any details about the questions or topics that were covered. Check out the official exam description, and you're on the right track. Sorry, guys... ;-)

    The result

    "Congratulations! You have passed this Microsoft Certification exam." In general, I have to admit that the parts on HTML5 and CSS3 were the easiest, and that I need to get a little more familiar with certain JavaScript features like class definitions, inheritance and data security. Anyway, exam passed - who cares about the details?

    Next goal

    Of course, the journey to Microsoft certifications continues, and my next goal is to pass exams 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript and 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. That would earn me the MCSD: Windows Store Apps using HTML5 certification. I guess that during 2013 I'll be busy with various learning and teaching lessons.


  • Please Help - PHP Form, when no text is entered [migrated]

    - by Joe Turner
    I'm creating a mobile landing page, and I have also created a form that lets me create more by duplicating a folder that hosts a template file. The script takes you to a page where you input the company details one by one and press submit; then the page is created. My problem is, when a field is left out (YouTube, for instance), the button is still created but is blank. I would like there to be a default text for when there is no text. I've tried a few things and have been struggling to make this work for DAYS!

    <?php
    $company = $_POST["company"];
    $phone = $_POST["phone"];
    $colour = $_POST["colour"];
    $email = $_POST["email"];
    $website = $_POST["website"];
    $video = $_POST["video"];
    ?>
    <div id="contact-area">
        <form method="post" action="generate.php"><br>
            <input type="text" name="company" placeholder="Company Name" /><br>
            <input type="text" name="slogan" placeholder="Slogan" /><br>
            <input class="color {required:false}" name="colour" placeholder="Company Colour"><br>
            <input type="text" name="phone" placeholder="Phone Number" /><br>
            <input type="text" name="email" placeholder="Email Address" /><br>
            <input type="text" name="website" placeholder="Full Website - Include http://" /><br>
            <input type="text" name="video" placeholder="Video URL" /><br>
            <input type="submit" value="Generate QuickLinks" style="background:url(images/submit.png) repeat-x; color:#FFF"/>
        </form>
    </div>

    That's the form. It takes the variables and posts them to the file below.

    <?php
    $File = "includes/details.php";
    $Handle = fopen($File, 'w');
    $Data = "<div id='logo'>
        <h1 style='color:#$_POST[colour]'>$_POST[company]</h1>
        <h2>$_POST[slogan]</h2>
    </div>
    <ul data-role='listview' data-inset='true' data-theme='b'>
        <li style='background-color:#$_POST[colour]'><a href='tel:$_POST[phone]'>Phone Us</a></li>
        <li style='background-color:#$_POST[colour]'><a href='mailto:$_POST[email]'>Email Us</a></li>
        <li style='background-color:#$_POST[colour]'><a href='$_POST[website]'>View Full Website</a></li>
        <li style='background-color:#$_POST[colour]'><a href='$_POST[video]'>Watch Us</a></li>
    </ul>
    \n";
    fwrite($Handle, $Data);
    fclose($Handle);
    ?>

    And that is what the form turns into. I need a default link to be put in in case a field is left blank, which it sometimes is. Thanks in advance guys.
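    One way to get the default behaviour asked for here is to substitute a fallback before writing the file: check each POST value and replace blank ones with a default. A minimal sketch assuming the same field names as above; the helper name post_or_default is hypothetical, and the fallback values are placeholders:

    <?php
    // Return the trimmed POST value, or a fallback when the field is missing or blank.
    function post_or_default($key, $default) {
        $value = isset($_POST[$key]) ? trim($_POST[$key]) : '';
        return $value !== '' ? $value : $default;
    }

    $company = post_or_default('company', 'Your Company');
    $slogan  = post_or_default('slogan',  '');
    $colour  = post_or_default('colour',  '000000');
    $phone   = post_or_default('phone',   '000 0000');
    $email   = post_or_default('email',   'info@example.com');
    $website = post_or_default('website', 'http://example.com');
    $video   = post_or_default('video',   'http://example.com');

    // Then build $Data from these variables instead of reading $_POST directly, e.g.:
    // <li style='background-color:#$colour'><a href='$video'>Watch Us</a></li>
    ?>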


  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code.

    When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enables you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements – without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#.

    There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH).

    The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide out-of-the-box support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own.

    The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests, in exactly the same way as server-side unit tests. You can download the source code for this project by scrolling to the end of this blog entry.

    Rejected Approach: Browser Launchers

    One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers().
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <html>
    <head>
        <title>Using QUnit</title>
        <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" />
    </head>
    <body>
        <h1 id="qunit-header">QUnit example</h1>
        <h2 id="qunit-banner"></h2>
        <div id="qunit-testrunner-toolbar"></div>
        <h2 id="qunit-userAgent"></h2>
        <ol id="qunit-tests"></ol>
        <div id="qunit-fixture">test markup, will be hidden</div>
        <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
        <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script>
        <script type="text/javascript">
            // The function to test
            function addNumbers(a, b) {
                return a + b;
            }

            // The unit test
            test("Test of addNumbers", function () {
                equals(4, addNumbers(1, 3), "1+3 should be 4");
            });
        </script>
    </body>
    </html>

    This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you know very quickly whenever you have broken your JavaScript code.

    While easy to set up, there are several big disadvantages to this approach to executing JavaScript unit tests:

    You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don't appear in a single location, you are more likely to introduce bugs into your code without noticing it.

    Because your unit tests are not integrated with Visual Studio – in particular, MS Test – you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build.

    A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach:

    Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/.

    LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501

    jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/

    TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server.
    Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki

    Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/

    All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a "living and breathing" browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test – versus an integration test – is that a unit test tests code in isolation.

    Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure.

    Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code.

    Using Server-Side JavaScript for JavaScript Unit Tests

    A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser:

    Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/

    V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/

    JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages.

    Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests.

    Using the Microsoft Script Control

    There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET.
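    By contrast, the Script Control needs only two calls to get JavaScript running from C#. Here is a minimal hedged sketch, mirroring the AddCode()/Run() usage of the helper class shown later in this article (it assumes the COM reference to Microsoft Script Control 1.0 that is set up below; the class name ScriptControlDemo is illustrative):

    using MSScriptControl;

    class ScriptControlDemo
    {
        static void Main()
        {
            // Create the JScript engine through the Microsoft Script Control.
            ScriptControl sc = new ScriptControl();
            sc.Language = "JScript";
            sc.AllowUI = false;

            // Add JavaScript source to the engine, then invoke a function by name.
            sc.AddCode("function addNumbers(a, b) { return a + b; }");
            object result = sc.Run("addNumbers", new object[] { 1, 3 });
            System.Console.WriteLine(result); // prints 4
        }
    }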
    There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Windows Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0.

    Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

    Creating the JavaScript Code to Test

    To keep things simple, let's imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together:

    MvcApplication1\Scripts\Math.js

    function addNumbers(a, b) {
        return 5;
    }

    Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project's Scripts folder (save the file in your actual MVC application and not your MVC test application).

    Creating the JavaScript Test Helper Class

    To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:

    LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.

    ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

    Here's the code for the JavaScriptTestHelper class:

    JavaScriptTestHelper.cs

    using System;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using MSScriptControl;

    namespace MvcApplication1.Tests
    {
        public class JavaScriptTestHelper : IDisposable
        {
            private ScriptControl _sc;
            private TestContext _context;

            /// <summary>
            /// You need to use this helper with Unit Tests and not
            /// Basic Unit Tests because you need a Test Context
            /// </summary>
            /// <param name="testContext">Unit Test Test Context</param>
            public JavaScriptTestHelper(TestContext testContext)
            {
                if (testContext == null)
                {
                    throw new ArgumentNullException("TestContext");
                }
                _context = testContext;

                _sc = new ScriptControl();
                _sc.Language = "JScript";
                _sc.AllowUI = false;
            }

            /// <summary>
            /// Load the contents of a JavaScript file into the
            /// Script Engine.
            /// </summary>
            /// <param name="path">Path to JavaScript file</param>
            public void LoadFile(string path)
            {
                var fileContents = File.ReadAllText(path);
                _sc.AddCode(fileContents);
            }

            /// <summary>
            /// Pass the name of the test that you want to execute.
            /// </summary>
            /// <param name="testMethodName">JavaScript function name</param>
            public void ExecuteTest(string testMethodName)
            {
                dynamic result = null;
                try
                {
                    result = _sc.Run(testMethodName, new object[] { });
                }
                catch
                {
                    var error = ((IScriptControl)_sc).Error;
                    if (error != null)
                    {
                        var description = error.Description;
                        var line = error.Line;
                        var column = error.Column;
                        var text = error.Text;
                        var source = error.Source;
                        if (_context != null)
                        {
                            var details = String.Format("{0} \r\nLine: {1} Column: {2}",
                                                        source, line, column);
                            _context.WriteLine(details);
                        }
                    }
                    throw new AssertFailedException(
                        error != null ? error.Description : "Unknown JScript error");
                }
            }

            public void Dispose()
            {
                _sc = null;
            }
        }
    }

    Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

    Creating the JavaScript Unit Test

    Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

    MvcApplication1.Tests\JavaScriptTests\MathTest.js

    /// <reference path="JavaScriptUnitTestFramework.js"/>
    function testAddNumbers() {
        // Act
        var result = addNumbers(1, 3);

        // Assert
        assert.areEqual(4, result, "addNumbers did not return right value!");
    }

    The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file:

    MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

    var assert = {
        areEqual: function (expected, actual, message) {
            if (expected !== actual) {
                throw new Error("Expected value " + expected +
                    " is not equal to " + actual + ". " + message);
            }
        }
    };

    There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests.

    Deploying the JavaScript Test Files

    This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder, or Visual Studio won't be able to find your JavaScript files when you execute your unit tests, and you will get an error when you attempt to execute them.

    You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run.
    Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project's JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog.

    Creating the Visual Studio Unit Test

    The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:

    [TestMethod]
    public void TestAddNumbers()
    {
        var jsHelper = new JavaScriptTestHelper(this.TestContext);

        // Load JavaScript files
        jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
        jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
        jsHelper.LoadFile("MathTest.js");

        // Execute JavaScript Test
        jsHelper.ExecuteTest("testAddNumbers");
    }

    This code uses the JavaScriptTestHelper to load three files:

    JavaScriptUnitTestFramework.js – Contains the assert functions.

    Math.js – Contains the addNumbers() function from your MVC application which is being tested.

    MathTest.js – Contains the JavaScript unit test function.

    Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.

    Running the Visual Studio JavaScript Unit Test

    After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won't work correctly because Visual Studio won't detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.)

    The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution, you will see that the TestAddNumbers() JavaScript test has failed. That is good, because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.

    Summary

    The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window.
    You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip

    Before running this code, you need to first install the Microsoft Script Control, which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
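    Once the Script Control is installed, a quick end-to-end sanity check is to fix the deliberately broken function and re-run the tests; the JavaScript test should then pass alongside your server-side tests:

    // MvcApplication1\Scripts\Math.js – corrected so that testAddNumbers() passes
    function addNumbers(a, b) {
        return a + b;
    }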


  • WLS MBeans

    - by Jani Rautiainen
    WLS provides a set of Managed Beans (MBeans) to configure, monitor and manage WLS resources. We can use the WLS MBeans to automate some of the tasks related to the configuration and maintenance of a WLS instance. The MBeans can be accessed in a number of ways: through various UIs, and programmatically using Java or WLST Python scripts. For customization development we can use these features to, for example, manage deployed customizations in MDS, control logging levels, and automate the deployment of dependent libraries.

    This article is an introduction on how to access and use the WLS MBeans. The goal is to illustrate the various access methods in a single article; the details of the features are left to the linked documentation. This article covers a Windows-based environment; the steps for Linux would be similar, with some differences in details such as how file paths are defined.

    MBeans

    The WLS MBeans can be categorized into runtime and configuration MBeans. The runtime MBeans can be used to access runtime information about the server and its resources; the data from runtime beans is only available while the server is running. The runtime beans can be used to, for example, check the state of the server or of a deployment. The configuration MBeans contain information about the configuration of servers and resources. The configuration of the domain is stored in the config.xml file, and the configuration MBeans can be used to access and modify that configuration data.

    For more information on the WLS MBeans refer to:

    Understanding WebLogic Server MBeans
    WLS MBean reference

    Java Management Extensions (JMX)

    We can use the JMX APIs to access the WLS MBeans. This allows us to create Java programs to configure, monitor, and manage WLS resources. In order to use the WLS MBeans we need to add the following library to the classpath:

    WL_HOME\lib\wljmxclient.jar

    Connecting to a WLS MBean server

    The WLS MBeans are contained in an MBean server; depending on the requirement we can connect to (MBean server / JNDI name):

    Domain Runtime MBean Server – weblogic.management.mbeanservers.domainruntime
    Runtime MBean Server – weblogic.management.mbeanservers.runtime
    Edit MBean Server – weblogic.management.mbeanservers.edit

    To connect to the WLS MBean server, first we need to create a map containing the credentials; these define the user, the password and the package containing the protocol:

    Hashtable<String, String> param = new Hashtable<String, String>();
    param.put(Context.SECURITY_PRINCIPAL, "weblogic");
    param.put(Context.SECURITY_CREDENTIALS, "weblogic1");
    param.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

    Next we create the connection:

    JMXServiceURL serviceURL =
        new JMXServiceURL("t3", "127.0.0.1", 7101,
                          "/jndi/weblogic.management.mbeanservers.domainruntime");
    JMXConnector connector = JMXConnectorFactory.connect(serviceURL, param);
    MBeanServerConnection connection = connector.getMBeanServerConnection();

    With this connection we can now access the MBeans of the WLS instance. For a complete example see Appendix A of this post. For more details refer to Accessing WebLogic Server MBeans with JMX.

    Accessing WLS MBeans

    The WLS MBeans are structured hierarchically; in order to access content we need to know the path to the MBean we are interested in. An MBean attribute is read using the MBeanServerConnection.getAttribute API.
    WLS provides entry points into the hierarchy, allowing us to navigate all the WLS MBeans (MBean server / JMX object name):

    Domain Runtime MBean Server – com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
    Runtime MBean Server – com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean
    Edit MBean Server – com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean

    For example, we can access the Domain Runtime MBean using:

    ObjectName service = new ObjectName(
        "com.bea:Name=DomainRuntimeService," +
        "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");

    The same syntax works for any "child" WLS MBean, e.g. to find out all application deployments we can use:

    ObjectName domainConfig =
        (ObjectName)connection.getAttribute(service, "DomainConfiguration");
    ObjectName[] appDeployments =
        (ObjectName[])connection.getAttribute(domainConfig, "AppDeployments");

    Alternatively, we could access the same MBean using the full syntax:

    ObjectName domainConfig =
        new ObjectName("com.bea:Location=DefaultDomain,Name=DefaultDomain,Type=Domain");
    ObjectName[] appDeployments =
        (ObjectName[])connection.getAttribute(domainConfig, "AppDeployments");

    For more details refer to Accessing WebLogic Server MBeans with JMX.

    Invoking operations on WLS MBeans

    The WLS MBean operations can be invoked with the MBeanServerConnection.invoke API; in the following example we query the state of the "AppsLoggerService" application:

    ObjectName appRuntimeStateRuntime =
        new ObjectName("com.bea:Name=AppRuntimeStateRuntime,Type=AppRuntimeStateRuntime");
    Object[] parameters = { "AppsLoggerService", "DefaultServer" };
    String[] signature = { "java.lang.String", "java.lang.String" };
    String result = (String)connection.invoke(appRuntimeStateRuntime,
        "getCurrentState", parameters, signature);

    The result returned should be "STATE_ACTIVE", assuming the "AppsLoggerService" application is up and running.

    WebLogic Scripting Tool (WLST)

    The WebLogic Scripting Tool (WLST) is a command-line scripting environment through which we can access the same WLS MBeans. The tool is located under:

    $MW_HOME\oracle_common\common\bin\wlst.bat

    Do note that there are several instances of the wlst script under $MW_HOME. Each of them works, however the commands available vary, so we want to use the one under "oracle_common". The tool starts in offline mode, in which we can access and manipulate the domain configuration. In online mode we can also access the runtime information. We connect to the Administration Server with:

    connect("weblogic", "weblogic1", "t3://127.0.0.1:7101")

    In both online and offline modes we can navigate the WLS MBeans using commands like "ls" to print content and "cd" to navigate between objects. All the available commands can be listed with:

    help('all')

    For details of the tool refer to WebLogic Scripting Tool and, for the commands available, the WLST Command and Variable Reference. Also note that the WLST tool can be invoked from Java code in Embedded Mode.

    Running Scripts

    The WLST tool allows us to automate tasks using Python scripts in Script Mode. A script can be written by hand or recorded by the WLST tool.
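    As an illustration of a hand-written online-mode script, the following hedged sketch connects to the local integrated domain used throughout this article and queries the state of the AppsLoggerService application (it assumes the AppRuntimeStateRuntime MBean is reachable from the domain runtime tree, mirroring the JMX invoke example above):

    # connect to the local Administration Server (online mode)
    connect('weblogic', 'weblogic1', 't3://127.0.0.1:7101')

    # switch to the domain runtime tree; cmo is now the DomainRuntimeMBean
    domainRuntime()
    state = cmo.getAppRuntimeStateRuntime().getCurrentState('AppsLoggerService', 'DefaultServer')
    print 'State of AppsLoggerService = ' + state

    disconnect()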
    Example commands for recording a script:

    startRecording("c:/temp/recording.py")
    <commands that we want to record>
    stopRecording()

    We can run the script from WLST:

    execfile("c:/temp/recording.py")

    We can also run the script from the command line:

    C:\apps\Oracle\Middleware\oracle_common\common\bin\wlst.cmd c:/temp/recording.py

    Various sample scripts are provided with the WLS instance.

    UI to Access the WLS MBeans

    There are various UIs through which we can access the WLS MBeans:

    Oracle Enterprise Manager Fusion Middleware Control
    Oracle WebLogic Server Administration Console
    Fusion Middleware Control MBean Browser

    In the integrated JDeveloper environment only the Oracle WebLogic Server Administration Console is available to us. For more information refer to the documentation; one noteworthy feature of the console is the ability to record WLST scripts based on the navigation performed in it.

    In addition to the UIs above, the JConsole included in the JDK can be used to access the WLS MBeans. JConsole needs to be started with specific parameters to force WLS objects to be used and with the required jar files in the classpath:

    "C:\apps\Oracle\Middleware\jdk160_24\bin\jconsole" -J-Djava.class.path=C:\apps\Oracle\Middleware\jdk160_24\lib\jconsole.jar;C:\apps\Oracle\Middleware\jdk160_24\lib\tools.jar;C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

    For more details refer to Accessing Custom MBeans from JConsole.

    Summary

    In this article we have covered various ways to access and use the WLS MBeans, in the context of the integrated WLS in JDeveloper, to be used for Fusion Applications customization development.

    References

    Developing Custom Management Utilities With JMX for Oracle WebLogic Server
    Accessing WebLogic Server MBeans with JMX
    WebLogic Server MBean Reference
    WebLogic Scripting Tool
    WLST Command and Variable Reference

    Appendix A

    package oracle.apps.test;

    import java.io.IOException;
    import java.net.MalformedURLException;
    import java.util.Hashtable;
    import javax.management.MBeanServerConnection;
    import javax.management.MalformedObjectNameException;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import javax.naming.Context;

    /**
     * This class contains simple examples on how to access WLS MBeans using JMX.
     */
    public class BlogExample {

        /**
         * Connection to the WLS MBeans
         */
        private MBeanServerConnection connection;

        /**
         * Constructor that takes in the connection information for the
         * domain and obtains the resources from WLS MBeans using JMX.
         * @param hostName host name to connect to for the WLS server
         * @param port port to connect to for the WLS server
         * @param userName user name to connect to for the WLS server
         * @param password password to connect to for the WLS server
         */
        public BlogExample(String hostName, String port, String userName,
                           String password) {
            super();
            try {
                initConnection(hostName, port, userName, password);
            } catch (Exception e) {
                throw new RuntimeException("Unable to connect to the domain " +
                                           hostName + ":" + port);
            }
        }

        /**
         * Default constructor.
         * Tries to create a connection with default values. A runtime exception
         * will be thrown if the default values are not used in the local instance.
         */
        public BlogExample() {
            this("127.0.0.1", "7101", "weblogic", "weblogic1");
        }

        /**
         * Initializes the JMX connection to the WLS MBeans.
         * @param hostName host name to connect to for the WLS server
         * @param port port to connect to for the WLS server
         * @param userName user name to connect to for the WLS server
         * @param password password to connect to for the WLS server
         * @throws IOException error connecting to the WLS MBeans
         * @throws MalformedURLException error connecting to the WLS MBeans
         * @throws MalformedObjectNameException error connecting to the WLS MBeans
         */
        private void initConnection(String hostName, String port, String userName,
                                    String password)
            throws IOException, MalformedURLException, MalformedObjectNameException {
            String protocol = "t3";
            String jndiroot = "/jndi/";
            String mserver = "weblogic.management.mbeanservers.domainruntime";
            JMXServiceURL serviceURL =
                new JMXServiceURL(protocol, hostName, Integer.valueOf(port),
                                  jndiroot + mserver);
            Hashtable<String, String> h = new Hashtable<String, String>();
            h.put(Context.SECURITY_PRINCIPAL, userName);
            h.put(Context.SECURITY_CREDENTIALS, password);
            h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                  "weblogic.management.remote");
            JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
            connection = connector.getMBeanServerConnection();
        }

        /**
         * Main method used to invoke the logic for testing
         * @param args arguments passed to the program
         */
        public static void main(String[] args) {
            BlogExample blogExample = new BlogExample();
            blogExample.testEntryPoint();
            blogExample.testDirectAccess();
            blogExample.testInvokeOperation();
        }

        /**
         * Example of using an entry point to navigate the WLS MBean hierarchy.
         */
        public void testEntryPoint() {
            try {
                System.out.println("testEntryPoint");
                ObjectName service =
                    new ObjectName("com.bea:Name=DomainRuntimeService,Type=" +
                        "weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
                ObjectName domainConfig =
                    (ObjectName)connection.getAttribute(service,
                                                        "DomainConfiguration");
                ObjectName[] appDeployments =
                    (ObjectName[])connection.getAttribute(domainConfig,
                                                          "AppDeployments");
                for (ObjectName appDeployment : appDeployments) {
                    String resourceIdentifier =
                        (String)connection.getAttribute(appDeployment,
                                                        "SourcePath");
                    System.out.println(resourceIdentifier);
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        /**
         * Example of accessing a WLS MBean directly with a full reference.
         * This does the same thing as testEntryPoint in a slightly different way.
         */
        public void testDirectAccess() {
            try {
                System.out.println("testDirectAccess");
                ObjectName appDeployment =
                    new ObjectName("com.bea:Location=DefaultDomain," +
                                   "Name=AppsLoggerService,Type=AppDeployment");
                String resourceIdentifier =
                    (String)connection.getAttribute(appDeployment, "SourcePath");
                System.out.println(resourceIdentifier);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        /**
         * Example of invoking an operation on a WLS MBean.
         */
        public void testInvokeOperation() {
            try {
                System.out.println("testInvokeOperation");
                ObjectName appRuntimeStateRuntime =
                    new ObjectName("com.bea:Name=AppRuntimeStateRuntime," +
                                   "Type=AppRuntimeStateRuntime");
                String identifier = "AppsLoggerService";
                String serverName = "DefaultServer";
                Object[] parameters = { identifier, serverName };
                String[] signature = { "java.lang.String", "java.lang.String" };
                String result =
                    (String)connection.invoke(appRuntimeStateRuntime,
                                              "getCurrentState",
                                              parameters, signature);
                System.out.println("State of " + identifier + " = " + result);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
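    The appendix class is self-contained. To try it against a local integrated WLS instance, compile it and run it with the WLS JMX client jar on the classpath, as noted earlier in the article (a hedged sketch; the paths are illustrative):

    javac oracle\apps\test\BlogExample.java
    java -cp %WL_HOME%\lib\wljmxclient.jar;. oracle.apps.test.BlogExample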


  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.

    Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

    To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.

    The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.

    By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations:

    The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.

    In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?

    It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.

    When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    Present the same set of global symbols, with the same ELF versioning, as the real object.

    Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.

    Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.

    For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.

    If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.

    The extra information needed (function or data, size, and bss details) would be added to the mapfile.

    When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    __iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.

    Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were many issues:

    The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.

    The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.

    A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.

    A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that stubs weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

    We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects.

    By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...

    real 0.019708910
    user 0.010101680
    sys 0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.

    A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.

    Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.

    Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
- A new target, called stub, is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
- A new target, called stubinstall, is added to the Makefiles; it delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
- All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and built stub objects for the entire workspace. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free; no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build!

The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was close to final form.

My focus turned towards the end game and integration. This was a huge workspace, and it needed to go back soon, before changes in the gate made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Several hypotheses were proposed, and shot down:

- Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
- It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
- Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • C# Lack of Static Inheritance - What Should I Do?

    - by yellowblood
    Alright, so as you probably know, static inheritance is impossible in C#. I understand that; however, I'm stuck with the development of my program. I will try to make it as simple as possible. Let's say our code needs to manage objects representing aircraft in some airport. The requirements are as follows:

    - There are members and methods shared by all aircraft.
    - There are many types of aircraft, and each type may have its own extra methods and members.
    - There can be many instances of each aircraft type.
    - Every aircraft type must have a friendly name and further details about the type. For example, a class named F16 would have a static member FriendlyName with the value "Lockheed Martin F-16 Fighting Falcon".
    - Other programmers should be able to add more aircraft, and they must be forced to provide the same static details about their aircraft types.
    - In some GUI, there should be a way to let the user see the list of available types (with details such as FriendlyName) and add or remove instances of the aircraft, saved, let's say, to some XML file.

    So, basically, if I could force inherited classes to implement static members and methods, I would force the aircraft types to have static members such as FriendlyName. Sadly I cannot do that. So, what would be the best design for this scenario?
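
    One common workaround is to move the per-type metadata from static members to abstract instance members, which the compiler does enforce on every derived class. A minimal C# sketch (the Details property and the AircraftCatalog helper are illustrative assumptions, not part of the question):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public abstract class Aircraft
        {
            // Abstract instance properties: every concrete subclass is
            // forced by the compiler to supply these, unlike statics.
            public abstract string FriendlyName { get; }
            public abstract string Details { get; }
        }

        public class F16 : Aircraft
        {
            public override string FriendlyName
            {
                get { return "Lockheed Martin F-16 Fighting Falcon"; }
            }

            public override string Details
            {
                get { return "Multirole fighter"; }
            }
        }

        public static class AircraftCatalog
        {
            // For the GUI's list of available types, reflection can find
            // all concrete Aircraft subclasses and create one sample
            // instance of each to read the metadata from.
            public static IEnumerable<Aircraft> AvailableTypes()
            {
                return typeof(Aircraft).Assembly.GetTypes()
                    .Where(t => t.IsSubclassOf(typeof(Aircraft)) && !t.IsAbstract)
                    .Select(t => (Aircraft)Activator.CreateInstance(t));
            }
        }

    This assumes every aircraft type has a parameterless constructor so that one throwaway instance can be created per type; an attribute-based design (a custom attribute on each class, read via reflection) avoids even that instance.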

    Read the article

  • warning C4996: 'getch': The POSIX name for this item is deprecated

    - by Maziah Mohd Zaki
        1>c:\users\user\documents\visual studio 2008\projects\bengkel\bengkel\login.cpp(196) : warning C4996: 'getch': The POSIX name for this item is deprecated. Instead, use the ISO C++ conformant name: _getch. See online help for details.
        1>        c:\program files\microsoft visual studio 9.0\vc\include\conio.h(145) : see declaration of 'getch'
        1>c:\users\user\documents\visual studio 2008\projects\bengkel\bengkel\login.cpp(199) : warning C4996: 'getch': The POSIX name for this item is deprecated. Instead, use the ISO C++ conformant name: _getch. See online help for details.
        1>        c:\program files\microsoft visual studio 9.0\vc\include\conio.h(145) : see declaration of 'getch'
        1>c:\users\user\documents\visual studio 2008\projects\bengkel\bengkel\login.cpp(203) : warning C4065: switch statement contains 'default' but no 'case' labels
        1>c:\users\user\documents\visual studio 2008\projects\bengkel\bengkel\login.cpp(225) : warning C4996: 'strcpy': This function or variable may be unsafe. Consider using strcpy_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
        1>        c:\program files\microsoft visual studio 9.0\vc\include\string.h(74) : see declaration of 'strcpy'

    Read the article

  • When I use a form to call a method on the controller, I want the page to refresh, and the url to sho

    - by MedicineMan
    Using ASP MVC, I have set up a webpage for localhost/Dinner/100 to show the dinner details for the dinner with ID = 100. On the page, there is a dropdown that shows Dinner 1, Dinner 2, etc. The user should select the dinner of interest (Dinner 2, ID = 102) from the form and press submit. The page should refresh, show the url localhost/Dinner/102, and show the details of dinner 2. My code is working except for the url: it still shows localhost/Dinner/100 even though the page correctly displays the details of Dinner 2 (ID = 102). My controller method is pretty simple:

        public ActionResult Index(string id)
        {
            int Id = 0;
            if (!IsValidFacilityId(id) || !int.TryParse(id, out Id))
            {
                return Redirect("/");
            }
            return View(CreateViewModel(Id));
        }

    Can you help me figure out how to get this all working? P.S. I did create a custom route for the method:

        routes.MapRoute(
            "DinnerDefault",                                          // Route name
            "Dinner/{id}",                                            // URL with parameters
            new { controller = "Dinner", action = "Index", id = "" }  // Parameter defaults
        );
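
    The usual fix here is the Post/Redirect/Get pattern: have the form post to an action that redirects, so the browser issues a fresh GET to the canonical URL. A sketch, assuming the dropdown's selected value is posted as selectedDinnerId (a hypothetical field name):

        using System.Web.Mvc;

        public class DinnerController : Controller
        {
            // The form posts here; the redirect makes the browser issue
            // a GET to /Dinner/{id}, which updates the address bar.
            [HttpPost]
            public ActionResult Select(string selectedDinnerId)
            {
                return RedirectToAction("Index", new { id = selectedDinnerId });
            }
        }

    The form would post to this action (for example via Html.BeginForm("Select", "Dinner")), and the redirect response triggers the GET that shows localhost/Dinner/102.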

    Read the article

  • iPhone SQLite Password Field Encryption

    - by Louis Russell
    Good afternoon guys and girls. Hopefully this will be a quick and easy question. I am building an app that requires the user to input their login details for an online service that it links to. Multiple login details can be added and saved, as the user may have several accounts that they would like to switch between. These details will be stored in an SQLite database and will contain their passwords. Now the questions are:

    1: Should these passwords be encrypted in the database? My instinct says yes, but I do not know how secure the device and system are, and whether this is necessary.

    2: If they should be encrypted, what should I use? Encrypting the whole database file sounds like overkill, so should I just encrypt the password before saving it to the database? If so, what could I use to achieve this? I have found references to "crypt(3)", but am having trouble finding much about it or how to implement it.

    I eagerly await your replies!

    Read the article

  • Python interface to PayPal - urllib.urlencode non-ASCII characters failing

    - by krys
    I am trying to implement PayPal IPN functionality. The basic protocol is as follows:

    - The client is redirected from my site to PayPal's site to complete payment. He logs into his account and authorizes payment.
    - PayPal calls a page on my server, passing in details as POST. Details include a person's name, address, payment info, etc.
    - I need to call a URL on PayPal's site internally from my processing page, passing back all the params that were passed in above, plus an additional one called 'cmd' with a value of '_notify-validate'.

    When I try to urllib.urlencode the params which PayPal has sent to me, I get:

        While calling send_response_to_paypal.
        Traceback (most recent call last):
          File "<snip>/account/paypal/views.py", line 108, in process_paypal_ipn
            verify_result = send_response_to_paypal(params)
          File "<snip>/account/paypal/views.py", line 41, in send_response_to_paypal
            params = urllib.urlencode(params)
          File "/usr/local/lib/python2.6/urllib.py", line 1261, in urlencode
            v = quote_plus(str(v))
        UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 9: ordinal not in range(128)

    I understand that urlencode does ASCII encoding, and in certain cases a user's contact info can contain non-ASCII characters. This is understandable. My question is, how do I encode non-ASCII characters for POSTing to a URL using urllib2.urlopen(req) (or another method)?

    Details: I read the params in PayPal's original request as follows (the GET is for testing):

        def read_ipn_params(request):
            if request.POST:
                params = request.POST.copy()
                if "ipn_auth" in request.GET:
                    params["ipn_auth"] = request.GET["ipn_auth"]
                return params
            else:
                return request.GET.copy()

    The code I use for sending back the request to PayPal from the processing page is:

        def send_response_to_paypal(params):
            params['cmd'] = '_notify-validate'
            params = urllib.urlencode(params)
            req = urllib2.Request(PAYPAL_API_WEBSITE, params)
            req.add_header("Content-type", "application/x-www-form-urlencoded")
            response = urllib2.urlopen(req)
            status = response.read()
            if not status == "VERIFIED":
                logging.warn("PayPal cannot verify IPN responses: " + status)
                return False
            return True

    Obviously, the problem only arises if someone's name or address or another field used for the PayPal payment does not fall into the ASCII range.

    Read the article

  • Newbie question about controllers in ASP.Net MVC.

    - by Sergio Tapia
    I'm following a tutorial on creating the NerdDinner app using ASP.NET MVC. However, I'm using Visual Studio 2010 Ultimate edition, and there was only MVC2 to choose from. I've been following the tutorial so far, and everything has really clicked and been really well explained, until this little hitch. The guide is asking me to create new methods on a controller file like so:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace NerdDinner.Controllers
        {
            public class DinnersController : Controller
            {
                public void Index()
                {
                    Response.Write("<h1>Coming Soon: Dinners</h1>");
                }

                public void Details(int id)
                {
                    Response.Write("<h1>Details DinnerID: " + id + "</h1>");
                }
            }
        }

    However, when I created the controller file, Visual Studio created an Index method already, and it looks vastly different from what the tutorial shows. Maybe this is the new way to do things in MVC2?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace NerdDinner.Controllers
        {
            public class DinnersController : Controller
            {
                //
                // GET: /Dinners/
                public ActionResult Index()
                {
                    return View();
                }
            }
        }

    My question is, how can I reproduce the Details and Index methods (from the MVC version of the tutorial) in the MVC2 way? Is this even relevant? Thank you!
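
    If it helps, the MVC2 idiom for the tutorial's two methods is to return an ActionResult; Content() is the closest stand-in for the tutorial's Response.Write() placeholders. A sketch:

        using System.Web.Mvc;

        namespace NerdDinner.Controllers
        {
            public class DinnersController : Controller
            {
                // Equivalent of the tutorial's Response.Write version:
                // Content() wraps the string in a ContentResult.
                public ActionResult Index()
                {
                    return Content("<h1>Coming Soon: Dinners</h1>");
                }

                public ActionResult Details(int id)
                {
                    return Content("<h1>Details DinnerID: " + id + "</h1>");
                }
            }
        }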

    Read the article

  • NSString to NSURL objects

    - by user337844
    Hello, I have an array that I populated with my plist file, which looks something like this:

        <array>
            <string>http://www.apple.com</string>
            <string>http://www.google.com</string>
            <string>http://www.amazon.com</string>
        </array>

    So, I'm looking to convert these NSString objects to NSURL objects when I call them in my didSelectRowAtIndexPath method. Any ideas? This is what I have right now:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSInteger row = [indexPath row];
            if (self.viewController == nil) {
                TemplateViewController *details = [[TemplateViewController alloc] initWithNibName:@"TemplateViewController" bundle:nil];
                self.viewController = details;
                [details release];
            }
            NSString *tempURLString = [tieroneurlArray objectAtIndex:row];
            NSURL *loadingUrl = [NSURL URLWithString:tempURLString];
            [tieroneurlArray addObject:loadingUrl];
            viewController.title = [NSString stringWithFormat:@"%@", [tieroneArray objectAtIndex:row]];
            [self.viewController setTableURL:[tieroneurlArray objectAtIndex:row]];
            TemplateAppAppDelegate *delegate = [[UIApplication sharedApplication] delegate];
            [delegate.templateNavController pushViewController:viewController animated:YES];
        }

    And this is how I load the array with the plist:

        - (void)viewDidLoad {
            [super viewDidLoad];
            NSString *path = [[NSBundle mainBundle] pathForResource:@"tier1Names" ofType:@"plist"];
            tieroneArray = [[NSMutableArray alloc] initWithContentsOfFile:path];
            NSString *tableURL = [[NSBundle mainBundle] pathForResource:@"tier1URL" ofType:@"plist"];
            tieroneurlArray = [[NSMutableArray alloc] initWithContentsOfFile:tableURL];
        }

    Read the article

  • How do I set YUI2 paginator to select a page other than the first page?

    - by Jeremy Weathers
    I have a YUI DataTable (YUI 2.8.0r4) with AJAX pagination. Each row in the table links to a details/editing page, and I want to link from that details page back to the list page that includes the record from the details page. So I have to a) offset the AJAX data correctly and b) tell YAHOO.widget.Paginator which page to select. According to my reading of the YUI API docs, I have to pass in the initialPage configuration option. I've attempted this, but it doesn't take: the data from AJAX is correctly offset, but the paginator thinks I'm on page 1, so clicking "next" takes me from e.g. page 6 to page 2. What am I not doing (or doing wrong)? Here's my DataTable building code:

        (function() {
            var columns = [
                {key: "retailer", label: "Retailer", sortable: false, width: 80},
                {key: "publisher", label: "Publisher", sortable: false, width: 300},
                {key: "description", label: "Description", sortable: false, width: 300}
            ];

            var source = new YAHOO.util.DataSource("/sales_data.json?");
            source.responseType = YAHOO.util.DataSource.TYPE_JSON;
            source.responseSchema = {
                resultsList: "records",
                fields: [
                    {key: "url"},
                    {key: "retailer"},
                    {key: "publisher"},
                    {key: "description"}
                ],
                metaFields: {
                    totalRecords: "totalRecords"
                }
            };

            var LoadingDT = function(div, cols, src, opts) {
                LoadingDT.superclass.constructor.call(this, div, cols, src, opts);
                // hide the message tbody
                this._elMsgTbody.style.display = "none";
            };

            YAHOO.extend(LoadingDT, YAHOO.widget.DataTable, {
                showTableMessage: function(msg) {
                    $('sales_table_overlay').clonePosition($('sales_table').down('table')).show();
                },
                hideTableMessage: function() {
                    $('sales_table_overlay').hide();
                }
            });

            var table = new LoadingDT("sales_table", columns, source, {
                initialRequest: "startIndex=125&results=25",
                dynamicData: true,
                paginator: new YAHOO.widget.Paginator({rowsPerPage: 25, initialPage: 6})
            });

            table.handleDataReturnPayload = function(oRequest, oResponse, oPayload) {
                oPayload.totalRecords = oResponse.meta.totalRecords;
                return oPayload;
            };
        })();

    Read the article

  • 550 Error When I try to get the size of a file on an FTP

    - by Eric
    I'm trying to use an FtpWebRequest to get the size of a file on a company FTP server, yet whenever I try to get the response, an exception is thrown. See the error details in the catch block in the code below.

        string uri = "ftp://ftp.domain.com/folder/folder/file.xxx";
        FtpWebRequest sizeReq = (FtpWebRequest)WebRequest.Create(uri);
        sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
        sizeReq.Credentials = cred;
        sizeReq.UsePassive = proj.ServerConfig.UsePassive; //true
        sizeReq.UseBinary = proj.ServerConfig.UseBinary;   //true
        sizeReq.KeepAlive = proj.ServerConfig.KeepAlive;   //false

        long size;
        try
        {
            // Exception thrown here when I try to get the response
            using (FtpWebResponse fileSizeResponse = (FtpWebResponse)sizeReq.GetResponse())
            {
                size = fileSizeResponse.ContentLength;
            }
        }
        catch (WebException exp)
        {
            FtpWebResponse resp = (FtpWebResponse)exp.Response;
            MessageBox.Show(exp.Message);
            // "The remote server returned an error: (550) File unavailable (e.g., file not found, no access)."
            MessageBox.Show(exp.Status.ToString());             // ProtocolError
            MessageBox.Show(resp.StatusCode.ToString());        // ActionNotTakenFileUnavailable
            MessageBox.Show(resp.StatusDescription.ToString()); // "550 SIZE: Operation not permitted\r\n"
        }

    This code does work, however, when connected to my personal FTP. The StatusDescription of the response says that the operation is "not permitted". Could it be that my office FTP just won't allow querying a file's size? I've also tried listing the directory details, which will return the size, and have noticed that my office FTP reports the directory details in a different format than my personal FTP. Maybe this is the problem?

        //work ftp ListDirectoryDetails
        -rw-r--r-- 1 (?) user 12345 Nov 16 20:28 some file name.xxx

        //personal ftp ListDirectoryDetails
        -rw-r--r-- 1 user user 12345 Mar 13 some file name.xxx

    From reading this blog post, I think that my personal FTP is returning a Unix-formatted response but my work FTP is returning a Windows-formatted response. Maybe this is unrelated, but I thought I'd mention it.
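
    When a server refuses the SIZE command, one workaround is to fall back to ListDirectoryDetails and parse the size column from the listing line for that file. A rough sketch, assuming a Unix-style listing where the size is the fifth whitespace-separated field; listing formats vary by server (as the two samples above show), so the parsing below is illustrative only:

        using System;
        using System.IO;
        using System.Net;

        static class FtpSizeFallback
        {
            // Hypothetical helper: list the directory and pull the size
            // out of the matching line. Fails on Windows-style listings
            // and on names that are suffixes of other names.
            public static long GetSizeFromListing(string directoryUri, string fileName, ICredentials cred)
            {
                FtpWebRequest req = (FtpWebRequest)WebRequest.Create(directoryUri);
                req.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
                req.Credentials = cred;

                using (FtpWebResponse resp = (FtpWebResponse)req.GetResponse())
                using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (!line.EndsWith(fileName)) continue;
                        // Unix-style: perms links owner group size month day time name
                        string[] parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                        return long.Parse(parts[4]);
                    }
                }
                throw new FileNotFoundException("File not found in listing: " + fileName);
            }
        }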

    Read the article

  • Assign a unique client ID to each <rich:dataTable /> row?

    - by Dolph Mathews
    I'm relatively new to working with the UI in Seam, so I'm hoping there is something simple I can replace the three instances of UNIQUE_ID with in the following example (such as #{object.uniqueId}). The goal is to have a <rich:dataTable /> wherein each row has the ability to show/hide a <rich:modalPanel /> with more details about the particular object instance.

        <rich:dataTable var="object" value="#{bean.myObject}">
            <rich:column>
                <h:outputText value="#{object.summary}" />
            </rich:column>
            <rich:column>
                <a onclick="Richfaces.showModalPanel('UNIQUE_ID');" href="#">Show Details in ModalPanel</a>
                <a4j:form>
                    <rich:modalPanel id="UNIQUE_ID">
                        <a onclick="Richfaces.hideModalPanel('UNIQUE_ID');" href="#">Hide This ModalPanel</a>
                        <h:outputText value="#{object.details}" />
                    </rich:modalPanel>
                </a4j:form>
            </rich:column>
        </rich:dataTable>

    If I only had one link/modalPanel pair, this would obviously be trivial, but I don't know what to do within the scope of the <rich:dataTable />'s iteration. Also, in case it complicates things further, the page will also contain many <rich:dataTable />s, each implementing this behavior.

    Read the article

  • Extracting note onset from MIDI

    - by Dolphin
    Hi, I need to extract musical features (note details: pitch, duration, rhythm, loudness, note start time) from a polyphonic MIDI file (having two staves, treble and bass; the bass may also have chords). I'm using the jMusic API to extract these details from the MIDI file. My approach is to go through each score, into parts, then phrases, and finally notes, and extract the details.

    With my approach, it reads all the treble notes first and then the bass notes, but chords are not captured (i.e. only a single note of the chord is taken), and I cannot identify from which point onwards the bass notes start. So what I tried was to get the note onsets (i.e. the start time of each note being played), since the start times of the treble and bass notes at the beginning of the piece should be the same. But I cannot extract the note onset using the jMusic API; each time it shows 0.0.

    Is there any way I can identify the voice (treble or bass) of a note? And also all the notes of a chord? How is the voice or note onset for each note stored in MIDI? Is this different for each MIDI file? Any insight is greatly appreciated. Thanks in advance.

    Read the article

  • MSTest on x64 C++/CLI

    - by Oyvind
    I've got a problem using MSTest on x64: the test project depends on a couple of C++/CLI assemblies and fails to load for some reason. In Visual Studio, I get (stripped down):

        Error loading D:\xxx\Xxx.Test.dll: Unable to load the test container 'D:\xxx\Xxx.Test.dll' or one of its dependencies. Error details: System.BadImageFormatException: Could not load file or assembly 'Common.Geometry.Native, Version=1.1.4574.22395, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    Running MSTest manually in a command prompt, I get:

        Unable to load the test container 'D:\xxx\Xxx.Test.dll' or one of its dependencies. Error details: System.IO.FileNotFoundException: Could not load file or assembly 'Common.Geometry.Native, Version=1.1.4574.22395, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.

    Details worth mentioning:

    - The test project itself is compiled using 'Any CPU'.
    - I use an x64-specific testrunconfig.
    - Dependency Walker shows no missing native dependencies in the C++/CLI assembly (Common.Geometry.Native).
    - Even more interesting, there is another test project in the same solution using the same C++/CLI assembly (Common.Geometry.Native), and it runs without any problems.
    - I have also verified that there are no 32-bit assemblies/DLLs interfering.

    Any suggestions are welcome!

    Read the article

  • How do I fetch a set of rows from data table

    - by cmrhema
    Hi, I have a dataset that has two datatables. In the first datatable I have EmpNo, EmpName and EmpAddress. In the second datatable I have Empno, EmpJoindate and EmpSalary. I want a result where I show EmpName as the label and his/her details in the gridview. I populate a datalist with the first table and use EmpNo as the datakeys. Then I populate the gridview inside the datalist, which has EmpNo, EmpJoinDate and EmpAddress. My code is somewhat as below:

        Datalist1.DataSource = dt;
        Datalist1.DataBind();
        for (int i = 0; i < Datalist1.Items.Count; i++)
        {
            int EmpNo = Convert.ToInt32(Datalist1.DataKeys[i]);
            GridView gv = (GridView)Datalist1.FindControl("gv");
            gv.DataSource = dt2;
            gv.DataBind();
        }

    Now I have a problem: I have to bind the details of the corresponding employee to the gridview, whereas the above code displays the details of all employees in every gridview. If we were using IEnumerable we could give a condition, Where(a => a.eno == EmpNo), and bind that list to the gridview. How do I do this with a datatable? Kindly do not give me suggestions to alter the stored procedure that returns the values in two tables, because that cannot be altered. I have to find a solution within the existing objects I have. Regards, Hema
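
    One way to do this while staying with the existing DataTable objects is a DataView with a RowFilter per employee (DataTable.Select would also work). A sketch of the loop, assuming the second table's key column is named Empno and the GridView sits inside each DataList item:

        for (int i = 0; i < Datalist1.Items.Count; i++)
        {
            int empNo = Convert.ToInt32(Datalist1.DataKeys[i]);

            // FindControl must be called on the item, not on the DataList
            // itself, to get that row's own GridView.
            GridView gv = (GridView)Datalist1.Items[i].FindControl("gv");

            // Filter the second table down to this employee's rows.
            DataView details = new DataView(dt2);
            details.RowFilter = "Empno = " + empNo;

            gv.DataSource = details;
            gv.DataBind();
        }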

    Read the article

  • What is the Difference between GC.GetTotalMemory(false) and GC.GetTotalMemory(true)

    - by somaraj
    Hi, could someone tell me the difference between GC.GetTotalMemory(false) and GC.GetTotalMemory(true)? I have a small program, and when I compared the results, the first loop iteration gives an output of

        loop count 0 Diff = 32

    for GC.GetTotalMemory(true), and

        loop count 0 Diff = 0

    for GC.GetTotalMemory(false). But shouldn't it be the other way around? Similarly, the rest of the loop iterations print numbers which differ between the two cases. What does this number indicate? Why does it change as the loop count increases?

        using System;

        struct Address
        {
            public string Streat;
        }

        class Details
        {
            public string Name;
            public Address address = new Address();
        }

        class emp : IDisposable
        {
            public Details objb = new Details();
            bool disposed = false;

            #region IDisposable Members
            public void Dispose()
            {
                Disposing(true);
            }

            void Disposing(bool disposing)
            {
                if (!disposed)
                    disposed = disposing;
                objb = null;
                GC.SuppressFinalize(this);
            }
            #endregion
        }

        class Program
        {
            static void Main(string[] args)
            {
                long size1 = GC.GetTotalMemory(false);
                emp empobj = null;
                for (int i = 0; i < 200; i++)
                {
                    // using (empobj = new emp())   //------- (1)
                    {
                        empobj = new emp();         //------- (2)
                        empobj.objb.Name = "ssssssssssssssssss";
                        empobj.objb.address.Streat = "asdfasdfasdfasdf";
                    }
                    long size2 = GC.GetTotalMemory(false);
                    Console.WriteLine("loop count " + i + " Diff = " + (size2 - size1));
                }
            }
        }
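
    For reference: GC.GetTotalMemory(true) may wait for a garbage collection before returning, so it reports roughly the live objects only, while GC.GetTotalMemory(false) returns an immediate estimate that can still include uncollected garbage. A small sketch of the difference (the exact numbers printed vary by runtime):

        using System;

        class GcDemo
        {
            static void Main()
            {
                // Collect first so the baseline reflects live objects only.
                long baseline = GC.GetTotalMemory(true);

                byte[] buffer = new byte[1024 * 1024]; // ~1 MB allocation

                // false: quick estimate, may include uncollected garbage.
                Console.WriteLine(GC.GetTotalMemory(false) - baseline);

                // true: collects first, so only reachable memory counts.
                Console.WriteLine(GC.GetTotalMemory(true) - baseline);

                GC.KeepAlive(buffer); // keep buffer live through both calls
            }
        }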

    Read the article

  • Copy a multi-dimensional array by value (not by reference) in PHP.

    - by Simon R
    Language: PHP

    I have a form which asks users for their educational details, course details and technical details. When the form is submitted, the page goes to a different page to run processes on the information and save parts to a database. HOWEVER(!) I then need to return to the original page, where access to the original POST information is needed. I thought it would be simple to copy (by value) the multi-dimensional (md) $_POST array to $_SESSION['post']:

        session_start();
        $_SESSION['post'] = $_POST;

    However, this only appears to place the top level of the array into $_SESSION['post'], not doing anything with the children/sub-arrays. An abbreviated form of the md $_POST array is as follows:

        Array
        (
            [formid] => 2
            [edu] => Array
                (
                    ['id'] => Array
                        (
                            [1] => new_1
                            [2] => new_2
                        )
                    ['nameOfInstitution'] => Array
                        (
                            [1] => 1
                            [2] => 2
                        )
                    ['qualification'] => Array
                        (
                            [1] => blah
                            [2] => blah
                        )
                    ['grade'] => Array
                        (
                            [1] => blah
                            [2] => blah
                        )
                )
            [vID] => 61
            [Submit] => Save and Continue
        )

    If I echo $_SESSION['post']['formid'], it writes "2", and if I echo $_SESSION['post']['edu'], it returns "Array". If I check that edu is an array (is_array($_SESSION['post']['edu'])), it returns true. If I echo $_SESSION['post']['edu']['id'], it returns "Array", but when checked (is_array($_SESSION['post']['edu']['id'])) it returns false and I cannot echo out any of its elements. How do I successfully copy (by value, not by reference) the whole array (including its children) to a new array?

    Read the article

  • Developing cross platform mobile application

    - by sohilv
    More and more mobile platforms are being launched and sdk's are available to developers. There are various mobile platform are available, Android,iOS,Moblin,Windows mobile 7,RIM,symbian,bada,maemo etc. And making of corss platform application is headache for developers. I am searching common thing across the platforms which will help to developers who want to port application to all platforms.Like what is the diff screen resolution, input methods, open gl support etc. please share details that you know for the any of platform . or is there possibilities , by writing code in html (widget type of thing) and load it into native application. I know about the android , in which we can add the web view into application. by calling setContentView(view) Please share the class details where we can add the html view into native application of different type of platforms that you know. Purpose of this thread is share common details across developers. marking as community wiki. Cross platform tools & library XMLVM and iSpectrum (cross compile Java code from an Android app or creating one from scratch Phone Gap (cross platform mobile apps) Titanium (to build native mobile and desktop apps with web technologies) Mono Touch ( C# for iphone ) rhomobile - http://rhomobile.com/ samples are here: http://github.com/rhomobile/rhodes-system-api-samples Sencha Touch - Sencha Touch is a HTML5 mobile app framework that allows you to develop web apps that look and feel native on Apple iOS and Google Android touchscreen devices. http://www.sencha.com/products/touch/ Corona - Iphone/Ipad / Android application cross platform library . Too awesome. http://anscamobile.com/corona/ A guide to port existing Android app to Windows Phone 7 http://windowsphone.interoperabilitybridges.com/articles/windows-phone-7-guide-for-iphone-application-developers

    Read the article

  • magento payment process.. how it works in general

    - by spirytus
    Hi everyone, I've got a question and I hope this is the right place to ask it: I don't quite understand how payment works in Magento.

    A client goes to checkout and, let's say, wants to pay as a guest, so they provide an address etc. and finally get to the payment methods. I want clients to pay by credit card, and I already have a module installed for the gateway (bank?) of my choice. At that point I would expect users to be redirected to a third-party, bank-hosted page where they enter all the details, and only after that be returned to my Magento site with an appropriate message. In Magento, however, it seems like they need to provide credit card numbers and details on the Magento checkout page.

    I don't understand: do I (or the payment module I installed) then need to transfer all the credit card details to the bank? I would have to have the checkout page on an SSL connection and a static IP, right? The thing is, I want to avoid touching credit card numbers at any point and would love to have that handled by a bank page. I do like the idea of keeping the Magento interface all the way through without redirecting to another page, though; the only problem is I'm not sure I would be able to set it all up properly.

    If anyone could explain the possible options to me, what the common way to do it is, and how the whole process works, that would be very much appreciated. I did my research and looked all over Google and various forums, but I still need someone's help. Please let me know if some parts of my question are not quite clear; I will try to explain better if necessary.

    Read the article

  • Help with jQuery Validation plugin and a form that uses 'panels'

    - by alex
    I have a form that works in 'sections' that I will refer to as 'panels'. By default, the form is laid out on the page, one panel after the other. With JavaScript, however, it puts the panels into one panel viewer and displays them one after another (with prev/next buttons).

    Example form workflow: Panel 1: User Details -> Panel 2: User Location -> Panel 3: User Info -> Panel 4: Confirm Details

    I am using the jQuery Validation plugin. My problem is that I have set up the rules for all the inputs in the first 3 panels, and I'd like to be able to validate only a subset of them per panel. For example, when pushing 'next panel' after completing name and email (in the 1st, user details panel), I'd like to validate only that panel first, get a boolean response (whether the 1st panel validated), and if true, proceed to the next panel. I've played around with a bit of the config, but unfortunately could not get it to work the way I wanted. This is my first project with this plugin, so I'm quite new to it!

    Is there a way to add rules dynamically to the plugin, i.e. not on $('form').validate(options)? What I'd like to do is call validate() on the form with all the error messages, and then in the 'next panel' code, do a switch case to determine which rules to add, and then call validate() myself.

    Read the article

  • Converted PowerBuilder to ASP.Net browsing Errors

    - by user493325
    I had a PowerBuilder application which I converted to a web application in the form of ASP.NET (.aspx) files. After deploying and publishing the converted web application (copying it and granting the ASP.NET, Network Service, and IUSR accounts permissions so users can access it) on IIS 6.0 on Windows Server 2003, with ASP.NET version 2.0, the error message I get when I browse default.aspx is the following:

        Server Error in '/' Application.
        Runtime Error

        Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.

        Details: To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off".

        <!-- Web.Config Configuration File -->
        <configuration>
            <system.web>
                <customErrors mode="Off"/>
            </system.web>
        </configuration>

        Notes: The current error page you are seeing can be replaced by a custom error page by modifying the "defaultRedirect" attribute of the application's <customErrors> configuration tag to point to a custom error page URL.

        <!-- Web.Config Configuration File -->
        <configuration>
            <system.web>
                <customErrors mode="RemoteOnly" defaultRedirect="mycustompage.htm"/>
            </system.web>
        </configuration>

    Another error message that appears on the server is:

        Server Error in '/' Application.
        Configuration Error

        <roleManager enabled="true">
        <membership>
        </roleManager>

    Thanks in advance...

    Read the article

  • jQuery date picker not firing in ajax page using Rails

    - by prabu
    Hi, I am using the datepicker from jQuery UI. My public/javascripts folder contains the Rails defaults (effects, prototype, controls, dragdrop js files), and my public folder contains the jQuery UI bundle (css, js, development-bundle).

    In layouts/application.rhtml:

        <%= stylesheet_link_tag 'application' %>
        <%= javascript_include_tag :defaults %>
        <%= stylesheet_link_tag '/jquery-ui/css/custom-theme/jquery-ui-1.8.1.custom.css' %>
        <%= javascript_include_tag "/jquery-ui/js/jquery-1.4.2.min.js" %>
        <%= javascript_include_tag "/jquery-ui/js/jquery-ui-1.8.1.custom.min.js" %>
        <script>
            $(document).ready(function() {
                var $j = jQuery.noConflict();
                $j('#date').datepicker({ dateFormat: 'dd-mm-yy' });
            });
        </script>

    In home/index.rhtml:

        <% title "Home" %>
        <%= link_to "Add Details", :action => "add" %>
        <%= link_to_remote "Ajax Add Details", :update => "add", :url => { :action => "add" } %>
        <div id='add' />

    In home/add.rhtml:

        <% title "Add details" %>
        <% form_tag :action => "create" do %>
            Name : <%= text_field_tag "name", "", :size => 15 %>
            DOB : <%= text_field_tag "dob", "", :id => "date" %>
            <%= submit_tag "Save" %>
        <% end %>

    The datepicker works when I open home/add.rhtml directly, but it does not work when the page is loaded via the Ajax link from home/index.rhtml. Any solutions?

    Read the article
