Search Results

  • At the Java DEMOgrounds - JavaFX

    - by Janice J. Heiss
JavaFX has made rapid progress in the last year, as is evidenced by the wealth of demos on display. A few questions appear to be prominent in the minds of JavaFX enthusiasts. Here are some questions with answers provided by Oracle’s JavaFX team.

When will the rest of the JavaFX code be available in open source?
Oracle has started to open source JavaFX. The existing platform code will finish being committed to OpenJFX by the end of the year.

Why should I use JavaFX instead of HTML5?
We see JavaFX as complementary to HTML5, and most companies we talk to react positively once they understand how they can benefit from a hybrid solution. As most HTML5 developers will tell you, the biggest obstacle to deploying HTML5 applications is fragmentation. JavaFX offers a convenient way to render HTML and JavaScript within its WebView component, which provides the same level of quality and features across Windows, Mac, and Linux. Additionally, JavaScript in WebView can make calls into the Java code, and vice versa, allowing developers to tap into the best of both worlds.

What is the market penetration of JavaFX?
It is currently limited, as we only made JavaFX available on Mac and Linux in August, but we expect JavaFX to be present on millions of desktop-type systems now that JavaFX is included as part of the JRE. We have also significantly lowered the level of effort required to deploy an application bundling the JRE and JavaFX runtime libraries. Finally, we are seeing a lot of interest from companies operating in the embedded market, who have found it hard to develop compelling UIs with existing technologies.

Below are summaries of JavaFX demos on display at JavaOne 2012:

JavaFX Ensemble
Ensemble is a collection of over 100 JavaFX samples packaged as a JavaFX application. This demo is especially useful to those new to JavaFX, or those not familiar with its latest features (e.g., canvas, color picker). Ensemble is the reference for getting familiar with JavaFX functionality. Each sample can be run from within Ensemble, and the API for each sample, as well as the source code, are available alongside the sample. The samples' source code can be saved as a NetBeans project for convenience, or copied as-is into any other Java IDE. The version of Ensemble shown is packaged as a native Windows application, including the JRE and JavaFX libraries. It was created with the JavaFX packager, which provides multiple packaging options and frees developers from the cumbersome and error-prone process of packaging a Java application.

FX Experience Tools
FX Experience Tools is a JavaFX application that provides different utilities to create new skins for your JavaFX applications. One of the most powerful features of JavaFX is the ability to skin applications via CSS; since not all Java developers are familiar with CSS, these utilities are a great starting point for creating custom skins. FX Experience Tools makes it easy to create new themes for JavaFX applications, even if you are not familiar with CSS. FX Experience Tools is a JavaFX application packaged as a native application including the JRE and JavaFX runtime libraries, and it shows how this type of deployment simplifies the packaging of Java applications without requiring developers to master the intricacies of Java application packaging.
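Both Ensemble and FX Experience Tools rely on that same packaging step. As a rough sketch of what it looks like with the javafxpackager tool from the JavaFX SDK (the directory names and application class below are hypothetical, not taken from the actual demos):

% javafxpackager -deploy -native \
      -srcdir dist -outdir packages -outfile FXTools \
      -appclass com.example.fxtools.Main -name "FXTools"

The -native flag is what requests platform-specific bundles, such as a Windows installer, in addition to the usual jar and JNLP deployment artifacts.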
The download site for FX Experience Tools is http://fxexperience.com/2012/03/announcing-fx-experience-tools/

JavaFX Scene Builder
JavaFX Scene Builder is a visual layout tool that lets users quickly design the UI of their JavaFX applications without coding. Users can drag and drop UI components, modify their properties, and apply style sheets, and the FXML code for the layout is automatically generated in the background. The result is an FXML file that can then be combined with a Java project by binding the UI to the application’s logic. Developers can easily create user interfaces for their applications, as well as separate the application’s UI from the application logic for easier maintenance. Attendees can get this app by going to javafx.com and checking the link at the top of the “Overview” page. Scene Builder allows developers to easily lay out JavaFX UI controls, charts, shapes, and containers, so that you can quickly prototype user interfaces. It generates FXML, an XML-based markup language that enables users to define an application’s user interface separately from the application logic. Scene Builder can be used in combination with any Java IDE, but is more tightly integrated with NetBeans IDE. It is written as a JavaFX application, with native desktop integration on Windows and Mac OS X; it's a perfect example of a JavaFX application packaged as a native application. Scene Builder is available for your preferred development platform: besides the GA release on Windows and Mac, a Developer Preview of Scene Builder for Linux has just been made available.

Scenic View
Scenic View is a tool that can be used to understand the current state of your application UI and to easily manipulate properties of the scenegraph without having to keep editing your code. Creating UIs is a complex process, and it can be hard and tedious to detect issues, edit the code, and then recompile to test the app again. Scenic View is a great diagnostics tool that helps developers identify these issues and correct them at runtime. Attendees can get Scenic View by going to javafx.com, selecting the “Community” tab, and clicking the link under the “Third Party Tools and Utilities” section. Scenic View allows developers to easily examine the state of a JavaFX application scenegraph while the application is running. Some of the latest features added to Scenic View include event monitoring, javadoc browsing, and contextual menus. The download site for Scenic View is http://fxexperience.com/scenic-view/

Conference Tour
Conference Tour is an application that lets users discover some of the major Java conferences throughout the world. The Conference Tour application shows how simple it is to mix JavaFX and HTML5 into a single, interactive application. Attendees get Conference Tour here. JavaFX includes a Web engine based on WebKit that provides a consistent web interface to render HTML5 across operating systems, within a JavaFX application. JavaFX features a bi-directional bridge that allows Java APIs to call JavaScript within WebView, or allows JavaScript to make calls to Java APIs.
This allows developers to leverage the best of both worlds. Java EE developers can take advantage of WebView and the JavaScript-Java bridge to let their HTML clients seamlessly bypass the Web browser’s sandbox and access native system resources, providing a richer user experience.

FXMediaPlayer
FXMediaPlayer is an application that lets developers check different media functionality in JavaFX, such as the synthesizer or support for HTTP Live Streaming (HLS). This demo shows how developers can embed video content in their Java applications. JavaFX leverages the underlying video (e.g., H.264) and audio (e.g., AAC) codecs on the user’s computer, and JavaFX APIs allow developers to interact with the video content (e.g., play/pause, or programmable markers). Some of the latest media features introduced in JavaFX 2.2 include HTTP Live Streaming (HLS).

Obviously there is a lot for JavaFX enthusiasts to chew on!


  • What's up with LDoms: Part 4 - Virtual Networking Explained

    - by Stefan Hinker
I'm back from my summer break (and some pressing business that kept me away from this), ready to continue with Oracle VM Server for SPARC ;-) In this article, we'll have a closer look at virtual networking. Basic connectivity, as we've seen it in the first, simple example, is easy enough. But there are numerous options for the virtual switches and virtual network ports, which we will discuss in more detail now.

In this section, we will concentrate on virtual networking only: the capabilities of virtual switches and virtual network ports. Other options involving hardware assignment or redundancy will be covered in separate sections later on.

There are two basic components involved in virtual networking for LDoms: virtual switches and virtual network devices. The virtual switch should be seen just like a real ethernet switch. It "runs" in the service domain and moves ethernet packets back and forth. A virtual network device is plumbed in the guest domain. It corresponds to a physical network device in the real world. There, you'd plug a cable into the network port and plug the other end of that cable into a switch. In the virtual world, you do the same: you create a virtual network device for your guest and connect it to a virtual switch in a service domain. The result works just like in the physical world: the network device sends and receives ethernet packets, and the switch does all those things ethernet switches tend to do.

If you look at the reference manual of Oracle VM Server for SPARC, there are numerous options for virtual switches and network devices. Don't be confused; it's rather straightforward, really. Let's start with the simple case and work our way to some more sophisticated options later on. In many cases, you'll want to have several guests that communicate with the outside world on the same ethernet segment. In the real world, you'd connect each of these systems to the same ethernet switch. So, let's do the same thing in the virtual world:

root@sun # ldm add-vsw net-dev=nxge2 admin-vsw primary
root@sun # ldm add-vnet admin-net admin-vsw mars
root@sun # ldm add-vnet admin-net admin-vsw venus

We've just created a virtual switch called "admin-vsw" and connected it to the physical device nxge2. In the physical world, we'd have powered up our ethernet switch and installed a cable between it and our big enterprise datacenter switch. We then created a virtual network interface for each one of the two guest systems "mars" and "venus" and connected both to that virtual switch. They can now communicate with each other and with any system reachable via nxge2. If primary were running Solaris 10, communication with the guests would not be possible. This is different with Solaris 11; please see the Admin Guide for details. Note that I've given both the vswitch and the vnet devices some sensible names, something I always recommend.

Unless told otherwise, the LDoms Manager software will automatically assign MAC addresses to all network elements that need one. It will also make sure that these MAC addresses are unique and reuse MAC addresses to play nice with all those friendly DHCP servers out there. However, if we want to do this manually, we can also do that. (One reason might be firewall rules that work on MAC addresses.) So let's give mars a manually assigned MAC address:

root@sun # ldm set-vnet mac-addr=0:14:4f:f9:c4:13 admin-net mars
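Before moving on, it's worth a quick sanity check of what we just built. The LDoms Manager's list commands will show the new switch and the guests' vnet bindings; a minimal check, assuming the names from the example above, might be:

root@sun # ldm list-services primary     # the VSW section should show admin-vsw backed by nxge2
root@sun # ldm list-bindings mars        # the NETWORK section should show admin-net bound to admin-vsw

Both are standard ldm subcommands; the comments just describe what to look for in the output.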
Within the guest, these virtual network devices have their own device driver. In Solaris 10, they'd appear as "vnet0". Solaris 11 would apply its usual vanity naming scheme. We can configure these interfaces just like any normal interface, give them an IP address, and configure sophisticated routing rules, just like on bare metal.

In many cases, using Jumbo Frames helps increase throughput performance. By default, these interfaces will run with the standard ethernet MTU of 1500 bytes. To change this, it is usually sufficient to set the desired MTU for the virtual switch. This will automatically set the same MTU for all vnet devices attached to that switch. Let's change the MTU size of our admin-vsw from the example above:

root@sun # ldm set-vsw mtu=9000 admin-vsw primary

Note that you can set the MTU to any value between 1500 and 16000. Of course, whatever you set needs to be supported by the physical network, too.

Another very common area of network configuration is VLAN tagging. This can be a little confusing; my advice here is to be very clear on what you want, and perhaps draw a little diagram the first few times. As always, keeping a configuration simple will help avoid errors of all kinds. Nevertheless, VLAN tagging is very useful for consolidating different networks onto one physical cable, and as such, this concept needs to be carried over into the virtual world. Enough of the introduction; here's a little diagram to help in explaining how VLANs work in LDoms:

Let's remember that any VLANs not explicitly tagged have the default VLAN ID of 1. In this example, we have a vswitch connected to a physical network that carries untagged traffic (VLAN ID 1) as well as VLANs 11, 22, 33 and 44. There might also be other VLANs on the wire, but the vswitch will ignore all those packets. We also have two vnet devices, one for mars and one for venus. Venus will see traffic from VLANs 33 and 44 only. For VLAN 44, venus will need to configure a tagged interface "vnet44000". For VLAN 33, the vswitch will untag all incoming traffic for venus, so that venus will see this as "normal" or untagged ethernet traffic. This is very useful to simplify guest configuration, and it also allows venus to perform Jumpstart or AI installations over this network even if the Jumpstart or AI server is connected via VLAN 33. Mars, on the other hand, has full access to untagged traffic from the outside world, and also to VLANs 11, 22 and 33, but not 44. On the command line, we'd do this like this:

root@sun # ldm add-vsw net-dev=nxge2 pvid=1 vid=11,22,33,44 admin-vsw primary
root@sun # ldm add-vnet admin-net pvid=1 vid=11,22,33 admin-vsw mars
root@sun # ldm add-vnet admin-net pvid=33 vid=44 admin-vsw venus
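Note that for VLAN 44, the guest still has to bring up the tagged interface itself. As a sketch, assuming a Solaris 10 guest and a purely hypothetical address, plumbing the tagged interface "vnet44000" mentioned above might look like this:

root@venus # ifconfig vnet44000 plumb 192.168.44.10 netmask 255.255.255.0 up

On a Solaris 11 guest you would instead create the VLAN link with dladm create-vlan and then configure an address on it with ipadm.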
Finally, I'd like to point to a neat little option that will make your life easier in all those cases where configurations tend to change over the life of a guest system. It's the "id=<somenumber>" option available for both vswitches and vnet devices. Normally, Solaris in the guest would enumerate network devices sequentially. However, it has ways of remembering this initial numbering. This is good in the physical world. In the virtual world, whenever you unbind (aka power off and disassemble) a guest system, remove and/or add network devices, and bind the system again, chances are this numbering will change. Configuration confusion will follow suit. To avoid this, nail down the initial numbering by assigning each vnet device its device-id explicitly:

root@sun # ldm add-vnet admin-net id=1 admin-vsw venus

Please consult the Admin Guide for details on this, and on how to decipher these network ids from Solaris running in the guest.

Thanks for reading this far. Links for further reading are essentially only the Admin Guide and Reference Manual and can be found above. I hope this is useful and, as always, I welcome any comments.


  • Complex type support in process flow – XMLTYPE

    - by shawn
Before the OWB 11.2 release, only 5 simple data types were supported in process flow: DATE, BOOLEAN, INTEGER, FLOAT and STRING. A new complex data type, XMLTYPE, was added in 11.2 in order to support complex data being passed between process flow activities. In this article we will give a simple example to illustrate the usage of the new type and some related editors.

Suppose there is a bookstore that uses XML-format orders as shown below (we use the simplest form for illustration purposes). We can then create a process flow to handle the order: take the order as the input, extract the necessary information, and generate a confirmation email to the customer automatically.

<order id="0001">
    <customer>
        <name>Tom</name>
        <email>[email protected]</email>
    </customer>
    <book id="Java_001">
        <quantity>3</quantity>
    </book>
</order>

Consider a simple use case: we use an input parameter/variable with XMLTYPE to hold the XML content of the order; then we use an Assign activity to retrieve the email info from the order; after that, we create an email activity to send the email (other activities might be added in a practical case, but will not be described here).

1) Set the XML content value

For testing purposes, we will create a variable to hold the sample order, and this will then be used among the process flow activities. When the variable is of XMLTYPE and its "Literal" value is set to true, the advanced editor is enabled. Click the "Advanced Editor" shown as above, and a simple XML editor will pop up. The editor has basic features like syntax highlighting and checking, as shown below. We can also do basic validation, or validation against a schema, with the editor by selecting the normalized schema. With this, it is easier to provide the value for XMLTYPE variables.

2) Extract information from the XML content

After setting the value, we need to extract the email information with the Assign activity. In process flow, an enhanced expression builder is used to help users construct the XPath for extracting values from XML content. When the variable's literal value is set to false, the advanced editor is enabled. Click the button, and the advanced editor will pop up, as shown below. The editor is based on the expression builder (which is often used in mappings etc.); an XPath library panel is appended which provides some help information on how to write the XPath. The expression used here is "XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/email/text()').getStringVal()", which uses '/order/customer/email/text()' as the XPath to extract the email info from the XML document.
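Incidentally, you can sanity-check such an expression outside of OWB before wiring it into the flow. A throwaway query against the sample order, run from SQL*Plus or any SQL client (this is just a test aid, not part of the process flow itself):

SELECT XMLType('<order id="0001"><customer><name>Tom</name><email>[email protected]</email></customer></order>')
           .extract('/order/customer/email/text()').getStringVal() AS email
  FROM dual;

If the XPath is correct, the query returns the email address as a plain string.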
A variable called "EMAIL_ADDR" is created with the String data type to hold the extracted value. Then we bind the "VARIABLE" parameter of the Assign activity to the "EMAIL_ADDR" variable, which means the value of the "EMAIL_ADDR" variable will be set to the result of the "VALUE" parameter of the Assign activity.

3) Use the extracted information in the Email activity

We bind the "TO_ADDRESS" parameter of the email activity to the "EMAIL_ADDR" variable created in the step above. We can also extract other information from the XML order directly through the expression; for example, we can set the "MESSAGE_BODY" with the value "'Dear '||XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/name/text()').getStringVal()||chr(13)||chr(10)||'   You have ordered '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/quantity/text()').getStringVal()||' '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/@id').getStringVal()". This expression extracts the customer name, the quantity and the book id from the order to compose the message body.

To make the email activity work, we need to provide some other necessary information, such as "SMTP_SERVER" (the SMTP server used to send the emails, like "mail.bookstore.com"; the default PORT number is set to 25, so change the value if your server differs), "FROM_ADDRESS" and "SUBJECT". Then the process flow is ready to go. After deploying the process flow package, we can simply run the process flow to check if the result is as expected (an email will be sent to the specified email address with the proper subject and message body).

Note: In Oracle 11g, there is an enhanced security feature, ACL (Access Control List), which restricts network access from within the database, so we need to edit the list to allow UTL_SMTP to work if you are using Oracle 11g. Refer to the chapters "Access Control Lists for UTL_TCP/HTTP/SMTP" and "Managing Fine-Grained Access to External Network Services" for more details.

In previous releases, XMLTYPE already existed in other OWB objects, like mappings and transformations. When a mapping/transformation was dragged into a process flow, its parameters with XMLTYPE were mapped to STRING. Now, with XMLTYPE support in process flow, XMLTYPE maps to XMLTYPE in a more natural way, and we can leverage the new data type for the design.


  • DTracing a PHPUnit Test: Looking at Functional Programming

    - by cj
Here's a quick example of using DTrace dynamic tracing to work out what a PHP code base does. I was reading the article Functional Programming in PHP by Patkos Csaba and wondering how efficient this style of programming is. I thought this would be a good time to fire up DTrace and see what is going on. Since DTrace is "always available", even on production machines (once PHP is compiled with --enable-dtrace), this was easy to do.

I have Oracle Linux with the UEK3 kernel and PHP 5.5 with DTrace static probes enabled, as described in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. I installed the Functional Programming sample code and Sebastian Bergmann's PHPUnit. Although PHPUnit is included in the Functional Programming example, I found it easier to separately download and use its phar file:

cd ~/Desktop
wget -O master.zip https://github.com/tutsplus/functional-programming-in-php/archive/master.zip
wget https://phar.phpunit.de/phpunit.phar
unzip master.zip

I created a DTrace D script, functree.d:

#pragma D option quiet

self int indent;

BEGIN
{
    topfunc = $1;
}

php$target:::function-entry
/copyinstr(arg0) == topfunc/
{
    self->follow = 1;
}

php$target:::function-entry
/self->follow/
{
    self->indent += 2;
    printf("%*s %s%s%s\n", self->indent, "->",
        arg3?copyinstr(arg3):"", arg4?copyinstr(arg4):"", copyinstr(arg0));
}

php$target:::function-return
/self->follow/
{
    printf("%*s %s%s%s\n", self->indent, "<-",
        arg3?copyinstr(arg3):"", arg4?copyinstr(arg4):"", copyinstr(arg0));
    self->indent -= 2;
}

php$target:::function-return
/copyinstr(arg0) == topfunc/
{
    self->follow = 0;
}

This prints a PHP script function call tree starting from a given PHP function name. This name is passed as a parameter to DTrace and assigned to the variable topfunc when the DTrace script starts. With this D script, choose a PHP function that isn't recursive, or modify the script to set self->follow = 0 only when all calls to that function have unwound.

From looking at the sample FunSets.php code and its PHPUnit test driver FunSetsTest.php, I settled on one test function to trace:

function testUnionContainsAllElements() { ... }

I invoked DTrace to trace function calls invoked by this test with:

# dtrace -s ./functree.d -c 'php phpunit.phar \
    /home/cjones/Desktop/functional-programming-in-php-master/FunSets/Tests/FunSetsTest.php' \
    '"testUnionContainsAllElements"'

The core of this command is a call to PHP to run PHPUnit on the FunSetsTest.php script. Outside that, DTrace is called and the PID of PHP is passed to the D script $target variable so the probes fire just for this invocation of PHP. Note the quoting around the PHP function name passed to DTrace: the parameter must have double quotes included so DTrace knows it is a string.

The output is:

PHPUnit 3.7.28 by Sebastian Bergmann.
......
-> FunSetsTest::testUnionContainsAllElements
  -> FunSets::singletonSet
  <- FunSets::singletonSet
  -> FunSets::singletonSet
  <- FunSets::singletonSet
  -> FunSets::union
  <- FunSets::union
  -> FunSets::contains
    -> FunSets::{closure}
      -> FunSets::contains
        -> FunSets::{closure}
        <- FunSets::{closure}
      <- FunSets::contains
    <- FunSets::{closure}
  <- FunSets::contains
  -> PHPUnit_Framework_Assert::assertTrue
    -> PHPUnit_Framework_Assert::isTrue
    <- PHPUnit_Framework_Assert::isTrue
    -> PHPUnit_Framework_Assert::assertThat
      -> PHPUnit_Framework_Constraint::count
      <- PHPUnit_Framework_Constraint::count
      -> PHPUnit_Framework_Constraint::evaluate
        -> PHPUnit_Framework_Constraint_IsTrue::matches
        <- PHPUnit_Framework_Constraint_IsTrue::matches
      <- PHPUnit_Framework_Constraint::evaluate
    <- PHPUnit_Framework_Assert::assertThat
  <- PHPUnit_Framework_Assert::assertTrue
  -> FunSets::contains
    -> FunSets::{closure}
      -> FunSets::contains
        -> FunSets::{closure}
        <- FunSets::{closure}
      <- FunSets::contains
      -> FunSets::contains
        -> FunSets::{closure}
        <- FunSets::{closure}
      <- FunSets::contains
    <- FunSets::{closure}
  <- FunSets::contains
  -> PHPUnit_Framework_Assert::assertTrue
    -> PHPUnit_Framework_Assert::isTrue
    <- PHPUnit_Framework_Assert::isTrue
    -> PHPUnit_Framework_Assert::assertThat
      -> PHPUnit_Framework_Constraint::count
      <- PHPUnit_Framework_Constraint::count
      -> PHPUnit_Framework_Constraint::evaluate
        -> PHPUnit_Framework_Constraint_IsTrue::matches
        <- PHPUnit_Framework_Constraint_IsTrue::matches
      <- PHPUnit_Framework_Constraint::evaluate
    <- PHPUnit_Framework_Assert::assertThat
  <- PHPUnit_Framework_Assert::assertTrue
  -> FunSets::contains
    -> FunSets::{closure}
      -> FunSets::contains
        -> FunSets::{closure}
        <- FunSets::{closure}
      <- FunSets::contains
      -> FunSets::contains
        -> FunSets::{closure}
        <- FunSets::{closure}
      <- FunSets::contains
    <- FunSets::{closure}
  <- FunSets::contains
  -> PHPUnit_Framework_Assert::assertFalse
    -> PHPUnit_Framework_Assert::isFalse
      -> {closure}
        -> main
        <- main
      <- {closure}
    <- PHPUnit_Framework_Assert::isFalse
    -> PHPUnit_Framework_Assert::assertThat
      -> PHPUnit_Framework_Constraint::count
      <- PHPUnit_Framework_Constraint::count
      -> PHPUnit_Framework_Constraint::evaluate
        -> PHPUnit_Framework_Constraint_IsFalse::matches
        <- PHPUnit_Framework_Constraint_IsFalse::matches
      <- PHPUnit_Framework_Constraint::evaluate
    <- PHPUnit_Framework_Assert::assertThat
  <- PHPUnit_Framework_Assert::assertFalse
<- FunSetsTest::testUnionContainsAllElements
...

Time: 1.85 seconds, Memory: 3.75Mb

OK (9 tests, 23 assertions)

The periods correspond to the successful tests before and after (and from) the test I was tracing. You can see the function entry ("->") and return ("<-") points. Cross checking with the testUnionContainsAllElements() source code confirms the two singletonSet() calls, one union() call, two assertTrue() calls, and finally an assertFalse() call. These assertions have a contains() call as a parameter, so contains() is called before the PHPUnit assertion functions are run. You can see contains() being called recursively, and how the closures are invoked.

If you want to focus on the application logic and suppress the PHPUnit function trace, you could turn off tracing when assertions are being checked by adding D clauses checking the entry and exit of assertFalse() and assertTrue().
But if you want to see all of PHPUnit's code flow, you can modify the functree.d code that sets and unsets self->follow, and instead toggle the variable in request-startup and request-shutdown probes:

php$target:::request-startup
{
    self->follow = 1
}

php$target:::request-shutdown
{
    self->follow = 0
}

Be prepared for a large amount of output!
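One final tip: before tracing a long run, it can save time to confirm the PHP provider's probes are visible to DTrace at all. Assuming PHP was built with --enable-dtrace, listing the probes against a trivial command should show them:

# dtrace -l -c 'php -v' -n 'php$target:::'

If function-entry, function-return and the request probes used above don't appear in the listing, the scripts here will match nothing.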


  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found. After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context. We label these activities Modes: explore, compare, and comprehend are three of the nine recognizable modes. Modes describe *how* people go about realizing insights. (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery)

By analogy to languages, modes are the 'verbs' of discovery activity. When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities to which specific types of information visualizations better enable these scenarios for the types of data users will analyze.

The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficulty of communicating using only verbs. So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics. Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into. We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery, as with the Modes.

Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sensemaking needs and activity in a more specific, consistent, and complete fashion. In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals. For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject]’ clearly captures the nature of the activity (exploration of trends vs. deep analysis of underlying factors) and the central focus (attrition rates for customers above a certain set of size criteria), from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods.

We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides of the perspective divide: for example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis. The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can we innovate, by expanding our product portfolio to meet unmet needs?”. Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis. From the perspective of analysis, this second question might become, “Which customer needs of type ‘A’, identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer ‘X’ potential for ‘Y’ positive return on the investment ‘Z’ required to launch a new offering, in time frame ‘W’? And how do these compare to each other?”. Translation also happens from the perspective of analysis to the perspective of data: in terms of availability, quality, completeness, format, volume, etc.

By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects. This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one. (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.)

Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain. As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics.

The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject. Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (data-centric, and blue) or people (user-centric, and orange). This axis is shown explicitly above the Spectrum. Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity. This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity.

Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets. For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks. Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.

The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible. Accordingly, business analytics as a domain is structured around the fundamental assumption that sensemaking depends on analytical transformation of data. Analytical activity incrementally synthesizes more complex and larger-scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data and recombined with other Subjects. The end goal of ‘laddering’ successive transformations is to enable sensemaking from the business perspective, rather than the analytical perspective.

Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question, such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (a Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business. More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders. These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer. The fulfillment process will involve other types of Entities, such as the products or services the business provides. The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included. All of this may be interpreted through an understanding of the operational domain of the business's supply chain (a Domain).

We'll discuss the Spectrum in more depth in succeeding posts.


  • DTracing TCP congestion control

    - by user12820842
In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received: the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver.

However, consider that without the sender imposing flow control, and with a slow link to a peer, TCP will simply fill up its window with sent segments. With multiple TCP implementations filling their peer TCPs' receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need to have some way of adjusting how much data we send depending on how quickly we receive acknowledgement: if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

Slow Start and Congestion Avoidance

From RFC2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission."

Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold (ssthresh) value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements: we send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc.

When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.
Congestion control algorithms differ most in how they handle the other indication of congestion: duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since duplicate ACKs often come from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out of order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. Retransmitting on this signal is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized; see here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead.

Observing slow start and congestion avoidance

The following script counts TCP segments sent under slow start (cwnd <= ssthresh) and under congestion avoidance (cwnd > ssthresh):

#!/usr/sbin/dtrace -s

#pragma D option quiet

tcp:::connect-request
/ start[args[1]->cs_cid] == 0 /
{
    start[args[1]->cs_cid] = 1;
}

tcp:::send
/ start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd <= args[3]->tcps_cwnd_ssthresh /
{
    @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh /
{
    @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
}

As we can see, the script only works on connections initiated after it is started, using the start[] associative array, with the connection ID as index, to record whether it's a new connection (start[cid] = 1). From there we simply differentiate send events where cwnd <= ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system.

# dtrace -s tcp_slow_start.d
^C
ALGORITHM             RADDR            RPORT   #SEG
Slow start            10.153.125.222      20      6
Slow start            138.3.237.7         80     14
Slow start            10.153.125.222      21     18
Congestion avoidance  10.153.125.222      20   1164

We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start.

For the default congestion control algorithm on Solaris 11 - "newreno" - slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently: for example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS. And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.
# dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
dtrace: description 'tcp:::send ' matched 10 probes
^C

       80
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@                                   5
               1 |                                         0
               2 |                                         0
               4 |                                         0
               8 |                                         0
              16 |                                         0
              32 |                                         0
              64 |                                         0
             128 |                                         0
             256 |                                         0
             512 |                                         0
            1024 |                                         0
            2048 |@@@@@@@@@                                8
            4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
            8192 |                                         0
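The same idea extends to the slow start threshold. Since tcps_cwnd_ssthresh is already part of the tcpsinfo argument used in the script above, a one-line variation will show how ssthresh is distributed per remote port too:

# dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd_ssthresh); }'

Comparing the two distributions side by side is a quick way to spot ports where cwnd is being held down by a recently reset ssthresh.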


  • Delivering the Integrated Portal Experience!

    - by Michael Snow
Guest post by Richard Maldonado, Principal Product Manager, Oracle WebCenter Portal

Organizations are still struggling to standardize on a user interaction platform which can meet the needs of all their target audiences. This has not only resulted in inefficient and inconsistent experiences for their users, but it also creates inefficiencies (productivity and costs) for the departments that manage the applications and information systems. Portals have historically been the unifying platform that provides IT with a common interface which can securely surface the most relevant interactions for a given user and/or group of users. However, organizations have found that the available technologies either do not provide the flexibility necessary to address all of their use cases, or rely too much on IT resources to manage, maintain, and evolve.

Empowering the Business Groups

The core issue that IT departments face with delivering portal experiences is having enough resources to respond to and address the influx of requirements which come in from the business. Commonly, when a business group wants a new portal site established for their group, they submit a request to the IT department, and the IT department then assigns the work to an administrator and/or developer to build. Unfortunately, this approach is not scalable; it can be a time-consuming activity which requires significant interaction between the business owner and the IT resource. A modern user interaction platform should empower the business groups by providing them tools which they can use to build and manage the portal experiences without the need for IT's involvement. And because business groups rarely have technical resources (developers) on staff, the tools must be easy enough that virtually any business user could use them. In addition, the tools must be powerful enough to allow users to build the experience that they need: creating a whole new portal, adding and managing pages and page hierarchy, managing user/group access, adding and modifying components within a page, etc. This balance between ease-of-use and flexibility is key to the successful adoption of tools which will ultimately reduce the burden on IT, respond to the needs of the business, and deliver high-value experiences for the users.

Ready or Not, Here They Come: Smartphones and Tablets

Recently, several studies have highlighted that smartphone and tablet-style devices have overtaken PCs in both sales and usage. This shift is further driving organizations to reevaluate how they're delivering data, information, and applications to their users. Users are expecting to get the same level of access and interaction, but in ways which are optimized for the capabilities of the device that they are using.

Expect More

With the ever-growing number of new IT projects and flat or shrinking budgets, organizations are looking for comprehensive solutions which can deliver integrated web experiences that are tailored for the users and optimized for mobile devices. Piecing together a number of point solutions is no longer an option. A modern portal technology should not only address the traditional needs of integrating and surfacing back-end applications and information, but should also enable the business through easy-to-use tools and accelerate the delivery of mobile-optimized experiences.
WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, featuring Qualcomm & Keste


  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform. Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana, as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform. In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud. But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.

In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes. But as a developer, it can still be difficult to find the systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated. We've added virtual resources over the years as well, in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones). Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them. Sounds like pretty much every traditional IT shop, right? Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing. As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to provide more (and more efficient) development and test resources for the organization, as well as a test environment for Solaris OpenStack.

At this point, let's acknowledge one fact: deploying OpenStack is hard. It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files. The web UI, Horizon, doesn't often do a good job of providing detailed errors. Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps. I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started, so I at least came to this job with some appreciation for what I was taking on. The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment. I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment. For some situations, it may in fact be all you ever need. If so, you don't need to read the rest of this series of posts!
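(One practical aside: on Solaris 11.2 the OpenStack components are delivered as IPS packages, so priming a node is essentially a package install. The exact package name below is my assumption to verify against the Solaris 11.2 documentation for your publisher:

# pkg install openstack

From there, the individual OpenStack services are managed as SMF services like any other Solaris software.)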
In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide. We need to support both SPARC and x86 VMs, and we have hundreds of developers, so we want to be able to scale to support thousands of VMs, though we're going to build to that scale over time, not immediately. We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development, so that we can work out any upgrade issues before release. One thing we don't have is a requirement for extremely high availability, at least at this point. We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones. Thus I didn't need to spend effort on trying to get high availability everywhere.

The diagram below shows our initial deployment design. We're using six systems, most of which are x86 because we had more of those immediately available. All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there). A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks.

One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler, as well as the Horizon console. We're curious how this will perform, and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.

I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack. The other node with lots of disks provides the Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.

There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests. We don't have any need for firewalling in this deployment, so we're not doing so. We presently have only two tenants defined: one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris. Each tenant has one VxLAN defined initially, but we can of course add more. Right now we have just a single /24 network for the floating IPs; once demand gets to where we need more, we'll add them.

Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2. We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.

My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes. After that we'll get into the specifics of configuring and running OpenStack itself.

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle engineer Chris Kasso and technology evangelist Arun Gupta presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth, each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!

    Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby; Adam is someone I have been fortunate to know for many years.

    GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information, offering tips intended for both seasoned and new users and meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish, and attendees were encouraged to pursue the tips that contained new information for them. All 50 tips can be accessed here. Below are several examples of the more elaborate tips, and a final practical tip on how to get in touch with these folks.

    Tip #1: Using the login Command
    * To execute a remote command with asadmin you must provide the admin's user name and password.
    * The login command allows you to store the login credentials to be reused in subsequent commands.
    * You can be logged into multiple servers (distinguished by host and port).
    Example:
      % asadmin --host ouch login
      Enter admin user name [default: admin]>
      Enter admin password>
      Login information relevant to admin user name [admin]
      for host [ouch] and admin port [4848] stored at
      [/Users/ckasso/.asadminpass] successfully.
      Make sure that this file remains protected.
      Information stored in this file will be used by
      asadmin commands to manage the associated domain.
      Command login executed successfully.
      % asadmin --host ouch list-clusters
      c1 not running
      Command list-clusters executed successfully.

    Tip #4: Using the AS_DEBUG Env Variable
    * Environment variable to control client-side debug output.
    * Exposes: command processing info, the URL used to access the command (e.g. http://localhost:4848/__asadmin/uptime), and the raw response from the server.
    Example:
      % export AS_DEBUG=true
      % asadmin uptime
      CLASSPATH= ./../glassfish/modules/admin-cli.jar
      Commands: [uptime]
      asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm
      ------- RAW RESPONSE  ---------
       Signature-Version: 1.0
       message: Up 7 mins 10 secs
       milliseconds_value: 430194
       keys: milliseconds
       milliseconds_name: milliseconds
       use-main-children-attribute: false
       exit-code: SUCCESS
      ------- RAW RESPONSE  ---------

    Tip #11: Using Password Aliases
    * Some resources require a password to access (e.g. DB, JMS, etc.); the resource connector is defined in the domain.xml.
    Example: suppose the DB resource you wish to access requires an entry like this in the domain.xml:
      <property name="password" value="secretp@ssword"/>
    But company policies do not allow you to store the password in the clear.
    * Use password aliases to avoid storing the password in the domain.xml.
    * Create a password alias:
      % asadmin create-password-alias DB_pw_alias
      Enter the alias password>
      Enter the alias password again>
      Command create-password-alias executed successfully.
    * The password is stored in the domain's encrypted keystore.
    * Now update the password value in the domain.xml:
      <property name="password" value="${ALIAS=DB_pw_alias}"/>

    Tip #21: How to Start GlassFish as a Service
    * Configuring a server to automatically start at boot can be tedious, and each platform does it differently.
    * The create-service command makes this easy: on Windows it creates a Windows service, on Linux an /etc/init.d script, and on Solaris a Service Management Facility (SMF) service.
    * You must execute create-service with admin privileges.
    * Can be used for the DAS or instances.
    * Try it first with the --dry-run option.
    * There is an (unsupported) _delete-server.
    Example:
      # asadmin create-service domain1
      The Service was created successfully. Here are the details:
      Name of the service: application/GlassFish/domain1
      Type of the service: Domain
      Configuration location of the service: /work/gf-3.1.2.2/glassfish3/glassfish/domains
      Manifest file location on the system: /var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml.
      You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:
      * /usr/bin/svcs -a | grep domain1    // status
      * /usr/sbin/svcadm enable domain1    // start
      * /usr/sbin/svcadm disable domain1   // stop
      * /usr/sbin/svccfg delete domain1    // uninstall

    Tip #34: Posting a Command via REST
    * Use wget/curl to execute commands on the DAS.
    Example: deploying an application
      % curl -s -S \
          -H 'Accept: application/json' -X POST \
          -H 'X-Requested-By: anyvalue' \
          -F id=@/path/to/application.war \
          -F force=true http://localhost:4848/management/domain/applications/application
    * Use @ before a file name to tell curl to send the file's contents.
    * The force option tells GlassFish to force the deployment in case the application is already deployed.

    Tip #46: Upgrading to a Newer Version
    * Upgrades applications and configuration from an earlier version.
    * Upgrade Tool (side-by-side upgrade): GUI: asupgrade; CLI: asupgrade --c. What happens? It copies the older source domain to the target domain directory, then runs asadmin start-domain --upgrade.
    * Update Tool and pkg (in-place upgrade): GUI: updatetool, install all Available Updates; CLI: pkg image-update. Then upgrade the domain with asadmin start-domain --upgrade.

    Tip #50: How to Reach Us
    * GlassFish Forum: http://www.java.net/forums/glassfish/glassfish
    * [email protected]
    * @glassfish
    * facebook.com/glassfish
    * youtube.com/GlassFishVideos
    * blogs.oracle.com/theaquarium

    Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session; the best way to reach them is on the GlassFish user forum. In addition, check out Gupta's new book, Java EE 6 Pocket Guide.
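
    Tip #34's REST endpoint can be driven from any HTTP client, not only curl. As a hedged illustration (not from the session), here is the same deployment call sketched in C# with HttpClient, assuming the endpoint and form fields shown above; the archive path is a placeholder:

      using System;
      using System.IO;
      using System.Net.Http;

      class DeployToGlassFish
      {
          static void Main()
          {
              using (var client = new HttpClient())
              using (var form = new MultipartFormDataContent())
              {
                  client.DefaultRequestHeaders.Add("Accept", "application/json");
                  client.DefaultRequestHeaders.Add("X-Requested-By", "anyvalue");

                  // 'id' carries the archive itself; 'force' allows redeployment
                  form.Add(new ByteArrayContent(File.ReadAllBytes("/path/to/application.war")),
                           "id", "application.war");
                  form.Add(new StringContent("true"), "force");

                  var response = client.PostAsync(
                      "http://localhost:4848/management/domain/applications/application",
                      form).Result; // blocking call for brevity in this sketch
                  Console.WriteLine(response.StatusCode);
              }
          }
      }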

    Read the article

  • End of support for Oracle Database 10.2.0.4

    - by gadi.chen
    DBAs responsible for Oracle E-Business Suite environments should pay close attention to the database support dates: error correction support for Oracle Database 10.2.0.4 ends on 30-Apr-2011. This matters for EBS customers on extended support; EBS 11.5.10.2, for example, has 01-Dec-2011 as its key extended support date (see "Minimum Baseline For Extended Support", Note 883202.1). To remain within the extended support baseline, at least the following patch levels are required:

    # ATG.RUP6
    # Forms6i Patchset 19
    # JRE 1.6.0_03

    After 30-Apr-2011, new patches will no longer be produced for 10.2.0.4; subsequent patches will require the 10.2.0.5 patchset. EBS customers on 10.2.0.4 therefore have three main options:

    1. Upgrade the database to 11.2.0.2, which is certified with both EBS 11i and R12.
    2. Upgrade the database to 11.1.0.7, the terminal patchset of the 11.1 release.
    3. Apply the 10.2.0.5 patchset to remain on the 10gR2 release.

    Further details in the following links:
    http://blogs.oracle.com/stevenChan/2011/01/ecs_10gr2_10204.html
    On Database Patching and Support: A Primer for E-Business Suite Users
    Oracle Database 10.2 End of Premier Support -- Frequently Asked Questions (Note 1130327.1)

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then working out where your connections are pointing can sometimes be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

      Imports System
      Imports Microsoft.SqlServer.Dts.Runtime

      Public Class ScriptMain
          Public Sub Main()
              Dim fireAgain As Boolean

              ' Get the connection string; we need to know the name of the connection
              Dim connectionName As String = "My OLE-DB Connection"
              Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

              ' Format the message and log it via an information event
              Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                  connectionName, connectionString)
              Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

              Dts.TaskResult = Dts.Results.Success
          End Sub
      End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

      Imports System
      Imports Microsoft.SqlServer.Dts.Runtime

      Public Class ScriptMain
          Public Sub Main()
              Dim fireAgain As Boolean

              ' Loop through all connections in the package
              For Each connection As ConnectionManager In Dts.Connections
                  ' Get the connection string and log it via an information event
                  Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                      connection.Name, connection.ConnectionString)
                  Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
              Next

              Dts.TaskResult = Dts.Results.Success
          End Sub
      End Class

    Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and also allows you to readily control the logging by choosing which events to log in the normal way.

    Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similarly sensitive property, is always defined as write-only; secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won't find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish there was a custom log entry you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did, however, remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference: "Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses." It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being generated? You need to turn it on; by default no custom log events are captured, but I'll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.
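
    The examples above are VB; if your package targets SSIS 2008 or later, the same Script Task can be written in C#. A minimal sketch of the loop-all-connections variant (the designer-generated ScriptMain attribute and base class are elided here):

      using System;
      using Microsoft.SqlServer.Dts.Runtime;

      public partial class ScriptMain
      {
          public void Main()
          {
              bool fireAgain = false;

              // Loop through all connection managers in the package
              foreach (ConnectionManager connection in Dts.Connections)
              {
                  // Format the message and raise it as an Information event
                  string message = String.Format(
                      "Connection \"{0}\" has a connection string of \"{1}\".",
                      connection.Name, connection.ConnectionString);
                  Dts.Events.FireInformation(0, "Information", message, null, 0, ref fireAgain);
              }

              Dts.TaskResult = (int)DTSExecResult.Success;
          }
      }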

    Read the article

  • Matplotlib pick event order for overlapping artists

    - by Ajean
    I'm hitting a very strange issue with matplotlib pick events. I have two artists that are both pickable and are non-overlapping to begin with ("holes" and "pegs"). When I pick one of them, during the event handling I move the other one to where I just clicked (moving a "peg" into the "hole"). Then, without doing anything else, a pick event from the moved artist (the peg) is generated even though it wasn't there when the first event was generated. My only explanation for it is that somehow the event manager is still moving through artist layers when the event is processed, and therefore hits the second artist after it is moved under the cursor. So then my question is - how do pick events (or any events for that matter) iterate through overlapping artists on the canvas, and is there a way to control it? I think I would get my desired behavior if it moved from the top down always (rather than bottom up or randomly). I haven't been able to find sufficient enough documentation, and a lengthy search on SO has not revealed this exact issue. Below is a working example that illustrates the problem, with PathCollections from scatter as pegs and holes:

      import matplotlib.pyplot as plt
      import sys

      class peg_tester():
          def __init__(self):
              self.fig = plt.figure(figsize=(3,1))
              self.ax = self.fig.add_axes([0,0,1,1])
              self.ax.set_xlim([-0.5,2.5])
              self.ax.set_ylim([-0.25,0.25])
              self.ax.text(-0.4, 0.15, 'One click on the hole, and I get 2 events not 1',
                           fontsize=8)
              self.holes = self.ax.scatter([1], [0], color='black', picker=0)
              self.pegs = self.ax.scatter([0], [0], s=100, facecolor='#dd8800',
                                          edgecolor='black', picker=0)
              self.fig.canvas.mpl_connect('pick_event', self.handler)
              plt.show()

          def handler(self, event):
              if event.artist is self.holes:
                  # If I get a hole event, then move a peg (to that hole) ...
                  # but then I get a peg event also with no extra clicks!
                  offs = self.pegs.get_offsets()
                  offs[0,:] = [1,0]  # Moves left peg to the middle
                  self.pegs.set_offsets(offs)
                  self.fig.canvas.draw()
                  print 'picked a hole, moving left peg to center'
              elif event.artist is self.pegs:
                  print 'picked a peg'
                  sys.stdout.flush()  # Necessary when in ipython qtconsole

      if __name__ == "__main__":
          pt = peg_tester()

    I have tried setting the zorder to make the pegs always above the holes, but that doesn't change how the pick events are generated, and particularly this funny phantom event.

    Read the article

  • Drupal 6: Drupal Themer gives the same candidate name for different content types

    - by artmania
    Hi friends, I'm a Drupal newbie... I have different types of content, like News, Events, etc., and their content is different: a News detail page has title, content text, and date, but an Events detail page has title, date, content text, location, speaker, etc. So I need a different layout page for each of these types. I enabled Drupal Themer to get a template candidate name. For the Events page it gave me page-node.tpl.php, and it gives the same for the News page as well :( How can I separate these pages? I expected something like page-event-node.tpl, but no... :/ Drupal Themer also gives a unique candidate name for an individual event page, like page-node-18.tpl.php, but that doesn't help me, since I can't create a general layout for all events from a per-node name. :( Appreciate your help so much!! Thanks a lot!!!

    Read the article

  • A list of Entity Framework providers for various databases

    - by Robert Koritnik
    Which providers are there, and what is your experience using them? I would like to know about all possible native .NET Framework Entity Framework providers that are out there, as well as their limitations compared to the default Linq2Entities (from MS for MS SQL). If there are more for the same database, even better. Tell me and I'll be updating this post with the list. Feel free to add additional providers directly into this post, or provide an answer and others (including me) will add it to the list.

    Entity Framework 1

    Microsoft SQL Server Standard/Enterprise/Express
    * Linq 2 Entities - Microsoft SQL Server connector
    * DataDirect ADO.NET Data Providers

    Microsoft SQL Server CE (Compact Edition)
    * Any provider?

    MySQL
    * MySQL Connector (since version 6.0) - I've read about issues when using Skip(), Take() and Sort() in the same expression tree - everyone is welcome to share their experience/knowledge regarding this. (NOTE: MySQL Connector/NET Visual Studio Integration is not supported in the Express Editions of Visual Studio, meaning you won't be able to view MySQL databases in the Database explorer window or add a MySQL data source via Visual Studio wizard dialog boxes. Some users may find that this limits their ability to use Entity Framework and MySQL within Visual Studio Express.)
    * Devart dotConnect for MySQL - similar issues to MySQL's connector, as I've read, and both try to blame MS for it [these issues are supposed to be solved]

    SQLite
    * Devart dotConnect for SQLite
    * System.Data.SQLite

    PostgreSQL
    * Devart dotConnect for PostgreSQL
    * Npgsql

    Oracle
    * Devart dotConnect for Oracle
    * Sample Entity Framework Provider for Oracle - community effort project
    * DataDirect ADO.NET Data Providers

    DB2
    * IBM Data Server Provider has EF support. Here are some limitations.
    * DataDirect ADO.NET Data Providers

    Sybase
    * Sybase iAnywhere
    * DataDirect ADO.NET Data Providers

    Informix
    * IBM Data Server Provider supports Informix

    Firebird
    * ADO.NET Data Provider with EF support

    Provider Wrappers
    * Tracing and Caching Providers for EF

    Entity Framework 4 (beta)

    Microsoft SQL Server
    * Microsoft's Linq to Entities 4 - shipped with .NET 4.0 and Visual Studio 2010; so far the only provider for EF4

    MySQL
    * Devart dotConnect for MySQL

    SQLite
    * Devart dotConnect for SQLite

    PostgreSQL
    * Devart dotConnect for PostgreSQL

    Oracle
    * Devart dotConnect for Oracle
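
    For context on how any of these third-party providers plug in: EF selects the provider through the provider invariant name embedded in the entity connection string. A hedged C# sketch (the metadata resource paths and the MySQL connection details are placeholders; "Devart.Data.MySql" is that vendor's registered invariant name):

      using System.Data.EntityClient;

      class ProviderExample
      {
          static void Main()
          {
              // Assemble an EF connection string for a non-SQL-Server provider
              var builder = new EntityConnectionStringBuilder
              {
                  Metadata = "res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl",
                  Provider = "Devart.Data.MySql", // provider invariant name
                  ProviderConnectionString = "server=localhost;database=test;user id=root;"
              };

              using (var connection = new EntityConnection(builder.ToString()))
              {
                  connection.Open(); // fails here if the provider is not registered
              }
          }
      }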

    Read the article

  • What are good NoSQL and non-relational database solutions for audit/logging database

    - by Juha Syrjälä
    What would be a suitable database for the following? I am especially interested in your experiences with non-relational NoSQL systems. Are they any good for this kind of usage? Which system have you used and would recommend, or should I go with a normal relational database (DB2)?

    I need to gather audit trail/logging type information from a bunch of sources to a centralized server, where I could generate reports efficiently and examine what is happening in the system. Typically an audit/logging event would always consist of some mandatory fields, for example:
    * globally unique id (somehow generated by the program that generated the event)
    * timestamp
    * event type (i.e. user logged in, error happened, etc.)
    * some information about the source (server1, server2)
    Additionally, the event could contain 0-N key-value pairs, where a value might be up to a few kilobytes of text.

    Requirements:
    * It must run on a Linux server.
    * It should work with a high amount of data (100GB, for example).
    * It should support some kind of efficient full text search.
    * It should allow concurrent reading and writing.
    * It should be flexible enough to add new event types and add/remove key-value pairs in new events. Flexible = no changes should be required to the database schema; the application generating the events can just add new event types/new fields as needed.
    * It should be efficient to make queries against the database, for reporting and for exploring what happened. For example: how many events with type=X occurred in some time period; get all events where field A has value Y; get all events with type X where field A has value 1 and field B is not 2 and the event occurred in the last 24h.
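
    Whatever store is chosen, the event shape described above maps naturally onto a document-style record. A minimal C# sketch of that shape (all names here are illustrative, not from any particular database's API):

      using System;
      using System.Collections.Generic;

      // One audit/logging event: fixed mandatory fields plus free-form pairs
      public class AuditEvent
      {
          public Guid Id { get; set; }            // globally unique id from the producer
          public DateTime Timestamp { get; set; } // when the event occurred (UTC)
          public string EventType { get; set; }   // e.g. "user logged in", "error"
          public string Source { get; set; }      // e.g. "server1"

          // 0..N optional pairs; values may be up to a few KB of text
          public Dictionary<string, string> Fields { get; set; }
              = new Dictionary<string, string>();
      }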

    Read the article

  • Guides for PostgreSQL query tuning?

    - by Joe
    I've found a number of resources that talk about tuning the database server, but I haven't found much on the tuning of individual queries. For instance, in Oracle I might try adding hints to ignore indexes or to use sort-merge vs. correlated joins, but I can't find much on tuning Postgres other than using explicit joins and recommendations when bulk loading tables. Do any such guides exist, so I can focus on tuning the most-run and/or underperforming queries, hopefully without adversely affecting the currently well-performing queries? I'd even be happy to find something that compared how certain types of queries perform relative to other databases, so I had a better clue of what sort of things to avoid.

    Update: I should've mentioned, I took all of the Oracle DBA classes along with their data modeling and SQL tuning classes back in the 8i days ... so I know about EXPLAIN, but that's more to tell you what's going wrong with the query, not necessarily how to make it better. (E.g., are 'where var=1 or var=2' and 'where var in (1,2)' considered the same when generating an execution plan? What if I'm doing it with 10 permutations? When are multi-column indexes used? Are there ways to get the planner to optimize for fastest start vs. fastest finish? What sort of 'gotchas' might I run into when moving from MySQL, Oracle or some other RDBMS?) I could write any complex query dozens if not hundreds of ways, and I'm hoping not to have to try them all and find which one works best through trial and error. I've already found that 'SELECT count(*)' won't use an index, but 'SELECT count(primary_key)' will ... maybe a 'PostgreSQL for experienced SQL users' sort of document that explained what sorts of queries to avoid, how best to re-write them, or how to get the planner to handle them better.

    Update 2: I found a Comparison of different SQL Implementations which covers PostgreSQL, DB2, MS-SQL, MySQL, Oracle and Informix, and explains if, how, and gotchas on things you might try to do. Its references section linked to Oracle / SQL Server / DB2 / Mckoi / MySQL Database Equivalents (which is what its title suggests) and to the wikibook SQL Dialects Reference, which covers whatever people contribute (includes some DB2, SQLite, MySQL, PostgreSQL, Firebird, Virtuoso, Oracle, MS-SQL, Ingres, and Linter).
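
    Until such a guide turns up, per-query plans are easy to collect programmatically when comparing candidate rewrites. A hedged sketch using the Npgsql driver (connection details, table, and column names are placeholders; note that EXPLAIN ANALYZE actually executes the statement):

      using System;
      using Npgsql;

      class ExplainQuery
      {
          static void Main()
          {
              using (var conn = new NpgsqlConnection("Host=localhost;Database=test;Username=joe"))
              {
                  conn.Open();
                  // Prefix the query under test with EXPLAIN ANALYZE to see the real plan
                  using (var cmd = new NpgsqlCommand(
                      "EXPLAIN ANALYZE SELECT count(id) FROM events WHERE type = 'X'", conn))
                  using (var reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                          Console.WriteLine(reader.GetString(0)); // one plan line per row
                  }
              }
          }
      }

    Running both variants of a rewrite through this and diffing the plans shows whether the planner treats them identically, which answers questions like the OR-vs-IN one directly.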

    Read the article

  • Maven jetty download dependencies

    - by portoalet
    Hi, why is it that every time I do "mvn jetty:run", Maven tries to download some dependencies (the Apache POI and ojdbc jars)? How can I disable this?

      [INFO] Scanning for projects..
      [INFO] Searching repository for plugin with prefix: 'jetty'.
      [INFO] ------------------------------------------------------------------------
      [INFO] Building infolitReport
      [INFO]    task-segment: [jetty:run]
      [INFO] ------------------------------------------------------------------------
      [INFO] Preparing jetty:run
      Downloading: http://repository.springsource.com/maven/bundles/release/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
      Downloading: http://repository.springsource.com/maven/bundles/external/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
      Downloading: http://repository.springsource.com/maven/bundles/milestone/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
      Downloading: http://repository.springsource.com/maven/bundles/snapshot/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
      Downloading: http://repo1.maven.org/maven2/org/apache/poi/com.springsource.org.apache.poi/3.6/com.springsource.org.apache.poi-3.6.pom
      Downloading: http://repository.springsource.com/maven/bundles/release/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
      Downloading: http://repository.springsource.com/maven/bundles/external/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
      Downloading: http://repository.springsource.com/maven/bundles/milestone/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
      Downloading: http://repository.springsource.com/maven/bundles/snapshot/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
      Downloading: http://repo1.maven.org/maven2/com/oracle/ojdbc14/10.2.0.2/ojdbc14-10.2.0.2.pom
      [INFO] [aspectj:compile {execution: default}]

    Read the article

  • "Collection was modified..." Issue

    - by Tyler Murry
    Hey guys, I've got a function that checks a list of objects to see if they've been clicked and fires the OnClick events accordingly. I believe the function is working correctly; however, I'm having an issue: when I hook onto one of the OnClick events and remove and insert the element into a different position in the list (typical functionality for this program), I get the "Collection was modified..." error. I believe I understand what is going on:

    1. The function cycles through each object, firing OnClick events where necessary.
    2. An event is fired, and the object changes places in the list per the hooked function.
    3. An exception is thrown for modifying the collection while iterating through it.

    My question is, how do I allow the function to iterate through all the objects and fire the necessary events at the proper time, while still giving the user the option of manipulating the object's position in the list?

    Thanks,
    Tyler
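
    The usual fix for this pattern is to iterate over a snapshot of the list (or to defer the moves until after the loop), so the live collection is free to change inside the handlers. A minimal C# sketch under that assumption, with Clickable standing in for the real element type:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      // Hypothetical stand-in for the real clickable element type
      public class Clickable
      {
          public event Action Clicked;
          public bool HitTest(int x, int y) { return true; /* real bounds check elided */ }
          public void RaiseClick() { var h = Clicked; if (h != null) h(); }
      }

      public class ClickDispatcher
      {
          private readonly List<Clickable> items = new List<Clickable>();

          public void DispatchClicks(int x, int y)
          {
              // Iterate over a copy: handlers may remove/insert into 'items'
              // without invalidating this loop's enumerator
              foreach (Clickable item in items.ToList())
              {
                  if (item.HitTest(x, y))
                      item.RaiseClick(); // may reorder 'items' without throwing
              }
          }
      }

    The ToList() copy is O(n) per dispatch, which is normally negligible next to the hit-testing itself; the alternative of queuing the moves and applying them after the loop avoids the copy at the cost of slightly more bookkeeping.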

    Read the article

  • Adobe AIR: touch screen doesn't trigger mouse down event correctly

    - by Saariko
    I have designed a gaming kiosk app in AS3. I am using it on a Sony Vaio L PC (like HP's TouchSmarts) in Windows 7. The app doesn't need any multi-touch gestures (only single-touch clicks and drags), so I am using mouse events. Everything is fine (including mouse click and move events), except that a single touch to the screen (with no move) doesn't fire a mouse down; it is fired only after a small move of the finger. Outside the app, on my desktop, I see that the small Windows 7 cursor jumps immediately to where a finger is placed, meaning this issue isn't a hardware or a Windows problem, but rather about how the Flash app internally receives "translated" touch-to-mouse events from the OS. For example, in a Windows Solitaire game, a simple touch to the screen immediately highlights the touched card. In my app, a button will change to the down state only if I touch it and also move my finger slightly (click events - down and up - are triggered fine). Shouldn't the MOUSE_DOWN event trigger exactly like a TOUCH_BEGIN would in the new TouchEvent class? Any ideas?

    Read the article

  • jQuery sequence and function call problem

    - by Jonas
    Hi everyone, I'm very new to jQuery and programming in general, but I'm still trying to achieve something here. I use FullCalendar to allow the users of my web application to insert an event in the database. They click on a day, the view changes to agendaDay, then they click on a time of the day, and a dialog popup opens with a form. I am trying to combine the validate plugin (pre-jQuery 1.4) and jquery.form to post the form without a page refresh. The script calendar.php, included in several pages, defines the FullCalendar object and displays it in a div:

      $(document).ready(function() {

          function EventLoad() {
              $("#addEvent").validate({
                  rules: {
                      calendar_title: "required",
                      calendar_url: {
                          required: false,
                          maxlength: 100,
                          url: true
                      }
                  },
                  messages: {
                      calendar_title: "Title required",
                      calendar_url: "Invalid URL format"
                  },
                  success: function() {
                      $('#addEvent').submit(function() {
                          var options = {
                              success: function() {
                                  $('#eventDialog').dialog('close');
                                  $('#calendar').fullCalendar( 'refetchEvents' );
                              }
                          };
                          // submit the form
                          $(this).ajaxSubmit(options);
                          // return false to prevent normal browser submit and page navigation
                          return false;
                      });
                  }
              });
          }

          $('#calendar').fullCalendar({
              header: {
                  left: 'prev,next today',
                  center: 'title',
                  right: 'month,agendaWeek,agendaDay'
              },
              theme: true,
              firstDay: 1,
              editable: false,
              events: "json-events.php?list=1&<?php echo $events_list; ?>",
              <?php if($_GET['page'] == 'home') echo "defaultView: 'agendaWeek',"; ?>
              eventClick: function(event) {
                  if (event.url) {
                      window.open(event.url);
                      return false;
                  }
              },
              dayClick: function(date, allDay, jsEvent, view) {
                  if (view.name == 'month') {
                      $('#calendar').fullCalendar( 'changeView', 'agendaDay').fullCalendar( 'gotoDate', date );
                  } else {
                      if (allDay) {
                          var timeStamp = $.fullCalendar.formatDate( date, 'dddd+dd+MMMM_u');
                          var $eventDialog = $('<div/>').load("json-events.php?<?php echo $events_list; ?>&new=1&all_day=1&timestamp=" + timeStamp, null, EventLoad).dialog({autoOpen:false, draggable:false, width:675, modal:true, position:['center',202], resizable:false, title:'Add an Event'});
                          $eventDialog.dialog('open').attr('id','eventDialog');
                      } else {
                          var timeStamp = $.fullCalendar.formatDate( date, 'dddd+dd+MMMM_u');
                          var $eventDialog = $('<div/>').load("json-events.php?<?php echo $events_list; ?>&new=1&all_day=0&timestamp=" + timeStamp, null, EventLoad).dialog({autoOpen:false, draggable:false, width:675, modal:true, position:['center',202], resizable:false, title:'Add an Event'});
                          $eventDialog.dialog('open').attr('id','eventDialog');
                      }
                  }
              }
          });
      });

    The script json-events.php contains the form and also the code to process the data from the submitted form. What happens when I test the whole thing:

    * On the first user click on a day, then a time of day, the popup opens with the time and date indicated on the form. When the user submits the form, the dialog closes and the calendar refreshes its events... and the event added by the user appears several times (from 4 up to 11 times!). The form has been processed several times by the receiving PHP script?!
    * On the second user click, the popup opens and the user submits an empty form. The form is submitted (the validate function is not triggered) and the user is redirected to an empty page, json-events.php (ajaxForm is not triggered either).

    Obviously, my code is wrong (and dirty as well, sorry). Why is the submitted form processed several times by the receiving script, and why is the JavaScript function EventLoad triggered only once? Thank you very much for your help. This problem is killing me!

    Read the article

  • problem with sqldatasource and data binding

    - by Alexander
    I am trying to pull data out of a table in the database according to the id which is passed in the URL. However, I always get the data for id=1. Why? FYI, I took this code directly from the ClubWebsite starter kit and copied it into my project to make several changes; the ClubWebsite one works fine, but this one doesn't, and I can't find any reason why, because they both look exactly the same.

      <%@ Page Title="" Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="Events_View.aspx.cs" Inherits="Events_View" %>
      <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server">
      </asp:Content>
      <asp:Content ID="Content2" ContentPlaceHolderID="splash" Runat="Server">
          <div id="splash4">&nbsp;</div>
      </asp:Content>
      <asp:Content ID="Content3" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
          <div id="content">
              <div class="post">
                  <asp:SqlDataSource ID="SqlDataSource1" runat="server"
                      ConnectionString="<%$ ConnectionStrings:ClubDatabase %>"
                      SelectCommand="SELECT dbo.Events.id, dbo.Events.starttime, dbo.events.endtime, dbo.Events.title, dbo.Events.description, dbo.Events.staticURL, dbo.Events.address FROM dbo.Events">
                      <SelectParameters>
                          <asp:Parameter Type="Int32" DefaultValue="1" Name="id"></asp:Parameter>
                      </SelectParameters>
                  </asp:SqlDataSource>
                  <asp:FormView ID="FormView1" runat="server" DataSourceID="SqlDataSource1" DataKeyNames="id" AllowPaging="false" Width="100%">
                      <ItemTemplate>
                          <h2><asp:Label Text='<%# Eval("title") %>' runat="server" ID="titleLabel" /></h2>
                          <div>
                              <br />
                              <p><asp:Label Text='<%# Eval("address") %>' runat="server" ID="addressLabel" /></p>
                              <p>
                                  <asp:Label Text='<%# Eval("starttime","{0:D}") %>' runat="server" ID="itemdateLabel" /><br />
                                  <asp:Label Text='<%# ShowDuration(Eval("starttime"),Eval("endtime")) %>' runat="server" ID="Label1" />
                              </p>
                          </div>
                          <p><asp:Label Text='<%# Eval("description") %>' runat="server" ID="descriptionLabel" /></p>
                      </ItemTemplate>
                  </asp:FormView>
                  <div class="dashedline"></div>
              </div>
          </div>
      </asp:Content>

    And the code-behind:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.UI;
      using System.Web.UI.WebControls;
      using System.Data.SqlClient;
      using System.Configuration;
      using System.Data;

      public partial class Events_View : System.Web.UI.Page
      {
          const int INVALIDID = -1;

          protected void Page_Load(object sender, System.EventArgs e)
          {
              SqlDataSource1.SelectParameters["id"].DefaultValue = System.Convert.ToString(EventID);
          }

          public int EventID
          {
              get
              {
                  int m_EventID;
                  object id = ViewState["EventID"];
                  if (id != null)
                  {
                      m_EventID = (int)id;
                  }
                  else
                  {
                      id = Request.QueryString["EventID"];
                      if (id != null)
                      {
                          m_EventID = System.Convert.ToInt32(id);
                      }
                      else
                      {
                          m_EventID = 1;
                      }
                      ViewState["EventID"] = m_EventID;
                  }
                  return m_EventID;
              }
              set { ViewState["EventID"] = value; }
          }

          protected void FormView1_DataBound(object sender, System.EventArgs e)
          {
              DataRowView view = (DataRowView)(FormView1.DataItem);
              object o = view["staticURL"];
              if (o != null && o != DBNull.Value)
              {
                  string staticurl = (string)o;
                  if (staticurl != "")
                  {
                      Response.Redirect(staticurl);
                  }
              }
          }

          protected string ShowLocationLink(object locationname, object id)
          {
              if (id != null && id != DBNull.Value)
              {
                  return "At <a href='Locations_view.aspx?LocationID=" + Convert.ToString(id) + "'>"
                      + (string)locationname + "</a><br/>";
              }
              else
              {
                  return "";
              }
          }

          protected string ShowDuration(object starttime, object endtime)
          {
              DateTime starttimeDT = (DateTime)starttime;
              if (endtime != null && endtime != DBNull.Value)
              {
                  DateTime endtimeDT = (DateTime)endtime;
                  if (starttimeDT.Date == endtimeDT.Date)
                  {
                      if (starttimeDT == endtimeDT)
                      {
                          return starttimeDT.ToString("h:mm tt");
                      }
                      else
                      {
                          return starttimeDT.ToString("h:mm tt") + " - " + endtimeDT.ToString("h:mm tt");
                      }
                  }
                  else
                  {
                      return "thru " + endtimeDT.ToString("M/d/yy");
                  }
              }
              else
              {
                  return starttimeDT.ToString("h:mm tt");
              }
          }
      }
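
    Worth noting for anyone reading along: the SelectCommand above declares an id parameter but never references it, since there is no WHERE clause, so every row comes back and the FormView shows the first one. A hedged sketch of the shape that would actually use the parameter (assuming the schema shown above; the working starter kit page presumably does something equivalent, either in markup or in code):

      public partial class Events_View : System.Web.UI.Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              // The declared @id parameter only matters if the SELECT references it
              SqlDataSource1.SelectCommand =
                  "SELECT id, starttime, endtime, title, description, staticURL, address " +
                  "FROM dbo.Events WHERE id = @id";
              SqlDataSource1.SelectParameters["id"].DefaultValue = Convert.ToString(EventID);
          }
      }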

    Read the article

  • Capture window close event

    - by -providergordienko.vladimir
    I want to capture the events that close an editor window (tab) in the Visual Studio 2008 IDE. Using dte2.Application.Events.get_CommandEvents(null, 0).BeforeExecute, I successfully captured events such as File.Close, File.CloseAllButThis, File.Exit, Window.CloseDocumentWindow, and others. If the code in the window is not acceptable, I stop the event (CancelDefault = true). But if I click the "X" button on the right-hand side, the "Save Changes" dialog appears, the tab with the editor window closes, and I capture no events at all. In this case I can capture the WindowClosing event, but I cannot cancel it. Is it possible to handle the "X" button click and stop the event?
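
    For reference, a minimal hook-up sketch of the notification-only WindowClosing event mentioned above, using the same DTE2 object the question uses. Two caveats: the handler receives no Cancel flag (unlike BeforeExecute's CancelDefault), and the events object must be kept in a field or the COM event subscription gets garbage collected:

      using EnvDTE;
      using EnvDTE80;

      public class EditorCloseWatcher
      {
          // Must stay referenced for the lifetime of the subscription
          private WindowEvents windowEvents;

          public void Attach(DTE2 dte)
          {
              windowEvents = dte.Events.get_WindowEvents(null);
              windowEvents.WindowClosing += OnWindowClosing;
          }

          private void OnWindowClosing(Window window)
          {
              // Notification only: there is no CancelDefault to set here
              System.Diagnostics.Debug.WriteLine("Closing: " + window.Caption);
          }
      }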

    Read the article

  • Know any unobstrusive, simple GUI guidelines or design recommendations for notifications?

    - by Vinko Vrsalovic
    Hello again. I'm in the process of designing and testing various ideas for an application whose main functionality will be to notify users of occurring events and offer them a choice of actions for each. The standard choice would be to create a queue of events, showing a popup in the taskbar with the events and actions, but I want this tool to be as unobtrusive and non-disruptive as possible. What I'm after is a good book or papers on studies of how to maximize user productivity in these intrinsically disruptive scenarios (in other words, how to achieve the perfect degree of annoying-ness: not too much, not too little). The users are supposedly interested in these events; they subscribe to them and can choose the actions to perform on each. I prefer books and papers, but the usual StackOverflow wisdom is appreciated as well. I'm after things like:

    * Don't use popups; use X instead
    * Show popups for at most 3 seconds
    * Show them in the left corner
    * Use color X because it improves readability and disrupts less

    That is, cognitive aspects of GUI design that would help users in such a scenario.
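
    Rules of that kind are cheap to prototype and user-test before committing to a design. A hedged WinForms sketch of a corner toast that auto-dismisses after three seconds and never steals keyboard focus; every concrete choice here (color, timing, placement) is a variable under test, not a recommendation, and the sketch assumes it runs inside an existing WinForms message loop:

      using System;
      using System.Drawing;
      using System.Windows.Forms;

      // A borderless, non-focus-stealing notification that dismisses itself
      public class Toast : Form
      {
          public Toast(string message)
          {
              FormBorderStyle = FormBorderStyle.None;
              ShowInTaskbar = false;
              TopMost = true;
              StartPosition = FormStartPosition.Manual;
              BackColor = Color.LightYellow;
              Controls.Add(new Label { Text = message, AutoSize = true, Padding = new Padding(8) });

              // Bottom-right corner of the working area (another variable to test)
              var area = Screen.PrimaryScreen.WorkingArea;
              Size = new Size(240, 60);
              Location = new Point(area.Right - Width - 8, area.Bottom - Height - 8);

              // Auto-dismiss after 3 seconds
              var timer = new Timer { Interval = 3000 };
              timer.Tick += (s, e) => { timer.Stop(); Close(); };
              timer.Start();
          }

          // Show without taking keyboard focus from the user's current task
          protected override bool ShowWithoutActivation
          {
              get { return true; }
          }
      }

    Usage would be along the lines of new Toast("Event X occurred").Show() from the UI thread, which displays the notification without interrupting whatever the user is typing.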

    Read the article

  • Are government programming jobs good?

    - by Absolute0
    I am a passionate software developer and greatly enjoy programming. However, I was recently contacted regarding a developer lead position in a government job with the NYC fire department. The pay is pretty good, and I would assume the position has good job security and stability. But I am hesitant to even go for an interview, as it seems like it could be an exaggerated version of Office Space, with a lot of bureaucracy and mindless paperwork. The description is as follows:

    The Lead Applications Developer, supporting the Programming Group, will be responsible for all phases of the system development life cycle including performing system analysis, requirements definition, database design, preparation of scopes of work, and development of project plans. Supervise programming staff and manage projects involving the design, implementation, maintenance, and enhancement of complex Oracle based user applications using Oracle Development tools. Applications will be deployed using Oracle Application Server utilizing programming languages such as JAVA, JSF, JSP, Oracle ADF, PL/SQL, and XML with J2EE and EJB technology.

    Can anyone with previous government experience share their two cents on this? Thank you.

    Read the article

< Previous Page | 535 536 537 538 539 540 541 542 543 544 545 546  | Next Page >