Search Results

Search found 17224 results on 689 pages for 'oracle cloud'.


  • SQL Command Not Properly Ended (Nested Aggregation with Group-by)

    - by snowind
    I keep getting this error when I try to execute this query, and I can't figure out what is wrong. I'm using Oracle and JDBC. Here's the query:

    SELECT Temp.flight_number, Temp.avgprice
    FROM (SELECT P.flight_number, AVG(P.amount) AS avgprice
          FROM purchase P
          GROUP BY P.flight_number) AS Temp
    WHERE Temp.avgprice = (SELECT MAX(Temp.avgprice) FROM Temp)

    I'm trying to get the maximum of the average ticket prices that customers have booked, grouped by flight_number.
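
    Two details in that statement are likely culprits: Oracle does not accept the AS keyword in front of an inline-view (table) alias, and the alias Temp is not visible inside the separate subquery at the end. Rewriting the inline view as a WITH clause avoids both problems. A minimal JDBC sketch of the corrected query (the connection details below are placeholders, not part of the original question):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MaxAvgFare {
        public static void main(String[] args) throws Exception {
            // WITH clause makes the "temp" result set visible to both the outer query and the MAX subquery
            String sql =
                "WITH temp AS ( " +
                "  SELECT p.flight_number, AVG(p.amount) AS avgprice " +
                "    FROM purchase p " +
                "   GROUP BY p.flight_number " +
                ") " +
                "SELECT t.flight_number, t.avgprice " +
                "  FROM temp t " +
                " WHERE t.avgprice = (SELECT MAX(avgprice) FROM temp)";

            // Placeholder connection details; substitute your own host, SID and credentials
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");
                 PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("flight_number") + " -> " + rs.getBigDecimal("avgprice"));
                }
            }
        }
    }
    ```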

    Read the article

  • Windows Azure ASP.NET MVC 2 Role with Silverlight

    - by GeekAgilistMercenary
    I was working through some scenarios recently with Azure and Silverlight, and decided that a quick walkthrough of setting up a Silverlight application running in an ASP.NET MVC 2 application would be a cool project. For this walkthrough I have Visual Studio 2010, Silverlight 4, and the Azure SDK all installed. If you need to download any of those, go get them now.

    Launch Visual Studio 2010 and start a new project. Click on the section for cloud templates as shown below. After you name the project, the dialog for choosing the type of Windows Azure Cloud Service Role will display. I selected ASP.NET MVC 2 Web Role, which adds the MvcWebRole1 project to the Cloud Service solution.

    Since I selected the ASP.NET MVC 2 project type, it immediately prompts for a unit test project. Because I just want to get everything running first (I will probably be unit testing the Silverlight and just using the MVC project as a host for it for now), and because I would prefer to add the unit test project later, I am going to select no here.

    Once you've created the ASP.NET MVC 2 project to host the Silverlight, create another new project. Select the Silverlight section under Installed Templates in the Add New Project dialog, then select Silverlight Application. The next dialog that comes up asks about using the existing ASP.NET MVC application I just created; I do want it to use that, so I leave it checked. In the options section, however, I do not want RIA Web Services checked, I do not want a test page added to the project, and I do want Silverlight debugging enabled, so I leave that checked. Once those options are set appropriately, just click OK and the Silverlight project will be added to the overall solution.

    The next step is to get the Silverlight object appropriately embedded in the web page. First open up the Site.Master file in the ASP.NET MVC 2 project, located under Views/Shared/. After you open the file, review the content of the <head></head> section. In that section add another <asp:ContentPlaceHolder> tag as shown in the code snippet below.

    <head runat="server">
        <title>
            <asp:ContentPlaceHolder ID="TitleContent" runat="server" />
        </title>
        <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
        <asp:ContentPlaceHolder ID="HeaderContent" runat="server" />
    </head>

    I usually put it toward the bottom of the head section. It just seems the <title></title> should be at the top of the section and I like to keep it that way.

    Now open up the Index.aspx page in the ASP.NET MVC 2 project, located in the Views/Home/ directory. When you open up that file, add an <asp:Content></asp:Content> tag as shown in the next snippet.

    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
        Home Page
    </asp:Content>

    <asp:Content ID="headerContent" ContentPlaceHolderID="HeaderContent" runat="server">
    </asp:Content>

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h2><%= Html.Encode(ViewData["Message"]) %></h2>
        <p>
            To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
        </p>
    </asp:Content>

    In that center tag I am now going to add what is needed to appropriately embed the Silverlight object into the page. The first thing needed is a reference to the Silverlight.js file.

    <script type="text/javascript" src="Silverlight.js"></script>

    After that comes a bit of nitty-gritty JavaScript.
I create another tag (and for those in the know, this is exactly like the generated code that is dumped into the *.html page generated with any Silverlight Project if you select to "add a test page that references the application".  The complete Javascript is below. function onSilverlightError(sender, args) { var appSource = ""; if (sender != null && sender != 0) { appSource = sender.getHost().Source; }   var errorType = args.ErrorType; var iErrorCode = args.ErrorCode;   if (errorType == "ImageError" || errorType == "MediaError") { return; }   var errMsg = "Unhandled Error in Silverlight Application " + appSource + "\n";   errMsg += "Code: " + iErrorCode + " \n"; errMsg += "Category: " + errorType + " \n"; errMsg += "Message: " + args.ErrorMessage + " \n";   if (errorType == "ParserError") { errMsg += "File: " + args.xamlFile + " \n"; errMsg += "Line: " + args.lineNumber + " \n"; errMsg += "Position: " + args.charPosition + " \n"; } else if (errorType == "RuntimeError") { if (args.lineNumber != 0) { errMsg += "Line: " + args.lineNumber + " \n"; errMsg += "Position: " + args.charPosition + " \n"; } errMsg += "MethodName: " + args.methodName + " \n"; }   throw new Error(errMsg); } I literally, since it seems to work fine, just use what is populated in the automatically generated page.  After getting the appropriate Javascript into place I put the actual Silverlight Object Embed code into the HTML itself.  Just so I know the positioning and for final verification when running the application I insert the embed code just below the Index.aspx page message.  As shown below. <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2> <%= Html.Encode(ViewData["Message"]) %></h2> <p> To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website"> http://asp.net/mvc</a>. </p> <div id="silverlightControlHost"> <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/CloudySilverlight.xap" /> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration: none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style: none" /> </a> </object> <iframe id="_sl_historyFrame" style="visibility: hidden; height: 0px; width: 0px; border: 0px"></iframe> </div> </asp:Content> I then open up the Silverlight Project MainPage.xaml.  Just to make it visibly obvious that the Silverlight Application is running in the page, I added a button as shown below. <UserControl x:Class="CloudySilverlight.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400">   <Grid x:Name="LayoutRoot" Background="White"> <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="48,40,0,0" Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" /> </Grid> </UserControl> Just for kicks, I added a message box that would popup, just to show executing functionality also. 
    private void button1_Click(object sender, RoutedEventArgs e)
    {
        MessageBox.Show("It runs in the cloud!");
    }

    I then ran the ASP.NET MVC 2 application and could see the Silverlight application in the page. With a quick click of the button, I got a message box. Success!

    Now the next step is getting the ASP.NET MVC 2 project and Silverlight published to the cloud. As of Visual Studio 2010, Silverlight 4, and the latest Azure SDK, this is actually a ridiculously easy process.

    Navigate to the Azure Cloud Services web site. Once that is open, go back into Visual Studio, right-click on the cloud project, and select Publish. This will publish two files into a directory. Copy that directory path so you can easily paste it into the Azure Cloud Services web site. You'll have to click on the application role in the cloud (I will have another blog entry soon about where, how, and best practices in the cloud). In the text boxes shown, select the application package file and the configuration file and place them in the appropriate text boxes. This is the part where it comes in handy to have copied the directory path of the file location: when you click on Browse you can just paste it in, then hit Enter. The two files will be listed and you can select the appropriate file. Once that is done, name the service deployment, then click on Publish. After a minute or so you will see the following screen.

    Now click on Run. Once the MvcWebRole1 goes green (the little light symbol to the left of the status), click on the Web Site URL. Be patient during this process too; it could take a minute or two. The Silverlight application should again come up just like when you ran it on your local machine.

    Once staging is up and running, click on the circular icon with two arrows to move staging to production. Once that is done, make sure the light is green again for the production deployment, then click on the Web Site URL to verify the site is working. At this point I had a successful development, staging, and production deployment.

    Thanks for reading, hope this was helpful. I have more Windows Azure and other cloud related material coming, so stay tuned.

    Original Entry

    Read the article

  • Migrating from tomcat to tc server - receiving java.sql.SQLException on startup

    - by user470184
    I'm receiving below error when I start tcServer. I do not receive this error on standalone version of tomcat. Is there extra config I need to add for tcServer ? WARNING: Unexpected exception resolving reference java.sql.SQLException: Io exception: The Network Adapter could not establish the connection at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387) at oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:441) at oracle.jdbc.driver.T4CConnection.(T4CConnection.java:165) at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35) at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801) at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:277) at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:182) at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:699) at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:631) at org.apache.tomcat.jdbc.pool.ConnectionPool.init(ConnectionPool.java:485) at org.apache.tomcat.jdbc.pool.ConnectionPool.(ConnectionPool.java:143) at org.apache.tomcat.jdbc.pool.DataSourceProxy.pCreatePool(DataSourceProxy.java:116) at org.apache.tomcat.jdbc.pool.DataSourceProxy.createPool(DataSourceProxy.java:103) at org.apache.tomcat.jdbc.pool.DataSourceFactory.createDataSource(DataSourceFactory.java:539) at org.apache.tomcat.jdbc.pool.DataSourceFactory.getObjectInstance(DataSourceFactory.java:237) at org.apache.naming.factory.ResourceFactory.getObjectInstance(ResourceFactory.java:140) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at org.apache.naming.NamingContext.lookup(NamingContext.java:793) at org.apache.naming.NamingContext.lookup(NamingContext.java:140) at org.apache.naming.NamingContext.lookup(NamingContext.java:781) at org.apache.naming.NamingContext.lookup(NamingContext.java:153) at org.apache.catalina.core.NamingContextListener.addResource(NamingContextListener.java:1028) at org.apache.catalina.core.NamingContextListener.createNamingContext(NamingContextListener.java:637) at org.apache.catalina.core.NamingContextListener.lifecycleEvent(NamingContextListener.java:238) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142) at org.apache.catalina.core.StandardServer.start(StandardServer.java:747) at org.apache.catalina.startup.Catalina.start(Catalina.java:595) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
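
    That particular SQLException ("The Network Adapter could not establish the connection") is raised before any authentication happens, so it almost always means the host, port, or SID in the JDBC URL of the JNDI Resource is wrong, or the database listener is unreachable from the tc Server machine. Because a tc Server instance keeps its own copy of conf/server.xml and context.xml in the instance directory, the datasource definition that worked under standalone Tomcat may not have been carried across. One quick way to separate a URL/network problem from a container-configuration problem is to test the same URL outside the container; the values below are placeholders for whatever is in your Resource definition:

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OracleConnCheck {
        public static void main(String[] args) throws Exception {
            // Use exactly the URL and credentials configured in the tc Server Resource;
            // these values are placeholders.
            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
            Class.forName("oracle.jdbc.OracleDriver");   // the ojdbc jar must be on the classpath
            try (Connection conn = DriverManager.getConnection(url, "appuser", "password")) {
                System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }
    ```

    If this fails with the same error from the tc Server host, the problem is the URL or network (listener down, wrong port, firewall); if it succeeds, compare the Resource definition in the tc Server instance's conf directory with the one from your standalone Tomcat.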

    Read the article

  • Web Self Service installation on Windows

    - by Rajesh Sharma
    Web Self Service (WSS) installation on windows is pretty straight forward but you might face some issues if deployed under tomcat. Here's a step-by-step guide to install Oracle Utilities Web Self Service on windows.   Below installation steps are done on: Oracle Utilities Framework version 2.2.0 Oracle Utilities Application - Customer Care & Billing version 2.2.0 Application server - Apache Tomcat 6.0.13 on default port 6500 Other settings include: SPLBASE = C:\spl\CCBDEMO22 SPLENVIRON = CCBV22 SPLWAS = TCAT   Follow these steps for a Web Self Service installation on windows: Download Web Self Service application from edelivery.   Copy the delivery file Release-SelfService-V2.2.0.zip from the Oracle Utilities Customer Care and Billing version 2.2.0 Web Self Service folder on the installation media to a directory on your Windows box where you would like to install the application, in our case it's a temporary folder C:\wss_temp.   Setup application environment, execute splenviron.cmd -e <ENVIRON_NAME>   Create base folder for Self Service application named SelfService under %SPLEBASE%\splapp\applications   Install Oracle Utilities Web Self Service   C:\wss_temp\Release-SelfService-V2.2.0>install.cmd -d %SPLEBASE%\splapp\applications\SelfService   Web Self Service installation menu. Populate environment values for each item.   ******************************************************** Pick your installation options: ******************************************************** 1. Destination directory name for installation.             | C:\spl\CCBDEMO22\splapp\applications\SelfService 2. Web Server Host.                                         | CCBV22 3. Web Server Port Number.                                  | 6500 4. Mail SMTP Host.                                          | CCBV22 5. Top Product Installation directory.                      | C:\spl\CCBDEMO22 6.     Web Application Server Type.                         | TCAT 7.     When OAS: SPLWeb OC4J instance name is required.     | OC4J1 8.     When WAS: SPLWeb server instance name is required.   | server1   P. Process the installation. Each item in the above list should be configured for a successful installation. Choose option to configure or (P) to process the installation:  P   Option 7 and Option 8 can be ignored for TCAT.   Above step installs SelfService.war file in the destination directory. We need to explode this war file. Change directory to the installation destination folder, and   C:\spl\CCBDEMO22\splapp\applications\SelfService>jar -xf SelfService.war   Review SelfServiceConfig.properties and CMSelfServiceConfig.properties. Change any properties value within the file specific to your installation/site. Generally default settings apply, for this exercise assumes that WEB user already exists in your application database.   For more information on property file customization, refer to Oracle Utilities Web Self Service Configuration section in Customer Care & Billing Installation Guide.   Add context entry in server.xml located under tomcat-base folder C:\spl\CCBDEMO22\product\tomcatBase\conf   ... 
<!-- SPL Context -->           <Context path="" docBase="C:/spl/CCBDEMO22/splapp/applications/root" debug="0" privileged="true"/>           <Context path="/appViewer" docBase="C:/spl/CCBDEMO22/splapp/applications/appViewer" debug="0" privileged="true"/>           <Context path="/help" docBase="C:/spl/CCBDEMO22/splapp/applications/help" debug="0" privileged="true"/>           <Context path="/XAIApp" docBase="C:/spl/CCBDEMO22/splapp/applications/XAIApp" debug="0" privileged="true"/>           <Context path="/SelfService" docBase="C:/spl/CCBDEMO22/splapp/applications/SelfService" debug="0" privileged="true"/> ...   Add User in tomcat-users.xml file located under tomcat-base folder C:\spl\CCBDEMO22\product\tomcatBase\conf   <user username="WEB" password="selfservice" roles="cisusers"/>   Note the password is "selfservice", this is the default password set within the SelfServiceConfig.properties file with base64 encoding.   Restart the application (spl.cmd stop | start)   12.  Although Apache Tomcat version 6.0.13 does not come with the admin pack, you can verify whether SelfService application is loaded and running, go to following URL http://server:port/manager/list, in our case it'll be http://ccbv22:6500/manager/list Following output will be displayed   OK - Listed applications for virtual host localhost /admin:running:0:C:/tomcat/apache-tomcat-6.0.13/webapps/ROOT/admin /XAIApp:running:0:C:/spl/CCBDEMO22/splapp/applications/XAIApp /host-manager:running:0:C:/tomcat/apache-tomcat-6.0.13/webapps/host-manager /SelfService:running:0:C:/spl/CCBDEMO22/splapp/applications/SelfService /appViewer:running:0:C:/spl/CCBDEMO22/splapp/applications/appViewer /manager:running:1:C:/tomcat/apache-tomcat-6.0.13/webapps/manager /help:running:0:C:/spl/CCBDEMO22/splapp/applications/help /:running:0:C:/spl/CCBDEMO22/splapp/applications/root   Also ensure that the XAIApp is running.   Run Oracle Utilities Web Self Service application http://server:port/SelfService in our case it'll be  http://ccbv22:6500/SelfService   Still doesn't work? And you get '503 HTTP response' at the time of customer registration?     This is because XAI service is still unavailable. There is initialize.waittime set for a default value of 90 seconds for the XAI Application to come up.   Remember WSS uses XAI to perform actions/validations on the CC&B database.  
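
    If you want to change the WEB user's password from the default "selfservice", the value placed in SelfServiceConfig.properties needs to be Base64-encoded to match. The sketch below assumes the file expects a plain Base64 string and uses the Java 8 java.util.Base64 API (older JDKs would need commons-codec or similar); verify against your installed property file before relying on it:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class EncodeWssPassword {
        public static void main(String[] args) {
            // Hypothetical new password for the WEB self-service user
            String plain = args.length > 0 ? args[0] : "selfservice";
            String encoded = Base64.getEncoder()
                    .encodeToString(plain.getBytes(StandardCharsets.UTF_8));
            // Paste this value into SelfServiceConfig.properties (or the CM override file)
            System.out.println(encoded);
        }
    }
    ```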

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for managing and monitoring the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product.

    In the first release of Oracle Utilities Customer Care And Billing (V1.x, that is), we used Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was its performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called, and the average response time. When I worked as a performance consultant, I used this report to identify badly performing services and also to gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some MBeans supplied by the Web Application Server, it was not at the same granularity as txrpt, or as useful.

    I am happy to say we have not only reintroduced this facility in Oracle Utilities Application Framework, but it is now accessible via JMX and we have also added more detail to the performance tracking. Much of this new design came from working with customers around the world to make sure we introduced a feature that not only satisfied their performance tracking needs but also allowed for finer-grained performance analysis.

    As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in the RMI Port number for JMX Business option and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:

    spl.runtime.management.rmi.port=6750
    spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector
    jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
    jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
    ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled

    ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security for more details.

    Once it has been configured and the changes have been reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes I will use jconsole, but any JSR 160 compliant browser or client can be used (with the appropriate configuration).
    Once you start jconsole (ensure that splenviron[.sh] has been executed beforehand to set the environment variables, or, for a remote connection, ensure java is in your path and jconsole.jar is in your classpath), you specify the URL from the spl.runtime.management.connector.url.default entry.

    You are then able to track performance of the product using the PerformanceStatistics MBean. The attributes of the PerformanceStatistics MBean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following:

    * The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc.), so you can compare the performance of updates against read transactions.
    * The Minimum and Maximum are also collected to give you an idea of the spread of performance.
    * The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data.
    * The MBean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site.

    There are a number of interesting operations that can also be performed:

    * reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is OK for most people. But what if you wanted to be more granular? This operation allows you to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset.
    * completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can then be loaded into a database, a tool, or simply into your favourite spreadsheet for analysis.

    Here is an extract of an execution dump from my demonstration environment to give you an idea of the format:

    ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User
    ...
    CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN
    CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA
    CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS
    CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON
    CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN
    CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN
    InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN
    InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN
    MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN
    NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN
    ...

    There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full set of operations and attributes. This is one of the many features I am proud that we implemented, as it allows flexible monitoring of the performance of the product.
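
    For readers who would rather script the collection than click through jconsole, the same statistics can be pulled with a few lines of standard JMX client code. This is only a sketch: the service URL comes from spl.properties, the credentials are whatever you configured, and the MBean is located here by a name search rather than a hard-coded ObjectName, which you should confirm against what jconsole shows:

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class OuafPerfExtract {
        public static void main(String[] args) throws Exception {
            // Value of spl.runtime.management.connector.url.default in spl.properties
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector");

            // Credentials from the JMX Enablement System User ID / Password settings (placeholders here)
            Map<String, Object> env = new HashMap<>();
            env.put(JMXConnector.CREDENTIALS, new String[] { "JMXUSER", "JMXPASSWORD" });

            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // Locate the PerformanceStatistics MBean by searching registered names
                for (ObjectName name : mbs.queryNames(null, null)) {
                    if (name.getCanonicalName().contains("PerformanceStatistics")) {
                        // completeExecutionDump returns the CSV-style dump described above
                        Object csv = mbs.invoke(name, "completeExecutionDump",
                                                new Object[0], new String[0]);
                        System.out.println(csv);
                        // mbs.invoke(name, "reset", new Object[0], new String[0]); // start a new collection period
                    }
                }
            } finally {
                connector.close();
            }
        }
    }
    ```

    Calling reset immediately after a dump, as hinted in the commented line, is one way to approximate a fixed collection window when run from a scheduler.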

    Read the article

  • Impatient Customers Make Flawless Service Mission Critical for Midsize Companies

    - by Richard Lefebvre
    At times, I can be an impatient customer. But I’m not alone. Research by The Social Habit shows that among customers who contact a brand, product, or company through social media for support, 32% expect a response within 30 minutes and 42% expect a response within 60 minutes! 70% of respondents to another study expected their complaints to be addressed within 24 hours, irrespective of how they contacted the company. I was intrigued when I read a recent blog post by David Vap, Group Vice President of Product Development for Oracle Service Cloud. It’s about “Three Secrets to Innovation” in customer service. In David’s words: 1) Focus on making what’s hard simple 2) Solve real problems for real people 3) Don’t just spin a good vision. Do something about it  I believe midsize companies have a leg up in delivering on these three points, mainly because they have no other choice. How can you grow a business without listening to your customers and providing flawless service? Big companies are often weighed down by customer service practices that have been churning in bureaucracy for years or even decades. When the all-in-one printer/fax/scanner I bought my wife for Christmas (call me a romantic) failed after sixty days, I wasted hours of my time navigating the big brand manufacturer’s complex support and contact policies only to be offered a refurbished replacement after I shipped mine back to them. There was not a happy ending. Let's just say my wife still doesn't have a printer.  Young midsize companies need to innovate to grow. Established midsize company brands need to innovate to survive and reach the next level. Midsize Customer Case Study: The Boston Globe The Boston Globe, established in 1872 and the winner of 22 Pulitzer Prizes, is fighting the prevailing decline in the newspaper industry. Businessman John Henry invested in the Globe in 2013 because he, “…believes deeply in the future of this great community, and the Globe should play a vital role in determining that future”. How well the paper executes on its bold new strategy is truly mission critical—a matter of life or death for an industry icon. This customer case study tells how Oracle’s Service Cloud is helping The Boston Globe “do something about” and not just “spin” it’s strategy and vision via improved customer service. For example, Oracle RightNow Chat Cloud Service is now the preferred support channel for its online environments. The average e-mail or phone call can take three to four minutes to complete while the average chat is only 30 to 40 seconds. It’s a great example of one company leveraging technology to make things simpler to solve real problems for real people. Related: Oracle Cloud Service a leader in The Forrester Wave™: Customer Service Solutions For Small And Midsize Teams, Q2 2014

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups

    - by The Old Toxophilist
    By default, running your Exalogic virtualised presents Cloud Users with what appears to be a single large resource; they can just create vServers and not care about how those vServers are laid down on the underlying infrastructure. All the Cloud Users will know is that they can create vServers. For example, if we have a Quarter Rack (8 nodes) and our Cloud User creates 8 vServers, those 8 vServers may run on 8 distinct nodes or may all run on the same node. Although in many cases we, as Cloud Users, may not be too worried about how the Virtualisation Algorithm decides where to place our vServers, there are cases where it is extremely important that vServers run on distinct physical compute nodes. For example, if we have a WebLogic cluster we will want the servers within the cluster to run on distinct physical nodes to cover the situation where one physical node is lost. To achieve this, the Exalogic virtualised implementation provides Distribution Groups, which define an anti-affinity policy that the underlying Virtualisation Algorithm takes into account when placing vServers. It should be noted that Distribution Groups must be created before you create vServers, because a vServer can only be added to a Distribution Group at creation time.

    Creating A Distribution Group

    To create a Distribution Group we will first need to select the Account in which we want the Distribution Group to be created. Once we have selected the account, the interface updates and Account-specific Actions are displayed within the Action Panes. From the Action Pane (or by right-clicking on the Account) select the "Create Distribution Group" action. This will initiate the create wizard as follows.

    Distribution Group Details

    Within the first step of the wizard we can specify the name of the distribution group, which should be unique. In addition we can provide a detailed description of the group.

    Distribution Group Configuration

    The second step of the configuration wizard allows you to specify the number of elements that are required within this group, up to a maximum of the number of nodes within your Exalogic. At this point it is always better to specify a group with spare capacity, allowing for future expansion. As vServers are added to the group, the available slots decrease.

    Summary

    Finally, the last step of the wizard displays a summary of the information entered.

    Read the article

  • links for 2010-06-05

    - by Bob Rhubart
    Paul Levine: New IT Optimization and Consolidation Strategies Alan Levine, Senior Director, Enterprise Architecture at Oracle explores a practical approach to building your optimization roadmap. Topics covered include rationalization, virtualization, data center consolidation, cloud computing, and the latest in database machines. (tags: oracle entarch virtualization cloud rationalization)

    Read the article

  • Finding nuggets in ARC discussions

    - by alanc
    A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they’re back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what’s now the Sun Systems Group division of Oracle. Recently one of these reviews was held (via e-mail discussion) to review a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an “oldfind” program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to workaround. In Solaris 11 though, we still ship the find descended from SVR4 as /usr/bin/find and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the discussion of if we should add oldfind, and if so what should we call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command – for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case however, that seemed like it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind not of /usr/bin/find. Thus if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as “gold find” than as “g old find”. One of the concerns we often discuss in ARC is if a change is likely to be understood by users or if it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn’t resist putting forth a hypothetical support call for this command: “Hello, Oracle Solaris Support, how may I help you?” “My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn’t find gold.” “Did he get the binutils package too?” “No he just said findutils, do we need binutils?” “Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package.” “How much does Oracle charge for that package?” “It’s free for Solaris users.” “You mean Oracle ships packages of gold to customers for free?” “Yes, if you get the binutils package, it includes GNU gold.” “New gold? Is that some sort of alchemy, turning stuff into gold?” “Not new gold, gold from the GNU project.” “Oracle’s taking gold from the GNU project and shipping it to me?” “Yes, if you get binutils, that package includes gold along with the other tools from the GNU project.” “And GNU doesn’t mind Oracle taking their gold and giving it to customers?” “No, GNU is a non-profit whose goal is to share their software.” “Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?” “Oh, Google donated it to them.” “Ah! So Oracle will give me the gold that GNU got from Google!” “Yes, if you get the package from us.” “How do I get the package with the gold?” “Just run pkg install binutils and it will put it on your disk.” “We’ve got multiple disks here - which one will it put it on?” “The one with the system image - do you know which one that is? 
“Well the note from the admin says the system is on the first disk and the users are on the second disk.” “Okay, so it should go on the first disk then.” “And where will I find the gold?” “It will be in the /usr/bin directory.” “In the user’s bin? So thats on the second disk?” “No, it would be on the system disk, with the other development tools, like make, as, and what.” “So what’s on the first disk?” “Well if the system image is there the commands should all be there.” “All the commands? Not just what?” “Right, all the commands that come with the OS, like the shell, ps, and who.” “So who’s on the first disk too?” “Yes. Did your admin say when he’d be back?” “No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me.” “I can’t imagine why.” “Oh, is why a command too?” “No, _why was a Ruby programmer.” “Ruby? Do you give those away with the gold too?” “Yes, but it comes in the ruby package, not binutils.” “Oh, I’ll have to have my admin get that package too! Thanks!” Needless to say, we decided this might not be the best idea. Since the GNU package hasn’t had to release a serious bug fix in the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so it didn’t seem that adding oldfind was really necessary, so we passed on including it when we update to the new findutils release. [Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can’t explain why goldfind is the old GNU find, but gold is the new GNU ld.]

    Read the article

  • Looking to apply Bundle Patch 1 on Enterprise Manager 12c ? Here is a workbook to help you ....

    - by Pankaj
    Are you planning to apply Bundle Patch 1 for EM 12c? If so, check this workbook, which describes the complete flow:

    Enterprise Manager Cloud Control Workbook for Applying Bundle Patch 1 (February 2012) and 12.1.0.2 Plugins [ID 1393173.1]

    Applies to: Enterprise Manager Base Platform - Version: 12.1.0.1.0 to 12.1.0.1.0 - Release: 12.1 to 12.1

    Purpose: This document provides an overview of the installation steps needed to apply Bundle Patch 1 on the EM Cloud Control 12c Oracle Management Service (OMS) and Management Agent.

    Read the article

  • How can I test if a point lies within a 3d shape with its surface defined by a point cloud?

    - by Ben
    Hi, I have a collection of points which describe the surface of a shape that should be roughly spherical, and I need a method to determine whether any other given point lies within this shape. I've previously been approximating the shape as an exact sphere, but this has proven too inaccurate and I need a more accurate method. Simplicity and speed are preferable to complete accuracy; a good approximation will suffice. I've come across techniques for converting a point cloud to a 3D mesh, but most things I have found have been very complicated, and I am looking for something as simple as possible. Any ideas? Many thanks, Ben.
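
    One simple option, assuming the cloud is roughly star-shaped around its centroid (which a roughly spherical cloud should be): compute the centroid, estimate the surface radius in the direction of the query point from the few surface samples whose directions best match it, and compare that radius with the query point's distance from the centroid. A sketch of the idea (not from the original thread):

    ```java
    public class PointCloudContains {

        /** Returns true if q is (approximately) inside the surface described by cloud. */
        public static boolean contains(double[][] cloud, double[] q, int neighbours) {
            // Centroid of the surface samples
            double[] c = new double[3];
            for (double[] p : cloud)
                for (int i = 0; i < 3; i++) c[i] += p[i] / cloud.length;

            double[] dir = sub(q, c);
            double dist = norm(dir);
            if (dist == 0) return true;                 // query point is the centroid
            for (int i = 0; i < 3; i++) dir[i] /= dist;

            // Keep the surface points whose direction from the centroid best matches dir
            // (highest normalised dot product) and average their distances as the local radius.
            double[] bestDot = new double[neighbours];
            double[] bestR   = new double[neighbours];
            java.util.Arrays.fill(bestDot, -2.0);
            for (double[] p : cloud) {
                double[] v = sub(p, c);
                double r = norm(v);
                if (r == 0) continue;
                double dot = (v[0] * dir[0] + v[1] * dir[1] + v[2] * dir[2]) / r;
                int worst = 0;
                for (int i = 1; i < neighbours; i++) if (bestDot[i] < bestDot[worst]) worst = i;
                if (dot > bestDot[worst]) { bestDot[worst] = dot; bestR[worst] = r; }
            }
            double radius = 0;
            for (double r : bestR) radius += r / neighbours;

            return dist <= radius;
        }

        private static double[] sub(double[] a, double[] b) {
            return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
        }

        private static double norm(double[] v) {
            return Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        }
    }
    ```

    Increasing the neighbours argument smooths the radius estimate at the cost of local detail; a small handful of points is usually a reasonable starting value for a dense cloud.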

    Read the article

  • Transportable Database 11gR2 Certified with E-Business Suite

    - by Steven Chan
    Platform migration is the process of moving a database from one operating system platform to a different operating system platform. You might wish to migrate your E-Business Suite database to create testing instances, experiment with new architectures, perform benchmarks, or prepare for actual platform changes in your production environment. Database migration across platforms of the same "endian" format (byte ordering) using the Transportable Database (TDB) process is now certified with Oracle Database 11gR2 (11.2.0.1) for:

    * Oracle E-Business Suite Release 11i (11.5.10.2)
    * Oracle E-Business Suite Release 12.0.4 or higher
    * Oracle E-Business Suite Release 12.1.1 or higher

    This EBS database migration process was previously certified only for 10gR2 and 11gR1.

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity. Why? Well, by its very nature, there are more "moving parts" to monitor and manage, from physical, virtual and logical hosts, to fault detection and failover software, to redundant networking components - the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great.

    When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to properly administer.

    Over the past 9 months, monitoring and management have been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January, the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available.

    In the webinar, the team will cover:

    * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability.
    * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!
    * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.

    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software).

    While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
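
    As a taste of the NDBINFO piece, the real-time status tables can be read with nothing more than a standard SQL connection. A minimal sketch using the MySQL JDBC driver; the host, credentials and the choice of ndbinfo table are assumptions to adapt to your cluster (SHOW TABLES IN ndbinfo lists what is available):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class NdbInfoSnapshot {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; point at any SQL node of the cluster
            String url = "jdbc:mysql://mysqld-host:3306/ndbinfo";
            try (Connection conn = DriverManager.getConnection(url, "monitor", "password");
                 Statement st = conn.createStatement();
                 // memoryusage is one of the ndbinfo status tables
                 ResultSet rs = st.executeQuery("SELECT * FROM ndbinfo.memoryusage")) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.append(md.getColumnLabel(i)).append('=').append(rs.getString(i)).append("  ");
                    }
                    System.out.println(row.toString().trim());
                }
            }
        }
    }
    ```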

    Read the article

  • Learning to Grow

    - by jack.flynn
    A Conversation with Ted Simpson of HEUG A great place to revisit Oracle OpenWorld year round is OracleWebVideo on YouTube. Oracle Magazine Senior Editor Jeff Erickson sat down with Ted Simpson at last year's Oracle OpenWorld to find out how the Higher Education Users Group (HEUG) is helping hundreds of member institutions and thousands of individuals across the globe meet the technological challenges in colleges and universities. Simpson joined HEUG back when it was a PeopleSoft special interest group. Now that higher education institutions have expanded into IT infrastructures the size of global corporations or small municipalities, his user group has also been challenged by growth.

    Read the article

  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around these assets. One such process is redaction - removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked out pages, there are actually many uses for redaction today beyond military and government. Many companies have a need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or less privileged users - insurance companies, banks and legal firms are a few examples. The process of digital redaction actually isn't that far from the old paper method: Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET" Step 2. Make a copy of that document, since some folks still need to access the original contents Step 3. Black out the text or pages you want to hide Step 4. Release or distribute this new 'redacted' copy So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years! 1. With AutoVue's VueLink integration and iSDK, we can integrate to virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK! 2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's powerful APIs to automate the addition of markups over certain text or pre-defined regions using our APIs. Black out the text you want to hide: CHECK! 3. With AutoVue's conversion capabilities, you can 'burn-in' the comments into a new file, either as a TIFF, JPEG or PDF document. Burning-in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be directly checked into the content management system with no manual intervention. Make a copy of that document: CHECK! 4. Again, leveraging AutoVue's integrations, we can now define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that - no redactions. An 'unauthorized' user, when requesting to view that same document, can get redirected to open the redacted copy of the same document. Release or distribute the new 'redacted' copy: CHECK! See this movie (WMV format, 2mins, 20secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember - this is all in our flagship AutoVue product - no additional software required!

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Print Any Document Type with AutoVue Document Print Services

    - by [email protected]
    The newly released AutoVue Document Print Services allow development organizations to automate high-volume printing of both business and technical document types from within their broader enterprise applications. Many organizations' printing processes are limited by the fact that they can print only a small subset of the documents their enterprise users require. By integrating AutoVue Document Print Services and deploying them alongside their existing print server solutions, organizations can address that limitation and automate the printing of virtually any document type required in any business process, greatly extending the value of their print server solutions and improving business processes and workforce productivity. For further details, check out the AutoVue Document Print Services datasheet.

    Read the article

  • JPA - insert and retrieve clob and blob types

    - by pachunoori.vinay.kumar(at)oracle.com
    This article describes the JPA feature for handling CLOB and BLOB data types. You will learn the following in this article:
    - the @Lob annotation
    - client code to insert and retrieve CLOB/BLOB types
    - an end-to-end ADF Faces application that retrieves an image from a database table and displays it in a web page.
    Use Case Description
    Persisting and reading an image from the database using the JPA CLOB/BLOB type.
    @Lob annotation
    By default, TopLink JPA assumes that all persistent data can be represented as typical database data types. Use the @Lob annotation with a basic mapping to specify that a persistent property or field should be persisted as a large object to a database-supported large object type. A Lob may be either a binary or character type. TopLink JPA infers the Lob type from the type of the persistent field or property. For string and character-based types, the default is Clob. In all other cases, the default is Blob.
    Example
    The code below shows how to use this annotation to specify that the persistent field picture should be persisted as a Blob.

    public class Person implements Serializable {
        @Id
        @Column(nullable = false, length = 20)
        private String name;

        @Column(nullable = false)
        @Lob
        private byte[] picture;

        @Column(nullable = false, length = 20)
        private String sex;
    }

    Client code to insert and retrieve the clob/blob types
    Reading an image file and inserting it into the database table: the client code below reads the image from a file and persists it to the Person table.

        Person p = new Person();
        p.setName("Tom");
        p.setSex("male");
        p.setPicture(readImageFile("Image location")); // e.g. c:\images\test.jpg
        sessionEJB.persistPerson(p);

        // Retrieving the image from the database table and writing it to a file
        List<Person> plist = sessionEJB.getPersonFindAll();
        Person person = (Person) plist.get(0);   // get a person object
        retrieveImage(person.getPicture());      // write the picture retrieved from the table

        // Private method to create a byte[] from an image file
        private static byte[] readImageFile(String fileLocation) {
            System.out.println("file location is " + fileLocation);
            IOManager manager = new IOManager();
            try {
                return manager.getBytesFromFile(fileLocation);
            } catch (IOException e) {
            }
            return null;
        }

        // Private method to take a byte[] read from the database and write it to an image file
        private static void retrieveImage(byte[] b) {
            IOManager manager = new IOManager();
            try {
                manager.putBytesInFile("c:\\webtest.jpg", b);
            } catch (IOException e) {
            }
        }

    End-to-end ADF Faces application to retrieve the image from a database table and display it in a web page
    Please find the application in this link. The following Java EE components are used in the sample application:
    - ADF Faces (jspx page)
    - HttpServlet class - makes a call to the EJB, retrieves the Person object from the Person table, reads the byte[] and writes it to the response using an OutputStream.
    - SessionEJBBean - a session facade that makes a local call to the JPA entities.
    - JPA entity (Person.java) - the Person class with setter and getter methods, annotated with @Lob to represent the CLOB/BLOB type for the picture field.
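    Note: the IOManager class used in the snippets above is not part of JPA or ADF; it is just a small helper in the sample application. A minimal sketch of what such a helper might look like (the class and method names here simply mirror how it is called above) is:

        import java.io.*;

        public class IOManager {

            // Read the entire contents of a file into a byte array.
            public byte[] getBytesFromFile(String fileLocation) throws IOException {
                File file = new File(fileLocation);
                byte[] bytes = new byte[(int) file.length()];
                DataInputStream in = new DataInputStream(new FileInputStream(file));
                try {
                    in.readFully(bytes);
                } finally {
                    in.close();
                }
                return bytes;
            }

            // Write a byte array (e.g. the picture column read via JPA) to a file.
            public void putBytesInFile(String fileLocation, byte[] bytes) throws IOException {
                FileOutputStream out = new FileOutputStream(fileLocation);
                try {
                    out.write(bytes);
                } finally {
                    out.close();
                }
            }
        }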

    Read the article

  • Upload Certificate and Key to RUEI in order to decrypt SSL traffic

    - by stefan.thieme(at)oracle.com
    So you want to monitor encrypted traffic with your RUEI collector ?Actually this is an easy thing if you follow the lines below...I will start out with creating a pair of snakeoil (so called self-signed) certificate and key with the make-ssl-cert tool which comes pre-packaged with apache only for the purpose of this example.$ sudo make-ssl-cert generate-default-snakeoil$ sudo ls -l /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key-rw-r--r-- 1 root root     615 2010-06-07 10:03 /etc/ssl/certs/ssl-cert-snakeoil.pem-rw-r----- 1 root ssl-cert 891 2010-06-07 10:03 /etc/ssl/private/ssl-cert-snakeoil.keyRUEI Configuration of Security SSL Keys You will most likely get these two files from your Certificate Authority (CA) and/or your system administrators should be able to extract this from your WebServer or LoadBalancer handling SSL encryption for your infrastructure.Now let's look at the content of these two files, the certificate (apache assumes this is in PEM format) is called a public key and the private key is used by the apache server to encrypt traffic for a client using the certificate to initiate the SSL connection with the server.In case you already know that these two match, you simply have to paste them in one text file and upload this text file to your RUEI instance.$ sudo cat /etc/ssl/certs/ssl-cert-snakeoil.pem /etc/ssl/private/ssl-cert-snakeoil.key > /tmp/ruei.cert_and_key$ sudo cat /tmp/ruei.cert_and_key -----BEGIN CERTIFICATE----- MIIBmTCCAQICCQD7O3XXwVilWzANBgkqhkiG9w0BAQUFADARMQ8wDQYDVQQDEwZ1 YnVudHUwHhcNMTAwNjA3MDgwMzUzWhcNMjAwNjA0MDgwMzUzWjARMQ8wDQYDVQQD EwZ1YnVudHUwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBALbs+JnI+p+K7Iqa SQZdnYBxOpdRH0/9jt1QKvmH68v81h9+f1Z2rVR7Zrd/l+ruE3H9VvuzxMlKuMH7 qBX/gmjDZTlj9WJM+zc0tSk+e2udy9he20lGzTxv0vaykJkuKcvSWNk4WE9NuAdg IHZvjKgoTSVmvM1ApMCg69nyOy97AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAk2rv VEkxR1qPSpJiudDuGUHtWKBKWiWbmSwI3REZT+0vG+YDG5a55NdxgRk3zhQntqF7 gNYjKxblBByBpY7W0ci00kf7kFgvXWMeU96NSQJdnid/YxzQYn0dGL2rSh1dwdPN NPQlNSfnEQ1yxFevR7aRdCqTbTXU3mxi8YaSscE= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXgIBAAKBgQC27PiZyPqfiuyKmkkGXZ2AcTqXUR9P/Y7dUCr5h+vL/NYffn9W dq1Ue2a3f5fq7hNx/Vb7s8TJSrjB+6gV/4Jow2U5Y/ViTPs3NLUpPntrncvYXttJ Rs08b9L2spCZLinL0ljZOFhPTbgHYCB2b4yoKE0lZrzNQKTAoOvZ8jsvewIDAQAB AoGBAJ7LCWeeUwnKNFqBYmD3RTFpmX4furnal3lBDX0945BZtJr0WZ/6N679zIYA aiVTdGfgjvDC9lHy3n3uctRd0Jqdh2QoSSxNBhq5elIApNIIYzu7w/XI/VhGcDlA b6uadURQEC2q+M8YYjw3mwR2omhCWlHIViOHe/9T8jfP/8pxAkEA7k39WRcQildH DFKcj7gurqlkElHysacMTFWf0ZDTEUS6bdkmNXwK6mH63BlmGLrYAP5AMgKgeDf8 D+WRfv8YKQJBAMSCQ7UGDN3ysyfIIrdc1RBEAk4BOrKHKtD5Ux0z5lcQkaCYrK8J DuSldreN2yOhS99/S4CRWmGkTj04wRSnjwMCQQCaR5mW3QzTU4/m1XEQxsBKSdZE 2hMSmsCmhuSyK13Kl0FPLr/C7qyuc4KSjksABa8kbXaoKfUz/6LLs+ePXZ2JAkAv +mIPk5+WnQgS4XFgdYDrzL8HTpOHPSs+BHG/goltnnT/0ebvgXWqa5+1pyPm6h29 PrYveM2pY1Va6z1xDowDAkEAttfzAwAHz+FUhWQCmOBpvBuW/KhYWKZTMpvxFMSY YD5PH6NNyLfBx0J4nGPN5n/f6il0s9pzt3ko++/eUtWSnQ== -----END RSA PRIVATE KEY----- Simply click on the add new key and browse for the cert_and_key file on your desktop which you concatenated earlier using any text editor. You may need to add a passphrase in order to decrypt the RSA key in some cases (it should tell you BEGIN ENCRYPTED PRIVATE KEY in the header line). I will show you the success screen after uploading the certificate to RUEI. 
You may want to restart your collector once you have uploaded all the certificate/key pairs you want to use in order to make sure they get picked up asap.You should be able to see the number of SSL Connections rising in the Collector statistics screen below. The figures for decrypt errors should slowly go down and the usage figures for your encryption algortihm on the subsequent SSL Encryption screen should go up. You should be 100% sure everything works fine by now, otherwise see below to distinguish the remaining 1% from your 99% certainty.Verify Certificate and Key are matchingYou can compare the modulus of private key and public certificate and they should match in order for the key to fit the lock. You only want to make sure they both fit each other.We are actually interested only in the following details of the two files, which can be determined by using the -subject, -dates and -modulus command line switches instead of the complete -text output of the x509 certificate/rsa key contents.$ sudo openssl x509 -noout -subject -in /etc/ssl/certs/ssl-cert-snakeoil.pemsubject= /CN=ubuntu$ sudo openssl x509 -noout -dates -in /etc/ssl/certs/ssl-cert-snakeoil.pemnotBefore=Jun  7 08:03:53 2010 GMTnotAfter=Jun  4 08:03:53 2020 GMT$ sudo openssl x509 -noout -modulus -in /etc/ssl/certs/ssl-cert-snakeoil.pem Modulus=B6ECF899C8FA9F8AEC8A9A49065D9D80713A97511F4FFD8EDD502AF987EBCBFCD61F7E7F5676AD547B66B77F97EAEE1371FD56FBB3C4C94AB8C1FBA815FF8268C3653963F5624CFB3734B5293E7B6B9DCBD85EDB4946CD3C6FD2F6B290992E29CBD258D938584F4DB8076020766F8CA8284D2566BCCD40A4C0A0EBD9F23B2F7B $ sudo openssl rsa -noout -modulus -in /etc/ssl/private/ssl-cert-snakeoil.keyModulus=B6ECF899C8FA9F8AEC8A9A49065D9D80713A97511F4FFD8EDD502AF987EBCBFCD61F7E7F5676AD547B66B77F97EAEE1371FD56FBB3C4C94AB8C1FBA815FF8268C3653963F5624CFB3734B5293E7B6B9DCBD85EDB4946CD3C6FD2F6B290992E29CBD258D938584F4DB8076020766F8CA8284D2566BCCD40A4C0A0EBD9F23B2F7BAs you can see the modulus matches exactly and we have the proof that the certificate has been created using the private key. 
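If the modulus values are too long to compare comfortably by eye, a common OpenSSL trick is to hash the modulus of each file and compare the two digests instead (the file paths below are the same snakeoil pair used above):
$ sudo openssl x509 -noout -modulus -in /etc/ssl/certs/ssl-cert-snakeoil.pem | openssl md5
$ sudo openssl rsa -noout -modulus -in /etc/ssl/private/ssl-cert-snakeoil.key | openssl md5
If the two digests are identical, the certificate and the private key belong together.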
OpenSSL Certificate and Key DetailsAs I already told you, you do not need all the greedy details, but in case you want to know it in depth what is actually in those hex-blocks can be made visible with the following commands which show you the actual content in a human readable format.Note: You may not want to post all the details of your private key =^) I told you I have been using a self-signed certificate only for showing you these details.$ sudo openssl rsa -noout -text -in /etc/ssl/private/ssl-cert-snakeoil.keyPrivate-Key: (1024 bit)modulus:    00:b6:ec:f8:99:c8:fa:9f:8a:ec:8a:9a:49:06:5d:    9d:80:71:3a:97:51:1f:4f:fd:8e:dd:50:2a:f9:87:    eb:cb:fc:d6:1f:7e:7f:56:76:ad:54:7b:66:b7:7f:    97:ea:ee:13:71:fd:56:fb:b3:c4:c9:4a:b8:c1:fb:    a8:15:ff:82:68:c3:65:39:63:f5:62:4c:fb:37:34:    b5:29:3e:7b:6b:9d:cb:d8:5e:db:49:46:cd:3c:6f:    d2:f6:b2:90:99:2e:29:cb:d2:58:d9:38:58:4f:4d:    b8:07:60:20:76:6f:8c:a8:28:4d:25:66:bc:cd:40:    a4:c0:a0:eb:d9:f2:3b:2f:7bpublicExponent: 65537 (0x10001)privateExponent:    00:9e:cb:09:67:9e:53:09:ca:34:5a:81:62:60:f7:    45:31:69:99:7e:1f:ba:b9:da:97:79:41:0d:7d:3d:    e3:90:59:b4:9a:f4:59:9f:fa:37:ae:fd:cc:86:00:    6a:25:53:74:67:e0:8e:f0:c2:f6:51:f2:de:7d:ee:    72:d4:5d:d0:9a:9d:87:64:28:49:2c:4d:06:1a:b9:    7a:52:00:a4:d2:08:63:3b:bb:c3:f5:c8:fd:58:46:    70:39:40:6f:ab:9a:75:44:50:10:2d:aa:f8:cf:18:    62:3c:37:9b:04:76:a2:68:42:5a:51:c8:56:23:87:    7b:ff:53:f2:37:cf:ff:ca:71prime1:    00:ee:4d:fd:59:17:10:8a:57:47:0c:52:9c:8f:b8:    2e:ae:a9:64:12:51:f2:b1:a7:0c:4c:55:9f:d1:90:    d3:11:44:ba:6d:d9:26:35:7c:0a:ea:61:fa:dc:19:    66:18:ba:d8:00:fe:40:32:02:a0:78:37:fc:0f:e5:    91:7e:ff:18:29prime2:    00:c4:82:43:b5:06:0c:dd:f2:b3:27:c8:22:b7:5c:    d5:10:44:02:4e:01:3a:b2:87:2a:d0:f9:53:1d:33:    e6:57:10:91:a0:98:ac:af:09:0e:e4:a5:76:b7:8d:    db:23:a1:4b:df:7f:4b:80:91:5a:61:a4:4e:3d:38:    c1:14:a7:8f:03exponent1:    00:9a:47:99:96:dd:0c:d3:53:8f:e6:d5:71:10:c6:    c0:4a:49:d6:44:da:13:12:9a:c0:a6:86:e4:b2:2b:    5d:ca:97:41:4f:2e:bf:c2:ee:ac:ae:73:82:92:8e:    4b:00:05:af:24:6d:76:a8:29:f5:33:ff:a2:cb:b3:    e7:8f:5d:9d:89exponent2:    2f:fa:62:0f:93:9f:96:9d:08:12:e1:71:60:75:80:    eb:cc:bf:07:4e:93:87:3d:2b:3e:04:71:bf:82:89:    6d:9e:74:ff:d1:e6:ef:81:75:aa:6b:9f:b5:a7:23:    e6:ea:1d:bd:3e:b6:2f:78:cd:a9:63:55:5a:eb:3d:    71:0e:8c:03coefficient:    00:b6:d7:f3:03:00:07:cf:e1:54:85:64:02:98:e0:    69:bc:1b:96:fc:a8:58:58:a6:53:32:9b:f1:14:c4:    98:60:3e:4f:1f:a3:4d:c8:b7:c1:c7:42:78:9c:63:    cd:e6:7f:df:ea:29:74:b3:da:73:b7:79:28:fb:ef:    de:52:d5:92:9d$ sudo openssl x509 -noout -text -in /etc/ssl/certs/ssl-cert-snakeoil.pemCertificate:    Data:        Version: 1 (0x0)        Serial Number:            fb:3b:75:d7:c1:58:a5:5b        Signature Algorithm: sha1WithRSAEncryption        Issuer: CN=ubuntu        Validity            Not Before: Jun  7 08:03:53 2010 GMT            Not After : Jun  4 08:03:53 2020 GMT        Subject: CN=ubuntu        Subject Public Key Info:            Public Key Algorithm: rsaEncryption            RSA Public Key: (1024 bit)                Modulus (1024 bit):                    00:b6:ec:f8:99:c8:fa:9f:8a:ec:8a:9a:49:06:5d:                    9d:80:71:3a:97:51:1f:4f:fd:8e:dd:50:2a:f9:87:                    eb:cb:fc:d6:1f:7e:7f:56:76:ad:54:7b:66:b7:7f:                    97:ea:ee:13:71:fd:56:fb:b3:c4:c9:4a:b8:c1:fb:                    a8:15:ff:82:68:c3:65:39:63:f5:62:4c:fb:37:34:                    b5:29:3e:7b:6b:9d:cb:d8:5e:db:49:46:cd:3c:6f:                    d2:f6:b2:90:99:2e:29:cb:d2:58:d9:38:58:4f:4d:     
               b8:07:60:20:76:6f:8c:a8:28:4d:25:66:bc:cd:40:                    a4:c0:a0:eb:d9:f2:3b:2f:7b                Exponent: 65537 (0x10001)    Signature Algorithm: sha1WithRSAEncryption        93:6a:ef:54:49:31:47:5a:8f:4a:92:62:b9:d0:ee:19:41:ed:        58:a0:4a:5a:25:9b:99:2c:08:dd:11:19:4f:ed:2f:1b:e6:03:        1b:96:b9:e4:d7:71:81:19:37:ce:14:27:b6:a1:7b:80:d6:23:        2b:16:e5:04:1c:81:a5:8e:d6:d1:c8:b4:d2:47:fb:90:58:2f:        5d:63:1e:53:de:8d:49:02:5d:9e:27:7f:63:1c:d0:62:7d:1d:        18:bd:ab:4a:1d:5d:c1:d3:cd:34:f4:25:35:27:e7:11:0d:72:        c4:57:af:47:b6:91:74:2a:93:6d:35:d4:de:6c:62:f1:86:92:        b1:c1The above output can also be seen if you direct your browser client to your website and check the certificate sent by the server to your browser. You will be able to lookup all the details including the validity dates, subject common name and the public key modulus.Capture an SSL connection using WiresharkAnd as you would have expected, looking at the low-level tcp data that has been exchanged between the client and server with a tcp-diagnostics tool (i.e. wireshark/tcpdump) you can also see the modulus in there.These were the settings I used to capture all traffic on the local loopback interface, matching the filter expression: tcp and ip and host 127.0.0.1 and port 443. This tells Wireshark to leave out any other information, I may not have been interested in showing you.
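If you prefer the command line over the Wireshark GUI, a roughly equivalent capture can be taken with tcpdump and opened in Wireshark afterwards; the interface name and output path below are just examples:
$ sudo tcpdump -i lo -s 0 -w /tmp/ssl_capture.pcap 'tcp and host 127.0.0.1 and port 443'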

    Read the article

  • BPM Business Value Patterns

    - by JuergenKress
    Together with Matthias Ziegler from Accenture we presented the BPM Business Value Patterns at the SOA & BPM Integration Days in Germany in October: BPM Business Value Patterns (view more presentations by Jürgen Kress). Please visit the website http://soa-bpm-days.de/ for the next SOA & BPM Integration Days III, February 29th & March 1st in Munich. If you'd like to learn more, please feel free to contact us any time: Matthias Ziegler, Jürgen Kress. For regular information on Oracle SOA Suite become a member of the SOA Partner Community. To register please visit www.oracle.com/goto/emea/soa (OPN account required). Blog | Twitter | LinkedIn | Mix | Forum. Technorati Tags: Matthias Ziegler, Jürgen Kress, SOA & BPM Integration Days, BPM, BPM Value Patterns, BPM ROI, Oracle, OPN, Accenture

    Read the article

  • On checking whether a port is open on the firewall

    - by [email protected]
    Hi. Sometimes DBAs and sysadmins need to check whether a particular port is "open" on the corporate firewall -- for example, with Grid Control: will the communication between the OMS and a Management Agent work? One solution is simply to deploy the piece of software in question, start it, and check whether everything works; however, it is much nicer to get that information beforehand. There are several tools for doing so -- nmap, for instance (like Trinity in The Matrix) -- but here is a nice little program that opens a listening socket on the port passed as a parameter. Once it is running, a telnet from the client machine to that port will be a walk in the park.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(int argc, char *argv[])
    {
        int sockfd, newsockfd, portno;
        socklen_t clilen;
        struct sockaddr_in serv_addr, cli_addr;

        if (argc < 2) {
            fprintf(stderr, "ERROR: A port must be provided. Aborting ...\n");
            return 1;
        }
        sockfd = socket(AF_INET, SOCK_STREAM, 0);
        if (sockfd < 0) {
            fprintf(stderr, "ERROR: Unable to open socket. Aborting ...\n");
            return 1;
        }
        portno = atoi(argv[1]);
        memset(&serv_addr, 0, sizeof(serv_addr));
        serv_addr.sin_family = AF_INET;
        serv_addr.sin_addr.s_addr = INADDR_ANY;
        serv_addr.sin_port = htons(portno);
        if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
            fprintf(stderr, "ERROR: Unable to bind socket. Aborting ...\n");
            return 1;
        }
        listen(sockfd, 5);
        clilen = sizeof(cli_addr);
        newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
        if (newsockfd < 0) {
            fprintf(stderr, "ERROR: Unable to accept connection. Aborting ...\n");
            return 1;
        }
        return 0;
    }

    Of course, you can still ask the network guys whether the port is open or not.
    Hope it helps, L
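    To try it out, assuming you save the code as port_check.c and want to test, say, port 1521: compile it, start it on the server behind the firewall, and then telnet to that port from the client machine ('dbserver' below is just a placeholder hostname):
    $ gcc -o port_check port_check.c
    $ ./port_check 1521          (run on the server side)
    $ telnet dbserver 1521       (run from the client machine)
    If the telnet session connects, the port is open end to end; if it hangs and times out, something in between (typically the firewall) is dropping the traffic.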

    Read the article

  • links for 2010-04-08

    - by Bob Rhubart
    Rittman Mead Consulting: Realtime Data Warehouses
    Rittman Mead Consulting's Peter Scott with a preview of his Real Time Data Warehousing talk at Collaborate 10. (tags: oracle otn rittmanmead collaborate2010 datawarehousing)
    Arun Gupta: Java EE 6, GlassFish, NetBeans, Eclipse, OSGi at Über Conf: Jun 14-17, Denver
    "Über Conf is a conference by No Fluff Just Stuff gang and plans to blow the minds of attendees with over 100 in-depth sessions (90 minutes each) from over 40 world class speakers on the Java platform and pragmatic Agile practices targeted at developers, architects, and technical managers." Arun Gupta (tags: oracle sun javaee glassfish netbeans)
    Aaron Lazenby: Profit's COLLABORATE 10 Session Selections
    Profit Magazine editor-in-chief Aaron Lazenby shares his annual list of COLLABORATE 2010 sessions that "reflect some of the more interesting people/trends in enterprise IT." (tags: oracle otn collaborate2010)

    Read the article

  • Presentations from the Customers Day on J.D. Edwards

    - by [email protected]
    During the Customers Day on J.D. Edwards held on March 9, 2010, the following services were presented:
    - E1 Maintenance Management (E1 Gestión de Mantenimiento)
    - Impact of the change in VAT rates (Impacto del cambio en los tipos de IVA)
    - BI Apps for J.D. Edwards
    The presentations are embedded below.
    Presentacion JDE Customers Day 1 - E1 Gestion de Mantenimiento: view more presentations from oracledirect.
    Presentacion JDE Customers Day 2 - Impacto Cambio Tipos IVA: view more presentations from oracledirect.
    Presentacion JDE Customers Day 3 - BI Apps para JDE: view more presentations from oracledirect.

    Read the article

  • Partner Induction Bootcamp - Technology Guided Learning Path

    - by Paulo Folgado
    Partner Induction Bootcamp - Technology Guided Learning Path
    In support of our goal of promoting the self-sufficiency of our partners, we are pleased to announce the launch of the new training plan: EMEA Partner Induction Bootcamp Technology. This Guided Learning Path covers not only an introduction to the Oracle technology stack, but also Sales Techniques and Business Processes, aiming to increase the ability of partner Sales teams to identify business opportunities and, consequently, to grow their business with Oracle. The training plan has two levels:
    Level 1 - Awareness: 17 different pre-recorded eLearning sessions covering the entire Oracle technology stack. They are organized into 3 major modules: Database and Options, Fusion Middleware, and BI. At the end of each module there is an assessment test.
    Level 2 - Proficiency: a 2-day classroom training to improve and practice business opportunity management techniques.
    These trainings are available only to registered OPN members who work with Oracle Technology. For more information about the EMEA Partner Induction Bootcamp Technology, click here.

    Read the article

< Previous Page | 418 419 420 421 422 423 424 425 426 427 428 429  | Next Page >